# From Data to Action: Exploring AI and IoT-driven Solutions for Smarter Cities

Tiago Dias, Tiago Fonseca, João Vitorino, Andreia Martins, Sofia Malpique, Isabel Praça

Published: 2023-06-06. Link: http://arxiv.org/abs/2306.04653v1
###### Abstract
The emergence of smart cities demands harnessing advanced technologies like the Internet of Things (IoT) and Artificial Intelligence (AI) and promises to unlock cities' potential to become more sustainable, efficient, and ultimately livable for their inhabitants. This work introduces an intelligent city management system that provides a data-driven approach to three use cases: (i) analyze traffic information to reduce the risk of traffic collisions and improve driver and pedestrian safety, (ii) identify when and where energy consumption can be reduced to improve cost savings, and (iii) detect maintenance issues like potholes in the city's roads and sidewalks, as well as the beginning of hazards like floods and fires. A case study in Aveiro City demonstrates the system's effectiveness in generating actionable insights that enhance security, energy efficiency, and sustainability, while highlighting the potential of AI and IoT-driven solutions for smart city development.
**Keywords:** security, internet of things, smart city, machine learning
## 1 Introduction
The growth and urbanization of cities worldwide, associated with increasing population expectations, has led to a multitude of complex challenges across various domains, such as transportation, public safety, energy consumption, and infrastructure maintenance and management. These challenges can have significant negative impacts on both the environment and citizens' standard of living, making them a compelling impetus for the efforts of city planners, policymakers, engineers, and the public at large. Against this backdrop, the emergence of smart cities, powered by the use of advanced Information and Communication Technologies (ICT), such as the Internet of Things (IoT), Artificial Intelligence (AI), Machine Learning (ML), and data analytics, promises to unlock cities' potential to become more sustainable, intelligent, efficient, and ultimately livable for their inhabitants.
Despite the novelty of the concept, several cities around the world have already begun to implement smart city solutions [1]. For instance, Barcelona, Spain, has established a comprehensive smart city platform [2] that includes various solutions, such as smart parking, waste management, and air quality monitoring, all aimed at improving urban life for citizens. In Singapore [3], the Smart Nation initiative aims to harness technology to improve the quality of life of residents by enhancing transportation, healthcare, and public safety. In Portugal, the city of Aveiro is one of the cities leading these innovations and has made significant investments toward becoming a smarter city. The city has established the Aveiro Tech City Living Lab (ATCLL) [4], a research platform that serves as a testing ground for smart city solutions.
This work introduces the Intelligent City Management System (ICMS), the result of our participation in the first edition of the Aveiro Tech City hackathon, promoted by the municipality of Aveiro, which aimed to further enhance the capabilities and services of the city's management platform. ICMS is designed to enhance city management through a scalable and intuitive AI-powered system that integrates multiple data analysis and prediction dashboards. The system's impact is evaluated through a live case study using real-world data derived from the ATCLL environment and additional IoT sensors available during the hackathon.
## 2 State-of-the-Art
The concept of a "smart city" has emerged as a response to the economic, social, and political challenges faced by post-industrial societies at the outset of the new millennium. This idea involves the strategic use of digital technologies to develop innovative solutions for urban communities. Therefore, the primary objective is to address the challenges encountered by urban society, including environmental pollution, demographic change, healthcare, the financial crisis, and resource scarcity [5].
Novel advancements in IoT are a key enabler for smart city applications [6], being responsible for generating an enormous quantity of data [7]. Indeed, in [8], Allam et al. have put forth a novel Smart City framework that integrates AI technology and urban systems, with a primary focus on enhancing urban sustainability and liveability. The authors contend that technology ought to serve as a fundamental cornerstone of Smart Cities, where Big Data can be derived from various domains through IoT. AI is proposed as an underlying feature capable of processing, analyzing, and interpreting the generated data. Moreover, to evaluate the electricity consumption patterns in Iran, Ghadami et al. employed machine learning techniques and implemented dynamic strategies to foster citizen participation in renewable energy generation, informed by expert knowledge. The authors utilized a combination of an Artificial Neural Network and statistical analysis to develop a Decision Support System [9].
Other IoT applications for smart cities relate to vehicular traffic data, which represent one of the most vital data sources in a typical smart city. Effective analysis of this data can yield significant benefits for both citizens and governments. Neyestani et al. proposed a Mixed-Integer Linear Programming model for the traffic behavior of Plug-in Electric Vehicles, which can be integrated as a sub-module in various other studies, such as operation and planning, thereby providing decision makers with valuable insights in urban environments [10], [11].
In this context, the ATCLL is an open platform for developing, testing, and demonstrating innovative concepts, products, and services related to the urban environment. It includes an advanced communication infrastructure and an urban data management and analytics platform that can collect, process, and analyze data from various sources. The platform offers opportunities for any individual or organization interested in devising novel solutions for the predicaments encountered in contemporary urban settings [12].
ATCLL integrates a communication infrastructure and sensing platform, which comprises an array of smart lamp posts that are equipped with both traffic radars and video cameras. Additionally, the platform integrates buses and other vehicles that are fitted with devices that collect and transmit data. Furthermore, sensors are deployed throughout the city to monitor the number of people present in different zones, as well as to measure environmental quality and other relevant factors. The seamless integration of these components creates a comprehensive and sophisticated technological ecosystem that enables the collection and analysis of vast amounts of data, providing new and innovative ways to address the challenges faced by modern cities. Overall, the ATCLL is a cutting-edge initiative that combines technology, research, and innovation to create a living laboratory for urban development [4], [12].
## 3 Proposed Solution
The literature shows that smarter cities generate an enormous flow of information, which can be useful for tracking, improving, and solving issues inherent to the city. Ultimately, this progression and development provide inhabitants with a better quality of life. However, infrastructure costs, privacy and security issues, and the interoperability of multiple systems can be an impediment to achieving this goal. As such, this work attempts to facilitate and leverage the implementation of smart devices installed across Aveiro city to create an Intelligent City Management System (ICMS).
The proposed system is an AI-powered comprehensive system that integrates multiple data analysis and prediction dashboards to provide a single point of management for the city. The system provides a holistic view of various sectors of the city, with analytics and forecasting capabilities that allow city managers to make decisions quickly and effectively, improving efficiency and resource allocation. To enable its use across different cities, the system is highly scalable and configurable.
The proposed system is divided into four different components: (i) City Security and Safety (CSS), (ii) City Energy Management (CEM), (iii) City Infrastructure Maintenance (CIM), and (iv) City Management Dashboard (CMD). In this Representational State Transfer (REST) architecture, each component is considered a REST API, and their communication is based on HTTP requests. The authors decided to follow this architecture, as it allows for very low coupling of the components that represent different city management fields, making the system highly scalable, reliable, portable, and maintainable (Fig. 1).
Each component is implemented in Python and provides a real-time analysis of several aspects of the city, using data gathered from the ATCLL. The following subsections define the problem, the goal, and the implementation strategy of each city management component.
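To make the architecture concrete, the sketch below shows how one such component could expose its analysis over HTTP. This is a minimal sketch assuming a Flask server; the endpoint path, payload schema, and port are illustrative assumptions, as the paper only states that the components are Python-based REST APIs communicating over HTTP requests.

```python
# Minimal sketch of one ICMS component exposed as a REST API.
# Endpoint path, payload schema, and port are illustrative assumptions.
from flask import Flask, jsonify, request

app = Flask(__name__)

def analyze(readings):
    # Placeholder: each component (CSS, CEM, CIM) plugs in its own logic here.
    return {"n_processed": len(readings)}

@app.route("/css/correlations", methods=["POST"])
def correlations():
    readings = request.get_json()  # sensor readings pushed by a collector
    return jsonify(analyze(readings)), 200

if __name__ == "__main__":
    app.run(port=8001)  # one port per component keeps the coupling low
```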
### Use Case 1: Security and Safety
Ensuring the security and safety of both vehicles and pedestrians is of paramount importance in modern cities, as it not only protects the well-being of individuals but also contributes to the overall livability and sustainability of the city.
The CSS component focuses on improving road and pedestrian safety by analyzing data provided by smart posts spread across Aveiro city. As described in Section 2, these smart posts are equipped with multiple sensors, including cameras and speed radars, that are strategically placed to capture pedestrian and vehicle circulation in the same area. The authors considered this information relevant for monitoring driver and pedestrian safety in real time. The premise of this correlation is that a driver should adapt his/her driving behavior depending on the number of pedestrians in a certain zone, since the probability of an accident that compromises the safety of those nearby is much higher. The goal of this integrated component is to provide intelligently organized data and assist in the decision-making process of security implementations regarding the city's public highways. CSS works similarly to an expert system, as it is capable of correlating information using user-defined rules; however, it further expands on this by also being capable of performing feature computation. These features and rules should be managed by the city's security decision-makers.
Figure 1: ICMS architecture.
Figure 2: CSS pipeline overview.
As described in Fig. 2, this use case is divided into the data processing and data correlation phases. First, the consumed data is segregated by smart post to ensure the correctness of the correlations. Then, radar data unrelated to heavy or light vehicles is discarded. Lastly, for each smart post, the average speed and the number of pedestrians are computed for the same time frame, which corresponds to the configured cadence, to aggregate the occurrences in each zone.
The second phase takes the processed, intelligible data and correlates it according to rules defined by the decision-makers. The resulting correlation is then classified by the rules as a warning or danger depending on the severity of the violation. Since the correlated data always belongs to the same place, the city's security decision-makers can visualize where the violations are occurring and take into consideration the presented frequency level to decide whether security measures should be taken.
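A minimal sketch of this two-phase pipeline is given below. The actual features and rules are left to the city's security decision-makers, so the field names, the cadence default, and the example thresholds are all assumptions; only the phase structure follows the text.

```python
# Sketch of the two-phase CSS pipeline: per-post feature computation (phase 1)
# followed by rule-based correlation (phase 2). Field names and thresholds
# are illustrative assumptions.
from collections import defaultdict
from statistics import mean

def compute_features(readings, cadence_min=60):
    """Phase 1: segregate by smart post, drop non-vehicle radar hits,
    and aggregate average speed and pedestrian count per time window."""
    windows = defaultdict(lambda: {"speeds": [], "pedestrians": 0})
    for r in readings:
        if r["type"] == "radar" and r["class"] not in ("heavy", "light"):
            continue  # discard radar data unrelated to heavy or light vehicles
        key = (r["post_id"], r["timestamp"] // (cadence_min * 60))
        if r["type"] == "radar":
            windows[key]["speeds"].append(r["speed"])
        else:  # pedestrian counter reading
            windows[key]["pedestrians"] += r["count"]
    return {k: {"avg_speed": mean(v["speeds"]) if v["speeds"] else 0.0,
                "pedestrians": v["pedestrians"]}
            for k, v in windows.items()}

def correlate(features, rules):
    """Phase 2: classify each (post, window) as warning/danger via user rules."""
    return [{"post": post, "window": window, "level": rule["level"]}
            for (post, window), f in features.items()
            for rule in rules
            if f["avg_speed"] > rule["speed"] and f["pedestrians"] > rule["peds"]]

rules = [{"speed": 50, "peds": 10, "level": "warning"},
         {"speed": 70, "peds": 10, "level": "danger"}]
```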
Even though this implementation only considers information regarding radar sensors and pedestrian counts, other information can be included in the rules, as long as it is captured by the smart posts. Additional information, such as the presence or absence of walkways, can be included to attain a more fine-grained security analysis of the interaction between pedestrians and vehicles.
### Use Case 2: Energy Management
The global environmental changes and ongoing energy crisis have amplified the need for efficient energy management in urban environments. As cities around the world strive to become smarter, they are actively exploring ways to optimize their energy consumption and reduce their carbon footprints. In this context, City Energy Management (CEM) has emerged as a crucial component in the design and operation of our smart city platform (Fig. 3).
Our solution utilizes the ATCLL smart lamp posts, which are equipped with a variety of sensors and cameras. Given their historical data on the number of identified pedestrians, vehicles, and other moving objects in each street, our algorithm is designed to predict the number of movements likely to occur on the street in the next 24 hours, accounting for the differences between workdays and weekends. Based on these predictions, CEM can provide recommendations on when to dim public lighting in specific streets. Alternatively, if the lights do not support dimming, CEM could advise shutting
off half of the lamp posts in the street. This approach to public lighting enables a city to reduce its energy consumption while maintaining public safety.

Figure 3: CEM pipeline overview.
Moreover, we highlight the possibility of integrating CEM with smart energy communities and intelligent demand response strategies, such as [13]. This can bring synergistic advantages, because utility providers can effectively optimize and schedule flexible energy resources and energy storage across the city, leading to a reduction in costs and peak demand. By forecasting the absence of people on several streets at night, the system can dim the public lights in the event of peak grid consumption, acting as a smart regulating reserve mechanism. Participation in such mechanisms can even generate revenue for the city. Consequently, CEM not only acts as an isolated smart-city solution, but it can also be part of the creation of a more efficient and robust energy grid for urban areas, while incentivizing the use of renewable energy.
Regarding the specifically designed AI time-series forecasting algorithm, its implementation process consisted of four steps: collection and preprocessing of historical data, feature engineering, model selection, and model training. These steps permit our algorithm to learn the patterns and relationships between the various factors that influence the number of pedestrians, vehicles, and objects on the streets of a city. First, in the data collection and preprocessing step, we collect historical data from the smart posts in Aveiro, which includes pedestrian and vehicle counts, as well as weather conditions, day of the week, holidays, time of day, and local events. The data is then preprocessed to remove any outliers, missing values, and inconsistencies.
Next, during the feature engineering step, we extract relevant features from the preprocessed data, which include temporal features (e.g., hour of the day, day of the week), weather features (e.g., temperature, humidity), and event-based features (e.g., holidays, local events). These features are crucial for improving the accuracy of our model. Finally, we selected and trained our machine learning model for time series forecasting using the preprocessed data and features, adjusting hyperparameters as necessary to minimize the error in the forecasts.
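The snippet below sketches these steps end to end. The paper does not disclose the selected model, hyperparameters, or data schema, so a gradient-boosted regressor and generic column names stand in for them here.

```python
# Hedged sketch of the CEM forecasting steps: preprocessing, feature
# engineering, and model training. The model choice and column names
# ("count", "temperature", "is_holiday") are assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

def make_features(df: pd.DataFrame) -> pd.DataFrame:
    df = df.dropna().copy()                            # drop missing values
    df = df[df["count"] < df["count"].quantile(0.99)]  # crude outlier removal
    df["hour"] = df.index.hour                         # temporal features
    df["dow"] = df.index.dayofweek
    df["is_weekend"] = (df["dow"] >= 5).astype(int)    # workday vs weekend
    return df

history = pd.read_csv("atcll_counts.csv", parse_dates=["ts"], index_col="ts")
train = make_features(history)
X = train[["hour", "dow", "is_weekend", "temperature", "is_holiday"]]
y = train["count"]                    # pedestrians + vehicles + moving objects

model = GradientBoostingRegressor().fit(X, y)  # hyperparameters tuned separately
preds = model.predict(X.tail(24))     # score 24 hourly rows (reused here for illustration)
```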
### Use Case 3: Infrastructure Maintenance
A city's infrastructure is essential for its efficient functioning, and regular maintenance is crucial to ensure its longevity. However, regular wear and tear and other factors can cause these infrastructures to fail, often leading to costly maintenance issues. Therefore, monitoring the infrastructure of a city is crucial to its development but the identification of maintenance issues can be challenging and time-consuming.
As part of the ICMS platform, CIM (Fig. 4) attempts to automate and improve the monitoring of Aveiro by leveraging its smart public transportation to efficiently monitor the city's infrastructure in a distributed way, resorting to live image capturing and computer vision to detect infrastructure defects, which in turn are reported in real time to the city's infrastructure engineers, allowing them to make data-driven decisions to ensure the maintenance of the infrastructure.
The You Only Look Once (YOLOv5) [14] algorithm is utilized within the component to perform object detection of maintenance issues and the beginning of hazards, using a live feed from the smart public transportation. The algorithm was trained using
three annotated datasets, the Pothole Object Detection Dataset [15], the Roadway Flooding Image Dataset [16] and the FIRE Dataset [17], to detect the occurrence of potholes, floods, and fires, which are three concerning aspects for the city of Aveiro.
The CIM execution pipeline consists of analyzing the captured images of the city using the employed YOLOv5 model. The algorithm detects the city's infrastructure defects, highlighting them with bounding boxes and assigning them a confidence score between 0 and 1. A higher confidence score reflects a worse infrastructure condition and therefore requires more immediate attention. Lastly, the coordinates of detected issues are presented along with the highlighted images to the city's infrastructure engineers in an interactive map, so that they can be remotely analyzed to decide which actions should be taken.
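A hedged sketch of this detection loop is shown below. It uses the public `ultralytics/yolov5` torch.hub interface; the weights file name, the GPS source, and the `report_issue` sink are hypothetical stand-ins for the ICMS map integration.

```python
# Sketch of the CIM detection loop with YOLOv5 via torch.hub.
# The weights file name and the reporting sink are assumptions.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom",
                       path="icms_potholes_floods_fires.pt")  # trained on [15]-[17]

def report_issue(**issue):
    print(issue)  # stub: the real system pushes to the ATCLL interactive map

def inspect(frame, gps):
    results = model(frame)                 # bounding boxes + confidence scores
    for _, det in results.pandas().xyxy[0].iterrows():
        report_issue(kind=det["name"],     # pothole / flood / fire
                     confidence=float(det["confidence"]),
                     location=gps)
    # frames without detections are discarded immediately (privacy)
```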
## 4 Case Study
An empirical case study was carried out to assess the feasibility and reliability of the proposed solution for the city of Aveiro. The organizing committee of the Aveiro Tech City hackathon provided two months of recorded data from the ATCLL, so the first month could be used for training of ML models, and the second for a holdout evaluation. ICMS was calibrated to the characteristics of the existing smart infrastructure and fine-tuned to the data readings of the city's IoT sensors. Then, the capabilities of the system were demonstrated live in the final stage of the hackathon.
Regarding the first use case, Security and Safety, the ratio between the number of speeding vehicles and the number of pedestrians, per hour of the day, can be visualized for each street of Aveiro equipped with the smart infrastructure. For the analyzed month, the ratio exceeded the allowed threshold in several streets and therefore further speed reduction mechanisms like speed humps and rumble strips are required to compel drivers to reduce speed. For instance, this can be noticed in a street where the threshold was slightly exceeded on a weekly basis (Fig. 5). Additionally, in this street and some nearby streets, there was a significant spike by the end of the month, possibly due to an event occurring in that area of the city with many pedestrians and vehicles in circulation. It could be valuable to correlate the sensor data with information about ongoing events in the city, to better distinguish between sporadic spikes and areas where drivers exhibit dangerous behavior on a regular basis.
Figure 4: CIM pipeline overview.
Regarding the second use case, Energy Management, the number of pedestrians, vehicles, and other moving objects can be visualized for each street. To distinguish between day and night times, the latter have a darker grey background. Furthermore, the hours when no activity was registered on a street are highlighted in green blocks to indicate that power consumption could be reduced in those hours by shutting off public equipment or dimming public lighting. For instance, these blocks where efficiency could be improved can be noticed almost every night in Aveiro (Fig. 6).
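As a sketch, such green blocks can be recovered with a simple resampling rule, assuming an hourly movement count per street (the file name and column are illustrative):

```python
# Flag hours with zero registered activity as candidates for dimming/shut-off.
import pandas as pd

counts = pd.read_csv("street_counts.csv", parse_dates=["ts"], index_col="ts")
hourly = counts["movements"].resample("1h").sum()
green_blocks = hourly[hourly == 0].index   # hours to dim or shut off lighting
print(green_blocks)
```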
Additionally, based on the historical data provided in the first month, the proposed solution can predict the number of movements likely to occur on the street in the next 24 hours at a time, enabling a forecasting of the best hours to apply energy saving measures (Fig. 7). These predictions achieved a good generalization throughout the second month of data, which demonstrates that cost savings could be achieved with more intelligent management of public lighting. If such algorithms were trained with an entire year of data, they could be improved to account for special holidays and events that may affect the hours of activity in different areas of the city.
Figure 5: CSS threshold analysis.
Figure 6: CEM block identification.
Figure 7: CEM activity forecasting.
Regarding the third use case, Infrastructure Maintenance, every maintenance issue was created as an occurrence with location coordinates to be displayed in the interactive map of the ATCLL. For instance, in one of the main roundabouts of the city, a pothole was detected by a vehicle that was simulating the route of a bus, to check whether it could be used as a mobile camera platform. The pothole was automatically assigned a confidence score of 0.41, which indicates that it is not an urgent issue, but it is still relevant for the municipality to fix it (Fig. 8). It is pertinent to note that the live feed of the camera was analyzed in real-time and immediately discarded afterwards, so only the frames where a public maintenance issue was detected were stored. Further cybersecurity measures, like the anonymization of license plates, are essential to comply with privacy regulations in smart city solutions that rely on camera feeds.
## 5 Conclusion
This work addressed several possible applications of AI to smart cities, in the context of the first edition of the Aveiro Tech City hackathon. The proposed system, ICMS, provides a data-driven approach to three use cases: (i) analyze traffic information to reduce the risk of traffic collisions and improve driver and pedestrian safety, (ii) identify when and where energy consumption can be reduced to improve cost savings, and (iii) detect maintenance issues like potholes in the city's roads and sidewalks, as well as the beginning of hazards like floods and fires.
By harnessing the power of AI and IoT, the proposed system can be significantly beneficial to the security, energy efficiency, and sustainability of a smart city. Further research efforts must be made to develop smart city solutions capable of tackling the environmental challenges of urban environments, so cities like Aveiro can provide more security and a better quality of life to their citizens.
**Acknowledgements.** The authors would like to thank the University of Aveiro, Instituto de Telecomunicações, and Câmara Municipal de Aveiro for organizing the event and providing the city data utilized in this work.
This work has received funding from UIDB/00760/2020 and from UIDP/00760/2020.
Figure 8: CIM issue detection.

# The shadows of quantum gravity on Bell's inequality

Hooman Moradpour, Shahram Jalalzadeh, Hamid Tebyanian

Published: 2023-07-24. Link: http://arxiv.org/abs/2307.13006v3
###### Abstract
This study delves into the validity of quantum mechanical operators in the context of quantum gravity, recognizing the potential need for their generalization. A primary objective is to investigate the repercussions of these generalizations on the inherent non-locality within quantum mechanics, as exemplified by Bell's inequality. Additionally, the study scrutinizes the consequences of introducing a non-zero minimal length into the established framework of Bell's inequality. The findings contribute significantly to our theoretical comprehension of the intricate interplay between quantum mechanics and gravity. Moreover, this research explores the impact of quantum gravity on Bell's inequality and its practical applications within quantum technologies, notably in the realms of device-independent protocols, quantum key distribution and quantum randomness generation.
## I Introduction
The quantum realm is governed by the Heisenberg uncertainty principle (HUP), which mandates writing the Hamiltonian as the starting point, leading to the Schrödinger equation and, eventually, the eigenvalues and wave function of the quantum system under consideration. In Heisenberg's formulation of quantum mechanics (QM) in the Hilbert space, we encounter states rather than wave functions (although they are connected). In general, QM fails to produce satisfactory solutions for systems featuring the Newtonian gravitational potential in their Hamiltonian. Therefore, in conventional and widely accepted quantum mechanics, gravity is not accounted for in terms of its operators or corresponding Hilbert space (quantum states) carrying gravitational information.
The incompatibility of gravity and quantum mechanics is not limited to Newtonian gravity and persists even when general relativity is considered. On the other hand, the existence of gravity, even in a purely Newtonian regime, leads to a non-zero minimum (of the order of \(10^{-35}\)m (Planck length) [1]) for the uncertainty in position measurement [1; 2; 3; 4]. Consistently, various scenarios of quantum gravity (QG), like String theory, also propose a non-zero minimum for length measurement [3; 4]. The existence of a non-zero minimal length may affect the operators, and it leads to a generalization of the HUP, called the generalized uncertainty principle (GUP) [3; 4].
Operators and system states in QG may differ from those in QM. They are, in fact, functions of ordinary operators that appear in QM [4]. For instance, when considering the first order of the GUP parameter (\(\beta\)), we find that the momentum operator \(\hat{P}\) can be expressed as \(\hat{p}(1+\beta\hat{p}^{2})\), where \(\hat{P}\) and \(\hat{p}\) represent momentum operators in QG and QM, respectively. In this representation, \(\beta\) is positive, and the position operator remains unchanged [4]. It follows that gravity could impact our understanding of classical physics-based operator sets that have been established by QM [5; 6]. Consequently, it is generally possible to write \(\hat{O}=\hat{o}+\beta\hat{o}_{p}\), where \(\hat{O}\) and \(\hat{o}\) are operators in QG and QM, respectively, and \(\hat{o}_{p}\) is the first-order correction obtained using perturbation theory [7].
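As a minimal consistency check (with \(\beta\) absorbing an order-one numerical factor), the modified momentum operator feeds back into the uncertainty relation through the commutator. Using \([\hat{x},\hat{p}^{3}]=3i\hbar\hat{p}^{2}\),

\[[\hat{x},\hat{P}]=[\hat{x},\hat{p}]+\beta[\hat{x},\hat{p}^{3}]=i\hbar\big(1+3\beta\hat{p}^{2}\big),\]

so the Robertson relation gives

\[\Delta\hat{x}\Delta\hat{P}\geq\frac{\hbar}{2}\big[1+3\beta(\Delta\hat{p})^{2}+3\beta\langle\hat{p}\rangle^{2}\big],\]

which reproduces the GUP of Table 1 for \(\langle\hat{p}\rangle=0\), since \(\Delta\hat{P}=\Delta\hat{p}\) at zeroth order in \(\beta\).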
Motivated by the correlation between HUP and quantum non-locality (which is easily demonstrated in the square of Bell's inequality) [8; 9; 10], as well as the impact of GUP on operators, particularly angular momentum [11; 12], recent studies have revealed that minimal length can alter the square of Bell's operator [13]. Furthermore, GUP can affect the entanglement between energy and time, as evidenced by the results of a Franson experiment (which serves as a testing setup for time-energy entanglement) [14]. Table 1 clearly displays the generally expected modifications to operators and states resulting from minimal length. The term \(|\psi\rangle_{p}\) indicates an increase in a quantum superposition, which is a probabilistic signal for entanglement enhancement [5; 6] and, therefore, non-locality beyond quantum mechanics [15]. It is apparent that gravity impacts the information bound [7].
The inquiry into the influence of special and general relativity (SR and GR) on Bell's inequality (quantum non-locality) has been extensively studied over the years [16; 17; 18; 19; 20]. The existing research on the effects of SR on Bell's inequality can be classified into three general categories, depending on the method of applying Lorentz transformations: (i) the operators change while the states remain unchanged, (ii) only the states undergo the Lorentz transformation while the operators remain unaltered (the reverse of the previous one), and (iii) both the operators and states are affected by the Lorentz
transformation [21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32]. Furthermore, certain implications of GR and non-inertial observers have also been addressed in Refs. [33; 34; 35; 36]. Given the ongoing effort to bridge QG with QM [37], exploring the effects of QG on quantum non-locality is deemed inevitable and advantageous.

\begin{table}
\begin{tabular}{|c|c|}
\hline
QM & QG \\
\hline
\(\Delta\hat{x}\Delta\hat{p}\geq\frac{\hbar}{2}\) (HUP) & \(\Delta\hat{x}\Delta\hat{P}\geq\frac{\hbar}{2}[1+\beta(\Delta\hat{P})^{2}]\) (GUP) \\
\(\hat{o}\) & \(\hat{O}=\hat{o}+\beta\hat{o}_{p}\) \\
\(|\psi\rangle\) & \(|\psi_{GUP}\rangle=|\psi\rangle+\beta|\psi\rangle_{p}\) \\
\hline
\end{tabular}
\end{table}
Table 1: A comparison between QM and QG (up to the first order of \(\beta\)). Here, \(|\psi\rangle\) and \(|\psi_{GUP}\rangle\) denote the quantum states in QM and QG, respectively, and \(|\psi\rangle_{p}\) is also calculable using the perturbation theory.
Bell's theorem suggests that certain experimental outcomes are constrained if the universe adheres to local realism. However, quantum entanglement, which seemingly allows distant particles to interact instantaneously, can breach these constraints [45]. This led to cryptographic solutions like quantum key distribution (QKD) [50] and quantum random number generation (QRNG) [43; 46]. However, classical noise can enter QKDs and QRNGs during implementation, which hackers can exploit to gain partial information. A device-independent (DI) method was developed to address this, ensuring security when a particular correlation is detected, irrespective of device noise. DI protocols often hinge on non-local game violations, like the CHSH inequality [39]. Section IV delves into the impacts of QG on these applications.
In this study, our primary goal is to explore the ramifications of QG on Bell's inequality, specifically by investigating the implications of minimal length (up to the first order of \(\beta\)). To address this objective, we adopt a methodology analogous to the three scenarios previously examined concerning the effects of Special Relativity (SR) on quantum non-locality. To facilitate this exploration, we categorize the existing cases into three distinct groups, which we elaborate on in the following section. The paper concludes by providing a comprehensive summary of our research findings, shedding light on the intricate interplay between quantum mechanics and gravity, elucidating the impact of QG on Bell's inequality, and exploring potential applications within various quantum-based systems.
## II Bell's inequality and the implications of QG
In the framework of QM, assume two particles and four operators \(\hat{A},\hat{A}^{\prime},\hat{B},\hat{B}^{\prime}\) with eigenvalues \(\lambda^{J}\) (\(J\in\{\hat{A},\hat{A}^{\prime},\hat{B},\hat{B}^{\prime}\}\)), where the first (second) two operators act on the first (second) particle. Now, the operators \(\hat{j}=\frac{\hat{J}}{|\lambda^{J}|}\in\{\hat{a},\hat{a}^{\prime},\hat{b},\hat{b}^{\prime}\}\) have eigenvalues \(\pm 1\), and Bell's inequality is defined as
\[\big\langle\hat{B}\big\rangle\equiv\big\langle\hat{a}(\hat{b}+\hat{b}^{\prime})+\hat{a}^{\prime}(\hat{b}-\hat{b}^{\prime})\big\rangle\leq 2. \tag{1}\]
Taking into account the effects of QG (up to the first order), the operators are corrected as \(\hat{J}_{GUP}=\hat{J}+\beta\hat{J}_{p}\) and \(\hat{j}_{GUP}=\frac{\hat{J}+\beta\hat{J}_{p}}{|\lambda^{J}_{GUP}|}\), where \(\lambda^{J}_{GUP}\) represents the eigenvalue of \(\hat{J}_{GUP}\). Since QM should be recovered at the limit \(\beta\to 0\), one may expect \(\lambda^{J}_{GUP}\simeq\lambda^{J}+\beta\lambda^{J}_{p}\). Moreover, as the \(\beta\lambda^{J}_{p}\) term is perturbative, it is reasonable to expect \(|\beta\frac{\lambda^{J}_{p}}{\lambda^{J}}|\ll 1\), leading to \(|\lambda^{J}+\beta\lambda^{J}_{p}|=|\lambda^{J}|(1+\beta\frac{\lambda^{J}_{p}}{\lambda^{J}})\). Applying modifications to the states, operators, or both in quantum gravity can result in three distinct situations. Similar studies conducted on the effects of SR on Bell's inequality have also revealed three cases [21; 22; 23; 24; 25; 26; 27]. Therefore, it is necessary to consider the possibilities arising from these situations to understand the implications of quantum gravitational modifications. In the following paragraphs, we will examine these possibilities in depth.
#### II.1 Purely quantum mechanical entangled states in the presence of operators modified by QG
Firstly, let us contemplate the scenario in which an entangled state (\(|\xi\rangle\)) has been prepared away from the QG influences. This implies that the objective has been accomplished using purely quantum mechanical procedures. Furthermore, it is assumed that an observer utilizes Bell measurements that are constructed through the incorporation of operators containing the QG corrections (\(\hat{j}_{GUP}\)). In the framework of QM, the violation amount of inequality (1) depends on the directions of Bell's measurements. Here, we have \(\hat{j}=\hat{j}_{GUP}+\beta(\frac{\lambda^{J}_{p}}{\lambda^{J}}\hat{j}_{GUP }-\frac{\hat{J}_{p}}{|\lambda^{J}|})\) inserted into Eq. (1) to reach
\[\big\langle\hat{B}_{GUP}\big\rangle\equiv\big\langle\hat{a}_{GUP}\big(\hat{b}_{GUP}+\hat{b}^{\prime}_{GUP}\big)+\hat{a}^{\prime}_{GUP}\big(\hat{b}_{GUP}-\hat{b}^{\prime}_{GUP}\big)\big\rangle\leq 2-\big\langle\beta^{\prime}_{a}\hat{a}_{GUP}\big(\hat{b}_{GUP}+\hat{b}^{\prime}_{GUP}\big)+\beta^{\prime}_{a^{\prime}}\hat{a}^{\prime}_{GUP}\big(\hat{b}_{GUP}-\hat{b}^{\prime}_{GUP}\big)\big\rangle-\big\langle\hat{a}_{GUP}\big(\beta^{\prime}_{b}\hat{b}_{GUP}+\beta^{\prime}_{b^{\prime}}\hat{b}^{\prime}_{GUP}\big)+\hat{a}^{\prime}_{GUP}\big(\beta^{\prime}_{b}\hat{b}_{GUP}-\beta^{\prime}_{b^{\prime}}\hat{b}^{\prime}_{GUP}\big)\big\rangle+\beta^{\prime\prime}_{a}\big\langle\hat{A}_{p}\big(\hat{b}_{GUP}+\hat{b}^{\prime}_{GUP}\big)+\hat{A}^{\prime}_{p}\big(\hat{b}_{GUP}-\hat{b}^{\prime}_{GUP}\big)\big\rangle+\beta^{\prime\prime}_{b}\big\langle\hat{a}_{GUP}\big(\hat{B}_{p}+\hat{B}^{\prime}_{p}\big)+\hat{a}^{\prime}_{GUP}\big(\hat{B}_{p}-\hat{B}^{\prime}_{p}\big)\big\rangle, \tag{2}\]
where \(\beta^{\prime}_{j}=\beta\frac{\lambda^{J}_{p}}{\lambda^{J}}\), \(\beta^{\prime\prime}_{j}=\beta|\lambda^{J}|^{-1}\), and the last two expressions have been written using \(\beta^{\prime\prime}_{a}=\beta^{\prime\prime}_{a^{\prime}}\) and \(\beta^{\prime\prime}_{b}=\beta^{\prime\prime}_{b^{\prime}}\). In this manner, it is clearly seen that although the state is unchanged, in general, \(\big\langle\hat{B}_{GUP}\big\rangle\neq\big\langle\hat{B}\big\rangle\), as the operators are affected by quantum features of gravity [12; 13; 14]. In studying the effects of SR on Bell's inequality, a similar situation is obtained whenever the states remain unchanged and the Lorentz transformations only affect Bell's operator [21; 22; 23; 24; 25; 26; 27; 32].
#### II.2 Purely quantum mechanical measurements and quantum gravitational states
Now, let us consider the situation in which the Bell apparatus is built using purely quantum mechanical operators \(j\), and the primary entangled state carries the Planck scale information, i.e., the quantum features of gravity. This means that the entangled state is made using the \(j_{GUP}\) operators. A similar case in studies related to the effects of SR on Bell's inequality is the one where the Bell measurement does not undergo the
Lorentz transformation while the system state undergoes the Lorentz transformation [21; 22; 23; 24; 25; 26; 27; 32]. In this setup, we have \(\left|\xi_{GUP}\right\rangle=\left|\xi\right\rangle+\beta\left|\xi\right\rangle_ {p}\) and thus
\[\left\langle\xi_{GUP}\middle|\hat{B}\middle|\xi_{GUP}\right\rangle\equiv\left\langle\hat{B}\right\rangle_{GUP}=\left\langle\hat{B}\right\rangle+2\beta\langle\xi|\hat{B}|\xi\rangle_{p}\Rightarrow\left\langle\hat{B}\right\rangle_{GUP}\leq 2\big(1+\beta\langle\xi|\hat{B}|\xi\rangle_{p}\big). \tag{3}\]
Correspondingly, if one considers a Bell measurement apparatus that yields \(\left\langle\hat{B}\right\rangle=2\sqrt{2}\), then such an apparatus cannot lead \(\left\langle\hat{B}\right\rangle_{GUP}\) to its maximum possible value whenever Lorentz symmetry is broken [38].
#### II.3 Bell's inequality in a purely quantum gravitational regime
In deriving Bell's inequality, it is a significant step to ensure that the operators' eigenvalues are only either \(\pm 1\), regardless of their origin, whether it be from QM or QG. If both the Bell measurement and the entangled state were prepared using the quantum gravitational operators, then it is evident that \(\left\langle\xi_{GUP}\middle|\hat{B}_{GUP}\middle|\xi_{GUP}\right\rangle\leq 2\). This result indicates that, when considering the effects of QG on both the state and the operators, Bell's inequality and the classical regime's limit (which is 2 in the inequality) remain unchanged compared to the previous setups. The same outcome is also achieved when it comes to the relationship between SR and Bell's inequality, provided that both the system state and Bell's measurement undergo a Lorentz transformation [26].
## III Results
This section studies QG's implications on Bell's inequality, specifically within the contexts delineated earlier. The CHSH inequality, a specific form of Bell's inequality, provides a quantifiable limit on the correlations predicted by local hidden-variable theories [51]. A violation of the CHSH inequality underscores the inability of such approaches to account for the observed correlations in specific experiments with entangled quantum systems, as predicted by quantum mechanics [47].
Now, we define a scenario with two parties that share an entangled pair. The entangled state of two qubits can be represented by the Bell state:
\[\left|\psi\right\rangle=\frac{1}{\sqrt{2}}(\left|00\right\rangle+\left|11 \right\rangle) \tag{4}\]
Alice and Bob each measure their respective states. They can choose between two measurement settings: \(\hat{a},\hat{a}^{\prime}\) for Alice and \(\hat{b},\hat{b}^{\prime}\) for Bob. The measurement results can be either \(+1\) or \(-1\). The expected value of the CHSH game using the above quantum strategy and the Bell state is given in Eq. 1. Classically, the maximum value of \(\left\langle\hat{B}\right\rangle\) is 2. However, this value can reach \(2\sqrt{2}\) with the quantum strategy, violating the CHSH inequality.
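A short numerical sketch of this strategy follows. The measurement operators \(\cos\theta\,Z+\sin\theta\,X\) and the optimal angles are standard; the constant `c` used to illustrate the modified bound of Eq. (3) is an assumed stand-in for the unspecified overlap \(\langle\xi|\hat{B}|\xi\rangle_{p}\).

```python
# CHSH value for the Bell state (4), plus the beta-scaled bound of Eq. (3).
import numpy as np

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

def M(t):                                           # measurement at angle t
    return np.cos(t) * Z + np.sin(t) * X

def chsh(a, a2, b, b2):
    B = np.kron(M(a), M(b) + M(b2)) + np.kron(M(a2), M(b) - M(b2))
    return psi @ B @ psi

print(chsh(0, np.pi / 2, np.pi / 4, -np.pi / 4))    # ~2.828 = 2*sqrt(2)

c = 0.5                                             # assumed <xi|B|xi>_p
for beta in (0.1, 0.2, 0.5, 0.9):
    print(beta, 2 * (1 + beta * c))                 # classical bound per Eq. (3)
```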
Fig. 1 illustrates that the CHSH inequality can be surpassed by judiciously selecting the appropriate detection angles, denoted as \(\theta_{1}\) and \(\theta_{2}\). The color bar quantitatively represents the value of the inequality, highlighting two distinct regions where the value exceeds the classical limit of 2. In Fig. 1, the simulation of Bell's inequality is conducted solely based on QM representations without incorporating QG impact.
Next, we consider the QG impact on Bell's inequality for various cases; that is, we extend the well-known Bell inequality to account for the effects of QG. Equations 2 and 3 introduce new terms that are parameterized by \(\beta\), a constant that quantifies the strength of quantum gravitational effects. These equations represent the modified Bell inequalities in the presence of QG. To explore the implications of these modifications, we plot the degree of Bell inequality violation, denoted as \(\left\langle\hat{B}\right\rangle\), as a function of \(\theta\) for various values of \(\beta\) (see Fig. 2). Each sub-figure of Fig. 2 features three curves: the blue curve represents the Bell inequality in the framework of QM, while the orange and green curves correspond to the modified Bell inequalities given by Equations 2 and 3, respectively, which incorporate the effects of QG.
The results notably indicate an escalating violation of the Bell inequality with the introduction of QG. As the parameter \(\beta\) increases, the violation surpasses the quantum mechanical limit of \(\sqrt{8}\), signifying a more pronounced breach of the inequality. This implies that the presence of quantum gravitational effects could lead to a more pronounced violation of the Bell inequality than what is predicted by standard quantum mechanics.
## IV Applications
QKD and QRNG represent two extensively researched and commercially implemented areas where the applications of quantum mechanics come to life. While quantum
mechanics underpins the security of these systems, experimental imperfections can introduce vulnerabilities. To address this, DI protocols have been developed. These protocols harness the non-local correlations inherent in quantum entanglement. Importantly, they don't rely on an intricate understanding of the devices in use; their security is grounded solely in the observed violation of non-local correlations, such as the Bell inequalities. This approach offers a robust solution to the security challenges posed by device imperfections [40; 42].

Figure 1: The 2D plot of the CHSH inequality values as functions of detection angles \(\theta_{1}/\pi\) and \(\theta_{2}/\pi\). Different colors indicate different \(\left\langle\hat{B}\right\rangle\) values, with a contour distinguishing the classical and quantum regions.
In DI QKD, two distant parties share an entangled quantum state. They perform measurements on their respective parts of the state, and due to the non-local nature of entanglement, the outcomes of these measurements are correlated in a way that defies classical explanation. These correlations serve as the foundation for key generation, with the security of the key guaranteed by the violation of Bell inequalities. Essentially, any eavesdropper attempting to intercept or tamper with the quantum states would disrupt these correlations, making their presence detectable.
The security and randomness of DI QRNG don't depend on trusting the intrinsic workings of the devices. Traditional QRNGs require detailed models and assumptions about the device, but in DI QRNGs, as long as observed outcomes violate Bell inequalities, one can be assured of the randomness. With the rise of quantum computers, many cryptographic methods are at risk. Nevertheless, the unpredictability in DI QRNG is more than just computationally hard for quantum computers; it's theoretically impossible to predict due to the inherent randomness of quantum processes [43; 46].
Incorporating the effects of QG in quantum information science and technology becomes not merely an intellectual exercise but a practical necessity. Given the results in the previous section--specifically, that QG effects can enhance the violation of Bell inequalities--let's consider the implications for quantum information science and technology and its applications.
The security of QKD is guaranteed by the quantum mechanical violation of Bell inequalities; increasing the violation of Bell's inequality makes QKD even more secure against attacks. An eavesdropper's intervention disturbs the shared state and changes the quantum correlations between Alice's and Bob's measurements. In other words, if the eavesdropper is listening in, the observed violations of Bell's inequalities at Alice's and Bob's ends will decrease, moving closer to what would be expected classically. Thus, if you start with a higher violation of Bell's inequalities (thanks to QG effects), you're raising the "quantumness" of your
initial state. The higher this initial level, the more sensitive your system becomes to any eavesdropping activities. A significant drop in the observed Bell inequality violation from this higher baseline would more quickly and definitively signal the presence of eavesdropping, thus enabling quicker and more reliable detection of any security breaches.

Figure 2: The figure depicts the Bell inequality values as a function of \(\theta\), the rotation angle modulating the measurement basis. Results are stratified by distinct \(\beta\) values: 0.1, 0.2, 0.5, and 0.9. Each subplot features three curves—blue, orange, and green—corresponding to quantum mechanical (QM, Eq. 1), first quantum gravitational (QG-1, Eq. 2), and second quantum gravitational (QG-2, Eq. 3) formulations of the Bell inequality, respectively. The dual horizontal lines at \(-2\) and \(2\) (colored in faded yellow) demarcate the classical regime, while the regions above and below these lines (highlighted in faded red) signify violations of the classical limit, thereby entering the quantum domain.
DI protocols obviate the need for trust in the hardware by utilizing Bell inequality violations--the greater the violation, the higher the level of security. The introduction of QG effects adds an additional layer of robustness to DI protocols, fortifying them through quantum mechanical principles and integrating fundamental theories of nature. Similarly, for QRNGs, a heightened violation signifies a more quantum-coherent system, enhancing the quality of randomness--not merely an incremental advancement but a paradigmatic leap in the entropy of the generated random numbers. Consequently, this reduces the computational time required to achieve a given level of randomness and unpredictability, analogous to transitioning from conventional vehicular propulsion to advanced warp drives, all while adhering to the fundamental constraints of space-time.
More importantly, quantum gravity could offer richer quantum correlations in multipartite systems. Imagine a quantum network secured by quantum gravity effects--each additional party would enhance not just the computational power but also the security, generating what could be termed "quantum gravity-secured entanglement" and enabling a brand-new platform for multiparty quantum computations and secret-sharing protocols.
In summary, enhanced violations of Bell inequalities render QKD virtually impregnable, elevate QRNGs to sources of high-entropy randomness, and establish DI protocols as the epitome of trust-free security mechanisms. Dismissing QG as a purely academic endeavor could overlook its potential as a critical element in safeguarding quantum data against even the most advanced computational threats. If quantum mechanics is considered the apex of security and efficiency, the advent of QG compels a reevaluation. It promises to redefine the boundaries of what is secure, efficient, and trustworthy in quantum technologies.
## V Conclusion
The study can be summarized by its two main components: (i) the origin of entangled states and (ii) Bell's measurement. Furthermore, the study has introduced the possibility of three outcomes depending on which cornerstone carries the quantum gravitational modifications. The first two scenarios suggest that if only one of the foundations stores the effects of QG, then a precise Bell measurement (depending on the value of \(\beta\)) could detect the effects of QG. This is due to the differences between \(\left\langle\hat{B}\right\rangle\), \(\left\langle\hat{B}_{GUP}\right\rangle\), and \(\left\langle\hat{B}\right\rangle_{GUP}\). In the third case, Bell's inequality remains invariant if we consider the quantum aspects of gravity on both the states and the operators. Moreover, the results demonstrate that the presence of QG enhances Bell's inequality violation, thereby offering avenues for improving the security and performance of DI QRNG and QKD protocols.
## Acknowledgement
S.J. acknowledges financial support from the National Council for Scientific and Technological Development-CNPq, Grant no. 308131/2022-3.
# Liquid Crystal-Based RIS for VLC Transmitters: Performance Analysis, Challenges, and Opportunities

Sylvester Aboagye, Telex M. N. Ngatched, Alain R. Ndjiongue, Octavia A. Dobre, Hyundong Shin

Published: 2023-08-18. Link: http://arxiv.org/abs/2308.09803v1
###### Abstract
This article presents a novel approach of using reconfigurable intelligent surfaces (RISs) in the transmitter of indoor visible light communication (VLC) systems to enhance data rate uniformity and maintain adequate illumination. In this approach, a liquid crystal (LC)-based RIS is placed in front of the LED arrays of the transmitter to form an LC-based RIS-enabled VLC transmitter. This RIS-enabled transmitter is able to perform new functions such as transmit light steering and amplification and demonstrates very high data rate and illumination performance when compared with traditional VLC transmitters with circular and distributed LED arrays and the more recent angle diversity transmitter. Simulation results reveal the strong potential of LC-based RIS-aided transmitters in satisfying the joint illumination and communication needs of indoor VLC systems and position VLC as an essential building block for next generation communication networks. Several challenging and exciting issues related to the realization of such transmitters are discussed.
**Keywords:** LED array arrangement, angle diversity transmitter, liquid crystal, reconfigurable intelligent surfaces, illumination, data rate.
## I Introduction
Visible light communication (VLC) has emerged as one of the key revolutionary technologies to support energy-efficient, secure, and high data rate transmissions with low deployment costs in the next generation of communication networks [1]. This is because of the huge and unlicensed bandwidth availability in the visible light spectrum and the rapid development of light-emitting diodes (LEDs), which serve as transmitters in VLC. However, the practical deployment of VLC systems, especially in indoor environments, has faced unique challenges such as (i) non-uniform illumination and data rate coverage [2, 3, 4, 5], (ii) line-of-sight (LoS) blockages, (iii) random device orientation, and (iv) loss of incident signal power in VLC receivers [6]. The focus of this article relates to the non-uniform illumination and data rate coverage design issue. Although this challenge can be considered the most important, since there can be no successful communication without adequate network coverage, it has received the least attention. Specifically, large variations in illumination and data rate, especially in the corners of indoor spaces, limit the system's ability to provide ubiquitous services and high data rates to multiple users located at different places. Moreover, the quality of any user's experience should not be defined solely by, or affected by, the location in a room, as all users should enjoy high communication quality.
The authors in [2] examined the illumination properties of different LED array arrangements and experimentally demonstrated the communication and illumination performance of a phosphor-based VLC system. In [3], the authors investigated the bit error rate and channel capacity performance of circular and centered LED arrangements. In [5], the authors analyzed the effects of LED array layout on the illumination uniformity of VLC systems. An optimization problem to determine the optimal placement of LED arrays to maximize the average area spectral efficiency has been considered in [7]. The results indicated that LED arrays must be deployed in the middle of the room to support high spectral efficiency and illumination. The authors in [8, 9, 10] studied the use of angle diversity transmitters (ADTs) to reduce illumination fluctuation and provide a more uniform data rate in indoor VLC systems.
Recently, a number of studies such as [1, 11, 12, 13, 6, 1, 4] have investigated the use of optical reconfigurable intelligent surfaces (RISs) to solve the design problems (i) to (iv). In [4], the authors proposed a novel approach of using mirrors to enhance the illumination uniformity and data rate of an indoor multi-cell VLC system. The studies in [11, 1, 12] proposed RIS-aided VLC system models to combat the blockage and random device orientation issues in an indoor environment. In [6, 13], the authors examined a liquid crystal (LC)-based RIS to enhance signal detection in VLC receivers by performing incident light amplification and light steering. In the studies mentioned above, the considered system models involved the deployment of optical RISs either in the transmission channel [11, 12, 4, 1] or in front of the photodetector (PD) of the VLC receiver [6, 13]. To the best of the authors' knowledge, there has yet to be a study on the application of optical RISs at the transmitter side to boost the coverage and enhance key performance metrics of VLC systems.
In VLC systems, it is highly desirable to ensure illumination uniformity and high speed data transmission in the indoor environment. This is necessary to ensure that quality-of-service is not dependent on the location of the user. However, the current approaches of deploying light fixtures (e.g., LEDs or LED arrays) indoors do not achieve uniform illumination coverage as, most often, the corners of various rooms have less illumination. An LED with a large semi-angle can generate wider beam angles to ensure more uniform illumination indoors. However, the generation of such broad beams results in a decrease in the intensity of the optical signals. Only a few studies have
investigated methods of providing even illumination and high data rates in indoor VLC systems. Furthermore, the impact of LED/LED array arrangement on the performance of indoor VLC systems has yet to receive much attention. Note that the LED array arrangement can play a vital role in the overall system performance of VLC. Unlike [6] and [13], this article explores, for the first time, the use of LC-based RISs in the transmitter of a VLC system to jointly improve the communication and illumination performance of indoor VLC systems. In this article, a novel approach of using an LC-based RIS to enhance the illumination and data rate uniformity of an indoor VLC system is presented, bringing out the following main contributions:
* We propose a novel application of optical RISs in VLC systems. Specifically, an LC-based RIS-enabled transmitter design is introduced and the impact of the RIS on the emerging light from the LED is analyzed.
* We examine the propagation characteristics of a VLC system with RIS-enabled transmitters as optical signals travel from the LEDs to the receivers using geometric optics. In addition, we provide an expression to characterize the illumination distribution.
* We use simulations to quantify the potential gains of deploying RISs in front of LED arrays and present a performance comparison with a VLC system equipped with ADTs and other popular LED array arrangement schemes.
* Finally, we discuss other potential applications of this novel transmitter design and present several exciting and challenging research opportunities that can further improve its performance and accelerate its realization for future generation optical wireless networks.
## II Traditional Deployment of LEDs Indoors
This section describes the traditional and more recent ways of placing LED arrays indoors. Although the deployment of indoor light fixtures can depend on design factors such as room size and space, occupants' age and preferences, and ceiling height and shape, to mention only a few, a general description is provided.
As the primary purpose of LED arrays is to provide sufficient indoor illumination, their traditional deployment has always focused solely on lighting and illumination. Due to the Lambertian radiation pattern of LEDs and the fact that most human interactions or conversations occur at the center of indoor environments (i.e., the task or activity area), LED arrays are typically deployed as ceiling fixtures at/or near the center to provide adequate functional illumination. Such a typical LED array arrangement is illustrated in Fig. 1 (a) where, under the point source assumption, 4 LED arrays are deployed as a ceiling light fixture that directs light downwards. Figure 1 (b) depicts another popular LED array arrangement where the 4 LED arrays are distributed on the ceiling in a rectangular/square shape, with each LED array serving as a point source transmitter. The LED array arrangements in Figs. 1 (a) and (b) have been extensively considered in earlier studies on indoor VLC systems such as [2, 3, 4, 7, 10, 11, 12, 13].
A more recent approach for LED array placement, referred to as ADT arrays, is shown in Fig. 1 (c). In this figure, each LED array is inclined at an elevation angle \(\tau\) to point toward a particular direction to improve the illumination level throughout the room. ADTs have been considered in the design of multi-cell VLC systems in a few recent studies (e.g., [8, 9, 10]). These studies demonstrated the ADTs' superior data rate and energy efficiency performance compared to the centralized and distributed LED array placement schemes. However, illumination and data rate uniformity assessments have yet to be reported for ADTs.
## III LC-Based RIS-Enabled VLC System
### _LC-Based RIS-Enabled Transmitter Design_
Figure 1 (d) depicts the proposed RIS-enabled VLC transmitter design. In this figure, an LC-based RIS is deployed in front of a centralized LED array arrangement.1 The motivation for considering LC-based RIS is its light steering and amplification capabilities when subjected to an external electric field. More specifically, LC-based RISs are characterized by their electronically tunable physico-chemical properties (e.g., refractive index, emission, and attenuation coefficients) that can be easily controlled by the arrangement of the LC molecules via an external electric field. Moreover, LCs, in general, have received significant interest in optical wireless networks and have been considered in the development of optical filters for communications, next generation light detection and ranging sensors for self-driving vehicles, and optical receivers [6].
Footnote 1: Note that the discussions on the channel gain expression and the analysis of the performance for this transmitter design apply to other LED array arrangements such as ADT and distributed LED arrays.
Figure 1 (e) illustrates the principles of light steering and amplification in the LC-based RIS when an external voltage, \(v_{\mathrm{e}}\), greater than the threshold voltage, \(v_{\mathrm{th}}\), is applied. The threshold voltage is a critical voltage at which the reorientation of the LC's molecules begins. As shown in this figure, the emitted light beam from the LED arrays impinges on the LC-based RIS. At the surface of the LC-based RIS, part of the incident light beam gets reflected while the remaining beam undergoes refraction as it passes from the air medium with a refractive index \(n_{a}\) into the LC element with a refractive
Fig. 1: Typical LED arrangement indoors and the proposed LC-based RIS-enabled VLC transmitter: (a) centralized LED array placement; (b) distributed LED array placement; (c) ADT LED array placement; (d) centralized LED arrays with an LC-based RIS at the front end; (e) geometry of light signal propagation and amplification through the LC-based RIS element.
index \(n_{c}\). Inside the LC-based RIS element, the light beam's photons interact with the LC's excited molecules (due to the presence of the external electric field). This causes the excited molecules to drop to a lower energy level, resulting in the generation of identical new photons through the principle of stimulated emission. The direction of the resulting light beam (i.e., incoming photons and the generated photons) as it propagates through and exits the LC-based RIS element with thickness \(d\) can be controlled through an electric field-induced molecular reorientation [6, 13]. More particularly, the transmission coefficient that characterizes light propagation in the LC-based RIS element is a function of the refractive index \(n_{c}\). This refractive index can be tuned electronically by changing the orientation of the LC-based RIS element's molecules through the external voltage \(v_{e}\). Thus, incident light steering and amplification are obtained by subjecting the LC-based RIS to the external electric field.
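As a quick numerical illustration of the amplification mechanism described above, the following sketch evaluates the emerged optical power \(\exp(\Gamma d)\times\alpha_{\rm LC}\times P\). The values of \(\alpha_{\rm LC}\) and \(\Gamma\) are placeholders chosen for illustration only; the article itself takes these coefficients from [13].

```python
import numpy as np

def emerged_power(p_in, alpha_lc, gamma, d):
    """Optical power emerging from the LC-based RIS element.

    p_in     : optical power incident on the RIS (W)
    alpha_lc : transmission coefficient of the LC element
    gamma    : amplification gain coefficient (1/m), controlled by
               the external voltage v_e through the LC refractive index
    d        : thickness of the LC element (m)
    """
    return np.exp(gamma * d) * alpha_lc * p_in

# Illustrative numbers only (alpha_lc and gamma are placeholders):
# a 1 W LED beam passing through a 0.75 mm thick LC cell.
print(emerged_power(p_in=1.0, alpha_lc=0.9, gamma=2.0e3, d=0.75e-3))
```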
Figure 2 illustrates the various application scenarios of the LC-based RIS-enabled VLC transmitter for indoor VLC systems. Figure 2 (a) shows how the LC-based RIS can be used to amplify incoming light from LEDs to produce high-intensity emitted visible light signal to jointly enhance coverage, illumination, and received signal strength. Figure 2 (b) illustrates how the LC-based RIS can provide dynamic beam steering, which is useful in indoor environments where users are mobile. Figure 2 (c) demonstrates the joint light amplification and beam focusing capabilities of an LC-based RIS-enabled VLC transmitter. Finally, Fig. 2 (d) highlights how the LC-based RIS-enabled VLC transmitter can produce light beams with narrow beamwidth to reduce interference, provide multiple beams with different intensities via controlled amplification and beam focusing to support differentiated quality-of-service requirements for multiple users, and enhance the security of indoor VLC systems. These application scenarios reveal how higher communication (e.g., energy efficiency, data rate, and security) and illumination performances can be achieved without any additional resources, i.e., no extra transmit power or bandwidth resources. Moreover, this novel RIS-enabled VLC transmitter can assist in addressing the LoS blockage issue through light amplification and beam focusing onto an RIS (e.g., mirror array) in the channel.
### _Channel Model, Data Rate, and Illumination Expression_
The channel model of a VLC system with an LC-based RIS-enabled transmitter characterizes the propagation of light beams from the LED arrays, through the LC-based RIS in front of the LED arrays, and finally through air to the PD of the VLC receiver. The channel gain can be expressed as \(H=\alpha_{LC}\times G_{\mathrm{LoS}}\), where \(\alpha_{LC}\) denotes the LC's transmission coefficient defined in [13] and \(G_{\mathrm{LoS}}\) is the direct current gain of the LoS propagation [4]. The LC's transmission coefficient can be tuned by optimizing the refractive index of the LC in the presence of an external electric field to control the emerged light direction as revealed in [13]. Note that for the VLC transmitters (i.e., LED array arrangements) in Fig. 1, \(H=G_{\mathrm{LoS}}\).
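To make the channel model concrete, the sketch below combines the standard Lambertian LoS DC gain (as in, e.g., [4]) with the LC transmission coefficient, \(H=\alpha_{\rm LC}\times G_{\rm LoS}\). The photodetector area, field of view, angles, and \(\alpha_{\rm LC}\) are assumed values for illustration, not parameters taken from the article.

```python
import numpy as np

def lambertian_los_gain(area_pd, l, phi_irr, phi_inc, semi_angle, fov, f=1.5):
    """Standard Lambertian LoS DC channel gain (angles in radians).

    area_pd    : photodetector area (m^2)
    l          : transmitter-receiver distance (m)
    phi_irr    : angle of irradiance
    phi_inc    : angle of incidence
    semi_angle : LED semi-angle at half power
    fov        : receiver field of view
    f          : internal refractive index of the optical concentrator
    """
    if phi_inc > fov:
        return 0.0
    m = -1.0 / np.log2(np.cos(semi_angle))      # Lambertian emission order
    g_conc = f**2 / np.sin(fov)**2              # concentrator gain
    return ((m + 1) * area_pd / (2 * np.pi * l**2)
            * np.cos(phi_irr)**m * np.cos(phi_inc) * g_conc)

# Channel gain with the LC-based RIS in front of the LED array:
alpha_lc = 0.9                                   # placeholder value
H = alpha_lc * lambertian_los_gain(area_pd=1e-4, l=2.5,
                                   phi_irr=0.3, phi_inc=0.3,
                                   semi_angle=np.radians(60),
                                   fov=np.radians(70))
print(H)
```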
The achievable data rate for any user in an indoor environment served by a VLC system with an LC-based RIS-enabled transmitter, with the channel gain \(H\), the optical transmit power \(P\), and the amplification gain coefficient \(\Gamma\) defined in [14], can be determined using the rate expression in [13].
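The rate expression of [13] is not reproduced in this article; as a hedged stand-in, the sketch below uses the common Shannon-type lower bound for intensity-modulated VLC links, with the RIS entering through the channel gain \(H\) and the amplification factor \(\exp(\Gamma d)\). The responsivity and noise power are assumed values, and the exact expression used in [13] may differ.

```python
import numpy as np

def rate_lower_bound(bandwidth, p_opt, h, gamma, d, resp=0.5, sigma2=1e-14):
    """Illustrative Shannon-type lower bound on the achievable rate
    (bit/s); the exact expression of [13] may differ.

    bandwidth : modulation bandwidth (Hz)
    p_opt     : optical transmit power incident on the RIS (W)
    h         : channel gain H = alpha_LC * G_LoS
    gamma, d  : RIS amplification gain coefficient and thickness
    resp      : PD responsivity (A/W), assumed value
    sigma2    : receiver noise power (A^2), assumed value
    """
    p_rx = np.exp(gamma * d) * p_opt * h         # amplified received power
    snr_e = (np.e / (2 * np.pi)) * (resp * p_rx)**2 / sigma2
    return 0.5 * bandwidth * np.log2(1.0 + snr_e)
```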
Uniform lighting distribution is essential to the reliability of the VLC systems and, as a result, needs to be considered in RIS-aided VLC systems. The reason is to ensure that the RIS placed in front of the LED arrays does not degrade the illumination properties of the VLC transmitter or cause any health risk. The illumination intensity at a surface inside a room with an LC-based RIS-enabled transmitter can be measured by the surface illuminance which is expressed as [4, 13]
\[I=\exp\left(\Gamma d\right)\times P\times\alpha_{\mathrm{LC}}\times\tfrac{ \left(m+1\right)}{2\pi l^{2}\delta}\mathrm{cos}^{m}\left(\Phi\right)\cos\left( \varphi\right)G\left(\varphi\right),\]
where \(\Gamma\) denotes the amplification gain coefficient defined in [13], \(\alpha_{\mathrm{LC}}\) is the transmission coefficient, \(m=-1/\log_{2}\big{(}\cos\left(\phi_{1/2}\right)\big{)}\) is the Lambertian emission order with \(\phi_{1/2}\) as the LED's semi-angle at half power, \(l\) is the distance between the transmitter and the surface, \(\delta\) is the optical to luminous flux conversion factor, \(\Phi\) is the angle of irradiance, \(\varphi\) is the angle of incidence, and \(G\left(\varphi\right)=f^{2}/\sin^{2}\varphi,0\leq\varphi\leq\phi_{1/2}\) is the gain of the non-imaging concentrator which focuses the light from the LEDs into the LC-based RIS, with \(f\) as the internal refractive index of the concentrator. In this expression, \(P\) is the optical transmit power from the LED array which is incident on the LC-based RIS, and \(\exp\left(\Gamma d\right)\times P\times\alpha_{\mathrm{LC}}\) represents the emerged optical power from the LC-based RIS after the incident light has propagated through the LC module and undergone amplification in the presence of an external electric field. The remaining part of the expression denotes the luminous flux of a unit optical power.
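The illuminance expression above translates directly into code; the sketch below evaluates it for one sensing point. All numeric inputs (including the luminous-flux conversion factor \(\delta\)) are placeholders to be replaced by the values used in the article, and the concentrator gain is evaluated at the semi-angle.

```python
import numpy as np

def illuminance(p, alpha_lc, gamma, d, l, phi_irr, phi_inc,
                semi_angle, delta, f=1.5):
    """Surface illuminance (lux) of the LC-based RIS-enabled
    transmitter, following the expression above; angles in radians."""
    m = -1.0 / np.log2(np.cos(semi_angle))       # Lambertian order
    g = f**2 / np.sin(semi_angle)**2             # concentrator gain G
    emerged = np.exp(gamma * d) * p * alpha_lc   # amplified optical power
    return (emerged * (m + 1) / (2 * np.pi * l**2 * delta)
            * np.cos(phi_irr)**m * np.cos(phi_inc) * g)
```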
Fig. 2: Application scenarios of the proposed LC-based RIS-enabled VLC transmitter: (a) transmit signal amplification and coverage enhancement; (b) dynamic beam steering; (c) transmit signal amplification and beam focusing; (d) secured and interference-free VLC.
## IV Analysis of Achievable Data Rate and Illumination Performance
In this section, a performance comparison of the different LED array arrangements and the proposed RIS-enabled VLC transmitter is performed. The performance metrics used in our analysis are the achievable data rate, surface illumination, illumination uniformity, and data rate uniformity. Illumination uniformity can be defined as the ratio between the minimum and the average illumination among all surfaces or sensing points in an indoor environment [4, 5, 15]. Similarly, data rate uniformity refers to the ratio between the minimum and the average data rate among all surfaces or users. According to [5], values close to 1 indicate uniform lighting or data rate conditions and, in general, values above 0.7 are desired. However, a uniformity value of 0.4 is considered the least acceptable for illumination purposes [15].
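Both uniformity metrics reduce to the same min-to-mean ratio over the sensing grid; a minimal helper is sketched below. Applied to the grids behind Figs. 3 and 4, it yields the values collected in Table I.

```python
import numpy as np

def uniformity(values):
    """Min-to-average ratio over all sensing points or users.

    Values close to 1 indicate uniform lighting or data rate;
    >= 0.7 is desired, and 0.4 is the least acceptable value
    for illumination purposes.
    """
    values = np.asarray(values, dtype=float)
    return values.min() / values.mean()
```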
Without loss of generality, a \(5\) m \(\times\)\(5\) m \(\times\)\(3\) m room size is considered [2, 3, 4, 5]. For the distributed LED array placement scheme, the entire room is assumed to be divided into four equal quadrants, each with an LED array ceiling lamp at the center. For the remaining LED array arrangement schemes and the proposed approach, the LED arrays are deployed at the center of the room. The illumination and data rates are measured at a typical desktop height (i.e., receiver plane) of 0.85 m above the floor. To better determine the performances of the various schemes across the whole room, the receiver plane is divided into \(100\times 100\) grid points which serve as sensing points. This is in accordance with the European lighting standard [15], which requires the use of the grid specification to calculate and measure illumination averages and uniformity. For the considered simulations, the information-carrying bandwidth is set to 200 MHz, \(\tau=45^{\circ}\), \(d=0.75\) mm, \(v_{\rm th}=1.34\) V, and \(P=1\) W. The LEDs operate at a wavelength of 510 nm. The analysis for the RIS-enabled transmitter is carried out for an LC-based RIS element with a fixed refractive index \(n_{c}=1.55\) (\(v_{\rm e}=2.1\) V is the required voltage). All other simulation parameters are set according to [13].
Figure 3 compares the illumination distribution in an indoor environment for a VLC system with a transmitter characterized
Fig. 3: Illumination distribution comparison for VLC transmitters characterized by different LED array arrangement and LC-based RIS-enabled VLC transmitter: (a) centralized LED array arrangement (Min. 10 lux, Max. 107 lux); (b) distributed LED array arrangement (Min. 15 lux, Max. 43 lux); (c) ADT LED array placement (Min. 28 lux, Max. 81 lux); (d) proposed centralized LED array with LC-based RIS (Min. 222 lux, Max. 910 lux). The \(x\), \(y\), and \(z\) axes represent the length (m), width (m), and illumination (lux), respectively.
by typical LED array arrangement schemes with that of the proposed RIS-enabled VLC transmitter. This figure is generated by partitioning the receiver plane into small grids and determining the illumination in each grid. Several key observations can be made from Fig. 3. To begin with, the illumination is highest at the center of the room and lowest at the corners for all the different LED array arrangement schemes and the proposed transmitter. This observation can be explained by the Lambertian radiation pattern of LEDs. The highest illumination for the centralized, distributed, and ADT arrangement schemes is significantly below the recommended minimum indoor lighting requirement of 150 lux for any office/gym/home activities as described in [15]. As can be observed, the illumination at the task or activity area, which is typically the center of the room, and at the room corners is very low. As a result, these approaches may not always satisfy the illumination function in VLC. However, this is not the case for the proposed transmitter, as illumination is greatly enhanced due to the light amplification and beam focusing capabilities of the LC-based RIS. The minimum illumination requirement is satisfied for the task or activity area and the corners of the room. Secondly, the proposed approach satisfies the minimum required illumination of 400 lux for reading/office work according to the European lighting standard [15], as more than 50\(\%\) of the total room area has illumination above the threshold of 400 lux. Moreover, the illumination of the immediate surrounding area of the task area and that of the background area fall within the recommended values (i.e., over \(500\) lux for the immediate surrounding and above 200 lux for the background area).2 Thus, it is acceptable that the illumination of the room corners is lower than that of the task or activity area. The above analyses reveal that the proposed LC-based RIS-enabled transmitter has superior illumination performance and fulfills the visual comfort needs. Furthermore, compared with the centralized LED array arrangement scheme (i.e., without the LC-based RIS), up to 2100\(\%\) gain
Fig. 4: Data rate distribution comparison for VLC transmitters characterized by different LED array arrangement and an LC-based RIS-enabled VLC transmitter: (a) centralized LED array arrangement (Min. 0.3595 Gbps, Max. 1.6399 Gbps); (b) distributed LED array arrangement (Min. 0.4351 Gbps, Max. 1.5373 Gbps); (c) ADT LED array placement (Min. 0.9179 Gbps, Max. 2.8024 Gbps); (d) proposed centralized LED array with LC-based RIS (Min. 2.6115 Gbps, Max. 3.7363 Gbps). The \(x\), \(y\), and \(z\) axes represent the length (m), width (m), and data rate (bps), respectively.
in illumination is obtained with the deployment of an RIS at the transmitter side. Note that a similar performance gain is expected when an LC-based RIS is used with the distributed and ADT LED array arrangement schemes. In addition, the proposed approach obtains illumination performance gain of up to 2000\(\%\) and 1325\(\%\) when compared to the distributed and ADT placement schemes, respectively.
Figure 4 illustrates the data rate distribution for the considered LED array arrangement schemes and the proposed RIS-enabled VLC transmitter. This figure is generated by partitioning the receiver plane into small grids and determining the data rate in each grid. It can be observed from this figure that the proposed approach achieves up to 626\(\%\), 500\(\%\), and 184\(\%\) improvement in data rate when compared to the centralized, distributed, and ADT LED array arrangement schemes, respectively. Moreover, only the proposed approach has a minimum data rate on the order of Gbps, as the remaining approaches could only support Mbps download speeds. This reveals that all users can enjoy Gbps download speed irrespective of their location (i.e., in the corner of the room or at the center) for the same number of LED arrays, transmit power, and bandwidth resources. Thus, in addition to the significant increase in the peak data rates, higher data rates can be guaranteed for the vast majority of the locations in the indoor environment. This significant performance enhancement is due to the new reconfigurable and transmit light amplification capability of the RIS-enabled transmitter.
Table I shows the illumination and data rate uniformity for the different LED array arrangement schemes and the proposed approach based on the results in Figs. 3 and 4. The data rate (illumination) uniformity value for any LED array arrangement scheme is the ratio between the minimum value of the data rates (illumination) for all the grid points and the average data rate (average illumination) among all the grid points. The centralized LED array arrangement has the worst illumination and data rate uniformity. Its uniformity values of 0.2371 and 0.3535 are below the acceptable values of 0.40 for illumination and 0.7 for data rate, respectively. Hence, the centralized scheme is characterized by high illumination and data rate fluctuations over the entire room and exhibits an unacceptable performance level in achieving the dual role of communication and illumination. The two remaining LED array arrangement schemes and the proposed approach demonstrate acceptable illumination uniformity and, hence, experience less fluctuation. Besides, these uniformity values (i.e., 0.4378, 0.4755, and 0.4628) are practical since it is acceptable for some parts of the room space to have lower illumination. Moreover, a comparison of the data rate uniformity values of 0.3937, 0.4379, and 0.8168 for the distributed, ADT, and the proposed schemes, respectively, suggests that the proposed approach can support high data rates across the entire room. More reliable and high-quality communication can be guaranteed for all users in an indoor environment with the proposed RIS-enabled transmitter. Unlike illumination, data rate uniformity values of 0.3535, 0.3937, and 0.4379 are not acceptable for VLC. For such values, the data demands of all users may not be satisfied since the achievable data rate becomes highly dependent on the user's location (i.e., extremely high data rate for users in the center of the room and extremely low data rate for users far away from the center).
Finally, Fig. 5 illustrates the achievable data rate and illumination performance of an indoor VLC system for different transmit power values. In this figure, the minimum data rate and illumination have been plotted for different transmit power values for the centralized, distributed, ADT, and the proposed LED array arrangement schemes. It can be observed that the minimum data rate and minimum illumination increase with the transmit power for all the considered approaches. However, the growth trends for the data rate and the illumination distribution are different because, unlike the surface illumination, the data rate is a logarithmic function of the received power. The proposed design outperforms the remaining approaches in terms of minimum data rate when the transmit power is less than 4 W. For transmit power values of 4 W and above, the ADT LED array arrangement scheme achieves a higher minimum data rate than the LC-based RIS-enabled transmitter. The reason for this observation is that, while the data rate for the proposed scheme begins to saturate, that of the ADT approach keeps increasing for transmit power values greater than 4 W. This growth trend of the ADT approach, which is similar to that of the distributed LED array arrangement, is due to the spatial configuration of its LED arrays and the fact that the received power depends on the position of the LED arrays and the angle of irradiance. Although the ADT LED array arrangement scheme demonstrates the best minimum data rate performance for transmit power values above 4 W, its illumination remains worse than that of the proposed approach. Thus, even for higher transmit power values, the centralized, distributed, and ADT LED array arrangement schemes fail to provide acceptable joint data rate and illumination performance. Note that the proposed approach involves placing the LC-based RIS in front of the centralized LED array arrangement scheme. It can be inferred from the simulation results that higher performance gains could be expected when the LC-based RIS is placed in front of the ADT LED arrays since the ADT arrangement scheme outperforms the centralized LED arrays placement scheme.
## V Open Research Opportunities and Challenges
Unlike the deployment of RISs in the channel, which has received great research attention, the application of RISs in VLC transceiver design and performance optimization has yet to be explored. This article has demonstrated the great potential of the LC-based RIS-enabled VLC transmitter. By presenting a novel approach of using an LC-based RIS to (i) achieve a much higher level of illumination, (ii) significantly improve the data rate, and (iii) jointly enhance illumination and data rate uniformity, the following promising research opportunities and challenges have been identified:
TABLE I: Illumination and data rate uniformity for indoors.

| LED Array Arrangement Type | Illumination Uniformity | Data Rate Uniformity |
| --- | --- | --- |
| Centralized LED Array | 0.2371 | 0.3535 |
| Distributed LED Array | 0.4378 | 0.3937 |
| ADT LED Array | 0.4755 | 0.4379 |
| Proposed | 0.4628 | 0.8168 |
**1. Performance Optimization under Joint Illumination and Communication Constraints:** The performance of LC-based RISs is highly dependent on the electronic tuning of the refractive index since its value determines both the amplification and transmission gain coefficients of the LC-based RIS. Thus, the expression for the achievable data rate in VLC systems with an RIS-enabled transmitter differs from that of a VLC system without the RIS. Moreover, it is vital to consider the joint optimization of the LC's refractive index and the available transmit power of the VLC system to further enhance the illumination and communication performances. Several objective functions such as data rate maximization, energy efficiency maximization, load balancing, max-min rate, transmit power minimization, and secrecy rate maximization need to be examined under illumination and communication related constraints (e.g., quality-of-service, quality-of-experience, and LC-based RIS tuning time). Furthermore, novel constraints related to the amplification gain should be studied as a network operator or a customer might want to emphasize or de-emphasize the amplification capability of the LC-based RIS-enabled VLC transmitter. Due to the resulting unique channel gain, illumination, and data rate expressions, the potential high dimensionality of the optimization problems, and the new design constraints, traditional approaches for optimizing the performance of VLC systems cannot be directly adopted. New mathematical optimization techniques with low complexity and machine learning-driven approaches should be explored in further research, proof-of-concept studies, and, finally, the realization of LC-based RIS-enabled transmitters in VLC. These developments could be extended to other VLC applications, such as vehicle-to-vehicle and underwater wireless communication, since their channel models differ from that of the indoor environment.
**2. Analysis of the Impact of VLC and RIS Parameters on Illumination and Communication Performance:** VLC transmitters and systems generally have several design parameters such as the LED's semi-angle at half power, available transmit power and subchannels, room dimension and shape, wall reflection coefficients, and the distance between LED arrays. On the other hand, an LC-based RIS element has parameters that can affect its performance gains. Such parameters include the threshold voltage, thickness of the LC, and transmission wavelength. Due to page limitations, this article considered fixed values for the parameters mentioned above. It would be worth examining how different system parameters will impact the system performance and potentially yield further improvements in data rate and illumination. Moreover, optimization problems with such parameters as decision variables need to be considered to determine the optimal values for various performance metrics.
**3. Noise Effect on the Performance of the LC-based RIS:** The LC-based RIS performs transmit light amplification through the process of stimulated emission that generates new photons. In practice, the photon generation process can limit the data rate and illumination performances. This is because tuning LCs takes time and is characterized by the response time (which is typically on the order of milliseconds). In addition, the photon generation process might result in noise affecting the system's performance. It is therefore important to investigate ways of decreasing the response time, how often to perform LC tuning, and the noise effects on the performance of LC-based RIS-enabled VLC transmitters.
**4. Adaptive Cell Formation for VLC:** The ability of LC-based RIS-enabled transmitters to focus, amplify, and steer emitted light due to their electronically adjustable refractive index makes them suitable for the design of adaptive cell formation algorithms. Unlike ADTs, where the resulting cells formed by the highly directional LED arrays cannot be altered with changes in the indoor environment (e.g., change in user density and distribution, data rate and illumination requirements, and mobility), LC-based RIS-enabled transmitters permit the design of algorithms to control both the refractive index and transmit power to move illumination coverage areas indoors and create adaptive communication cells. The impact of such joint power allocation and cell formation algorithms on the illumination and communication performance of LC-based RIS-enabled VLC systems is another exciting research area for future work.
**5. Impact on Cost of VLC:** The development of RISs, in general, and optical RISs, in particular, is still at an early stage. As a result, much has yet to be known about the cost of RISs. Inserting an LC-based RIS into an LED package can affect its cost. Since low deployment cost is one of the key features that network operators look out for, it is critical to investigate how RISs would affect the cost of LEDs. Such investigations should focus on theoretical studies and practical deployments that consider imperfections in the system model to accurately examine the cost-benefit analysis.
Fig. 5: Impact of transmit power on the performance of a VLC system for the considered approaches: (a) minimum achievable data rate vs. transmit power; (b) minimum illumination vs. transmit power.
**6. Enhanced VLC for joint communication, illumination, and sensing:** VLC has been identified as a key technology for indoor joint communication and sensing. The quality of the received signal plays a critical role in the communication performance and sensing accuracy of such an integrated sensing and communication system. The signal focusing and amplification capabilities of the proposed LC-based RIS-enabled VLC transmitter can be leveraged to enhance the received signal strength in VLC-based integrated sensing and communication systems.
## VI Conclusion
This article investigated a novel approach of using LC-based RIS to improve the data rate and illumination distribution in indoor VLC systems. Specifically, the illumination and data rate performances of popular approaches for LED array arrangement were first analyzed. Then, a novel VLC transmitter design that involves placing LC-based RIS in front of the LED arrays was introduced, and the channel gain and data rate expressions were presented. Simulation results revealed that, unlike popular LED array arrangement schemes, the proposed LC-based RIS-enabled VLC transmitter achieves a higher data rate and illumination distribution. Moreover, it enhances data rate uniformity and demonstrates that a much higher level of data rate and illumination can be jointly obtained from the same number of LED arrays and transmit power value when an LC-based RIS with a 1.34 V - 10 mA source is employed. Thus, more energy efficient RIS-aided VLC systems can be designed due to the nearly-passive nature of the LC-based RIS. Furthermore, several open challenges have been highlighted. Since the use of RISs at the transmitter side in VLC remains unexplored, this article can provide a helpful guide and inspire further studies on optical RIS-aided communications.
|
2305.06514 | Skew spectrum and smoothed skewness of 21-cm signals from epoch of
reionization | Due to the non-linear ionizing and heating processes, the 21-cm signals from
epoch of reionization (EoR) are expected to have strong non-Gaussian
fluctuations. In this paper, we use the semi-numerical simulations to study the
non-Gaussian statistics i.e. skew spectrum and smoothed skewness of the 21-cm
signals from EoR. We find the 21-cm skew spectrum and smoothed skewness have
similar evolution features with the 21-cm bispectrum. All of them are sensitive
to the EoR models, while not too much to the cosmic volume applied. With the
SKA1-low telescope as reference, we find both the skew spectrum and smoothed
skewness have much higher S/N ratios than the 21-cm bispectrum. | Qing-Bo Ma, Ling Peng | 2023-05-11T01:27:47Z | http://arxiv.org/abs/2305.06514v1 | # Skew spectrum and smoothed skewness of 21-cm signals from epoch of reionization
###### Abstract
Due to the non-linear ionizing and heating processes, the 21-cm signals from epoch of reionization (EoR) are expected to have strong non-Gaussian fluctuations. In this paper, we use the semi-numerical simulations to study the non-Gaussian statistics i.e. skew spectrum and smoothed skewness of the 21-cm signals from EoR. We find the 21-cm skew spectrum and smoothed skewness have similar evolution features with the 21-cm bispectrum. All of them are sensitive to the EoR models, while not too much to the cosmic volume applied. With the SKA1-low telescope as reference, we find both the skew spectrum and smoothed skewness have much higher S/N ratios than the 21-cm bispectrum.
keywords: dark ages, reionization, first stars - methods: numerical - early Universe
## 1 Introduction
Following the formation of the first galaxies and first stars, the Universe undergoes the phase transition from fully neutral to highly ionized (Furlanetto et al., 2006; Dayal and Ferrara, 2018), which is named the Epoch of Reionization (EoR). Many facilities have already measured properties of the EoR: e.g. the optical depth measured by the Cosmic Microwave Background (CMB) projects (\(\tau=0.0544\pm 0.007\) by Planck Collaboration et al., 2020) indicates that the mid-point redshift of the EoR is at \(z=7.68\pm 0.79\), the Gunn-Peterson trough measured in the spectra of high-\(z\) quasars (Fan et al., 2006) confirms the end of the EoR at \(z\sim 6\), and many high-\(z\) galaxies during the EoR have been observed by the Hubble Space Telescope (HST) (Bouwens et al., 2015) and the James Webb Space Telescope (JWST) (Donnan et al., 2023). Nevertheless, the most promising method is to measure the 21-cm signals of the hyperfine transition line of neutral hydrogen (Furlanetto et al., 2006; Koopmans et al., 2015). Indeed, the measurements of 21-cm signals, including the global signal and the interferometric one, have been one of the main goals of many radio telescopes, e.g. the Experiment to Detect the Global EoR Signature (EDGES, Bowman et al., 2018), the Shaped Antenna measurement of the background Radio Spectrum 3 telescope (SARAS-3, Bevins et al., 2022), the Low-Frequency Array (LOFAR, Mertens et al., 2020), the Murchison Widefield Array (MWA, Trott et al., 2020), the Hydrogen Epoch of Reionization Array (HERA, Abdurashidova et al., 2022), and the Square Kilometre Array (SKA, Koopmans et al., 2015).
Although no clearly definitive detection has been reported by the 21-cm facilities, some telescopes have released early results on the 21-cm signal, e.g. the EDGES project has reported an absorption profile in the global 21-cm signal at 78 MHz (i.e. \(z\sim 17\)) (Bowman et al., 2018), although this is still debated (e.g. Hills et al., 2018; Singh et al., 2022) and has not been confirmed by the recent observations of the SARAS-3 telescope (Bevins et al., 2022). The interferometric telescopes have given some upper limits on the 21-cm power spectra \(\Delta^{2}_{\rm 21cm}\), e.g. a 2-\(\sigma\) upper limit of \(\Delta^{2}_{\rm 21cm}<(73)^{2}\,{\rm mK}^{2}\) at \(z\approx 9.1\) and \(k=0.075\,h\,{\rm Mpc}^{-1}\) by the LOFAR telescope (Mertens et al., 2020), \(\Delta^{2}_{\rm 21cm}\lesssim(43)^{2}\,{\rm mK}^{2}\) at \(z=6.5\) and \(k=0.14\,h\,{\rm Mpc}^{-1}\) by the MWA telescope (Trott et al., 2020), and \(\Delta^{2}_{\rm 21cm}\leq(30.76)^{2}\,{\rm mK}^{2}\) at \(z=7.9\) and \(k=0.192\,h\,{\rm Mpc}^{-1}\) by the HERA telescope (Abdurashidova et al., 2022). With these results, some extreme EoR models have already been ruled out (e.g. Ghara et al., 2020, 2021; Greig et al., 2021, 2021).
Since the ionizing and heating processes during the EoR are highly non-linear, the fluctuations in the 21-cm images are very non-Gaussian (Majumdar et al., 2018), which can be measured by e.g. the skewness and kurtosis (Kittiwitsit et al., 2022), the three-point correlation function (Hoffmann et al., 2019), the bispectrum (Watkinson et al., 2019; Hutter et al., 2020; Ma et al., 2021) and the bispectrum phase (Thyagarajan et al., 2020), and the position-dependent power spectra (Giri et al., 2019). Although these high-order statistics may be hard to measure (Watkinson et al., 2021), they are very sensitive to the physical processes of the EoR, e.g. the reionization history (Majumdar et al., 2018), the X-ray heating (Watkinson et al., 2019; Ma et al., 2021), the ionization topologies (Hutter et al., 2020), and the effect of redshift space distortion (Majumdar et al., 2020). Meanwhile, their combination with 21-cm power spectrum observations will improve the constraints on the parameters of the EoR model (Shimabukuro et al., 2017).
The calculations of non-Gaussian features (e.g. the three-point correlation function and the bispectrum) are usually very expensive both for simulations and observational data, even with the FFT-based technique developed by Watkinson et al. (2017) to compute the 21-cm bispectrum, while some quantities are easier to compute and can capture similar features, e.g. the skew spectrum that has been used to describe the non-Gaussian features of the CMB (Cooray, 2001) and of the galaxy distribution (Moradinezhad Dizgah et al., 2020; Dai et al., 2020). The computation of the skewness is also very convenient, although it provides only one value for each 21-cm image. However, after smoothing out the smaller-scale fluctuations with different \(k\)s, the 21-cm skewness (referred to as the smoothed skewness) shows similar behaviour to the 21-cm bispectrum (Ma et al., 2021). In this paper, we will study how the skew spectrum and smoothed skewness of 21-cm signals from the EoR evolve with redshift, their relation with the reionization history, and their detectability by the 21-cm telescope SKA1-low. We will also compare the results with those of the 21-cm bispectrum.
The rest of this paper is organized as follows: the simulations and the methods to compute the bispectrum, skew spectrum and smoothed skewness are described in Section 2, the results are presented in Section 3, and the conclusions are summarized in Section 4. The adopted cosmological parameters are \(\Omega_{\Lambda}=0.685\), \(\Omega_{m}=0.315\), \(\Omega_{b}=0.0493\), \(\sigma_{8}=0.811\), \(n_{s}=0.965\) and \(h=0.674\) (Planck Collaboration et al., 2020).
## 2 Method
We describe the simulations adopted in Sec. 2.1, and then the methods to compute the bispectrum in Sec. 2.2, the skew spectrum in Sec. 2.3, and the smoothed skewness in Sec. 2.4.
### Simulations
We use the semi-numerical simulation code 21CMFAST (Mesinger et al., 2011) to mimic the evolution of the matter density, the ionization and temperature state of the gas medium, and the 21-cm differential brightness temperature (DBT, \(\delta T_{\rm 21cm}\)). The simulations start at \(z=35\) and end at \(z=6\), with 57 snapshot outputs from \(z=20\) to 6. The fiducial simulation (named L600) has a box length of 600 cMpc and a grid number of 800\({}^{3}\). The number of ionizing photons per stellar baryon adopted is \(N_{\rm UV}=5000\). The fraction of collapsed gas that forms stars is \(f_{\rm s}=f_{\rm s,0}\times(M_{\rm h}/10^{10}{\rm M}_{\odot})^{\alpha_{\rm s}}\), where \(M_{\rm h}\) is the halo mass, \(f_{\rm s,0}=0.05\) and \(\alpha_{\rm s}=0.5\). The escape fraction is \(f_{\rm esc}=f_{\rm esc,0}\times(M_{\rm h}/10^{10}{\rm M}_{\odot})^{\alpha_{\rm esc}}\) (Park et al., 2019), where \(f_{\rm esc,0}=0.1\) and \(\alpha_{\rm esc}=-0.5\). The fraction of halos hosting active star-forming galaxies (i.e. the duty cycle) is \(f_{\rm duty}\propto\exp(-M_{\rm turnover}/M_{\rm h})\) (Greig et al., 2022), where the halo mass threshold for efficient star formation is \(M_{\rm turnover}=5\times 10^{8}\,{\rm M}_{\odot}\). The SED of the X-ray binaries is from Fragos et al. (2013), with the luminosity \(L_{<2\rm keV}/{\rm SFR}=10^{40.5}{\rm erg\,s^{-1}\,M_{\odot}^{-1}\,yr}\).
We also run three more simulations as comparisons. The first one has a faster ionization process (named L600_fast), obtained by increasing the star-forming fraction \(f_{\rm s,0}\) and the halo mass threshold \(M_{\rm turnover}\) for efficient star formation, i.e. with \(f_{\rm s,0}=0.15\) and \(M_{\rm turnover}=5\times 10^{9}\,{\rm M}_{\odot}\). The second one has a slower ionization process (named L600_slow), i.e. with \(f_{\rm s,0}=0.024\) and \(M_{\rm turnover}=5\times 10^{7}\,{\rm M}_{\odot}\). Note that in these two simulations, we increase/decrease the halo mass threshold \(M_{\rm turnover}\) by one order of magnitude, while fine-tuning the parameter \(f_{\rm s,0}\) to make sure the simulations have half-ionization redshifts consistent with L600 (see Fig. 1). The third one has a box length of 1200 cMpc and a grid number of 800\({}^{3}\) (named L1200), and the same parameter values of the EoR model as L600. Compared to L600, L1200 covers a larger cosmic volume but with a lower spatial resolution.
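For reference, the global quantities plotted in Fig. 1 can be obtained from the simulation boxes with a few lines of numpy; the sketch below assumes the snapshots are available as 3-D arrays (the loading step depends on the 21CMFAST output format and is omitted).

```python
import numpy as np

def global_quantities(dT_box, xHII_box):
    """Volume-averaged ionization fraction, global mean DBT and its
    standard deviation from one snapshot (both inputs are 3-D arrays)."""
    return xHII_box.mean(), dT_box.mean(), dT_box.std()
```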
As a reference, Fig. 1 shows the evolution history of the volume averaged ionization fraction \(\bar{x}_{\rm HII}\), the global mean \(\delta\bar{T}_{\rm 21cm}\) and the standard deviation \(\sigma_{\rm 21cm}\) of \(\delta T_{\rm 21cm}\) from the four simulations. These simulations have \(\bar{x}_{\rm HII}=0.5\) at \(z\approx 7.4\), with the redshift gaps \(\Delta z\) from \(\bar{x}_{\rm HII}=0.1\) to 0.9 being \(\Delta z\approx 3.8\) for L600 and L1200, \(\Delta z\approx 2.5\) for L600_fast and \(\Delta z>5\) for L600_slow. The simulation L1200 has similar histories of \(\bar{x}_{\rm HII}\) and \(\delta\bar{T}_{\rm 21cm}\) to L600, while it has smaller \(\sigma_{\rm 21cm}\) than the latter, due to its lower spatial resolution. Compared to L600, the faster ionization of L600_fast results in a higher peak of \(\delta\bar{T}_{\rm 21cm}\) at \(z\approx 8.2\), and a higher peak of \(\sigma_{\rm 21cm}\) at \(z\approx 7.5\). The higher halo mass threshold \(M_{\rm turnover}\) delays the X-ray heating in L600_fast, thus the absorption trough on \(\delta\bar{T}_{\rm 21cm}\) is at \(z\approx 11.5\), lower than that of L600 (\(z\approx 12.2\)), while it increases the peak of \(\sigma_{\rm 21cm}\) in the early time of the EoR (\(z\approx 11.4\)). Instead, the slower ionization of L600_slow leads to a lower peak of \(\delta\bar{T}_{\rm 21cm}\) at \(z\approx 8.2\) and of \(\sigma_{\rm 21cm}\) at \(z\approx 7.5\). The lower value of \(M_{\rm turnover}\) in L600_slow allows star formation within halos with mass lower
Figure 1: History of volume averaged ionization fraction \(\bar{x}_{\rm HII}\) (top), global 21-cm DBT \(\delta\bar{T}_{\rm 21cm}\) (central) and standard deviation \(\sigma_{\rm 21cm}\) of \(\delta T_{\rm 21cm}\) (bottom) from simulations L600 (solid black), L600_fast (dotted red), L600_slow (dashed blue) and L1200 (dash-dotted magenta).
than in L600, and thus earlier X-ray heating, i.e. an earlier absorption trough on \(\delta\bar{T}_{\rm 21cm}\) (at \(z\approx 12.6\)) than in L600, while it reduces the peak of \(\sigma_{\rm 21cm}\) at \(z\approx 12.5\).
### Bispectrum
The bispectrum of 21-cm image (\(b_{21\rm cm}\)) is the statistics of three point correlations of \(\delta\mathcal{T}_{21\rm cm}\) in the Fourier space, which can be expressed as:
\[b_{\rm 21cm}(\mathbf{k_{1}},\mathbf{k_{2}},\mathbf{k_{3}})=\delta_{D}(\mathbf{k_{1}}+\mathbf{k_{2}}+\mathbf{k_{3}})\times\langle\overline{\delta T}_{\rm 21cm}(\mathbf{k_{1}})\overline{\delta T}_{\rm 21cm}(\mathbf{k_{2}})\overline{\delta T}_{\rm 21cm}(\mathbf{k_{3}})\rangle \tag{1}\]
where \(\overline{\delta T}_{\rm 21cm}(\mathbf{k_{i}})\) (\(i=1,2,3\)) is the Fourier transform of \(\delta T_{\rm 21cm}\). With different lengths of \(\mathbf{k_{1}}\), \(\mathbf{k_{2}}\) and \(\mathbf{k_{3}}\), i.e. \(k_{1}\), \(k_{2}\) and \(k_{3}\), \(b_{\rm 21cm}\) has many modes (see e.g. Majumdar et al., 2020) that denote different non-Gaussian features. In this paper, we will focus only on that of equilateral triangles, i.e. \(k_{1}=k_{2}=k_{3}=k\).
We use the fast Fourier transform (FFT) based technique (Watkinson et al., 2017) to compute the \(b_{\rm 21cm}\). It is much faster than the traditional triangle counting technique, while giving consistent \(b_{\rm 21cm}\) results (Watkinson et al., 2017). We also normalize the 21-cm bispectrum as \(B_{\rm 21cm}(k)=b_{\rm 21cm}(k)\times k^{6}/(2\pi^{2})^{2}\) (Ma et al., 2021) in the following.
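A minimal numpy version of the FFT-based equilateral estimator is sketched below: each \(k\)-bin is selected by a spherical shell mask in Fourier space, and the triangle count is obtained from the inverse transform of the mask itself. The binning and normalization conventions are our own illustrative choices.

```python
import numpy as np

def equilateral_bispectrum(field, box_len, k_edges):
    """FFT-based equilateral bispectrum b(k) of a 3-D box.

    field   : dT_21cm box (cubic grid)
    box_len : comoving box length (Mpc)
    k_edges : bin edges in Mpc^-1
    """
    n = field.shape[0]
    n_tot = n**3
    vol = box_len**3
    delta_k = np.fft.fftn(field - field.mean())
    kf = 2.0 * np.pi * np.fft.fftfreq(n, d=box_len / n)
    kx, ky, kz = np.meshgrid(kf, kf, kf, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)

    b = np.empty(len(k_edges) - 1)
    for i in range(len(k_edges) - 1):
        shell = (kmag >= k_edges[i]) & (kmag < k_edges[i + 1])
        f_x = np.fft.ifftn(delta_k * shell).real      # shell-filtered field
        i_x = np.fft.ifftn(shell.astype(float)).real  # counts the triangles
        b[i] = vol**2 / n_tot**3 * (f_x**3).sum() / (i_x**3).sum()
    return b

# Normalization used in the text: B(k) = b(k) * k^6 / (2 pi^2)^2.
```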
### Skew spectrum
Following the definition of skew spectrum for the CMB (Cooray, 2001) and the galaxy distribution (Dai et al., 2020; Dai and Xia, 2020), the skew spectrum of 21-cm images (\(s_{21\rm cm}\)) is defined as the cross-spectrum of \(\mathbf{\Phi}_{T_{21\rm cm}}=(\delta T_{21\rm cm}-\delta\overline{T}_{21\rm cm}) ^{2}\) with \(\delta T_{21\rm cm}\), i.e.
\[s_{21\rm cm}(\mathbf{k})=\delta_{D}(\mathbf{k+k^{\prime}})\times\langle\overline{\Phi }_{T_{21\rm cm}}(\mathbf{k})\overline{\delta T}_{21\rm cm}(\mathbf{k^{\prime}})\rangle \tag{2}\]
where \(\overline{\Phi}_{T_{21\rm cm}}(\mathbf{k})\) is the Fourier transform of \(\mathbf{\Phi}_{T_{21\rm cm}}\).
Since \(s_{\rm 21cm}\) is the integration of the \(b_{\rm 21cm}\) modes (see the discussions in e.g. Dai et al., 2020), it is expected to display similar features to \(b_{\rm 21cm}\). As the computation of \(s_{\rm 21cm}\) is actually a two-point cross-correlation, it is much faster than that of \(b_{\rm 21cm}\). To keep the same units as \(B_{\rm 21cm}\), the 21-cm skew spectrum is normalized as \(S_{\rm 21cm}(k)=s_{\rm 21cm}(k)\times k^{3}/2\pi^{2}\).
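Since Eq. (2) is just a cross power spectrum, the estimator only needs two FFTs; a minimal sketch (with our own binning conventions) follows.

```python
import numpy as np

def skew_spectrum(field, box_len, k_edges):
    """Skew spectrum s(k): cross-spectrum of (dT - mean)^2 with dT."""
    n = field.shape[0]
    n_tot = n**3
    vol = box_len**3
    d = field - field.mean()
    d_k = np.fft.fftn(d)
    phi_k = np.fft.fftn(d**2)                     # FT of Phi_{T21cm}
    kf = 2.0 * np.pi * np.fft.fftfreq(n, d=box_len / n)
    kx, ky, kz = np.meshgrid(kf, kf, kf, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)

    s = np.empty(len(k_edges) - 1)
    for i in range(len(k_edges) - 1):
        shell = (kmag >= k_edges[i]) & (kmag < k_edges[i + 1])
        cross = (phi_k[shell] * np.conj(d_k[shell])).real.mean()
        s[i] = vol * cross / n_tot**2             # cross power spectrum
    return s

# Normalization used in the text: S(k) = s(k) * k^3 / (2 pi^2).
```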
### Smoothed skewness
The smoothed skewness (\(\Gamma_{\rm 21cm}\)) is defined as the skewness of the \(\delta T_{\rm 21cm}\) images after smoothing out the smaller-scale fluctuations with wave-number \(>k\), i.e. keeping only the signals on scales larger than \(2\pi/k\), which can be computed by:
\[\Gamma_{\rm 21cm}(k)=\frac{\sum\left(\delta T_{{\rm 21cm},k}-\delta\overline{T}_{\rm 21cm}\right)^{3}}{N} \tag{3}\]

where \(\delta T_{{\rm 21cm},k}\) denotes the \(\delta T_{\rm 21cm}\) images after smoothing out the fluctuations with wave-number \(>k\), and \(N\) is the cell number after smoothing.
Since the calculation of \(\Gamma_{\rm 21cm}\) does not need a Fourier transform, it is faster than that of \(B_{\rm 21cm}\); however, it loops over the whole 21-cm image for each \(k\), and is thus slower than \(S_{\rm 21cm}\) when producing the full spectrum. As shown in Ma et al. (2021), \(\Gamma_{\rm 21cm}\) can present similar non-Gaussian features to \(B_{\rm 21cm}\). Actually, \(\Gamma_{\rm 21cm}\) is the integration of the \(b_{\rm 21cm}\) modes (Shimabukuro et al., 2016):
\[\Gamma_{21\rm cm}(k)=\int^{k}\frac{\rm d^{3}\mathbf{k}_{1}}{(2\pi)^{3}}\int^{k} \frac{\rm d^{3}\mathbf{k}_{2}}{(2\pi)^{3}}b_{21\rm cm}(\mathbf{k_{1},k_{2},-k_{1}-k_{ 2}}). \tag{4}\]
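The smoothing in Eq. (3) is a sharp low-pass filter in Fourier space; a minimal sketch is given below. Note the article reassigns the smoothed signal onto coarser cells, whereas this sketch simply filters on the original grid, which we assume is equivalent for the skewness up to binning details.

```python
import numpy as np

def smoothed_skewness(field, box_len, k_cut):
    """Skewness of the 21-cm box after removing all Fourier modes
    with wave-number > k_cut, cf. Eq. (3)."""
    n = field.shape[0]
    kf = 2.0 * np.pi * np.fft.fftfreq(n, d=box_len / n)
    kx, ky, kz = np.meshgrid(kf, kf, kf, indexing="ij")
    keep = np.sqrt(kx**2 + ky**2 + kz**2) <= k_cut
    smoothed = np.fft.ifftn(np.fft.fftn(field) * keep).real
    dev = smoothed - smoothed.mean()
    return (dev**3).mean()
```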
## 3 Results
We present the evolution features of \(B_{21\rm cm}\), \(S_{21\rm cm}\) and \(\Gamma_{21\rm cm}\) in Sec. 3.1, their model dependence in Sec. 3.2, and their detectability in Sec. 3.3.
### Evolution features
Fig. 2 shows the \(B_{\rm 21cm}\), \(S_{\rm 21cm}\) and \(\Gamma_{\rm 21cm}\) from the simulation L600, at three \(z\)s. At the early stage of the EoR, e.g. at \(z=11.4\), i.e. after the X-ray heating (see Fig. 1), the non-Gaussianity of \(\delta T_{\rm 21cm}\) is dominated by the matter density, thus \(B_{\rm 21cm}\), \(S_{\rm 21cm}\) and \(\Gamma_{\rm 21cm}\) are all positive within \(k=0.1-2\,\rm Mpc^{-1}\). With the proper normalization discussed in Sec. 2, they present similar amplitudes. At \(z=8.1\), the ionized bubbles start to dominate the behaviour of the non-Gaussian features, which leads to negative \(B_{\rm 21cm}\) (Majumdar et al., 2018). Consistently, \(S_{\rm 21cm}\) and \(\Gamma_{\rm 21cm}\) are also negative, except that \(S_{\rm 21cm}\) is positive at \(k>1.3\,\rm Mpc^{-1}\). At the end stage of the EoR, e.g. at \(z=6.6\), the non-Gaussian features of \(\delta T_{\rm 21cm}\) are dominated by the islands of
neutral hydrogen, which results in some positive \(B_{\rm 21cm}\) (Majumdar et al., 2018), e.g. at \(k<0.35\,{\rm Mpc}^{-1}\). In contrast, \(S_{\rm 21cm}\) and \(\Gamma_{\rm 21cm}\) are fully positive within \(k=0.1-2\,{\rm Mpc}^{-1}\). This is due to the fact that both \(S_{\rm 21cm}\) and \(\Gamma_{\rm 21cm}\) are integrations of \(B_{\rm 21cm}\) modes, i.e. the results of \(S_{\rm 21cm}\) and \(\Gamma_{\rm 21cm}\) at \(k>0.35\,{\rm Mpc}^{-1}\) can include the contributions of \(B_{\rm 21cm}\) modes from the scales with \(k<0.35\,{\rm Mpc}^{-1}\).
Fig. 3 presents the redshift evolution of \(B_{\rm 21cm}\), \(S_{\rm 21cm}\) and \(\Gamma_{\rm 21cm}\) from the simulation L600 at three \(k\)s. They display similar evolution features at all three \(k\)s. Specifically, all of them are negative at \(z>15\), i.e. in the phase of Ly\(\alpha\) pumping that couples the spin temperature of neutral hydrogen to the kinetic temperature of the gas medium. When the X-ray radiation dominates the heating of the IGM, i.e. within \(z=[10,12]\), they become positive, as the X-ray heating positively couples the spin temperature of neutral hydrogen to the matter density. As the ionization continues, the ionized bubbles start to dominate the non-Gaussian features of \(\delta T_{\rm 21cm}\) and lead to negative \(B_{\rm 21cm}\), \(S_{\rm 21cm}\) and \(\Gamma_{\rm 21cm}\). As mentioned earlier, at the end of the EoR, e.g. \(z<7\), the non-Gaussian features of \(\delta T_{\rm 21cm}\) are dominated by the neutral hydrogen islands, which results in positive \(B_{\rm 21cm}\), \(S_{\rm 21cm}\) and \(\Gamma_{\rm 21cm}\). However, some differences also appear at the three \(k\)s. Specifically, at \(k=0.1\,{\rm Mpc}^{-1}\), the evolution of \(\Gamma_{\rm 21cm}\) is similar to that of \(B_{\rm 21cm}\), while \(S_{\rm 21cm}\) shows a higher absolute amplitude. This is because the \(\Gamma_{\rm 21cm}\) at \(k=0.1\,{\rm Mpc}^{-1}\) comes only from the \(B_{\rm 21cm}\) modes at \(k\leq 0.1\,{\rm Mpc}^{-1}\), and thus shows features more similar to \(B_{\rm 21cm}\). At \(k=0.3\,{\rm Mpc}^{-1}\), the absolute amplitude of \(\Gamma_{\rm 21cm}\) is higher than that of \(B_{\rm 21cm}\) and is closer to that of \(S_{\rm 21cm}\), since both the \(\Gamma_{\rm 21cm}\) and the \(S_{\rm 21cm}\) at \(k=0.3\,{\rm Mpc}^{-1}\) include many \(B_{\rm 21cm}\) modes and thus have higher absolute amplitudes than \(B_{\rm 21cm}\). At \(k=1.0\,{\rm Mpc}^{-1}\), both \(S_{\rm 21cm}\) and \(\Gamma_{\rm 21cm}\) are positive at \(z<8\), while \(B_{\rm 21cm}\) is negative at the same \(z\)s. As mentioned before, this is because \(S_{\rm 21cm}\) and \(\Gamma_{\rm 21cm}\) are integrations of \(B_{\rm 21cm}\) modes, i.e. they include the contributions from \(B_{\rm 21cm}\) modes with smaller \(k\)s.
### Model dependence
Fig. 4 shows the redshift evolution of \(B_{21\rm cm}\), \(S_{21\rm cm}\) and \(\Gamma_{21\rm cm}\) from simulations L600_fast, L600_slow and L1200. As showed in Fig. 1, these three simulations have the same half-ionization \(z\) with L600, while L600_fast (L600_slow) has faster (slower) ionization process, and L1200 has larger box length.
With faster ionization, i.e. simulation L600_fast, the 21-cm signals and fluctuations are more significant (see Fig. 1). Thus \(B_{\rm 21cm}\), \(S_{\rm 21cm}\) and \(\Gamma_{\rm 21cm}\) present higher amplitudes than in L600. In L600_fast, the evolution features of \(B_{\rm 21cm}\), \(S_{\rm 21cm}\) and \(\Gamma_{\rm 21cm}\) are consistent at \(k=0.1\) and 0.3 Mpc\({}^{-1}\), while they are obviously different at \(k=1.0\,{\rm Mpc}^{-1}\). With slower ionization, i.e. simulation L600_slow, the 21-cm signals and fluctuations are smaller (see Fig. 1). Thus the amplitudes of \(B_{\rm 21cm}\), \(S_{\rm 21cm}\) and \(\Gamma_{\rm 21cm}\) are lower than in L600. At \(k=0.1\,{\rm Mpc}^{-1}\), the common features of \(B_{\rm 21cm}\), \(S_{\rm 21cm}\) and \(\Gamma_{\rm 21cm}\) are not as significant as in L600, e.g. \(\Gamma_{\rm 21cm}\) is obviously different from \(B_{\rm 21cm}\) at \(z<9\), while \(B_{\rm 21cm}\), \(S_{\rm 21cm}\) and \(\Gamma_{\rm 21cm}\) have clearly consistent evolution at \(k=0.3\,{\rm Mpc}^{-1}\) and \(1.0\,{\rm Mpc}^{-1}\), especially at \(z>8\). L1200 covers a larger cosmic volume than L600, i.e. \(S_{\rm 21cm}\) and \(\Gamma_{\rm 21cm}\) include more large-scale modes of \(B_{\rm 21cm}\) than those in L600, while they display almost the same evolution as the latter. This means that simulations or 21-cm surveys covering larger cosmic volumes in the near future would not significantly affect the results of \(B_{\rm 21cm}\), \(S_{\rm 21cm}\) and \(\Gamma_{\rm 21cm}\).
In summary, for the different EoR models, \(S_{\rm 21cm}\) and \(\Gamma_{\rm 21cm}\) still present evolution features similar to those of \(B_{\rm 21cm}\), and all of them are sensitive to the EoR models, while not too much to the cosmic volume applied.
### Detectability
We adopt SKA1-low telescope as the reference facility to predict the signal/noise ratios (S/N) of \(B_{21\rm cm}\), \(S_{21\rm cm}\) and \(\Gamma_{21\rm cm}\). SKA1-low has \(N_{\rm st}=224\) stations in the compact core with diameter \(D=1\,{\rm km}\) to measure the 21-cm visibility from EoR. The root mean square (rms) of instrumental noise on the \(\delta T_{21\rm cm}\) images can be estimated by:
\[\sigma_{N}=\frac{\lambda^{2}}{S\Omega_{\rm beam}\sqrt{N_{\rm st}(N_{\rm st}-1)R_{\rm width}t_{\rm int}}} \tag{5}\]
where \(S\) is the telescope sensitivity from Dewdney et al. (2016), the solid angle of beam \(\Omega_{\rm beam}=1.133\theta^{2}\), the angular resolution \(\theta=\lambda/D\), and \(\lambda=21\,{\rm cm}\times(1+z)\). We assume the spectral resolution \(R_{\rm width}=0.1\,{\rm MHz}\), and the integration time \(t_{\rm int}=1000\,{\rm hr}\). We also
Figure 3: Evolution of \(B_{21\rm cm}\) (dashed red), \(S_{21\rm cm}\) (dash-dotted cyan) and \(\Gamma_{21\rm cm}\) (dotted magenta) with redshift, at \(k=0.1\,{\rm Mpc}^{-1}\) (top), \(k=0.3\,{\rm Mpc}^{-1}\) (central), and \(k=1.0\,{\rm Mpc}^{-1}\) (bottom), from the simulation L600.
assume that the total survey area of SKA1-low is \(100\,\mathrm{deg}^{2}\), and the frequency band for each observation is \(2\,\mathrm{MHz}\).
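For concreteness, Eq. (5) is implemented below; the sensitivity \(S\) must be supplied from Dewdney et al. (2016), and its units set the units of \(\sigma_{N}\), so the defaults here are placeholders.

```python
import numpy as np

def noise_rms(z, sensitivity, n_st=224, d_core=1.0e3,
              r_width=0.1e6, t_int=1000.0 * 3600.0):
    """Instrumental noise rms on the dT_21cm images, Eq. (5).

    z           : redshift of the 21-cm observation
    sensitivity : telescope sensitivity S (from Dewdney et al. 2016)
    n_st        : number of stations in the compact core
    d_core      : core diameter (m)
    r_width     : spectral resolution (Hz)
    t_int       : integration time (s)
    """
    lam = 0.21 * (1.0 + z)               # observed wavelength (m)
    theta = lam / d_core                 # angular resolution (rad)
    omega_beam = 1.133 * theta**2        # beam solid angle (sr)
    return lam**2 / (sensitivity * omega_beam
                     * np.sqrt(n_st * (n_st - 1) * r_width * t_int))
```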
By assuming the instrumental noise is fully Gaussian, the bispectrum noise can be estimated by \(b_{N}=\sigma_{N}^{3}x^{4}y^{2}\) (Yoshiura et al., 2015), where \(x\) and \(y\) are the comoving distances corresponding to the angular resolution \(\theta\) and the frequency resolution \(R_{\mathrm{width}}\), respectively. Including the sampling error, the expected S/N ratio of \(B_{\rm 21cm}\) is
\[\frac{\mathrm{S}}{\mathrm{N}}=\frac{b_{\rm 21cm}(k)}{\sqrt{\left(b_{\rm 21cm}^{2}(k)+b_{N}^{2}\right)/N_{\rm tri}(k)}} \tag{6}\]

where \(N_{\rm tri}(k)\) is the number of triangles of the bispectrum mode with \(k\) measured by SKA1-low (see the formula in Yoshiura et al., 2015; Ma et al., 2021).
The instrumental noise on the skew spectrum is approximately computed by \(s_{N}=\sigma_{N}^{3}x^{2}y\). This is similar to the power spectrum case (see e.g. Yoshiura et al., 2015), but multiplied by an extra \(\sigma_{N}\). The expected S/N of \(S_{\rm 21cm}\) is then
\[\frac{\mathrm{S}}{\mathrm{N}}=\frac{s_{\mathrm{21cm}}(k)}{\sqrt{\left(s_{ \mathrm{21cm}}^{2}(k)+s_{N}^{2}\right)/N_{\mathrm{pair}}(k)}} \tag{7}\]
where \(N_{\mathrm{pair}}(k)\) is the number of power spectrum mode \(k\) that will be measured by SKA1-low (see e.g. Yoshiura et al., 2015; Ma et al., 2021).
After removing the fluctuations with wave-number \(>k\), the skewness noise of smoothed cells (with volume \(V_{\mathrm{cell}}=(2\pi/k)^{3}\)) is estimated as \(\gamma_{N,\,\mathrm{cell}}(k)=(\sigma_{N}/\sqrt{N_{\mathrm{pixel}}(k)})^{3}\), where \(N_{\mathrm{pixel}}(k)\) is the pixel number of measured 21-cm images within the cells with volume \(V_{\mathrm{cell}}\). The S/N ratio of \(\Gamma_{\mathrm{21cm}}\) is then
\[\frac{\mathrm{S}}{\mathrm{N}}=\frac{\Gamma_{\mathrm{21cm}}(k)}{\sqrt{\left( \Gamma_{\mathrm{21cm}}^{2}(k)+\gamma_{N,\,\mathrm{cell}}^{2}(k)\right)/N_{ \mathrm{cell}}(k)}} \tag{8}\]
where \(N_{\mathrm{cell}}(k)\) is the number of smoothed cells with volume \(V_{\mathrm{cell}}\) within the cosmic volume measured by SKA1-low telescope.
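Eqs. (6)-(8) share the same structure, so a single helper suffices; the mode counts (triangles, \(k\)-pairs, or smoothed cells) must be computed for the survey geometry following Yoshiura et al. (2015) and Ma et al. (2021).

```python
import numpy as np

def snr(signal, noise, n_modes):
    """Generic S/N of Eqs. (6)-(8): the signal divided by the combined
    sample-variance and instrumental error of n_modes independent modes."""
    return signal / np.sqrt((signal**2 + noise**2) / n_modes)

# Examples (x, y are the comoving sizes of a resolution element,
# n_pix the number of image pixels per smoothed cell):
# bispectrum:        snr(b21, sigma_n**3 * x**4 * y**2, n_tri)
# skew spectrum:     snr(s21, sigma_n**3 * x**2 * y,    n_pair)
# smoothed skewness: snr(g21, (sigma_n / np.sqrt(n_pix))**3, n_cell)
```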
Fig. 5 shows the predicted S/N ratios of \(B_{\rm 21cm}\), \(S_{\rm 21cm}\) and \(\Gamma_{\rm 21cm}\) at \(k=0.1\,{\rm Mpc}^{-1}\) for the SKA1-low telescope. We do not present the results at \(k=0.3\) and \(1.0\,{\rm Mpc}^{-1}\), as it is actually hard for the SKA1-low telescope to measure these small-scale fluctuations during the EoR. Except at some \(z\)s of phase transition, the S/N ratio of \(S_{\rm 21cm}\) is \(>20\), and even \(\sim 30\) at \(z=13.5-15\). The S/N ratio of \(\Gamma_{\rm 21cm}\) is lower, while it is \(>20\) at \(z=9-17\). The S/N ratio of \(B_{\rm 21cm}\) is \(>3\). The reason why the S/N ratios of \(S_{\rm 21cm}\) and \(\Gamma_{\rm 21cm}\) are much higher than that of \(B_{\rm 21cm}\) is that both \(S_{\rm 21cm}\) and \(\Gamma_{\rm 21cm}\) are integrations of many \(B_{\rm 21cm}\) modes, which increases the detectability of the non-Gaussian features of the 21-cm signals.
Note that we do not consider the foreground noise, which might significantly reduce the S/N ratios of three-point statistics (Watkinson et al., 2021).
Figure 4: Evolution of \(B_{\mathrm{21cm}}\) (dashed red), \(S_{\mathrm{21cm}}\) (dash-dotted cyan) and \(\Gamma_{\mathrm{21cm}}\) (dotted magenta) at \(k=0.1\,\mathrm{Mpc}^{-1}\) (top), \(k=0.3\,\mathrm{Mpc}^{-1}\) (central), and \(k=1.0\,\mathrm{Mpc}^{-1}\) (bottom) from simulation L600_fast (left), L600_slow (middle) and L1200 (right).
## 4 Conclusions
We use semi-numerical simulations (21CMFAST) to study the evolution features of the skew spectrum and smoothed skewness of 21-cm signals during the Epoch of Reionization (EoR), and their detectability by the SKA1-low telescope. As a comparison, we also present the results of the 21-cm bispectrum.
We run four simulations with the 21CMFAST code: one (L600) with a box length of 600 cMpc and a grid number of 800\({}^{3}\), and with the fiducial EoR model. Another two have the same box length and resolution, while one (L600_fast) has a faster ionization process and the other (L600_slow) has a slower ionization process, obtained by increasing and decreasing the star-forming efficiency and the halo mass threshold for star formation, respectively. The last one (L1200) has the same EoR model as L600, but with a larger box length of 1200 cMpc.
As the skew spectrum is the cross-correlation of the 21-cm images with their square, and the smoothed skewness is a 1-D statistic of the 21-cm signals, both are much easier to compute than the 21-cm bispectrum, while all of them show similar evolution features during the EoR, e.g. at \(k=0.1\) and \(0.3\,\mathrm{Mpc}^{-1}\). They are negative in the Ly\(\alpha\) pumping phase, and become positive in the X-ray heating phase. As the ionization goes on, the ionized bubbles result in negative spectra, and at the end of the EoR they are positive again, as the non-Gaussian features of the 21-cm signals are then dominated by the neutral hydrogen islands. These evolution features are sensitive to the model of the reionization history, while the larger box length simulation L1200 shows results consistent with L600.
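To make the ease-of-computation point concrete, the following Python sketch shows one way the skew spectrum and smoothed skewness could be estimated from a simulated brightness-temperature box with FFTs. The box `dT`, the binning, and the normalization conventions are all illustrative assumptions, not the pipeline used in this work.

```python
import numpy as np

def skew_spectrum(dT, box_len, kbins):
    """Cross-spectrum of the field with its square: S(k) ~ Re<(dT^2)*(k) dT(k)>.
    Normalization factors are omitted; empty k-bins yield NaN."""
    n = dT.shape[0]                      # assumes a cubic box
    delta = dT - dT.mean()
    f1 = np.fft.rfftn(delta)
    f2 = np.fft.rfftn(delta**2)
    cross = (np.conj(f2) * f1).real
    kx = 2 * np.pi * np.fft.fftfreq(n, d=box_len / n)
    kz = 2 * np.pi * np.fft.rfftfreq(n, d=box_len / n)
    kmag = np.sqrt(kx[:, None, None]**2 + kx[None, :, None]**2 + kz[None, None, :]**2)
    idx = np.digitize(kmag.ravel(), kbins)
    return np.array([cross.ravel()[idx == i].mean() for i in range(1, len(kbins))])

def smoothed_skewness(dT_smoothed):
    """1-D skewness of an already-smoothed field."""
    delta = dT_smoothed - dT_smoothed.mean()
    return (delta**3).mean()
```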
As both the skew spectrum and the smoothed skewness are integrations of many bispectrum modes (see e.g. Dai & Xia, 2020; Shimabukuro et al., 2016), their expected S/N ratios for SKA1-low are much higher than that of the 21-cm bispectrum, and can be \(>20\) at some redshifts. This indicates that the skew spectrum and smoothed skewness of 21-cm signals should be easier to measure than the 21-cm bispectrum, and can be used to study the physical models of the EoR.
Finally, we summarize that although the 21-cm statistics of skew spectrum and smoothed skewness miss some modes of the 21-cm bispectrum, they are easier to compute and are expected to be measured with larger S/N ratios by 21-cm experiments such as SKA1-low. Their measurements in the near future will help to investigate the evolution of the EoR.
## Acknowledgements
QM is supported by the National SKA Program of China (grant No. 2020SKA0110402), National Natural Science Foundation of China (Grant No. 12263002), Science and Technology Fund of Guizhou Province (Grant No. [2020]1Y020), and GZNU 2019 Special project of training new academics and innovation exploration. The tools for bibliographic research are offered by the NASA Astrophysics Data Systems and by the JSTOR archive.
## Data Availability
The simulation data and the post-analysis scripts underlying this article will be shared on reasonable request to the corresponding author.
|
2306.08069 | On $(n,m)$-chromatic numbers of graphs having bounded sparsity parameters | An $(n,m)$-graph is characterised by having $n$ types of arcs and $m$ types of edges. A homomorphism of an $(n,m)$-graph $G$ to an $(n,m)$-graph $H$ is a vertex mapping that preserves adjacency, direction, and type. The $(n,m)$-chromatic number of $G$, denoted by $\chi_{n,m}(G)$, is the minimum value of $|V(H)|$ such that there exists a homomorphism of $G$ to $H$. The theory of homomorphisms of $(n,m)$-graphs has connections with graph theoretic concepts like harmonious coloring and nowhere-zero flows; with other mathematical topics like binary predicate logic and Coxeter groups; and has applications to the Query Evaluation Problem (QEP) in graph databases. In this article, we show that the arboricity of $G$ is bounded by a function of $\chi_{n,m}(G)$ but not the other way around. Additionally, we show that the acyclic chromatic number of $G$ is bounded by a function of $\chi_{n,m}(G)$, a result already known in the reverse direction. Furthermore, we prove that the $(n,m)$-chromatic number for the family of graphs with a maximum average degree less than $2+\frac{2}{4(2n+m)-1}$, including the subfamily of planar graphs with girth at least $8(2n+m)$, equals $2(2n+m)+1$. This improves upon previous findings, which proved that the $(n,m)$-chromatic number for planar graphs with girth at least $10(2n+m)-4$ is $2(2n+m)+1$. It is established that the $(n,m)$-chromatic number for the family $\mathcal{T}_2$ of partial $2$-trees is both bounded below and above by quadratic functions of $(2n+m)$, with the lower bound being tight when $(2n+m)=2$. We prove $14 \leq \chi_{(0,3)}(\mathcal{T}_2) \leq 15$ and $14 \leq \chi_{(1,1)}(\mathcal{T}_2) \leq 21$, which improves both known lower bounds and the former upper bound. Moreover, for the latter upper bound, to the best of our knowledge we provide the first theoretical proof. | Sandip Das, Abhiruk Lahiri, Soumen Nandi, Sagnik Sen, S Taruni | 2023-06-13T18:33:26Z | http://arxiv.org/abs/2306.08069v2 | ###### Abstract
An \((n,m)\)-graph is a graph with \(n\) types of arcs and \(m\) types of edges. A homomorphism of an \((n,m)\)-graph \(G\) to another \((n,m)\)-graph \(H\) is a vertex mapping that preserves adjacency, its direction, and its type. The minimum value of \(|V(H)|\) such that \(G\) admits a homomorphism to \(H\) is the \((n,m)\)-chromatic number of \(G\), denoted by \(\chi_{n,m}(G)\). This parameter was introduced by Nesetril and Raspaud (J. Comb. Theory. Ser. B 2000).
In this article, we show that the arboricity of \(G\) is bounded by a function of \(\chi_{n,m}(G)\), but not the other way round. We also show that the acyclic chromatic number of \(G\) is bounded by a function of \(\chi_{n,m}(G)\), while the bound in the other direction was already known. Moreover, we show that the \((n,m)\)-chromatic number for the family of graphs having maximum average degree less than \(2+\frac{2}{4(2n+m)-1}\), which contains the family of planar graphs having girth at least \(8(2n+m)\) as a subfamily, is equal to \(2(2n+m)+1\). This improves the previously known result which proved that the \((n,m)\)-chromatic number for the family of planar graphs having girth at least \(10(2n+m)-4\) is equal to \(2(2n+m)+1\). It is known that the \((n,m)\)-chromatic number for the family of partial 2-trees is bounded below and above by quadratic functions of \((2n+m)\) and that the lower bound is tight when \((2n+m)=2\). We show that the lower bound is not tight when \((2n+m)=3\) by improving the corresponding lower bounds by one. We manage to improve some of the upper bounds in these cases as well.
**An update on \((n,m)\)-chromatic numbers**
Sandip Das\({}^{\,a}\), Abhiruk Lahiri\({}^{\,b}\), Soumen Nandi\({}^{\,c}\),
Sagnik Sen\({}^{\,d}\), S Taruni\({}^{\,d}\)
\((a)\) Indian Statistical Institute Kolkata, India
\((b)\) Charles University, Czech Republic
\((c)\) Institute of Engineering & Management, Kolkata, India
\((d)\) Indian Institute of Technology Dharwad, India
**Keywords:** colored mixed graph, graph homomorphism, chromatic number, maximum degree, sparse graphs, arboricity.
## 1 Introduction
In 2000, Nesetril and Raspaud [19] generalized the notion of graph homomorphisms by introducing colored homomorphisms of colored mixed graphs\({}^{1}\).
Footnote 1: The same notion is studied in the literature under slightly different names. We choose the one most suitable for this article.
The \((n,m)\)-graphs: An \((n,m)\)_-graph_ is a graph \(G\) with set of vertices \(V(G)\), set of arcs \(A(G)\), and set of edges \(E(G)\). Moreover, each arc is colored with one of the \(n\) colors from \(\{2,4,\cdots,2n\}\) and each edge is colored with one of the \(m\) colors from \(\{2n+1,2n+2,\cdots,2n+m\}\). The _underlying_ undirected graph of \(G\) is denoted by \(und(G)\). In this article, we restrict ourselves to \((n,m)\)-graphs \(G\) where \(und(G)\) is simple, unless otherwise stated.
Observe that, for \((n,m)=(0,1),(1,0),(0,2)\), and \((0,m)\), the \((n,m)\)-graphs are the same as undirected graphs, oriented graphs [24], \(2\)-edge-colored graphs [13], and \(m\)-edge-colored graphs [1], respectively. These types of graphs and their homomorphisms are well-studied. It is worth mentioning that a variation of homomorphisms of \((0,2)\)-graphs, called homomorphisms of signed graphs [15], have gained popularity [21, 17, 5, 16] in recent times due to its strong connection with classical graph theory (especially, coloring and graph minor theory). It is known that homomorphisms of signed graphs are in one-to-one correspondence with a specific restriction of homomorphisms of \((0,2)\)-graphs [21]\({}^{2}\). Thus, the notion of colored homomorphism truly manages to unify a lot of important theories related to graph homomorphisms.
Footnote 2: In the language of category theory, there is a bijective covariant functor from the category induced by homomorphisms of signed graphs to a subcategory of the category induced by homomorphisms of \((0,2)\)-graphs.
Homomorphisms:A _homomorphism_ of an \((n,m)\)-graph \(G\) to another \((n,m)\)-graph \(H\) is a function
\[f:V(G)\to V(H)\]
such that if \(uv\) is an arc (resp., edge) of \(G\), then \(f(u)f(v)\) is also an arc (resp., edge) of \(H\) having the same color as \(uv\). The notation \(G\to H\) is used to denote that \(G\) admits a homomorphism to \(H\).
Using the notion of homomorphism, one can define the chromatic number of colored mixed graphs that generalizes [9] the chromatic numbers defined for simple graphs, oriented graphs, \(m\)-edge-colored graphs, etc. The \((n,m)\)_-chromatic number_ of an \((n,m)\)-graph \(G\) is given by
\[\chi_{n,m}(G)=\min\{|V(H)|:G\to H\}.\]
For a simple graph \(S\), the \((n,m)\)-chromatic number is given by
\[\chi_{n,m}(S)=\max\{\chi_{n,m}(G):und(G)=S\}.\]
For a family \(\mathcal{F}\) of graphs, the \((n,m)\)-chromatic number is given by
\[\chi_{n,m}(\mathcal{F})=\max\{\chi_{n,m}(G):G\in\mathcal{F}\}.\]
Notice that, the family \(\mathcal{F}\) may contain simple or \((n,m)\)-graphs.
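As an operational illustration of these definitions, the following Python sketch checks whether a vertex mapping is a homomorphism of \((n,m)\)-graphs. The encoding of arcs as ordered, colored pairs and edges as colored unordered pairs is our own illustrative choice, not notation from the literature.

```python
def is_homomorphism(f, G_arcs, G_edges, H_arcs, H_edges):
    """f: dict V(G)->V(H); arcs: dict (u,v)->color (ordered pairs);
    edges: dict frozenset({u,v})->color."""
    for (u, v), color in G_arcs.items():
        # an arc uv of color c must map onto an arc f(u)f(v) of color c
        if H_arcs.get((f[u], f[v])) != color:
            return False
    for e, color in G_edges.items():
        u, v = tuple(e)
        # an edge of color c must map onto an edge of the same color
        if H_edges.get(frozenset((f[u], f[v]))) != color:
            return False
    return True
```

A brute-force computation of \(\chi_{n,m}(G)\) could then enumerate all candidate targets \(H\) on \(t=1,2,\dots\) vertices and all vertex mappings, which is only feasible for very small instances.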
State of the art, contributions, and organization of the paper: The \((n,m)\)-chromatic number for \((n,m)=(0,1)\) is simply the ordinary chromatic number, which is, arguably, the most popularly studied topic of graph theory. However, due to some basic differences in the nature of the homomorphisms, the general study of the \((n,m)\)-chromatic number tends to address the parameter for all values of \((n,m)\neq(0,1)\). Naturally, the \((n,m)\)-chromatic number has been studied for various graph families in the past [19, 8, 14] and, in particular, for \((n,m)=(1,0)\) and \((0,2)\), it has been extensively studied under the names oriented chromatic number [23, 10, 24] and \(2\)-edge-colored chromatic number [5, 1, 13]. It is noteworthy that any result proved for the general \((n,m)\)-chromatic number also, in particular, implies results for oriented and \(2\)-edge-colored chromatic numbers.
In the following, we summarize the contributions of this work along with relevant state of the art. We also enumerate them in a way so that it indicates the organization of the article.3
* _Section 2:_ We recall some basic results and preliminaries useful for the proofs in the following sections.
* _Section 3:_ This section is dedicated to establishing relations between the parameters arboricity, acyclic chromatic number, and \((n,m)\)-chromatic number. To begin with, we show that the \((n,m)\)-chromatic number of a graph \(G\) is not bounded above by a function of its arboricity, yet its arboricity is bounded by \(\lceil\log_{p}k+k/2\rceil\), where \(k\) is its \((n,m)\)-chromatic number and \(p=2n+m\). In contrast to that, Nesetril and Raspaud [19], generalizing a result due to Raspaud and Sopena [22], showed that the \((n,m)\)-chromatic number of a graph with acyclic chromatic number \(t\) is bounded above by \(t(2n+m)^{t-1}\), while Fabila-Monroy, Flores, Huemer, and Montejano [8], generalizing a result of Ochem [20], proved the tightness of the bound. We prove that the reverse statement is also true, that is, the acyclic chromatic number of a graph \(G\) is bounded above by \(k^{2}+k^{2+\lceil\log_{p}\log_{p}k\rceil}\), where \(k\) is its \((n,m)\)-chromatic number. These upper bounds are generalizations of results due to Kostochka, Sopena and Zhu [10] proved for \((n,m)=(1,0)\). We managed to slightly improve the second bound while generalizing it.
* _Section 4:_ The \((n,m)\)-chromatic number for the family of planar graphs is bounded above by \(5(2n+m)^{4}\) due to Nesetril and Raspaud [19], while there is a lower bound for the same which is a cubic function of \((2n+m)\) by Fabila-Monroy, Flores, Huemer, and Montejano [8]. These two bounds together are the closest analogue of the Four-Color Theorem for \((n,m)\)-graphs. On the other hand, the best (and the only) known possible analogue of Grötzsch's theorem for \((n,m)\)-graphs is a result due to Montejano, Pinlou, Raspaud, and Sopena [14] which shows that the \((n,m)\)-chromatic number for the family of planar graphs having girth at least \(10(2n+m)-4\) is equal to \(2(2n+m)+1\). We improve this result by showing that the \((n,m)\)-chromatic number for the family of graphs having maximum average degree less than \(2+\frac{2}{4(2n+m)-1}\), which contains the family of planar graphs having girth at least \(8(2n+m)\) as a subfamily, is equal to \(2(2n+m)+1\).
* _Section 5:_ It is known that the \((n,m)\)-chromatic number for the family of partial 2-trees is bounded below and above by quadratic functions of \((2n+m)\) due to Fabila-Monroy, Flores, Huemer, and Montejano [8] and Nesetril and Raspaud [19], respectively. The lower bounds, when restricted to the cases of \((2n+m)=2\), are known to be tight [13, 23]. The next natural question to ask is whether the lower bounds are tight for \((2n+m)=3\) as well. Observe that \((2n+m)=3\) only when \((n,m)=(0,3)\) or \((1,1)\). The best known bounds in these two cases tell us that the \((0,3)\)-chromatic number [8] of partial 2-trees lies in the interval [13, 27], and the \((1,1)\)-chromatic number [14] of partial 2-trees lies in the interval [13, 21]. We improve these intervals to [14, 15] and [14, 21], respectively. In particular, this shows that the general lower bound of the \((n,m)\)-chromatic number for the family of partial 2-trees proved in [8] is not tight for \((2n+m)=3\).
* _Section 6 :_ We share our concluding remarks.
## 2 Preliminaries
For standard graph theoretic notation and terminology we will follow the book "Introduction to Graph Theory" by West [25], and we will recall all non-standard definitions here for convenience. Moreover, we will use slightly modified, article-specific notation to improve the readability and uniformity of this article. Also, the standard notions relevant for simple graphs, if used for an \((n,m)\)-graph \(G\), will be understood as properties of \(und(G)\).
If \(uv\) is an arc of color \(\alpha\), then we say that \(v\) is an \(\alpha\)-neighbor of \(u\), or equivalently, that \(u\) is an \((\alpha-1)\)-neighbor of \(v\). On the other hand, if \(uv\) is an edge of color \(\alpha\), then we say that \(u\) and \(v\) are \(\alpha\)-neighbors of each other. Furthermore, the set of all \(\alpha\)-neighbors of \(u\) is denoted by \(N^{\alpha}(u)\). Also, we will use lower case Greek letters such as \(\alpha,\beta,\gamma\), etc. as variables ranging over the set \(\{1,2,\cdots,2n+m\}\), denoting the color and the direction of an adjacency.
A _special \(2\)-path_ is a \(2\)-path \(uvw\) of an \((n,m)\)-graph \(G\) where \(v\in N^{\alpha}(u)\cap N^{\beta}(w)\) with \(\alpha\neq\beta\). The following useful property comes in handy in proving some of the results of this paper.
**Lemma 2.1** ([3]).: _Two vertices \(u\) and \(v\) cannot have the same image under any homomorphism of \(G\) to any \(H\), if and only if they are adjacent or connected by a special \(2\)-path in \(G\)._
## 3 Acyclic chromatic number and arboricity
The _arboricity arb(G)_ of a graph \(G\) is the minimum \(r\) such that the edges of \(G\) can be decomposed into \(r\) forests. First we show that an \((n,m)\)-graph having bounded arboricity can have arbitrarily large \((n,m)\)-chromatic number.
**Theorem 3.1**.: _For all integers \(k,r\geq 2\), there exists an \((n,m)\)-graph \(G_{k}\) having arboricity at most \(r\) that satisfies \(\chi_{n,m}(G_{k})\geq k\)._
Proof.: Consider the complete graph \(K_{k}\). For all \((n,m)\neq(0,1)\), it is possible to replace every edge of \(K_{k}\) by a special \(2\)-path to obtain an \((n,m)\)-colored mixed graph \(G^{\prime}_{k}\). We know that the endpoints of a special \(2\)-path must have different images under any homomorphism of \(G^{\prime}_{k}\) [3], thus \(\chi_{n,m}(G^{\prime}_{k})\geq k\), whereas it is easy to note that \(G^{\prime}_{k}\) has arboricity \(2\). Thus, for \(r=2\) take \(G_{k}=G^{\prime}_{k}\). For \(r>2\), simply take the disjoint union of the above \(G^{\prime}_{k}\) with a finite graph \(H\) having arboricity \(r\).
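A minimal sketch of this construction for the \(2\)-edge-colored case \((n,m)=(0,2)\): every edge of \(K_{k}\) is replaced by a \(2\)-path whose two edges receive distinct colors, making it a special \(2\)-path. The data structure is the same hypothetical encoding used in the sketch above.

```python
from itertools import combinations

def special_two_path_graph(k):
    """Replace each edge uv of K_k by a 2-path u-w-v with edge colors 1 and 2.
    Returns an edge dict {frozenset: color}; the result has arboricity 2."""
    edges = {}
    next_vertex = k  # subdivision vertices are labeled k, k+1, ...
    for u, v in combinations(range(k), 2):
        w = next_vertex
        next_vertex += 1
        edges[frozenset((u, w))] = 1  # color 1 on one side
        edges[frozenset((w, v))] = 2  # color 2 on the other: a special 2-path
    return edges
```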
Next we show that it is possible to bound the arboricity of an \((n,m)\)-graph by a function of its \((n,m)\)-chromatic number.
**Theorem 3.2**.: _Let \(G\) be a graph with \(\chi_{n,m}(G)=k\) where \(p=2n+m\geq 2\). Then \(arb(G)\leq\lceil\log_{p}k+k/2\rceil\)._
Proof.: Let \(G^{\prime}\) be an arbitrary labeled subgraph of \(G\) consisting of \(v_{G^{\prime}}\) vertices and \(e_{G^{\prime}}\) edges. We know from Nash-Williams' Theorem [18] that the arboricity \(arb(G)\) of any graph \(G\) is equal to the maximum of \(\lceil e_{G^{\prime}}/(v_{G^{\prime}}-1)\rceil\) over all subgraphs \(G^{\prime}\) of \(G\). It is therefore sufficient to prove that for any subgraph \(G^{\prime}\) of \(G\), \(e_{G^{\prime}}/(v_{G^{\prime}}-1)\leq\log_{p}k+k/2\). As \(G^{\prime}\) is a labeled graph, there are \(p^{e_{G^{\prime}}}\) different \((n,m)\)-graphs with underlying graph \(G^{\prime}\). As \(\chi_{n,m}(G)=k\), each of them admits a homomorphism to an \((n,m)\)-colored mixed graph \(G_{k}\) which has the complete graph on \(k\) vertices as its underlying graph. Observe that, even though it is not necessary for \(G_{k}\) to have the complete graph as its underlying graph, we can always add some extra edges/arcs to make \(G_{k}\) have that property. Note that the number of possible homomorphisms of \(G^{\prime}\) to \(G_{k}\) is at most \(k^{v_{G^{\prime}}}\), and a choice of \(G_{k}\) together with a vertex mapping determines at most one \((n,m)\)-colored mixed graph with underlying labeled graph \(G^{\prime}\). As there are at most \(p^{\binom{k}{2}}\) choices of \(G_{k}\), we get
\[p^{\binom{k}{2}}.k^{v_{G^{\prime}}}\geq p^{e_{G^{\prime}}} \tag{1}\]
which implies
\[\log_{p}k\geq(e_{G^{\prime}}/v_{G^{\prime}})-\binom{k}{2}/v_{G^{\prime}}. \tag{2}\]
If \(v_{G^{\prime}}\leq k\), then \(e_{G^{\prime}}/(v_{G^{\prime}}-1)\leq v_{G^{\prime}}/2\leq k/2\), and we are done. Now let \(v_{G^{\prime}}>k\). We know that \(\chi_{n,m}(G^{\prime})\leq\chi_{n,m}(G)=k\). So
\[\log_{p}k \geq\frac{e_{G^{\prime}}}{v_{G^{\prime}}}-\frac{k(k-1)}{2v_{G^{ \prime}}}\] \[\geq\frac{e_{G^{\prime}}}{(v_{G^{\prime}}-1)}-\frac{e_{G^{\prime }}}{v_{G^{\prime}}(v_{G^{\prime}}-1)}-\frac{k-1}{2}\] \[\geq\frac{e_{G^{\prime}}}{(v_{G^{\prime}}-1)}-1/2-k/2+1/2\] \[\geq\frac{e_{G^{\prime}}}{(v_{G^{\prime}}-1)}-k/2.\]
Therefore, \(\frac{e_{G^{\prime}}}{(v_{G^{\prime}}-1)}\leq\log_{p}k+k/2\).
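To get a sense of the scale of the bound of Theorem 3.2, here is a one-line evaluation of \(\lceil\log_{p}k+k/2\rceil\); the parameter values are illustrative only.

```python
import math

def arboricity_bound(k, p):
    # Theorem 3.2: arb(G) <= ceil(log_p(k) + k/2)
    return math.ceil(math.log(k, p) + k / 2)

print(arboricity_bound(k=5, p=2))    # 5
print(arboricity_bound(k=100, p=3))  # 55
```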
A graph \(G\) is _acyclic \(t\)-colorable_ if we can color its vertices with \(t\) colors such that each color class induces an independent set and any two color classes induce a forest. The _acyclic chromatic number_ \(\chi_{a}(G)\) of a simple graph \(G\) is the minimum \(t\) such that \(G\) is acyclic \(t\)-colorable.
One of the major results proved for \((n,m)\)-chromatic numbers is due to Nesetril and Raspaud [19] which shows that
\[\chi_{n,m}(\mathcal{A}_{t})=t(2n+m)^{t-1}\]
where \(\mathcal{A}_{t}\) denotes the family of graphs having acyclic chromatic number at most \(t\). Moreover, Fabila-Monroy, Flores, Huemer, and Montejano [8] proved the tightness of the same. In particular, this shows that the \((n,m)\)-chromatic number of a graph is bounded by a function of its acyclic chromatic number. It is natural to ask whether the converse is also true.
It turns out to be true; however, before presenting its proof, we will show that the acyclic chromatic number of a graph is finite if its arboricity and \((n,m)\)-chromatic number are finite.
**Theorem 3.3**.: _Let \(G\) be a graph with \(arb(G)=r\) and \(\chi_{n,m}(G)=k\) where \(p=2n+m\geq 2\). Then \(\chi_{a}(G)\leq k^{\lceil\log_{p}r\rceil+1}\)._
Proof.: Let \(G\) be a graph with \(arb(G)=r\) and \(\chi_{n,m}(G)=k\) where \(2n+m=p\). Let \(v_{1},v_{2},...,v_{t}\) be some ordering of the vertices of \(G\). Now consider the \((n,m)\)-colored mixed graph \(G_{0}\) with underlying graph \(G\) such that for any \(i<j\) we have \(v_{j}\in N^{1}(v_{i})\) whenever \(v_{i}v_{j}\) is an edge of \(G\).
Note that the edges of \(G\) can be covered by \(r\) edge disjoint forests \(F_{1},F_{2},...,F_{r}\) as \(arb(G)=r\). Let \(s_{i}\) be the number \(i\) expressed in base \(p\) for all \(i\in\{1,2,...,r\}\). Note that \(s_{i}\) can have at most \(s=\lceil\log_{p}r\rceil\) digits.
Now we will construct a sequence of \((n,m)\)-colored mixed graphs \(G_{1},G_{2},...,G_{s}\), each having underlying graph \(G\). For \(l\in\{1,2,...,s\}\) we describe the construction of the \((n,m)\)-graph \(G_{l}\). Consider any edge \(v_{i}v_{j}\) of \(G\) where \(i<j\). Then \(v_{i}v_{j}\) is an edge of the forest \(F_{t}\) for some \(t\in\{1,2,...,r\}\). Let the \(l^{th}\) digit of \(s_{t}\) be \(\hat{t}\). Then \(G_{l}\) is constructed in such a way that we have \(v_{j}\in N^{\hat{t}+1}(v_{i})\) in \(G_{l}\).
Recall that \(\chi_{n,m}(G)\leq k\) and that the underlying graph of each \(G_{l}\) is \(G\). Thus, for each \(l\in\{0,1,\cdots,s\}\) there exists an \(H_{l}\) on \(k\) vertices and a homomorphism \(f_{l}:G_{l}\to H_{l}\). Now we claim that \(f(v)=(f_{0}(v),f_{1}(v),...,f_{s}(v))\) for each \(v\in V(G)\) is an acyclic coloring of \(G\).
For adjacent vertices \(u,v\) in \(G\) clearly we have \(f(v)\neq f(u)\) as \(f_{0}(v)\neq f_{0}(u)\). Let \(C\) be a cycle in \(G\). We have to show that at least three colors have been used to color this cycle with respect to the coloring given by \(f\). Note that in \(C\) there must be two incident edges \(uv\) and \(vw\) such that they belong to different forests, say, \(F_{t}\) and \(F_{t^{\prime}}\), respectively. Now suppose that \(C\) received two colors with respect to \(f\), that is, the contrary of what we wish to prove. Then we must have \(f(u)=f(w)\neq f(v)\). In particular we must have \(f_{0}(u)=f_{0}(w)\neq f_{0}(v)\). To have that we must also have \(u,w\in N^{\alpha}(v)\) for some \(\alpha\in\{1,2,...,p\}\) in \(G_{0}\) (even though \(\alpha\) can only take the value \(1\) or \(2\) in this case). Let \(s_{t}\) and \(s_{t^{\prime}}\) differ in their \(j^{th}\) digit. Then in \(G_{j}\) we have \(u\in N^{\alpha}(v)\) and \(w\in N^{\alpha^{\prime}}(v)\) for some \(\alpha\neq\alpha^{\prime}\). Then we must have \(f_{j}(u)\neq f_{j}(w)\) as \(uvw\) is a special \(2\)-path in \(G_{j}\). Therefore, we also have \(f(u)\neq f(w)\). Thus, the cycle \(C\) cannot be colored with two colors under the coloring \(f\).
Thus, combining Theorems 3.2 and 3.3, \(\chi_{a}(G)\leq k^{\lceil\log_{p}\lceil\log_{p}k+k/2\rceil\rceil+1}\) for \(\chi_{n,m}(G)=k\) where \(p=2n+m\geq 2\). However, we managed to obtain the following bound which is better in all cases except for some small values of \(k,p\).
**Theorem 3.4**.: _Let \(G\) be an \((n,m)\)-colored mixed graph with \(\chi_{(n,m)}(G)=k\geq 4\) where \(p=2n+m\geq 2\). Then \(\chi_{a}(G)\leq k^{2}+k^{2+\lceil\log_{p}\log_{p}k\rceil}\)._
Proof.: Let \(t\) be the maximum real number such that there exists a subgraph \(G^{\prime}\) of \(G\) with \(v_{G^{\prime}}\geq k^{2}\) and \(e_{G^{\prime}}\geq t.v_{G^{\prime}}\). Let \(G^{\prime\prime}\) be the biggest subgraph of \(G\) with \(e_{G^{\prime\prime}}>t.v_{G^{\prime\prime}}\). Thus, by maximality of \(t\), \(v_{G^{\prime\prime}}<k^{2}\).
Let \(G_{0}=G-G^{\prime\prime}\). Hence \(\chi_{a}(G)\leq\chi_{a}(G_{0})+k^{2}\). By maximality of \(G^{\prime\prime}\), for each subgraph \(H\) of \(G_{0}\), we have \(e_{H}\leq t.v_{H}\).
If \(t\leq\frac{v_{H}-1}{2}\), then \(e_{H}\leq(t+1/2)(v_{H}-1)\). If \(t>\frac{v_{H}-1}{2}\), then \(\frac{v_{H}}{2}<t+1/2\). So \(e_{H}\leq\frac{(v_{H}-1).v_{H}}{2}\leq(t+1/2)(v_{H}-1)\). Therefore, \(e_{H}\leq(t+1/2)(v_{H}-1)\) for each subgraph \(H\) of \(G_{0}\).
By Nash-Williams' Theorem [18], there exist \(r=\lceil t+1/2\rceil\) forests \(F_{1},F_{2},\cdots,F_{r}\) which cover all the edges of \(G_{0}\). We know from Theorem 3.3 that \(\chi_{a}(G_{0})\leq k^{s+1}\) where \(s=\lceil\log_{p}r\rceil\).
Using inequality (2) we get \(\log_{p}k\geq t-1/2\). Therefore
\[s=\lceil\log_{p}(\lceil t+1/2\rceil)\rceil\leq\lceil\log_{p}(1+\lceil\log_{p} k\rceil)\rceil\leq 1+\lceil\log_{p}\log_{p}k\rceil.\]
Hence \(\chi_{a}(G)\leq k^{2}+k^{2+\lceil\log_{p}\log_{p}k\rceil}\).
Our bound, when restricted to the case of \((n,m)=(1,0)\), slightly improves the existing bound due to Kostochka, Sopena and Zhu [10].
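The following quick numerical comparison (with illustrative parameter values) contrasts the bound of Theorem 3.4 with the one obtained by combining Theorems 3.2 and 3.3.

```python
import math

def bound_combined(k, p):
    # via Theorems 3.2 and 3.3: k^(ceil(log_p(ceil(log_p k + k/2))) + 1)
    r = math.ceil(math.log(k, p) + k / 2)
    return k ** (math.ceil(math.log(r, p)) + 1)

def bound_theorem_3_4(k, p):
    # Theorem 3.4: k^2 + k^(2 + ceil(log_p log_p k)), for k > p
    return k**2 + k ** (2 + math.ceil(math.log(math.log(k, p), p)))

for k, p in [(16, 2), (100, 3)]:
    print(k, p, bound_combined(k, p), bound_theorem_3_4(k, p))
# e.g. for k=16, p=2: combined gives 16^5 = 1048576, Theorem 3.4 gives 65792
```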
## 4 Sparse graphs
The _maximum average degree_ of a graph is given by
\[mad(G)=\max\left\{\frac{2|E(H)|}{|V(H)|}:\text{ $H$ is a subgraph of $G$}\right\}.\]
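Since, for a fixed vertex set, the densest subgraph contains all induced edges, \(mad(G)\) can be computed by scanning induced subgraphs. The brute-force sketch below takes exponential time and is meant only as a small sanity check; polynomial-time algorithms via flow techniques exist but are not reproduced here.

```python
from itertools import combinations

def mad(vertices, edges):
    """Brute-force maximum average degree; edges is a set of frozensets."""
    best = 0.0
    vs = list(vertices)
    for r in range(2, len(vs) + 1):
        for subset in combinations(vs, r):
            s = set(subset)
            e = sum(1 for uv in edges if uv <= s)  # edges inside the subset
            best = max(best, 2 * e / r)
    return best

# A 5-cycle has mad = 2.
C5 = {frozenset((i, (i + 1) % 5)) for i in range(5)}
print(mad(range(5), C5))  # 2.0
```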
We present a tight bound for the \((n,m)\)-chromatic number of sparse graphs.
**Theorem 4.1**.: _For \(mad(G)<2+\frac{2}{4(2n+m)-1}\) and \(2n+m\geq 2\), we have_
\[\chi_{n,m}(G)\leq 2(2n+m)+1.\]
We begin the proof of Theorem 4.1 by describing a complete \((n,m)\)-graph on \((2p+1)\) vertices where \(p=2n+m\). We know that there exists a Hamiltonian decomposition of \(K_{2p+1}\) due to Walecki [2], which we are going to describe in the following. To do so, first we will label the vertices of \(K_{2p+1}\) in a certain way. Let one specific vertex of it be labeled by the symbol \(\infty\) while the other vertices are labeled by the elements of the cyclic group \(\mathbb{Z}/2p\mathbb{Z}\). Let \(C_{0},C_{1},\cdots,C_{p-1}\) be the edge disjoint Hamiltonian cycles of the decomposition where \(C_{j}\) is the cycle
\[\infty(2p+j)(1+j)(2p-1+j)\cdots(p-1+j)(2p-(p-1)+j)(p+j)\infty\]
For each \(\alpha\in\{2,4,6,\cdots,2n\}\), convert the cycles \(C_{\alpha-2}\) and \(C_{\alpha-1}\) into directed cycles having arcs of color \(\alpha\). For each \(\alpha\in\{2n+1,2n+2,\cdots,2n+m\}\), convert the cycle \(C_{\alpha-1}\) into a cycle having all edges of color \(\alpha\). Thus what we obtain is a complete \((n,m)\)-graph on \(2p+1\) vertices. We call the so-obtained complete \((n,m)\)-graph \(T\). We now prove a useful property of \(T\).
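Before doing so, we note that the construction is easy to realize concretely. The following sketch builds the Hamiltonian cycles of the Walecki decomposition described above and assembles \(T\); the label \(-1\) standing in for \(\infty\) and the dictionary encoding of arcs and edges are our own illustrative choices.

```python
def walecki_cycles(p):
    """Edge-disjoint Hamiltonian cycles C_0, ..., C_{p-1} of K_{2p+1}.
    Vertices: -1 (standing in for infinity) and 0..2p-1 (i.e., Z/2pZ)."""
    base = [0]
    for i in range(1, p):
        base += [i, 2 * p - i]          # zigzag: 0, 1, 2p-1, 2, 2p-2, ...
    base.append(p)
    return [[-1] + [(x + j) % (2 * p) for x in base] for j in range(p)]

def build_T(n, m):
    """Assemble the complete (n,m)-graph T: arcs of color a (a in {2,4,...,2n})
    from the directed cycles C_{a-2}, C_{a-1}; edges of color a
    (a in {2n+1,...,2n+m}) from C_{a-1}."""
    p = 2 * n + m
    cycles = walecki_cycles(p)
    arcs, edges = {}, {}
    for a in range(2, 2 * n + 1, 2):
        for C in (cycles[a - 2], cycles[a - 1]):
            for u, v in zip(C, C[1:] + C[:1]):
                arcs[(u, v)] = a
    for a in range(2 * n + 1, 2 * n + m + 1):
        C = cycles[a - 1]
        for u, v in zip(C, C[1:] + C[:1]):
            edges[frozenset((u, v))] = a
    return arcs, edges
```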
**Lemma 4.2**.: _For every \(S\subsetneq V(T)\) we have \(|S|<|N^{\alpha}(S)|\) for all \(\alpha\in\{1,2,\cdots 2n+m\}\)._
Proof.: We divide the proof into three parts depending on the value of \(\alpha\).
_Case 1:_ If \(\alpha\in\{2n+1,2n+2,\cdots,2n+m\}\), then assume that \(C_{\alpha-1}\) is the cycle \(v_{1}v_{2}\cdots v_{2p+1}v_{1}\) and that the set \(S\) consists of the vertices \(v_{i_{1}},v_{i_{2}},\cdots,v_{i_{l}}\) where \(i_{1}<i_{2}<\cdots<i_{l}\). Now notice that the vertices of \(A=\{v_{i_{1}+1},v_{i_{2}+1},\cdots,v_{i_{l}+1}\}\) are distinct and contained in \(N^{\alpha}(S)\). On the other hand, the vertices of \(B=\{v_{i_{1}-1},v_{i_{2}-1},\cdots,v_{i_{l}-1}\}\) are distinct and contained in \(N^{\alpha}(S)\). As \(|A|=|B|=|S|\), we are done unless \(A=B\).
If \(A=B\), then note that \(v_{t}\in S\) implies \(v_{t+2}\in S\), where the \(+\) operation on indices of \(v\) is taken modulo \(2p+1\). Hence, if there exists some index \(t\) for which we have \(v_{t},v_{t+1}\in S\), then \(S\) must be the whole vertex set, which is not possible. Thus, we must have \(v_{i_{j+1}}=v_{i_{j}+2}\) for all \(j\in\{1,2,\cdots,l\}\), where the \(+\) operation on indices of \(v\) is taken modulo \(2p+1\). However, as \(C_{\alpha-1}\) is an odd cycle on \(2p+1\) vertices, it is impossible to satisfy the above condition. Hence \(A\neq B\), and we are done in this case.
_Case 2:_ If \(\alpha\in\{2,4,\cdots,2n\}\), then observe that \(S\) has exactly \(|S|\) many \(\alpha\)-neighbors in \(C_{\alpha-2}\) and exactly \(|S|\) many \(\alpha\)-neighbors in \(C_{\alpha-1}\). Furthermore, assume that \(A\) and \(B\) are the sets of \(\alpha\)-neighbors of \(S\) in \(C_{\alpha-2}\) and \(C_{\alpha-1}\), respectively. As \(|A|=|B|=|S|\) and \(A\cup B\subseteq N^{\alpha}(S)\), we are done unless \(A=B\). Due to the structure of the cycles \(C_{\alpha-2}\) and \(C_{\alpha-1}\), without loss of generality we may assume that \(\alpha=2\).
Thus let us try to obtain a contradiction assuming \(A=B\). In such a scenario, fix a \(k\in\mathbb{Z}/2p\mathbb{Z}\). Note that, for \(k\neq p\), \(k\in S\) if and only if \(x\in A=B\) if and only if \(k+2\in S\), where \(x=(2p-k)\) (resp., \((2p-k+1)\)). Moreover, \(p\in S\) if and only if \(\infty\in A=B\) if and only if \((p+1)\in S\). Also, \(0\in S\) if and only if \(1\in A=B\) if and only if \(\infty\in S\). Hence, for any non-empty \(S\), \(A=B\) forces \(S=V(T)\), a contradiction.
_Case 3:_ If \(\alpha\in\{1,3,\cdots,2n-1\}\), then one can handle it in a way similar to _Case 2_.
Therefore, \(|S|<|N^{\alpha}(S)|\) for all \(\alpha\in\{1,2,\cdots,2n+m\}\) and for every \(S\subsetneq V(T)\).
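For small \(p\), the expansion property of Lemma 4.2 can also be checked exhaustively by machine. The sketch below assumes the `build_T` helper from the previous snippet, and follows the document's convention that for an arc \(uv\) of (even) color \(\alpha\), \(v\in N^{\alpha}(u)\) and \(u\in N^{\alpha-1}(v)\).

```python
from itertools import combinations

def alpha_neighbors(v, alpha, arcs, edges, n):
    """N^alpha(v) under the document's conventions."""
    out = set()
    if alpha <= 2 * n:
        for (u, w), c in arcs.items():
            if alpha % 2 == 0 and c == alpha and u == v:
                out.add(w)              # heads of color-alpha arcs out of v
            if alpha % 2 == 1 and c == alpha + 1 and w == v:
                out.add(u)              # tails of color-(alpha+1) arcs into v
    else:
        for e, c in edges.items():
            if c == alpha and v in e:
                out |= e - {v}
    return out

def check_lemma_4_2(n, m):
    p = 2 * n + m
    arcs, edges = build_T(n, m)
    V = [-1] + list(range(2 * p))
    for r in range(1, len(V)):          # proper non-empty subsets S
        for S in combinations(V, r):
            for alpha in range(1, p + 1):
                nbrs = set()
                for v in S:
                    nbrs |= alpha_neighbors(v, alpha, arcs, edges, n)
                if len(nbrs) <= len(S):
                    return False
    return True

print(check_lemma_4_2(0, 2), check_lemma_4_2(1, 1))  # should print: True True
```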
Let \(T\) be a complete \((n,m)\)-graph on \(2p+1\) vertices satisfying the condition of Lemma 4.2. We want to show that \(G\to T\) whenever \(mad(G)<2+\frac{2}{4p-1}\). That is, it is enough to prove the following lemma.
**Lemma 4.3**.: _If \(mad(G)<2+\frac{2}{4p-1}\), then \(G\to T\)._
We will prove the above lemma by contradiction. Hence we assume a minimal (with respect to number of vertices) \((n,m)\)-graph \(M\) having \(mad(M)<2+\frac{2}{4p-1}\) which does not admit a homomorphism to \(T\). We now give some forbidden configurations for \(M\) stated as lemmas.
**Lemma 4.4**.: _The graph \(M\) does not contain a vertex having degree one._
Proof.: Suppose \(M\) contains a vertex \(u\) having degree one. Observe that the graph \(M^{\prime}\), obtained by deleting the vertex \(u\) from \(M\), admits a homomorphism to \(T\) due to the minimality of \(M\). It is possible to extend the homomorphism \(M^{\prime}\to T\) to a homomorphism of \(M\to T\) as any vertex of \(T\) has exactly two \(\alpha\)-neighbors for all \(\alpha\in\{1,2,\cdots 2n+m\}\).
A path with all internal vertices of degree two is called a _chain_, and in particular a _\(k\)-chain_ is a chain having \(k\) internal vertices. The endpoints (assume them to always have degree at least 3) of a (\(k\)-)chain are called _(\(k\)-)chain adjacent_.
**Lemma 4.5**.: _The graph \(M\) does not contain a \(k\)-chain with \(k\geq 2p-1\)._
Proof.: Suppose \(M\) contains a \(k\)-chain, \(k\geq 2p-1\), with endpoints \(u,v\). Observe that the graph \(M^{\prime}\), obtained by deleting the internal degree two vertices of the above mentioned chain from \(M\), admits a homomorphism to \(T\) due to the minimality of \(M\). It is possible to extend the homomorphism \(M^{\prime}\to T\) to a homomorphism of \(M\to T\) by Lemma 4.2.
Let us describe another configuration. Suppose \(v\) is chain adjacent to exactly \(l\) vertices \(v_{1},v_{2},\cdots,v_{l}\), each having degree at least three. Let the chain between \(v\) and \(v_{i}\) have \(k_{i}\) internal vertices. Let us refer to such a configuration as configuration \(C_{l}\) for convenience.
**Lemma 4.6**.: _The graph \(M\) does not contain the configuration \(C_{l}\) as an induced subgraph if_
\[\sum_{i=1}^{l}k_{i}>(2p-1)l-2p\]
_where \(p=(2n+m)\)._
Proof.: Suppose \(M\) contains the configuration \(C_{l}\). Let \(M^{\prime}\) be the graph obtained by deleting all vertices of the configuration except \(v_{1},v_{2},\cdots,v_{l}\). Thus there exists a homomorphism \(f:M^{\prime}\to T\) due to minimality of \(M\). We are going to extend \(f\) to a homomorphism \(f_{ext}:M\to T\), which will lead to a contradiction and complete the proof.
As each vertex of \(T\) has exactly two \(\alpha\)-neighbors for all \(\alpha\in\{1,2,\cdots,2n+m\}\), Lemma 4.2 implies that it is possible to partially extend \(f\) to the chain between \(v\) and \(v_{i}\) in enough different ways to allow us to choose the value of \(f_{ext}(v)\) from a set of at least \(k_{i}+2\) vertices of \(T\). In other words, the value of \(f(v_{i})\) will forbid at most \(2p-k_{i}-1\) values at \(v\).
Thus, considering the effects all the chains incident to \(v\), at most
\[2lp-\sum_{i=1}^{l}k_{i}-l\]
values are forbidden at \(v\).
Notice that if this value is at most \(2p\), then some value remains available at \(v\) (as \(T\) has \(2p+1\) vertices), and it is possible to extend \(f\) to a homomorphism \(f_{ext}:M\to T\). Since \(\sum_{i=1}^{l}k_{i}>(2p-1)l-2p\), the number of forbidden values at \(v\) is strictly less than \(2p\), so such an extension exists, a contradiction.
Now we are ready to start the discharging procedure. First we define a charge function on the vertices of \(M\).
\[ch(x)=\deg(x)-\left(2+\frac{2}{4p-1}\right),\text{ for all }x\in V(M).\]
Observe that \(\sum_{x\in V(M)}ch(x)<0\) as \(mad(M)<2+\frac{2}{4p-1}\). We will show that, after the completion of the discharging procedure, all updated charges are non-negative, implying a contradiction. The discharging rule is the following:
\((R1):\)_Every vertex having degree three or more donates \(\frac{1}{4p-1}\) to the degree two vertices which are part of its incident chains._
Let \(ch^{*}(x)\) be the updated charge. Now we are going to calculate the values of this updated charge for vertices of different degrees in \(M\).
**Lemma 4.7**.: _For any degree two vertex \(x\in V(M)\), we have \(ch^{*}(x)=0\)._
Proof.: As \(M\) does not have any degree one vertex due to Lemma 4.4, every degree two vertex \(x\) must be an internal vertex of a chain. Thus, by rule \((R1)\), the vertex \(x\) receives \(\frac{1}{4p-1}\) charge from each side of the chain. Hence the updated charge is
\[ch^{*}(x)=ch(x)+\frac{2}{4p-1}=deg(x)-2-\frac{2}{4p-1}+\frac{2}{4p-1}=0.\]
Thus we are done.
**Lemma 4.8**.: _For any vertex \(x\) having degree three or more, we have \(ch^{*}(x)\geq 0\)._
Proof.: Let \(x\) be a degree \(d\) vertex of \(M\). Thus by Lemma 4.6
\[ch^{*}(x) \geq ch(x)-\frac{(2p-1)d-2p}{4p-1}=d-2-\frac{2}{4p-1}-\frac{2pd-d- 2p}{4p-1}\] \[=\frac{4pd-8p-d+2-2-2pd+d+2p}{4p-1}=\frac{2p(d-3)}{4p-1}\geq 0\]
for \(d\geq 3\).
Proof of Theorem 4.1.: Due to the above lemmas, we have \(0>\sum_{x\in V(M)}ch(x)=\sum_{x\in V(M)}ch^{*}(x)\geq 0\), a contradiction. This completes the proof of Lemma 4.3, which in turn proves Theorem 4.1.
As a corollary to this result, we obtain \(\chi_{n,m}(\mathcal{P}_{g})=2(2n+m)+1\) for all \(g\geq 8(2n+m)\), where \(\mathcal{P}_{g}\) denotes the family of planar graphs having girth at least \(g\).
**Theorem 4.9**.: _For \(g\geq 8(2n+m)\) and \(2n+m\geq 2\), we have,_
\[\chi_{n,m}(\mathcal{P}_{g})=2(2n+m)+1.\]
Proof.: By the result of Borodin [4], a planar graph \(G\) having girth at least \(g\) satisfies \(mad(G)<\frac{2g}{g-2}=2+\frac{4}{g-2}\). For \(g\geq 8(2n+m)\) this gives \(mad(G)<2+\frac{2}{4(2n+m)-1}\), so the upper bound follows from Theorem 4.1. The matching lower bound follows from the known value for the subfamily of planar graphs having girth at least \(10(2n+m)-4\) [14].
## 5 Partial \(2\)-trees
The only solutions of \((2n+m)=2\) are \((n,m)=(1,0)\) and \((0,2)\). Similarly, the only two solutions of \((2n+m)=3\) are \((n,m)=(1,1)\) and \((0,3)\). Therefore, studying the \((n,m)\)-chromatic number for these values of \((n,m)\) can help understand the trend for the general values of \((n,m)\).
The best known general bounds on the \((n,m)\)-chromatic number for the family \(\mathcal{T}^{2}\) of partial \(2\)-trees, or equivalently, \(K_{4}\)-minor-free graphs, are the following.
**Theorem 5.1** (Fabila-Monroy, Flores, Huemer, and Montejano 2009 [8]; Nesetril and Raspaud 2000 [19]).: _For all non-negative integers \(n\) and \(m\) where \((2n+m)\geq 2\), we have_
\[(2n+m)^{2}+2(2n+m)+1\leq\chi_{n,m}(\mathcal{T}^{2})\leq 3(2n+m)^{2}, \text{ for }m>0\text{ even}\] \[(2n+m)^{2}+(2n+m)+1\leq\chi_{n,m}(\mathcal{T}^{2})\leq 3(2n+m)^{2}, \text{ otherwise.}\]
Recall that, for the \((2n+m)=2\) case, it is known that the lower bounds are tight [13]. The best known bounds when \(2n+m=3\) are \(13\leq\chi_{0,3}(\mathcal{T}^{2})\leq 27\) and \(13\leq\chi_{1,1}(\mathcal{T}^{2})\leq 21\). So, if the trend of the lower bound in Theorem 5.1 being tight were to continue, then in particular it would hold for the cases when \(2n+m=3\). However, we show the contrary via the following result, where we improve both lower bounds, and also the upper bound in the first instance.
**Theorem 5.2**.: _For the family of \(\mathcal{T}^{2}\) of partial \(2\)-trees we have,_
1. \(14\leq\chi_{0,3}(\mathcal{T}^{2})\leq 15\)_,_
2. \(14\leq\chi_{1,1}(\mathcal{T}^{2})\leq 21\)_._
The proof of the theorem is contained in a series of lemmas.
An \((n,m)\)_-universal bound_ of \(\mathcal{F}\) is an \((n,m)\)-graph \(T\) such that \(G\to T\) for all \(G\) with \(\mathit{und}(G)\in\mathcal{F}\). A _minimum \((n,m)\)-universal bound_ of \(\mathcal{F}\) is a universal bound \(T\) on the minimum number of vertices having the property that for every proper subgraph \(T^{\prime}\) of \(T\), there exists a \(G^{\prime}\) with \(\mathit{und}(G^{\prime})\in\mathcal{F}\) such that \(G^{\prime}\not\to T^{\prime}\). A _complete_ family \(\mathcal{F}\) of graphs is one such that, given \(G_{1},G_{2}\in\mathcal{F}\), there exists a graph \(G\in\mathcal{F}\) that contains each \(G_{i}\) as a subgraph.
**Lemma 5.3**.: _Let \(\mathcal{F}\) be a complete family of graphs. Then there exists a minimal \((n,m)\)-universal bound of \(\mathcal{F}\) on \(\chi_{n,m}(\mathcal{F})\) vertices._
Proof.: Suppose not; that is, for every \((n,m)\)-graph \(T\) on \(\chi_{n,m}(\mathcal{F})\) vertices, there exists an \((n,m)\)-graph \(G_{T}\) with \(und(G_{T})\in\mathcal{F}\) such that \(G_{T}\not\to T\). Since \(\mathcal{F}\) is a complete family of graphs, there exists an \((n,m)\)-graph \(G\) with \(und(G)\in\mathcal{F}\) that contains every \(G_{T}\) as a subgraph. As \(und(G)\in\mathcal{F}\), there is a homomorphism \(f:G\to\hat{T}\) for some \((n,m)\)-graph \(\hat{T}\) on \(\chi_{n,m}(\mathcal{F})\) vertices. Then the restriction
\[f|_{V(G_{T})}:G_{T}\to\hat{T}\]
is a homomorphism for all \((n,m)\)-graphs \(T\) on \(\chi_{n,m}(\mathcal{F})\) vertices; in particular, \(G_{\hat{T}}\to\hat{T}\), a contradiction.
An \((n,m)\)-graph \(T\) has _property_\(P_{2,1}\) if for any adjacent pair of vertices \(u,v\) of \(T\) the set \(N^{\alpha}(u)\cap N^{\beta}(v)\) is not empty for all \(\alpha,\beta\in\{1,2,\cdots,2n+m\}\).
**Lemma 5.4**.: _Any minimum universal bound \(T\) of the family \(\mathcal{T}^{2}\) of partial \(2\)-trees has property \(P_{2,1}\)._
Proof.: Due to minimality of \(T\), there exists an \((n,m)\)-graph \(G\) with \(und(G)\in\mathcal{T}^{2}\) such that for any homomorphism \(f:G\to T\), given any \(xy\in A(T)\) (resp., \(E(T)\)), there exists \(uv\in A(G)\) (resp., \(E(G)\)) satisfying \(f(u)=x,f(v)=y\).
For any two adjacent vertices \(u,v\) in \(G\) and for any \((\alpha,\beta)\in\{1,2,\cdots,2n+m\}^{2}\), add a new vertex \(w_{\alpha,\beta}\) adjacent to \(u\) and \(v\) in such a way that we have \(w_{\alpha,\beta}\in N^{\alpha}(u)\cap N^{\beta}(v)\). Let the so-obtained \((n,m)\)-graph be \(G^{*}\). Observe that, \(und(G^{*})\in\mathcal{T}^{2}\) by construction. Therefore, \(G^{*}\) admits a homomorphism to \(T\).
For any pair \(x,y\) of adjacent vertices in \(T\) and for any homomorphism \(f^{*}:G^{*}\to T\), there exist \(u,v\) in \(G\) satisfying \(f^{*}(u)=x\) and \(f^{*}(v)=y\). Note that the newly added common neighbors of \(u,v\) are pairwise connected by special \(2\)-paths via either \(u\) or \(v\). Thus, \(u\), \(v\), and their newly added common neighbors must all have distinct images under \(f^{*}\). As \(f^{*}\) is an arbitrary homomorphism of \(G^{*}\) to \(T\), and as \(x,y\) is an arbitrary pair of adjacent vertices in \(T\), \(T\) must have property \(P_{2,1}\).
The above lemma implies a necessary and sufficient condition useful for computing the \((n,m)\)-chromatic number of partial \(2\)-trees.
**Corollary 5.5**.: _The \((n,m)\)-chromatic number \(\chi_{n,m}(\mathcal{T}^{2})\) equals the minimum \(t\) such that there exists an \((n,m)\)-graph \(T\) on \(t\) vertices with property \(P_{2,1}\)._
In light of the above corollary, if one can show that there does not exist any \((n,m)\)-graph on \(t\) vertices with property \(P_{2,1}\), then it will imply that \(\chi_{n,m}(\mathcal{T}^{2})\geq t+1\). We will use this observation to prove our lower bounds.
**Lemma 5.6**.: _If \(T\) is a minimal universal \((n,m)\)-bound of \(\mathcal{T}^{2}\) on \((2n+m)^{2}+(2n+m)+1\) vertices, then every vertex \(v\) in \(T\) has exactly \((2n+m)+1\) many \(\alpha\)-neighbors for all \(\alpha\in\{1,2,\cdots,2n+m\}\)._
Proof.: As \(T\) has property \(P_{2,1}\) due to Lemma 5.4, each vertex of \(T\) has all \((2n+m)\) types of adjacencies. Let \(v\) be an \(\alpha\)-neighbor of \(u\) in \(T\). Notice that there is at least one vertex which is an \(\alpha\)-neighbor of \(u\) and a \(\beta\)-neighbor of \(v\). As \(\beta\) varies over the set of all \((2n+m)\) types of adjacencies, and as a vertex is a \(\beta\)-neighbor of \(v\) for exactly one \(\beta\), \(u\) has at least \((2n+m)\) distinct \(\alpha\)-neighbors which are also adjacent to \(v\). Thus, counting \(v\), \(u\) has at least \((2n+m)+1\) many \(\alpha\)-neighbors.
On the other hand, as \(\alpha\) is any of the \((2n+m)\) many adjacencies and \(|N^{\alpha}(u)|\geq 2n+m+1\), we have \(|N(u)|\geq(2n+m)(2n+m+1)=(2n+m)^{2}+(2n+m)\). As \(T\) has only \((2n+m)^{2}+(2n+m)+1\) vertices, the inequalities must be tight.
**Lemma 5.7**.: _If \(T\) is a minimum universal \((n,m)\)-bound of \(\mathcal{T}^{2}\) on \((2n+m)^{2}+(2n+m)+1\) vertices, then it can not have \(x,y,z\in N^{\alpha}(u)\) such that \(x,z\) are \(\gamma\)-neighbors of \(y\)._
Proof.: Suppose \(x,y,z\in N^{\alpha}(u)\) and \(x,z\) are \(\gamma\)-neighbors of \(y\). We know from the proof of Lemma 5.4 that there is exactly one vertex in the set \(N^{\alpha}(u)\cap N^{\beta}(y)\) for all \(\beta\). However, in this case, \(\{x,z\}\subseteq N^{\alpha}(u)\cap N^{\beta}(y)\) for \(\beta=\gamma\), a contradiction.
Now we are ready to prove the first lower bound.
**Lemma 5.8**.: _The \((n,m)\)-graph \(T\) has at least \(14\) vertices if either of the following happens._
1. \(T\) _is a minimum universal_ \((0,3)\)_-bound of_ \(\mathcal{T}^{2}\)_._
2. \(T\) _is a minimum universal_ \((1,1)\)_-bound of_ \(\mathcal{T}^{2}\)_._
Proof.: Suppose not; that is, \(T\) is a minimum universal \((0,3)\)-bound or a minimum universal \((1,1)\)-bound of \(\mathcal{T}^{2}\) on \(13\) vertices. Note that \(und(T)\) is a complete graph due to Lemma 5.6. Also, using Lemmas 5.4 and 5.6, we know that \(T\) contains a \(K_{3}\) with vertices \(u,v,w\) (say), all of whose edges have color \(3\).
Next we will count the number of common neighbors of \(u,v,w\) in \(T\). For convenience, let us denote \(A_{\alpha,\beta,\gamma}=N^{\alpha}(u)\cap N^{\beta}(v)\cap N^{\gamma}(w)\setminus\{u,v,w\}\). Also, let us denote the set of all common neighbors of \(u,v,w\) by \(A\). As \(T\) has property \(P_{2,1}\), there must exist a vertex \(x\) which is a \(3\)-neighbor of \(u\) and a \(2\)-neighbor of \(v\); since \(und(T)\) is complete, \(x\) is also adjacent to \(w\), and hence \(x\in A\). Notice that, if \(x\) is a \(2\)-neighbor or a \(3\)-neighbor of \(w\), then the configuration forbidden by Lemma 5.7 is created. Thus, \(x\) must be a \(1\)-neighbor of \(w\). Hence,
\[|A_{3,2,1}|\geq 1.\]
Similarly, we can show that
\[|A_{1,2,3}|=|A_{1,3,2}|=|A_{2,1,3}|=|A_{2,3,1}|=|A_{3,1,2}|=|A_{3,2,1}|\geq 1. \tag{3}\]
So far, among the vertices we have described, there is none which is a \(2\)-neighbor of both \(u\) and \(v\). However, as \(T\) has property \(P_{2,1}\), there must exist a vertex \(y\) of \(T\) which is a \(2\)-neighbor of both \(u\) and \(v\). Note that \(y\) cannot be a \(3\)-neighbor or a \(2\)-neighbor of \(w\) due to Lemma 5.7. Thus, we are forced to have \(y\) as a \(1\)-neighbor of \(w\). So
\[|A_{2,2,1}|\geq 1.\]
Similarly, we can show that
\[|A_{2,2,1}|=|A_{1,1,2}|=|A_{2,1,2}|=|A_{1,2,1}|=|A_{1,2,2}|=|A_{2,1,1}|\geq 1. \tag{4}\]
As the sets of the form \(A_{\alpha,\beta,\gamma}\) partition \(A\), we can combine equations (3) and (4) to conclude that \(|A|\geq 12\). This implies that \(T\) has at least \(15\) vertices including \(u,v,w\), a contradiction.
**Lemma 5.9**.: _There exists a \((0,3)\)-graph \(T_{0,3}\) on \(15\) vertices having property \(P_{2,1}\)._
Proof.: Let \(T_{0,3}\) be a \((0,3)\)-graph with vertex set \(\mathbb{Z}/5\mathbb{Z}\times\mathbb{Z}/3\mathbb{Z}\). Let \((i,j)\) and \((i^{\prime},j^{\prime})\) be two distinct vertices of \(T_{0,3}\). The adjacencies between the vertices are given by the following rules.
* If \(j\neq j^{\prime}\) and \(i=i^{\prime}\), then \((i,j)\) and \((i^{\prime},j^{\prime})\) are not adjacent.
* If \((i^{\prime}-i)\) is a non-zero square in \(\mathbb{Z}/5\mathbb{Z}\), then there is an edge of color \((1+j+j^{\prime})\) (considered modulo \(3\), with representatives in \(\{1,2,3\}\)) between \((i,j)\) and \((i^{\prime},j^{\prime})\).
* If \(i\neq i^{\prime}\) and \((i^{\prime}-i)\) is not a non-zero square in \(\mathbb{Z}/5\mathbb{Z}\), then there is an edge of color \((2+j+j^{\prime})\) (considered modulo \(3\), with representatives in \(\{1,2,3\}\)) between \((i,j)\) and \((i^{\prime},j^{\prime})\).
Notice that it is enough to show that \(T_{0,3}\) has property \(P_{2,1}\). Let \((i,j)\) and \((i^{\prime},j^{\prime})\) be any two adjacent vertices in \(T_{0,3}\). Without loss of generality we may assume that either \(i^{\prime}=i+1\) or \(i^{\prime}=i+2\). We can further assume that \(i=0\) and \(i^{\prime}=1\) or \(2\), still without losing generality. Also, for convenience, let \(A_{\alpha,\beta}=N^{\alpha}((i,j))\cap N^{\beta}((i^{\prime},j^{\prime}))\). Thus our objective is to show that all such subsets, which are a total of nine in number, are non-empty.
1. If \(i^{\prime}=1\), then \((2,j^{\prime\prime})\in A_{2+j+j^{\prime\prime},1+j^{\prime}+j^{\prime\prime}}\), \((3,j^{\prime\prime})\in A_{2+j+j^{\prime\prime},2+j^{\prime}+j^{\prime\prime}}\), and \((4,j^{\prime\prime})\in A_{1+j+j^{\prime\prime},2+j^{\prime}+j^{\prime\prime}}\), where \(j^{\prime\prime}\) varies over \(\mathbb{Z}/3\mathbb{Z}\). Notice that, as \(j^{\prime\prime}\) varies, we obtain a total of nine non-empty subsets of the type \(A_{\alpha,\beta}\), and we are done by observing that these subsets have distinct ordered pairs as indices.
2. If \(i^{\prime}=2\), then \((1,j^{\prime\prime})\in A_{1+j+j^{\prime\prime},1+j^{\prime}+j^{\prime\prime}}\), \((3,j^{\prime\prime})\in A_{2+j+j^{\prime\prime},1+j^{\prime}+j^{\prime\prime}}\), and \((4,j^{\prime\prime})\in A_{1+j+j^{\prime\prime},2+j^{\prime}+j^{\prime\prime}}\), where \(j^{\prime\prime}\) varies over \(\mathbb{Z}/3\mathbb{Z}\). Notice that, as \(j^{\prime\prime}\) varies, we obtain a total of nine non-empty subsets of the type \(A_{\alpha,\beta}\), and we are done by observing that these subsets have distinct ordered pairs as indices.
Hence \(T_{0,3}\) has property \(P_{2,1}\).
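The construction of \(T_{0,3}\) and property \(P_{2,1}\) are small enough to verify by machine. In the sketch below, edge colors are reduced modulo \(3\) into the set \(\{1,2,3\}\); the helper `has_P21` checks, for every adjacent pair \(u,v\), that all nine sets \(N^{\alpha}(u)\cap N^{\beta}(v)\) are non-empty.

```python
from itertools import product

def color_0_3(u, v):
    """Edge color in {1,2,3} between distinct vertices of T_{0,3}, else None."""
    (i, j), (ip, jp) = u, v
    if i == ip:
        return None                               # same Z/5Z coordinate
    base = 1 if (ip - i) % 5 in (1, 4) else 2     # {1,4}: non-zero squares mod 5
    return (base + j + jp - 1) % 3 + 1            # (base + j + j') in {1,2,3}

def has_P21(adj_type, vertices):
    """adj_type(w, u) = alpha with w in N^alpha(u), or None if non-adjacent."""
    for u, v in product(vertices, repeat=2):
        if u == v or adj_type(u, v) is None:
            continue
        seen = {(adj_type(w, u), adj_type(w, v)) for w in vertices
                if w not in (u, v) and adj_type(w, u) and adj_type(w, v)}
        if len(seen) < 9:                         # need all of {1,2,3} x {1,2,3}
            return False
    return True

V15 = list(product(range(5), range(3)))
print(has_P21(color_0_3, V15))                    # should print: True
```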
The existence of a \((1,1)\)-graph on \(21\) vertices having property \(P_{2,1}\) is remarked in the conclusion of [8]. For the sake of completeness, we include an explicit construction of the same.
**Lemma 5.10**.: _There exists a \((1,1)\)-graph \(T_{1,1}\) on \(21\) vertices having property \(P_{2,1}\)._
Proof.: Let \(T_{1,1}\) be a \((1,1)\)-graph with vertex set \(\mathbb{Z}/7\mathbb{Z}\times\mathbb{Z}/3\mathbb{Z}\). Let \((i,j)\) and \((i^{\prime},j^{\prime})\) be two vertices of \(T_{1,1}\). The adjacencies between the vertices are given by the following rules.
* If \(j\neq j^{\prime}\) and \(i=i^{\prime}\), then \((i,j)\) and \((i^{\prime},j^{\prime})\) are not adjacent.
* If \(j^{\prime}=j\), and \((i^{\prime}-i)\) is a non-zero square in \(\mathbb{Z}/7\mathbb{Z}\), then there is an arc from \((i,j)\) to \((i^{\prime},j^{\prime})\).
* If \(j^{\prime}=j+1\) (mod \(3\)) and \((i^{\prime}-i)\) is a non-zero square in \(\mathbb{Z}/7\mathbb{Z}\), then there is an edge between \((i,j)\) and \((i^{\prime},j^{\prime})\).
* If \(j^{\prime}=j+1\) (mod \(3\)) and \((i^{\prime}-i)\) is not a non-zero square in \(\mathbb{Z}/7\mathbb{Z}\), then there is an arc from \((i^{\prime},j^{\prime})\) to \((i,j)\).
As exactly one among \((i^{\prime}-i)\) and \((i-i^{\prime})\) is a non-zero square in \(\mathbb{Z}/7\mathbb{Z}\) (the non-zero squares are \(\{1,2,4\}\), and \(-1\) is not a square modulo \(7\)), the above indeed describes the whole \((1,1)\)-graph.
Notice that it is enough to show that \(T_{1,1}\) has property \(P_{2,1}\). Let \((i,j)\) and \((i^{\prime},j^{\prime})\) be any two adjacent vertices in \(T_{1,1}\). For convenience, let \(A_{\alpha,\beta}=N^{\alpha}((i,j))\cap N^{\beta}((i^{\prime},j^{\prime}))\). Thus our objective is to show that all such subsets, which are a total of nine in number, are non-empty.
Let \((i^{\prime}-i)\) be a non-zero square (resp., non-square) of \(\mathbb{Z}/7\mathbb{Z}\). Define the mapping
\[\phi:\mathbb{Z}/7\mathbb{Z}\rightarrow\mathbb{Z}/7\mathbb{Z}\]
given by \(\phi(x)=(i^{\prime}-i)^{2}(x-i)\) (resp., \(\phi(x)=(i-i^{\prime})^{2}(x-i^{\prime})\)). This map is a bijection of \(\mathbb{Z}/7\mathbb{Z}\) that multiplies differences by a non-zero square, and hence maps non-zero square differences to non-zero square differences and non-square differences to non-square differences. Therefore, the adjacency rules of the graph remain as they were after applying this bijection to the three copies of \(\mathbb{Z}/7\mathbb{Z}\) used for describing the graph. Also, notice that \(\phi\) maps \(i\) to \(0\) and \(i^{\prime}\) to \(1\) (resp., \(i^{\prime}\) to \(0\) and \(i\) to \(1\)). Therefore, instead of arguing the general case of \(i,i^{\prime}\), we can argue for the case when \(\{i,i^{\prime}\}=\{0,1\}\) without losing any generality. This brings us to the following cases.
1. If \(j^{\prime}=j\) and \((i^{\prime}-i)\) is a non-zero square, then without loss of generality we may assume that \((i,j)=(0,0)\) and \((i^{\prime},j^{\prime})=(1,0)\).
2. If \(j^{\prime}=j+1\) (considered modulo \(3\)) and \((i^{\prime}-i)\) is a non-zero square, then without loss of generality we may assume that \((i,j)=(0,0)\) and \((i^{\prime},j^{\prime})=(1,1)\).
3. If \(j^{\prime}=j+1\) (considered modulo \(3\)) and \((i^{\prime}-i)\) is not a non-zero square, that is, \((i-i^{\prime})\) is a non-zero square, then without loss of generality we may assume that \((i,j)=(1,0)\) and \((i^{\prime},j^{\prime})=(0,1)\).
| \((i,j)\) | \((i^{\prime},j^{\prime})\) | \(A_{1,1}\) | \(A_{1,2}\) | \(A_{1,3}\) | \(A_{2,1}\) | \(A_{2,2}\) | \(A_{2,3}\) | \(A_{3,1}\) | \(A_{3,2}\) | \(A_{3,3}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| \((0,0)\) | \((1,0)\) | \((6,0)\) | \((5,0)\) | \((4,2)\) | \((4,0)\) | \((2,0)\) | \((5,1)\) | \((5,2)\) | \((4,1)\) | \((2,1)\) |
| \((0,0)\) | \((1,1)\) | \((5,0)\) | \((4,2)\) | \((6,0)\) | \((2,0)\) | \((5,1)\) | \((4,0)\) | \((4,1)\) | \((2,1)\) | \((5,2)\) |
| \((1,0)\) | \((0,1)\) | \((4,0)\) | \((5,2)\) | \((6,0)\) | \((2,0)\) | \((4,1)\) | \((5,0)\) | \((5,1)\) | \((2,1)\) | \((4,2)\) |
That the previously defined nine subsets of the form \(A_{\alpha,\beta}\) are non-empty in each of the above listed cases can be observed from the table above. Hence \(T_{1,1}\) has property \(P_{2,1}\).
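A similar machine check works for \(T_{1,1}\), reusing `has_P21` from the previous sketch. Following the document's conventions, \(w\in N^{2}(u)\) for an arc \(u\to w\), \(w\in N^{1}(u)\) for an arc \(w\to u\), and \(w\in N^{3}(u)\) for an edge. Note that the particular witnesses found may differ from the table above depending on how the orientation in the last adjacency rule is read; the check below verifies the property itself rather than the individual table entries.

```python
from itertools import product

def rel_1_1(a, b):
    """'arc' if T_{1,1} has an arc a->b, 'edge' if an edge, else None."""
    (i, j), (ip, jp) = a, b
    if i == ip:
        return None                       # includes a == b
    sq = (ip - i) % 7 in (1, 2, 4)        # non-zero squares mod 7
    if jp == j:
        return "arc" if sq else None      # same level: arcs along squares
    if (jp - j) % 3 == 1:                 # b is one level above a
        return "edge" if sq else None     # non-square gives an arc b->a instead
    return "edge" if (i - ip) % 7 in (1, 2, 4) else "arc"  # a above b

def type_1_1(w, u):
    """alpha with w in N^alpha(u): 1 = arc w->u, 2 = arc u->w, 3 = edge."""
    if rel_1_1(u, w) == "arc":
        return 2
    if rel_1_1(w, u) == "arc":
        return 1
    if rel_1_1(u, w) == "edge":
        return 3
    return None

V21 = list(product(range(7), range(3)))
print(has_P21(type_1_1, V21))             # should print: True
```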
Proof of Theorem 5.2.: Using Corollary 5.5, the lower bounds follow from Lemma 5.8 and the upper bounds follow from Lemmas 5.9 and 5.10.
## 6 Conclusions
Following the introduction of \((n,m)\)-graphs, their homomorphisms, and \((n,m)\)-chromatic numbers due to Nesetril and Raspaud [19] in 2000, a number of research works have been dedicated to this topic. In fact, the research on this topic has evolved along different tracks, such as: finding the \((n,m)\)-chromatic number of some graph families [19, 8, 14], the study of relative and absolute \((n,m)\)-clique numbers (that is, analogues of clique numbers) [3, 7], the study of the complexity dichotomy of homomorphisms of an input \((n,m)\)-graph \(G\) to a prefixed \((n,m)\)-graph \(H\) [11, 12], and studies related to the algebraic properties of such systems and their interactions with certain modifications, such as permutations of the adjacencies of a vertex (or of several vertices) [11, 12, 6]. Our work clearly pertains to the first of the research tracks listed above.
In this track, the graph families for which the \((n,m)\)-chromatic number has been studied are paths and forests [19], graphs with bounded maximum degree [14], graphs with bounded acyclic chromatic number [19], partial 2-trees [8], and planar graphs and planar graphs with high girth [14]. However, the study of the \((n,m)\)-chromatic number of graph families is significantly smaller in volume compared to the cases when \((n,m)=(1,0)\) or \((0,2)\).
To the best of our understanding, the reason for this is the lack of knowledge regarding well-structured \((n,m)\)-graphs which can be used as target graphs for homomorphisms. For \((n,m)=(1,0)\) or \((0,2)\), the _Paley tournaments_, _signed Paley graphs_, or the _Tromp constructions_ play this role. The way we constructed the target graph in Section 4 may be one of the approaches towards tackling this issue.
**Acknowledgements:** This work is partially supported by IFCAM project "Applications of graph homomorphisms" (MA/IFCAM/18/39), SERB-SRG project "Graph homomorphisms and its extensions" (SRG/2020/001575), SERB-MATRICS "Oriented chromatic and clique number of planar graphs" (MTR/2021/000858), and NBHM project "Graph theoretic model of Channel Assignment Problem (CAP) in wireless network" (NBHM/RP-8 (2020)/Fresh).
|
2307.07073 | An Incremental Span-Program-Based Algorithm and the Fine Print of Quantum Topological Data Analysis | We introduce a new quantum algorithm for computing the Betti numbers of a simplicial complex. In contrast to previous quantum algorithms that work by estimating the eigenvalues of the combinatorial Laplacian, our algorithm is an instance of the generic Incremental Algorithm for computing Betti numbers that incrementally adds simplices to the simplicial complex and tests whether or not they create a cycle. In contrast to existing quantum algorithms for computing Betti numbers that work best when the complex has close to the maximal number of simplices, our algorithm works best for sparse complexes. To test whether a simplex creates a cycle, we introduce a quantum span-program algorithm. We show that the query complexity of our span program is parameterized by quantities called the effective resistance and effective capacitance of the boundary of the simplex. Unfortunately, we also prove upper and lower bounds on the effective resistance and capacitance, showing both quantities can be exponentially large with respect to the size of the complex, implying that our algorithm would have to run for exponential time to exactly compute Betti numbers. However, as a corollary to these bounds, we show that the spectral gap of the combinatorial Laplacian can be exponentially small. As the runtimes of all previous quantum algorithms for computing Betti numbers are parameterized by the inverse of the spectral gap, our bounds show that all quantum algorithms for computing Betti numbers must run for exponentially long to exactly compute Betti numbers. Finally, we prove some novel formulas for effective resistance and effective capacitance to give intuition for these quantities. | Mitchell Black, William Maxwell, Amir Nayyeri | 2023-07-13T21:46:45Z | http://arxiv.org/abs/2307.07073v1 | An Incremental Span-Program-Based Algorithm and the Fine Print of Quantum Topological Data Analysis+
###### Abstract
We introduce a new quantum algorithm for computing the Betti numbers of a simplicial complex. In contrast to previous quantum algorithms that work by estimating the eigenvalues of the combinatorial Laplacian, our algorithm is an instance of the generic Incremental Algorithm for computing Betti numbers that incrementally adds simplices to the simplicial complex and tests whether or not they create a cycle. In contrast to existing quantum algorithms for computing Betti numbers that work best when the complex has close to the maximal number of simplices, our algorithm works best for sparse complexes.
To test whether a simplex creates a cycle, we introduce a quantum span-program algorithm. We show that the query complexity of our span program is parameterized by quantities called the effective resistance and effective capacitance of the boundary of the simplex. Unfortunately, we also prove upper and lower bounds on the effective resistance and capacitance, showing both quantities can be exponentially large with respect to the size of the complex, implying that our algorithm would have to run for exponential time to exactly compute Betti numbers.
However, as a corollary to these bounds, we show that the spectral gap of the combinatorial Laplacian can be exponentially small. As the runtimes of all previous quantum algorithms for computing Betti numbers are parameterized by the inverse of the spectral gap, our bounds show that all quantum algorithms for computing Betti numbers must run for exponentially long to exactly compute Betti numbers.
Finally, we prove some novel formulas for effective resistance and effective capacitance to give intuition for these quantities.
## 1 Introduction.
The past few years have seen the development of quantum algorithms with the potential to speed up the computation of topological features of simplicial complexes called _Betti numbers_. Betti numbers are important topological invariants of a space; indeed, there is an entire, rapidly-growing field called _Topological Data Analysis_ (TDA) that studies the application of topological invariants like Betti numbers (among others) [10, 16, 20]. Accordingly, the study of quantum algorithms for computing Betti numbers has been deemed _Quantum Topological Data Analysis_ (QTDA).
Betti numbers can be both time and space inefficient for classical computers to compute. For example, a simplicial complex on \(n\) vertices can be exponentially large, and it can take exponential time to compute Betti numbers in arbitrary dimensions. Quantum computers offer a potential
solution to the shortcomings of the classical algorithm. For example, quantum computers can efficiently store a simplicial complex with \(n\) vertices using only \(O(\operatorname{poly}(n))\) qubits.
However, while these quantum algorithms have certain advantages over their classical counterparts, like improved space complexity, QTDA algorithms only achieve a significant speedup over classical TDA algorithms under certain circumstances: when the input complex is _clique-dense_--it has close to the maximal number of simplices--and when the spectral gap of the combinatorial Laplacian of the complex is only polynomially small. This second point is a particular problem as, before now, it was unknown how small the spectral gap of the combinatorial Laplacian could be. This makes the spectral gap of the combinatorial Laplacian an example of "fine print" [1]: an unbounded parameter in the runtime of a celebrated quantum algorithm.
### Our Contributions.
* In Sections 3 and 4, we provide a novel quantum algorithm for computing Betti numbers using the framework of span programs [40, 59]. As opposed to existing QTDA algorithms that work by estimating the eigenvalues of the combinatorial Laplacian or singular values of the boundary matrices, our algorithm is more similar to classical matrix-reduction algorithms for computing Betti numbers as it works by incrementally adding simplices to the simplicial complex and testing if these simplices create or destroy a cycle. One advantage of our algorithm is that it avoids the step of creating a superposition over the \(k\)-simplices, which is a bottleneck of existing QTDA algorithms that restricts their utility to the clique-dense regime. In Section 4.3 and Section 4.4, we show that the query and time complexity of our span program algorithm for QTDA are parameterized by the maximum effective resistance and capacitance of cycles in \(\mathcal{K}\). In Section 4.6, we compare our algorithm with existing QTDA algorithms. The culmination of this section is the following theorem.
**Theorem 1.1**.: _Let \(\mathcal{K}\) be a simplicial complex. There is a quantum algorithm for computing the \(d\)th Betti number \(\beta_{d}\) of \(\mathcal{K}\) in time_
\[\tilde{O}\left(\left(\sqrt{\frac{\mathcal{R}_{\max}\mathcal{C}_{\max}}{\tilde {\lambda}_{\min}}}n_{0}+\sqrt{\mathcal{R}_{\max}n_{0}}\right)(n_{d}+n_{d+1}) \right),\]
_where_
* \(n_{i}\) _is the number of_ \(i\)_-simplices of_ \(\mathcal{K}\)_._
* \(\mathcal{R}_{\max}\) _is the maximum finite effective resistance_ \(\mathcal{R}_{\partial\sigma}(\mathcal{L})\) _of the boundary of any_ \(d\)_- or_ \((d+1)\)_-simplex_ \(\sigma\in\mathcal{K}\) _in any subcomplex_ \(\mathcal{L}\subset\mathcal{K}\)_._
* \(\mathcal{C}_{\max}\) _is the maximum finite effective capacitance_ \(\mathcal{C}_{\partial\sigma}(\mathcal{L},\mathcal{K})\) _of the boundary of any_ \(d\)_- or_ \((d+1)\)_-simplex_ \(\sigma\in\mathcal{K}\) _in any subcomplex_ \(\mathcal{L}\subset\mathcal{K}\)_._
* \(\tilde{\lambda}_{\min}\) _is the minimum spectral gap of the normalized up Laplacians_ \(\tilde{L}_{d-1}^{up}[\mathcal{K}]\) _and_ \(\tilde{L}_{d}^{up}[\mathcal{K}]\)_._
* In Section 5, we provide upper bounds on the maximum effective resistance in terms of the size of the simplicial complex and the maximal rank of the torsion subgroup of the simplicial complex, as well as looser upper bounds purely in terms of the size of the complex. These upper bounds show that the effective resistance can be at most exponentially large with respect to the size of the complex. We also provide similar upper bounds on effective capacitance for special cases. Finally, we provide families of simplicial complexes with cycles whose effective resistance or
effective capacitance is exponentially large, thus matching the upper bound up to the base of the exponent. This implies that our algorithm for QTDA can take exponentially long in the worst case. However, in the next paragraph, we will see that our results imply all other QTDA algorithms must run for exponential time as well.
* In Section 6, we show how the upper and lower bounds for effective resistance provide lower and upper bounds for the spectral gap of the combinatorial Laplacian, respectively; thus, the spectral gap is exponentially small in the worst case. Moreover, we show there are clique-dense complexes that achieve the worst-case spectral gap.

**Theorem 1.2**.: _Let \(K\) be a simplicial complex. Let \(n_{i}\) be the number of \(i\)-simplices of \(K\). Let \(n=\max\{\min\{n_{d-1},n_{d}\},\min\{n_{d},n_{d+1}\}\}\). Then the spectral gap \(\lambda_{\min}(L_{d}[K])\in\Omega\left(\frac{1}{n^{2}d^{n}}\right)\)._

**Theorem 1.3**.: _Let \(d,n\geq 1\). There are constants \(c_{d},\kappa_{d}\) that depend only on \(d\) and a \(d\)-dimensional simplicial complex \(\mathcal{C}_{d}^{n}\) with \(n_{d}=\Omega(\kappa_{d}\binom{n_{0}}{d})\) \(d\)-simplices such that the spectral gaps \(\lambda_{\min}(L_{d-1}[\mathcal{C}_{d}^{n}])\), \(\lambda_{\min}(L_{d}[\mathcal{C}_{d}^{n}])\in O(\frac{1}{c_{d}^{n_{d}}})\)._

This answers one of the most important questions in QTDA: how small can the spectral gap be? As all existing QTDA algorithms are parameterized by the inverse of the spectral gap of the combinatorial Laplacian \(\frac{1}{\lambda_{\min}}\), this implies that all existing QTDA algorithms need exponential time to exactly estimate Betti numbers.1 Additionally, the space complexity of some QTDA algorithms is parameterized by \(\log(\frac{1}{\lambda_{\min}})\), so these algorithms will need space proportional to the number of \((d-1)\)-, \(d\)-, or \((d+1)\)-simplices, rather than space proportional to the number of vertices.

Footnote 1: This exponential time complexity is not a result of the quantum nature of these algorithms. Some classical algorithms for computing Betti numbers are also parameterized by the inverse of the spectral gap, so these algorithms would need to run for exponential time as well [3, 23].
* We also prove some interesting formulas for effective resistance and capacitance that give intuition for these quantities. In Appendix B, we provide series and parallel formulas for effective resistance akin to the formulas for effective resistance in graphs. We also show that effective resistance satisfies a Rayleigh monotonicity property akin to effective resistance in graphs. Finally, in Appendix C, we show that effective resistance is dual to effective capacitance for embedded simplicial complexes.
### Related Work.
**History of QTDA.** Lloyd, Garnerone, and Zanardi (LGZ) introduced the first quantum algorithm for computing Betti numbers up to a multiplicative error [48]. Their algorithm works by estimating the eigenvalues of the combinatorial Laplacian, which is inspired by Friedman's classical algorithm for computing Betti numbers [23]. The LGZ algorithm has the advantage that its runtime is only polynomial with respect to the number of vertices, as opposed to the number of simplices like the matrix reduction algorithm. The trade-off is that this algorithm gains a dependence on the inverse of the spectral gap and the ratio of the number of simplices to the number of possible simplices. The LGZ algorithm performs best in the regime where the spectral gap of the combinatorial Laplacian is polynomially lower-bounded and the simplicial complex is clique-dense,
meaning it has close to the maximal number of simplices. Subsequent works have improved the LGZ algorithm in different ways but maintain a runtime dependence on the inverse of the spectral gap and the clique density [29, 50, 66].
Another line of QTDA research has been developing algorithms for _persistent_ Betti numbers. While the LGZ algorithm was initially claimed to be able to compute persistent Betti numbers, this was later disproved by Meijer [51] and Neumann and den Breeijen [55]. Hayakawa was the first to develop a quantum algorithm for computing persistent Betti numbers [33]. McArdle, Gilyen, and Berta have also developed algorithms for computing persistent Betti numbers [50].
**Hardness of Computing Betti Numbers.** In addition to new algorithms for computing Betti numbers, there have also been a number of works arguing computing Betti numbers is hard in general. Adamaszek and Stacho [2] show that determining if a simplicial complex has non-zero Betti number is NP-Hard when parameterized either by the number of vertices and the number of maximal simplices, or the number of vertices and number of minimal non-faces. Additionally, they show the problem is NP-Hard for clique complexes when parameterized by the number of vertices. Schmidhuber and Lloyd [61] show that computing Betti numbers of a clique complex is #P-Hard and estimating the Betti number up to a multiplicative constant is NP-Hard when parameterized by the number of vertices. Moreover, the hardness results of Schmidhuber and Lloyd hold for clique-dense clique complexes. This is an important restriction as the runtimes of LGZ and other QTDA algorithms are lowest for clique-dense complexes. Here, the assumptions on the input are vital. Computing Betti numbers is in \(P\) when parameterized by the number of all simplices in the complex. This does not contradict \(P\neq NP\) though, as the number of simplices can be exponentially large with respect to the number of vertices.
There have also been a number of works showing that problems related to computing Betti numbers are hard for the quantum computing complexity class DQC1. Crichigno and Kohler [12] showed that determining if the Betti number of a clique complex is nonzero is QMA\({}_{1}\)-Hard when parameterized by the number of vertices, and computing the Betti number of a clique complex is #BQP-Hard. Gyurik, Cade, and Dunjko [30] show that a generalization of Betti number estimation called _low-lying spectral density estimation (LLSD)_ is DQC1-Complete, suggesting that LLSD may be classically intractable. Cade and Crichigno [8] showed that estimating Betti numbers for general chain complexes (not just those arising from simplicial complexes) is also DQC1-complete.
**Lower Bounds on the Spectral Gap of the Combinatorial Laplacian.** All existing QTDA algorithms are parameterized by the inverse of the spectral gap of the combinatorial Laplacian. While we show the spectral gap can be exponentially small, there have also been a number of exact or expected lower bounds on the spectral gap of the combinatorial Laplacian for certain families of simplicial complexes [4, 23, 28, 44, 45, 46, 63, 68]. However, these bounds place non-trivial assumptions on the simplicial complex so should not be taken to represent general simplicial complexes.
## 2 Preliminaries.
**Algebraic Topology.** A _simplicial complex_\(\mathcal{K}\) on a set of vertices \(V\) is a subset of the power set \(\mathcal{K}\subseteq P(V)\) with the property that if \(\sigma\in\mathcal{K}\) and \(\tau\subset\sigma\) then \(\tau\in\mathcal{K}\). An element of \(\mathcal{K}\) is a _simplex_. A simplex \(\sigma\in\mathcal{K}\) of size \(|\sigma|=d+1\) is a _d-simplex_. The set of all \(d\)-simplices of \(\mathcal{K}\) is denoted \(\mathcal{K}_{d}\), and the number of \(d\)-simplices is denoted \(n_{d}=|\mathcal{K}_{d}|\). The _d-skeleton_ of \(\mathcal{K}\), denoted \(\mathcal{K}^{d}\), is the simplicial complex of all simplices of \(\mathcal{K}\) of dimension at most \(d\), i.e. \(\mathcal{K}^{d}=\cup_{i=0}^{d}\mathcal{K}_{i}\). The _dimension_
of \(\mathcal{K}\) is the largest \(d\) such that \(\mathcal{K}\) contains a \(d\)-simplex; a \(1\)-dimensional simplicial complex is a _graph_.
The _\(\boldsymbol{d^{th}}\) chain group_\(C_{d}(\mathcal{K})\) is the vector space over \(\mathbb{R}\) with orthonormal basis \(\mathcal{K}_{d}\). An element of \(C_{d}(\mathcal{K})\) is a _\(\boldsymbol{d}\)-chain_. Unless otherwise stated, all vectors and matrices will be in the basis \(\mathcal{K}_{d}\). For a chain \(f\in C_{d}(\mathcal{K})\), we denote its \(\sigma\) coordinate \(f(\sigma)\). Finally, the _support_ of a chain \(f\) is the set of simplices given a non-zero value by \(f\) and is denoted \(\operatorname{supp}(f)=\{\sigma_{i}\in\mathcal{K}_{d}\colon f(\sigma_{i})\neq 0\}\).
We assume there is a fixed but arbitrary order on the vertices \(V=(v_{1},\ldots,v_{n})\). Let \(\sigma=\{v_{i_{0}},\ldots,v_{i_{d}}\}\) be a \(d\)-simplex in \(\mathcal{K}\) with \(v_{i_{j}}\leq v_{i_{k}}\) whenever \(j\leq k\). The _boundary_ of \(\sigma\) is the \((d-1)\)-chain \(\partial\sigma=\sum_{j=0}^{d}(-1)^{j}\cdot(\sigma\setminus\{v_{i_{j}}\})\). The _\(\boldsymbol{d^{th}}\) boundary map_ is the linear map \(\partial_{d}:C_{d}(\mathcal{K})\to C_{d-1}(\mathcal{K})\) defined \(\partial_{d}f=\sum_{\sigma\in\mathcal{K}_{d}}f(\sigma)\partial\sigma\) where \(f(\sigma)\) denotes the component of \(f\) indexed by the simplex \(\sigma\). An element in \(\ker\partial_{d}\) is a _cycle_, and an element in \(\operatorname{im}\partial_{d}\) is a _boundary_ or a _null-homologous cycle_. See Figure 1. The boundary maps have the property that \(\partial_{d}\circ\partial_{d+1}=0\), so \(\operatorname{im}\partial_{d+1}\subset\ker\partial_{d}\). The _\(\boldsymbol{d^{th}}\) homology group_ is the quotient group \(H_{d}(\mathcal{K})=\ker(\partial_{d})/\operatorname{im}(\partial_{d+1})\). The _\(\boldsymbol{d^{th}}\) Betti number_ \(\beta_{d}\) is the dimension of \(H_{d}(\mathcal{K})\). The _\(\boldsymbol{d^{th}}\) coboundary map_ is the map \(\delta_{d}:=\partial_{d+1}^{T}:C_{d}(\mathcal{K})\to C_{d+1}(\mathcal{K})\). An element of \(\ker\delta_{d}\) is a _cocycle_, and an element in \(\operatorname{im}\delta_{d-1}\) is a _coboundary_. We will use the notation \(\partial[\mathcal{K}]\) and \(\delta[\mathcal{K}]\) when we want to specify the complex associated with the (co)boundary operator.
While our algorithms calculate homology with real coefficients, for some of our topological results, we will need to consider homology with integer coefficients. The _integral chain group_\(C_{d}(\mathcal{K},\mathbb{Z})\) is the free abelian group generated by the set of \(d\)-simplices \(\mathcal{K}_{d}\) whose elements are formal sums \(\sum_{\sigma_{i}\in\mathcal{K}_{d}}\alpha_{i}\sigma_{i}\) with coefficients \(\alpha_{i}\in\mathbb{Z}\). The integer homology groups are constructed in the same way as the real homology groups. We define boundary maps \(\partial_{d}:C_{d}(\mathcal{K};\mathbb{Z})\to C_{d-1}(\mathcal{K};\mathbb{Z})\) the same way as for the real chain groups, except now the boundary maps are group homomorphisms rather than linear maps. The _integral homology groups_ are the quotient groups \(H_{d}(\mathcal{K};\mathbb{Z})=\ker\partial_{d}/\operatorname{im}\partial_{d+1}\).
**Laplacians.** The _\(\boldsymbol{d^{th}}\) up Laplacian_ is \(L_{d}^{up}=\partial_{d+1}\delta_{d}\), the _\(\boldsymbol{d^{th}}\) down Laplacian_ is \(L_{d}^{down}=\delta_{d-1}\partial_{d}\), and the _\(\boldsymbol{d^{th}}\) (combinatorial) Laplacian_ is \(L_{d}=L_{d}^{up}+L_{d}^{down}\). The Laplacians define the following orthogonal decomposition of the \(d^{\text{th}}\) chain group \(C_{d}(\mathcal{K})\) called the _Hodge Decomposition_.
\[C_{d}(\mathcal{K})=\operatorname{im}L_{d}^{up}\oplus\operatorname{im}L_{d}^{down}\oplus\ker L_{d}=\operatorname{im}\partial_{d+1}\oplus\operatorname{im}\delta_{d-1}\oplus\ker L_{d}\]
where the second equality follows from the fact that \(\operatorname{im}AA^{T}=\operatorname{im}A\) for any matrix \(A\). We call the subspaces \(\operatorname{im}\partial_{d+1}\), \(\operatorname{im}\delta_{d-1}\), and \(\ker L_{d}\) the _boundary_, _coboundary_, and _harmonic spaces_. Arguably the fundamental theorem of the combinatorial Laplacian is the _Hodge Theorem_.
**Theorem 2.1** (Hodge Theorem, Eckmann [19]).: _The \(d^{\text{th}}\) harmonic space is isomorphic to the \(d^{\text{th}}\) homology group, i.e. \(\ker L_{d}\cong H_{d}(\mathcal{K})\)._
Therefore, the \(d^{\text{th}}\) Betti number can equivalently be computed from the rank of \(L_{d}\), as \(\beta_{d}=\dim\ker L_{d}=n_{d}-\operatorname{rank}L_{d}\), a fact used by many existing QTDA algorithms.
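As a concrete illustration, the following minimal sketch (our own toy example, assuming numpy is available; it is not from the paper) recovers \(\beta_{1}\) of a hollow triangle from the kernel of \(L_{1}\):

```python
import numpy as np

# Boundary matrix of the hollow triangle: rows are vertices {1,2,3},
# columns are the edges 12, 13, 23 (oriented from lower to higher vertex).
d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]], dtype=float)
d2 = np.zeros((3, 0))   # no 2-simplices, so the 2nd boundary map is empty

# Combinatorial Laplacian L_1 = L_1^up + L_1^down = d2 d2^T + d1^T d1.
L1 = d2 @ d2.T + d1.T @ d1

# By the Hodge Theorem, beta_1 = dim ker L_1 = n_1 - rank(L_1).
beta_1 = L1.shape[0] - np.linalg.matrix_rank(L1)
print(beta_1)   # 1: the hollow triangle encloses one 1-dimensional hole
```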
The following lemma gives several properties of the spectrum of the combinatorial Laplacian.
**Lemma 2.2**.: _Let \(\operatorname{spec}_{NZ}(A)\) denote the multiset of the non-zero eigenvalues of a linear operator \(A\). Let \(\mathcal{K}\) be a simplicial complex. Let \(d>0\) be a positive integer. Then_
1. _(Goldberg_ _[_24_, Lemma 4.1.8]__)_ \(\operatorname{spec}_{NZ}(L_{d}^{up})=\operatorname{spec}_{NZ}(L_{d+1}^{down})\)__
2. _(Goldberg_ _[_24_, Lemma 4.1.7]__)_ \(\operatorname{spec}_{NZ}(L_{d})=\operatorname{spec}_{NZ}(L_{d}^{up})\cup \operatorname{spec}_{NZ}(L_{d}^{down})\)__
3. _(Goldberg_ _[_24_, Lemma 4.2.3]__) If_ \(\mathcal{K}\) _has connected components_ \(\mathcal{K}_{1},\ldots,\mathcal{K}_{m}\)_, then_ \[\operatorname{spec}_{NZ}(L_{d}[\mathcal{K}])=\operatorname{spec}_{NZ}(L_{d}[ \mathcal{K}_{1}])\cup\cdots\cup\operatorname{spec}_{NZ}(L_{d}[\mathcal{K}_{m}])\]
_where all unions are multiset unions._
The up, down, and combinatorial Laplacian are all _positive-semidefinite_, meaning their eigenvalues are all non-negative [24]. The _spectral gap_ \(\lambda_{\min}(L_{d})\) is the smallest non-zero eigenvalue of the combinatorial Laplacian. Lemma 2.2 Part 2 implies the following corollary about the spectral gap of the combinatorial Laplacian.
**Corollary 2.3**.: _Let \(\mathcal{K}\) be a simplicial complex. Let \(d\) be a positive integer. Then_
\[\lambda_{\min}(L_{d}[\mathcal{K}])=\min\{\lambda_{\min}(L_{d}^{down}[ \mathcal{K}]),\,\lambda_{\min}(L_{d}^{up}[\mathcal{K}])\}\]
In Section 6, we discuss upper and lower bounds on the spectral gap. There are also known upper and lower bounds on the _largest_ eigenvalue of the combinatorial Laplacian.
**Theorem 2.4**.: _Let \(\mathcal{K}\) be a simplicial complex with \(n_{0}\) vertices. Let \(d\) be a natural number. Then the maximal eigenvalue of the combinatorial Laplacian \(\lambda_{\max}(L_{d})\leq n_{0}\)._
Proof.: Let \(\Delta_{n_{0}}\) be the complete complex on \(n_{0}\) vertices. The maximum eigenvalue of the \(d^{\text{th}}\) up Laplacian is \(\lambda_{\max}(L_{d}^{up}[\Delta_{n_{0}}])=n_{0}\) for any dimension \(d\) [26, Lemma 2.6]. Moreover, by the interlacing theorem of eigenvalues of the up Laplacian, \(\lambda_{\max}(L_{d}^{up}[\mathcal{K}])\leq\lambda_{\max}(L_{d}^{up}[\Delta_{n_{0}}])\) for any subcomplex \(\mathcal{K}\subset\Delta_{n_{0}}\) [34, Theorem 1.1]. The theorem follows as \(\lambda_{\max}(L_{d}[\mathcal{K}])=\max\{\lambda_{\max}(L_{d}^{up}[\mathcal{K}]),\lambda_{\max}(L_{d-1}^{up}[\mathcal{K}])\}\leq n_{0}\).
We also consider two variants of the up Laplacian: the weighted up Laplacian and the normalized up Laplacian. Let \(w:\mathcal{K}_{d+1}\to\mathbb{R}^{+}\) be a weight function on the (\(d+1\))-simplices. Let \(W:C_{d+1}(\mathcal{K})\to C_{d+1}(\mathcal{K})\) be the diagonal matrix with \(W_{\tau,\tau}=w(\tau)\). The _\(d^{\text{th}}\) weighted up Laplacian_ is \(L_{d}^{up,\,W}=\partial_{d+1}W\delta_{d}\). The _degree_ of a \(d\)-simplex \(\sigma\) is \(\deg(\sigma)=\sum_{\tau\in\mathcal{K}_{d+1}\,:\,\sigma\subset\tau}w(\tau)\). Let \(D:C_{d}(\mathcal{K})\to C_{d}(\mathcal{K})\) be the diagonal matrix with \(D_{\sigma,\sigma}=\deg(\sigma)\). The _\(d^{\text{th}}\) normalized up Laplacian_ is \(\tilde{L}_{d}^{up}=D^{-1/2}\partial_{d+1}W\delta_{d}D^{-1/2}\). The following lemma relates the spectral gap of the normalized and unnormalized Laplacians. A proof can be found in Appendix A.
**Lemma 2.5**.: _Let \(\mathcal{K}\) be a simplicial complex. Let \(d_{\min}\) and \(d_{\max}\) be the minimum and maximum degrees of any \(d\)-simplex in \(\mathcal{K}\). Suppose that \(d_{\min}>0\). The normalized and unnormalized spectral gap are related as follows:_
\[\frac{1}{d_{\max}}\lambda_{\min}(L_{d}^{up})\leq\lambda_{\min}(\tilde{L}_{d}^{ up})\leq\frac{1}{d_{\min}}\lambda_{\min}(L_{d}^{up})\]
**Pseudoinverse of a Linear Map.** Let \(A:\mathbb{R}^{n}\to\mathbb{R}^{m}\) be a rank \(k\) linear operator with singular value decomposition \(A=\sum_{i=1}^{k}\sigma_{i}u_{i}v_{i}^{T}\). The _pseudoinverse_ of \(A\) is the linear operator \(A^{+}:\mathbb{R}^{m}\to\mathbb{R}^{n}\) defined \(A^{+}=\sum_{i=1}^{k}\sigma_{i}^{-1}v_{i}u_{i}^{T}\). While this is the most compact definition of the pseudoinverse, it is not the most informative. Equivalently, the _pseudoinverse_ of \(A\) is the unique linear operator with the following properties: (1) \(A^{+}\) maps each vector \(x\in\operatorname{im}A\) to the unique vector \(y\in\operatorname{im}A^{T}\) such that \(Ay=x\) and (2) \(A^{+}\) maps each vector in \((\operatorname{im}A)^{\perp}\) to \(0\). The following are well-known properties of the pseudoinverse that follow from these definitions.
**Lemma 2.6**.: _Let \(A:\mathbb{R}^{m}\to\mathbb{R}^{n}\) be a linear map._
1. \((AA^{T})^{+}=(A^{T})^{+}A^{+}\)_._
2. _For_ \(x\in\operatorname{im}A\)_,_ \(A^{+}x=\arg\min\{\|y\|:Ay=x\}\)__
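As a quick numerical sanity check of Lemma 2.6 Part 2 (a sketch of our own, assuming numpy), one can verify that \(A^{+}x\) agrees with the minimum-norm least-squares solution of \(Ay=x\):

```python
import numpy as np

# Check: for x in im A, A^+ x is the minimum-norm solution of Ay = x.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))          # a wide (underdetermined) map
x = A @ rng.standard_normal(6)           # guarantees x lies in im A

y_pinv = np.linalg.pinv(A) @ x           # A^+ x
y_lstsq, *_ = np.linalg.lstsq(A, x, rcond=None)  # min-norm solution of Ay = x

assert np.allclose(A @ y_pinv, x)        # A^+ x is indeed mapped back to x
assert np.allclose(y_pinv, y_lstsq)      # and it is the minimum-norm solution
```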
**Bra-Ket Notation.** When discussing quantum algorithms, we will use _bra-ket notation_ for vectors. As this paper may also be of interest to topologists who may be unfamiliar with this notation, we introduce bra-ket notation now. Assuming a fixed basis for a finite-dimensional vector space, a _bra_ is a row vector represented by the notation \(\langle v|\). A _ket_ is a column vector represented by the notation \(|v\rangle\). Using bras and kets, we can represent an inner product as \(\langle u|v\rangle\), an outer product as \(|u\rangle\langle v|\), or a tensor product as \(|u\rangle|v\rangle\).
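For readers more comfortable with matrix notation, the correspondence is direct (a small sketch of our own, assuming numpy):

```python
import numpy as np

u = np.array([1.0, 0.0])   # the ket |u>, a column vector
v = np.array([0.6, 0.8])   # the ket |v>

inner = u @ v              # <u|v>, a scalar
outer = np.outer(u, v)     # |u><v|, a rank-1 matrix
tensor = np.kron(u, v)     # |u>|v>, a vector in the tensor product space
```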
## 3 The Incremental Algorithm for Computing Betti Numbers.
In this section, we review the incremental algorithm for computing Betti numbers introduced by Delfinado and Edelsbrunner [13]. The incremental algorithm is a generic framework for computing Betti numbers based around the primitive of _null-homology testing_. See Figure 1.
**Problem** (Null-Homology Testing).: _Given a simplicial complex \(\mathcal{K}\) and a cycle \(\gamma\) in \(\mathcal{K}\), determine if \(\gamma\) is null-homologous._
The _Incremental Algorithm for Computing Betti Numbers_ computes \(\beta_{d}\) by testing if the boundaries of simplices are null-homologous. Specifically, the incremental algorithm incrementally adds \(d\)-simplices to the simplicial complex and then performs a null-homology test on their boundaries to see how the dimensions of the spaces \(\ker\partial_{d}\) and \(\operatorname{im}\partial_{d+1}\) change.
To see how this works, fix an order on the \(d\)-simplices \(\mathcal{K}_{d}=\{\sigma_{1},\ldots,\sigma_{n_{d}}\}\), and then iteratively add each simplex \(\sigma_{i}\) in increasing order of \(i\). Adding the simplex \(\sigma_{i}\) will either increase the dimension of \(\ker\partial_{d}\) or of \(\operatorname{im}\partial_{d}\) by \(1\). If \(\partial\sigma_{i}\) is null-homologous in \(\mathcal{K}^{d-1}\cup\{\sigma_{1},\ldots,\sigma_{i-1}\}\), then adding \(\sigma_{i}\) will increase the dimension of \(\ker\partial_{d}\) by \(1\), which also increases \(\beta_{d}\) by \(1\). If not, then adding \(\sigma_{i}\) will increase the dimension of \(\operatorname{im}\partial_{d}\) by \(1\), which also decreases \(\beta_{d-1}\) by \(1\).2 The incremental algorithm is summarized in Algorithm 1.
Footnote 2: In their original paper on the incremental algorithm [13], Delfinado and Edelsbrunner phrase this slightly differently as testing “whether \(\sigma_{i}\) is in [the support of] a cycle.” It is straightforward to verify that \(\sigma_{i}\) is in the support of a cycle if and only if \(\partial\sigma_{i}\) is null-homologous.
In the classical matrix reduction algorithm for computing Betti numbers, testing whether \(\partial\sigma_{i}\) is null-homologous is done by reducing the column corresponding to \(\sigma_{i}\) in the boundary matrix, which takes \(O(n_{d-1}n_{d})\) time in the worst case. However, there are special cases where null-homology testing can be performed much more quickly. For example, when a simplicial complex is embedded in \(\mathbb{R}^{3}\) or the \(3\)-sphere, null-homology testing can be performed in nearly-linear time using the
Figure 1: Left: The \(1\)-cycle is null-homologous as it is the boundary of the pictured \(2\)-chain. Right: The \(1\)-cycle is not null-homologous as it is not the boundary of any \(2\)-chain. Coefficients on colored simplices are \(\pm 1\). Orientations on the simplices have been omitted for simplicity.
union-find algorithm [13]. In the next section, we give a quantum algorithm for null-homology testing.
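To make the framework concrete, here is a classical sketch of the incremental algorithm (our own illustration, assuming numpy), where real-coefficient null-homology testing via least squares stands in for the quantum subroutine of the next section:

```python
import numpy as np

def is_null_homologous(gamma, B):
    """Over R, gamma is null-homologous iff it lies in the column span
    of the current boundary matrix B."""
    if B.shape[1] == 0:
        return not np.any(gamma)
    f, *_ = np.linalg.lstsq(B, gamma, rcond=None)
    return np.allclose(B @ f, gamma)

def incremental_betti(Bd, Bd1):
    """Generic incremental algorithm for beta_d. Column i of Bd is the
    boundary of the i-th d-simplex; column j of Bd1 is the boundary of
    the j-th (d+1)-simplex."""
    beta = 0
    for i in range(Bd.shape[1]):
        # A d-simplex whose boundary is already null-homologous creates
        # a new d-cycle, so beta_d increases.
        if is_null_homologous(Bd[:, i], Bd[:, :i]):
            beta += 1
    for j in range(Bd1.shape[1]):
        # A (d+1)-simplex whose boundary is not yet null-homologous fills
        # in a d-cycle, so beta_d decreases.
        if not is_null_homologous(Bd1[:, j], Bd1[:, :j]):
            beta -= 1
    return beta
```

For example, with `Bd` the vertex-edge boundary matrix of a hollow triangle and `Bd1 = np.zeros((3, 0))`, the function returns \(\beta_{1}=1\).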
## 4 A Quantum Algorithm for Null-Homology Testing.
In this section, we provide a quantum algorithm based on the _span program_ model to decide whether or not a cycle \(\gamma\) is null-homologous in a simplicial complex \(\mathcal{K}\).
Our algorithm is a generalization of the quantum algorithm developed by Belovs and Reichardt to decide \(st\)-connectivity in a graph [5]. Their algorithm is parameterized by the effective resistance and capacitance between the vertices \(s\) and \(t\). The query complexity of our algorithm is parameterized by higher-dimensional analogues of effective resistance and capacitance of \(\gamma\) that we introduce in Section 4.3.1.
Upper bounds on the effective resistance and capacitance in graphs imply a query complexity of \(O(n^{3/2})\) for \(st\)-connectivity, where \(n\) is the number of vertices [37]. In Section 5, we provide upper bounds on the effective resistance and capacitance. Our upper bounds on effective resistance and capacitance imply that the query complexity is polynomial in both the number of \(d\)-simplices and the cardinality of the largest torsion subgroup of a relative homology group of \(\mathcal{K}\). In the case that \(\mathcal{K}\) is a graph, our analysis of the witness sizes matches the \(O(n^{3/2})\) upper bounds of previous analyses. Specifically, under the assumptions that \(\mathcal{K}\) is relative torsion free and that \(\gamma\) is the boundary of a \(d\)-simplex (which may or may not be included in the complex), we match the \(O(n^{3/2})\) upper bound. These assumptions are always true for \(st\)-connectivity in graphs, which is why we match the query complexity for this problem. However, in Section 5.2, we provide examples of simplicial complexes where the effective resistance or capacitance of \(\gamma\) is exponentially large.
### A Brief Introduction to Span Programs.
Span programs were first defined by Karchmer and Wigderson [40] and were first used for quantum algorithms by Reichardt and Spalek [59]. Intuitively, a span program is a model of computation which encodes a boolean function \(f\colon\{0,1\}^{n}\to\{0,1\}\) into the geometry of two vector spaces and a linear operator between them. Encoding \(f\) into a span program implies the existence of a quantum query algorithm evaluating \(f\) (Theorem 4.1).
**Definition 1**.: _A **span program**\(\mathcal{P}=(\mathcal{H},\mathcal{U},|\tau\rangle,A)\) over the set of strings \(\{0,1\}^{n}\) is a 4-tuple consisting of:_
1. _A finite dimensional Hilbert space_ \(\mathcal{H}=\mathcal{H}_{1}\oplus\cdots\oplus\mathcal{H}_{n}\) _where_ \(\mathcal{H}_{i}=\mathcal{H}_{i,0}\oplus\mathcal{H}_{i,1}\)_,_
2. _a vector space_ \(\mathcal{U}\)_,_
3. _a non-zero vector_ \(|\tau\rangle\in\mathcal{U}\)_, called the **target vector**,_
4. _a linear operator_ \(A\colon\mathcal{H}\to\mathcal{U}\)_._
_For every string \(x=(x_{1},\ldots,x_{n})\in\{0,1\}^{n}\) we associate the Hilbert space \(\mathcal{H}(x)=\mathcal{H}_{1,x_{1}}\oplus\cdots\oplus\mathcal{H}_{n,x_{n}}\) and the linear operator \(A(x)=A\Pi_{\mathcal{H}(x)}\colon\mathcal{H}\to\mathcal{U}\) where \(\Pi_{\mathcal{H}(x)}\) is the projection of \(\mathcal{H}\) onto \(\mathcal{H}(x)\). A string \(x\in\{0,1\}^{n}\) is a **positive instance** if \(|\tau\rangle\in\operatorname{im}A(x)\) and is a **negative instance** otherwise._
A span program \(\mathcal{P}\)_decides_ the function \(f\colon\{0,1\}^{n}\to\{0,1\}\) if \(f(x)=1\) when \(x\) is a positive instance and \(f(x)=0\) when \(x\) is a negative instance. A span program can also evaluate a partial boolean function \(g\colon D\to\{0,1\}\) where \(D\subset\{0,1\}^{n}\) by the same criteria.
Span programs are a popular method in quantum computing because there are upper bounds on the complexity of evaluating span programs in the _query model_. The query model evaluates the complexity of a quantum algorithm by its _query complexity_, the number of times it queries an input oracle. In our case, the input oracle returns the bits of the binary string \(x\). The _input oracle_ \(\mathcal{O}_{x}\) acts as \(\mathcal{O}_{x}:|i\rangle|b\rangle\to|i\rangle|b\oplus x_{i}\rangle\) where \(i\in[n]\). Observe that the states \(|i\rangle\) can be stored on \(\lceil\log n\rceil\) qubits. Reichardt [60] showed that the query complexity of a span program is a function of the positive and negative witness sizes of the program, which we now define.
**Definition 2**.: _Let \(\mathcal{P}\) be a span program and let \(x\in\{0,1\}^{n}\). A **positive witness** for \(x\) is a vector \(|w\rangle\in\mathcal{H}(x)\) such that \(A|w\rangle=|\tau\rangle\). The **positive witness size** of \(x\) is_
\[w_{+}(x,\mathcal{P})=\min\{\||w\rangle\|^{2}:|w\rangle\in\mathcal{H}(x),\,A|w \rangle=|\tau\rangle\}.\]
_If no positive witness exists for \(x\), then \(w_{+}(x,\mathcal{P})=\infty\)._
_A **negative witness** for \(x\) is a linear map \(\langle w|:\mathcal{U}\to\mathbb{R}\) such that \(\langle w|A\Pi_{\mathcal{H}(x)}=0\) and \(\langle w|\tau\rangle=1\). The **negative witness size** of \(x\) is_
\[w_{-}(x,\mathcal{P})=\min\{\|\langle w|A\|^{2}:\langle w|:\mathcal{U}\to\mathbb{ R},\,\langle w|A\Pi_{\mathcal{H}(x)}=0,\,\langle w|\tau\rangle=1\}.\]
_If no negative witness exists for \(x\), then \(w_{-}(x,\mathcal{P})=\infty\)._
**Theorem 4.1** (Reichardt [60]).: _Let \(D\subset\{0,1\}^{n}\) and \(f:D\to\{0,1\}\). Let \(\mathcal{P}\) be a span program that decides \(f\). Let \(W_{+}(f,\mathcal{P})=\max_{x\in f^{-1}(1)}w_{+}(x,\mathcal{P})\) and \(W_{-}(f,\mathcal{P})=\max_{x\in f^{-1}(0)}w_{-}(x,\mathcal{P})\). There is a bounded error quantum algorithm that decides \(f\) with query complexity \(O\left(\sqrt{W_{+}(f,\mathcal{P})W_{-}(f,\mathcal{P})}\right)\)._
A caveat to the query complexity model is that in general the time complexity of an algorithm can be much larger than its query complexity.
### A Span Program for Null-Homology Testing.
In this section, we present a span program for testing if a cycle is null-homologous in a simplicial complex. This span program is a generalization of the span program for \(st\)-connectivity defined in [40] and used to develop quantum algorithms in [5, 9, 37, 38].
Let \(\mathcal{K}\) be a \(d\)-dimensional simplicial complex. Let \(\gamma\in C_{d-1}(\mathcal{K})\) be a \((d-1)\)-cycle. Let \(n_{d}\) be the number of \(d\)-simplices in \(\mathcal{K}\). Order the \(d\)-simplices \(\{\sigma_{1},\ldots,\sigma_{n_{d}}\}\). Let \(w\colon\mathcal{K}_{d}\to\mathbb{R}^{+}\) be a weight function on the \(d\)-simplices, and let \(W:C_{d}(\mathcal{K})\to C_{d}(\mathcal{K})\) be the diagonal weight matrix. We define a span program over the strings \(\{0,1\}^{n_{d}}\) as follows.
1. \(\mathcal{H}=C_{d}(\mathcal{K})\), with \(\mathcal{H}_{i,1}=\mathrm{span}\{|\sigma_{i}\rangle\}\) and \(\mathcal{H}_{i,0}=\{0\}\).
2. \(\mathcal{U}=C_{d-1}(\mathcal{K})\)
3. \(A=\partial_{d}\sqrt{W}\colon C_{d}(\mathcal{K})\to C_{d-1}(\mathcal{K})\)
4. \(|\tau\rangle=\gamma\)
We denote the above span program by \(\mathcal{P}_{\mathcal{K}}\). Let \(x\in\{0,1\}^{n_{d}}\) be a binary string. We define the subcomplex \(\mathcal{K}(x)\coloneqq\mathcal{K}^{d-1}\cup\{\sigma_{i}:x_{i}=1\}\); that is, \(\mathcal{K}(x)\) contains the \(d\)-simplices \(\sigma_{i}\) such that \(x_{i}=1\). There exists a solution \(v\) to the linear system \(\partial_{d}\sqrt{W}\Pi_{\mathcal{K}(x)}v=\gamma\) if and only if the cycle \(\gamma\) is null-homologous in \(\mathcal{K}(x)\) if and only if \(x\) is a positive instance of \(\mathcal{P}_{\mathcal{K}}\). The span program \(\mathcal{P}_{\mathcal{K}}\) decides the boolean function \(f:\{0,1\}^{n_{d}}\to\{0,1\}\) where \(f(x)=1\) if and only if \(\gamma\) is a null-homologous cycle in the subcomplex \(\mathcal{K}(x)\).
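Classically, the positive-instance condition is just a linear-system feasibility check; the following sketch (ours, assuming numpy) makes the roles of \(A\), \(W\), and \(\mathcal{K}(x)\) explicit:

```python
import numpy as np

def is_positive_instance(bd_d, W, gamma, x):
    """x is a 0/1 vector over the d-simplices; K(x) keeps column i iff
    x[i] = 1. x is a positive instance iff gamma lies in im A(x)."""
    A = bd_d @ np.sqrt(W)                   # A = partial_d sqrt(W)
    Ax = A[:, np.asarray(x, dtype=bool)]    # A(x) = A Pi_{H(x)}
    if Ax.shape[1] == 0:
        return not np.any(gamma)
    w, *_ = np.linalg.lstsq(Ax, gamma, rcond=None)
    return np.allclose(Ax @ w, gamma)       # a positive witness exists
```

For instance, on the unweighted triangle graph (edges \(12,13,23\)) with \(\gamma=t-s\) for \(s=1\), \(t=2\), the string \(x=(1,1,0)\) is a positive instance while \(x=(0,1,0)\) is not, matching \(st\)-connectivity.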
Theorem 4.1 allows us to bound the query complexity of our span program by the size of positive and negative witness. In the next section, we provide bounds on the positive and negative witness size of our span program.
### Witness Sizes of the Null-Homology Testing Span Program.
In this section, we bound the positive and negative witness sizes of our span program for null-homology testing. We will show that they are equal to the quantities called the effective resistance and effective capacitance of the cycle. We first introduce these quantities and show some of their properties. Then, in Section 4.3.4, we show that these quantities are the witness sizes of our span program.
#### 4.3.1 Background: Effective Resistance and Effective Capacitance.
Let \(\gamma\in C_{d-1}(\mathcal{K})\) be a cycle in a simplicial complex. We associate two quantities with \(\gamma\): its _effective resistance_ and _effective capacitance_. The effective resistance is finite if and only if \(\gamma\) is null-homologous, and the effective capacitance is finite if and only if \(\gamma\) is not null-homologous. We begin with the definition of effective resistance.
**Definition 3**.: _Let \(\mathcal{K}\) be a simplicial complex with weight function \(w:\mathcal{K}\to\mathbb{R}^{+}\). Let \(\gamma\) be a \((d{-}1)\)-cycle in \(\mathcal{K}\). The **effective resistance** of \(\gamma\) is_
\[\mathcal{R}_{\gamma}(\mathcal{K},W)=\begin{cases}\gamma^{T}\left(L_{d-1}^{up,\,W}\right)^{+}\gamma&\text{if $\gamma$ is null-homologous}\\ \infty&\text{otherwise}\end{cases}\]
_When obvious or when \(\mathcal{K}\) is unweighted, we drop the weights from the notation and write \(\mathcal{R}_{\gamma}(\mathcal{K})\)._
This definition of effective resistance is consistent with effective resistance in graphs (see [62]) and other definitions of effective resistance in simplicial complexes [42, 57, 31].3 However, this definition gives little intuition about effective resistance. We now prove there is an alternative definition of effective resistance in terms of chains with boundary \(\gamma\). We begin with two definitions.
**Definition 4**.: _Given a \(d\)-dimensional simplicial complex \(\mathcal{K}\) and a \((d-1)\)-dimensional null-homologous cycle \(\gamma\), a **unit \(\boldsymbol{\gamma}\)-flow** is a \(d\)-chain \(f\in C_{d}(\mathcal{K})\) such that \(\partial f=\gamma\)._
In the case of graphs, a unit \(st\)-flow is a flow sending \(1\) unit of flow from \(s\) to \(t\).
**Definition 5**.: _Given a \(d\)-dimensional simplicial complex \(\mathcal{K}\) with weight function \(w:C_{d}(\mathcal{K})\to\mathbb{R}^{+}\) and a unit \(\gamma\)-flow \(f\), the **flow energy** of \(f\) on \(\mathcal{K}\) is_
\[\mathsf{J}(f)=\sum_{\sigma\in\mathcal{K}_{d}}\frac{f(\sigma)^{2}}{w(\sigma)}=f^{T}W^{-1}f\]
_where \(W\) is the \(n_{d}\times n_{d}\) diagonal matrix whose entries are the weights of the \(d\)-simplices._
We will now relate unit \(\gamma\)-flows and their energy to effective resistance. This generalizes a formula for effective resistance in graphs [6, Chapter IX Corollary 6].
**Lemma 4.2**.: _Let \(\mathcal{K}\) be a simplicial complex and let \(\gamma\) be a null-homologous \((d-1)\)-cycle. The effective resistance of \(\gamma\) is the minimum flow energy over all unit \(\gamma\)-flows, i.e._
\[\mathcal{R}_{\gamma}(\mathcal{K})=\min\{\mathsf{J}(f)\mid\partial f=\gamma\}\]
Proof.: Our first observation is that we can factor the weighted Laplacian as
\[L_{d-1}^{up,\,W} =\partial_{d}W\delta_{d-1}\] \[=\partial_{d}W^{1/2}W^{1/2}\delta_{d-1}\] \[=(\partial_{d}W^{1/2})(\partial_{d}W^{1/2})^{T}\]
By Lemma 2.6 Part 1, \((L_{d-1}^{up,\,W})^{+}=((\partial_{d}W^{1/2})^{T})^{+}(\partial_{d}W^{1/2})^{+}\). Therefore,
\[\mathcal{R}_{\gamma}(\mathcal{K})=\gamma^{T}((\partial_{d}W^{1/2})^{T})^{+}(\partial_{d}W^{1/2})^{+}\gamma=\|(\partial W^{1/2})^{+}\gamma\|^{2}.\]
By Lemma 2.6 Part 2, \(\mathcal{R}_{\gamma}(\mathcal{K})\) is the minimum squared-norm of a vector that \(\partial_{d}W^{1/2}\) maps to \(\gamma\). Let \(f=(\partial W^{1/2})^{+}\gamma\); the vector \(f\) is the unit \(\gamma\)-flow of minimum flow energy, which we now prove.
A vector \(v\) is mapped to \(\gamma\) by \(\partial W^{1/2}\) iff \(W^{1/2}v\) is mapped to \(\gamma\) by \(\partial\) as \(W^{1/2}\) is a bijection; that is all to say, \(W^{1/2}v\) is a unit \(\gamma\)-flow. Moreover, the flow energy of \(W^{1/2}v\) is
\[\mathsf{J}(W^{1/2}v) =(W^{1/2}v)^{T}W^{-1}W^{1/2}v\] \[=v^{T}W^{1/2}W^{-1}W^{1/2}v\] \[=v^{T}v\] \[=\|v\|^{2}\]
Therefore, the minimum flow energy of a unit \(\gamma\)-flow is the minimum squared-norm of a vector that \(\partial W^{1/2}\) maps to \(\gamma\), which we previously saw was \(\mathcal{R}_{\gamma}(\mathcal{K})\).
We call \((\partial W^{1/2})^{+}\gamma\) the _minimum-energy_ unit \(\gamma\)-flow.4
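For intuition, the two characterizations can be checked numerically on the triangle graph (a sketch of our own, assuming numpy), where for \(d=1\) and \(\gamma=t-s\) the quantity \(\mathcal{R}_{\gamma}\) is the familiar graph effective resistance:

```python
import numpy as np

# Triangle graph on vertices {1,2,3}; columns of d1 are edges 12, 13, 23.
d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]], dtype=float)
gamma = np.array([-1.0, 1.0, 0.0])           # the 0-cycle t - s, s = 1, t = 2

# Definition 3: R_gamma = gamma^T (L_0^up)^+ gamma (unweighted, so W = I).
L0_up = d1 @ d1.T
R_def = gamma @ np.linalg.pinv(L0_up) @ gamma

# Lemma 4.2: minimum flow energy of a unit gamma-flow, here ||d1^+ gamma||^2.
f = np.linalg.pinv(d1) @ gamma
R_flow = f @ f

print(R_def, R_flow)   # both 2/3: parallel s-t paths of resistance 1 and 2
```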
Some of the key properties of effective resistance in graphs are the series and parallel formulas and Rayleigh Monotonicity. In Appendix B, we prove analogous results for higher-dimensional effective resistance.
While effective resistance has previously been generalized from graphs to simplicial complexes [31, 42, 57], to our knowledge, we are the first to generalize effective capacitance from graphs to simplicial complexes. Unfortunately, effective capacitance is more opaque than effective resistance, both in graphs and simplicial complexes. The definition of effective capacitance is less intuitive than the definition for effective resistance, and there are fewer results about effective capacitance in graphs than effective resistance.
Before defining effective capacitance in simplicial complexes, we review the definition of effective capacitance in graphs, which can be found in [37]. Let \(G\) be a graph such that \(s\) and \(t\) are connected in \(G\), and let \(H\subseteq G\) be a subgraph such that \(s\) and \(t\) are not connected in \(H\). A _unit \(st\)-potential_ is a function \(p\colon V(G)\to\mathbb{R}\) such that \(p(t)=1\), \(p(s)=0\), and \(p(u)=p(v)\) for any two vertices \(u,v\) in the same connected component. The _potential energy_ of \(p\) is \(\sum_{\{u,v\}\in E(G)}(p(u)-p(v))^{2}\). The _effective capacitance_ of \(s\) and \(t\) is the minimum potential energy of any \(st\)-potential.
Our definition of effective capacitance in simplicial complexes will be analogous to the definition in graphs; namely, the effective capacitance of a cycle \(\gamma\) will be the minimum energy of a unit \(\gamma\)-potential.
**Definition 6**.: _Let \(\mathcal{L}\) be a simplicial complex, and let \(\gamma\in C_{d-1}(\mathcal{L})\) be a \((d{-}1)\)-cycle that is not null-homologous in \(\mathcal{L}\). A **unit \(\gamma\)-potential** in \(\mathcal{L}\) is a \((d{-}1)\)-chain \(p\) such that \(\delta_{d-1}[\mathcal{L}]p=0\) and \(p^{T}\gamma=1\)._
Figure 2 shows a \(\gamma\)-potential in a simplicial complex.
**Definition 7**.: _Given simplicial complexes \(\mathcal{L}\subset\mathcal{K}\) with weight function \(w:C_{d}(\mathcal{K})\to\mathbb{R}\) and a \(\gamma\)-potential \(p\) in \(\mathcal{L}\), the **potential energy** of \(p\) on \(\mathcal{K}\) is_
\[\mathcal{J}(p)=\sum_{\sigma\in\mathcal{K}_{d}}(\delta[\mathcal{K}]p)(\sigma)^{2}\,w(\sigma)=(\delta[\mathcal{K}]p)^{T}W(\delta[\mathcal{K}]p).\]
**Definition 8**.: _Let \(\mathcal{L}\subset\mathcal{K}\) be simplicial complexes, and let \(\gamma\in C_{d-1}(\mathcal{L})\) be a \((d-1)\)-cycle that is null-homologous in \(\mathcal{K}\). The **effective capacitance** of \(\gamma\) in \(\mathcal{L}\) and \(\mathcal{K}\) is_
\[\mathcal{C}_{\gamma}(\mathcal{L},\mathcal{K})=\begin{cases}\min_{p\text{ a unit }\gamma\text{-potential}}\mathcal{J}(p)&\text{if }\gamma\text{ is not null-homologous in }\mathcal{L}\\ \infty&\text{if }\gamma\text{ is null-homologous in }\mathcal{L}\end{cases}\]
Figure 2: Left: A \(1\)-cycle \(\gamma\) with \(\pm 1\) coefficients on the blue edges. Right: A unit \(\gamma\)-potential \(p\) with \(\pm 1\) coefficients on the red edges. If this complex is unweighted, then the potential energy of \(p\) is \(1\). It can be proved that \(p\) is a minimal-energy unit \(\gamma\)-potential,5 so \(\mathcal{C}_{\gamma}(\mathcal{L},\mathcal{K})=1\).
Our definition of effective capacitance in simplicial complexes matches the definition of effective capacitance in graphs; however, this may not be obvious at first glance, as our definition of \(st\)-potential is more general. A function \(p:V(G)\to\mathbb{R}\) is equivalent to a \(0\)-chain \(p\in C_{0}(H)\), and the requirement that \(p(u)=p(v)\) for any two vertices \(u,v\) in the same connected component is equivalent to saying \(\delta_{0}[H]p=0\); however, not all chains \(p\) such that \(p^{T}(t-s)=1\) satisfy \(p(t)=1\) and \(p(s)=0\). (For example, it could be the case that \(p(t)=\frac{1}{2}\) and \(p(s)=-\frac{1}{2}\).) This difference in the definition ends up not mattering though. This is because the all-\(1\)s vector \(\mathbf{1}\in\ker\delta_{0}\) for any graph. Using this fact, we can see that for any \(st\)-potential \(p\) under our definition, there is an \(st\)-potential \(p^{\prime}\) under the previous definition with the same potential energy, namely the potential \(p^{\prime}=p-p(s)\mathbf{1}\).
There is one small detail left to show. It is not obvious from the definition that a unit \(\gamma\)-potential will even exist for \(\gamma\). We prove this in the following lemma.
**Lemma 4.3**.: _Let \(\mathcal{L}\) be a simplicial complex, and let \(\gamma\in C_{d-1}(\mathcal{L})\) be a cycle. Then there exists a unit \(\gamma\)-potential in \(\mathcal{L}\) if and only if \(\gamma\) is not null-homologous in \(\mathcal{L}\)._
Proof.: Observe that \(\ker\delta_{d-1}[\mathcal{L}]=(\operatorname{im}\partial_{d}[\mathcal{L}])^{\perp}\) as \(\delta_{d-1}[\mathcal{L}]=\partial_{d}[\mathcal{L}]^{T}\). Assume there is a \(\gamma\)-potential \(p\) in \(\mathcal{L}\). As \(\delta[\mathcal{L}]p=0\), then \(p\in\ker\delta_{d-1}[\mathcal{L}]=(\operatorname{im}\partial_{d}[\mathcal{L}] )^{\perp}\). As \(\gamma^{T}p=1\) we see that \(\gamma\) has a non-zero component in \((\operatorname{im}\partial_{d}[\mathcal{L}])^{\perp}\), so \(\gamma\not\in\operatorname{im}\partial_{d}[\mathcal{L}]\).
Alternatively, suppose that \(\gamma\) is not null-homologous in \(\mathcal{L}\). Then \(\gamma\) has a non-zero component in \((\operatorname{im}\partial_{d}[\mathcal{L}])^{\perp}=\ker\delta_{d-1}[ \mathcal{L}]\). Let \(q=\Pi_{\ker\delta[\mathcal{L}]}\gamma\), where \(\Pi_{\ker\delta[\mathcal{L}]}\) is the projection operator onto \(\ker\delta_{d-1}[\mathcal{L}]\). Then \(\gamma^{T}q\neq 0\) and \(\delta_{d-1}[\mathcal{L}]q=0\). The vector \(q\) is not necessarily a unit \(\gamma\)-potential as it is not necessarily the case that \(\gamma^{T}q=1\), but the scaled vector \(p=\frac{1}{\gamma^{T}q}q\) is a unit \(\gamma\)-potential.
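The proof of Lemma 4.3 is constructive, and the construction is easy to carry out numerically (a sketch of our own, assuming numpy). Here \(\mathcal{K}\) is the triangle graph, \(\mathcal{L}\) keeps only the edge \(13\), and \(\gamma=t-s\) for \(s=1\), \(t=2\):

```python
import numpy as np

delta_L = np.array([[-1.0, 0.0, 1.0]])    # coboundary delta_0[L]: one edge, 13
delta_K = np.array([[-1.0, 1.0, 0.0],     # coboundary delta_0[K]: edges 12,
                    [-1.0, 0.0, 1.0],     # 13, and 23
                    [ 0.0, -1.0, 1.0]])
gamma = np.array([-1.0, 1.0, 0.0])

# q = projection of gamma onto ker delta_0[L]; rescale so p^T gamma = 1.
P_ker = np.eye(3) - np.linalg.pinv(delta_L) @ delta_L
q = P_ker @ gamma
p = q / (gamma @ q)                       # a unit gamma-potential

assert np.allclose(delta_L @ p, 0) and np.isclose(gamma @ p, 1.0)
energy = np.sum((delta_K @ p) ** 2)       # potential energy of p in K
print(energy)                             # 2.0, which is C_gamma(L, K) here
```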
One interesting property of effective resistance and capacitance in graphs is that, in planar graphs, the effective resistance between certain pairs of nodes in the dual graph equals the effective capacitance between certain pairs of nodes in the primal graph. In Appendix C, we show that an analogous property holds for higher-dimensional embedded simplicial complexes.
#### 4.3.2 Effective Resistance and the Spectral Gap.
In this section, we give a characterization of the spectral gap of the combinatorial Laplacian in terms of the effective resistance of a cycle. While the proof of this lemma follows from some simple linear algebra, the advantage of this theorem comes down to the fact that effective resistance is easier to work with than eigenvectors of the Laplacian (in our opinion). We first relate effective resistance to the spectral gap of the up Laplacian. We then show how this relates effective resistance to the spectral gap of the combinatorial Laplacian. We prove this relationship for unweighted simplicial complexes; however, the theorems also hold for weighted simplicial complexes.
**Lemma 4.4**.: _The spectral gaps of the up Laplacian \(L^{up}_{d-1}\) and down Laplacian \(L^{down}_{d}\) are_
\[\lambda_{\min}(L^{down}_{d})=\lambda_{\min}(L^{up}_{d-1})=\min\{\mathcal{R} _{\gamma}^{-1}(\mathcal{K}):\gamma\in\operatorname{im}\partial_{d},\,\|\gamma \|=1\}.\]
Proof.: We first prove the characterization for the spectral gap of the up Laplacian \(L^{up}_{d-1}\). We then show the equivalence of \(\lambda_{\min}(L^{down}_{d})\) and \(\lambda_{\min}(L^{up}_{d-1})\).
The lemma follows from some standard facts about symmetric matrices. First, because \(L_{d-1}^{up}\) is symmetric, a vector \(x\) is an eigenvector of \(L_{d-1}^{up}\) with non-zero eigenvalue \(\lambda\) if and only if \(x\) is an eigenvector of \((L_{d-1}^{up})^{+}\) with non-zero eigenvalue \(\lambda^{-1}\). This follows from the fact that the singular values and vectors of a symmetric matrix are also its eigenvalues and eigenvectors. Therefore, the smallest non-zero eigenvalue of \(L_{d-1}^{up}\) is the inverse of the largest non-zero eigenvalue of \((L_{d-1}^{up})^{+}\), or \(\lambda_{\min}(L_{d-1}^{up})=\lambda_{\max}^{-1}((L_{d-1}^{up})^{+})\) for short.
Next, we can characterize the eigenvalues of the symmetric matrix \((L_{d-1}^{up})^{+}\) with the _Courant-Fischer Theorem_. We use a special case of the theorem, which says that \(\lambda_{\max}((L_{d-1}^{up})^{+})=\max\{x^{T}(L_{d-1}^{up})^{+}x:\|x\|=1\}\) and \(x_{\max}=\arg\max\{x^{T}(L_{d-1}^{up})^{+}x:\|x\|=1\}\), where \(x_{\max}\) is an eigenvector corresponding to \(\lambda_{\max}((L_{d-1}^{up})^{+})\). The lemma follows from the fact that \(x_{\max}\in\operatorname{im}L_{d-1}^{up}=\operatorname{im}\partial_{d}\), which is the case because \(x_{\max}\) is the eigenvector of a non-zero eigenvalue of \(L_{d-1}^{up}\).
Finally, \(\lambda_{\min}(L_{d-1}^{up})=\lambda_{\min}(L_{d}^{down})\) by Lemma 2.2 Part 1.
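As a numerical illustration of Lemma 4.4 (a sketch of our own, assuming numpy), consider the triangle graph with \(d=1\): every unit-norm boundary attains the spectral gap here, so the minimum is easy to see:

```python
import numpy as np

d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]], dtype=float)
L0_up = d1 @ d1.T

eigs = np.linalg.eigvalsh(L0_up)
lam_min = eigs[eigs > 1e-9].min()        # spectral gap: 3 for the triangle

gamma = np.array([-1.0, 1.0, 0.0])
gamma = gamma / np.linalg.norm(gamma)    # a unit-norm cycle in im(partial_1)
R = gamma @ np.linalg.pinv(L0_up) @ gamma
print(lam_min, 1 / R)                    # both 3, so R_gamma^{-1} attains it
```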
#### 4.3.3 Effective Capacitance and the Spectral Gap.
In the previous section, we saw that the effective resistance of a unit-length cycle is always bounded above by the inverse of the spectral gap of the combinatorial Laplacian. While we don't know such a bound for the effective capacitance of arbitrary cycles, we can prove such a bound for the effective capacitance of the boundaries of simplices. This is sufficient for our analysis of the incremental algorithm as the only cycles we consider are the boundaries of simplices.
Before proving our upper bound on the effective capacitance of a cycle, we need to prove an upper bound on the largest singular value of the coboundary matrix.
**Lemma 4.5**.: _Let \(\mathcal{K}\) be a simplicial complex with \(n_{0}\) vertices. For any \(d\geq 1\), the largest singular value of the coboundary matrix \(\delta_{d-1}[\mathcal{K}]\) is \(\sigma_{\text{max}}(\delta_{d-1})=O(\sqrt{n_{0}})\)._
Proof.: This follows as the squared singular values of \(\delta_{d-1}\) are the eigenvalues of the up Laplacian \(\delta_{d-1}^{T}\delta_{d-1}=L_{d-1}^{up}\). (This is true for any matrix of the form \(A^{T}A\).) The maximum eigenvalue of \(L_{d-1}^{up}\) is known to be at most \(n_{0}\) by Theorem 2.4.
**Theorem 4.6**.: _Let \(\mathcal{L}\subset\mathcal{K}\) be \(d\)-dimensional simplicial complexes. Let \(\gamma\in C_{d-1}(\mathcal{L})\) be a \((d-1)\)-cycle that is null-homologous in \(\mathcal{K}\) but not in \(\mathcal{L}\). Assume that \(\gamma=\partial\sigma\) for a \(d\)-simplex \(\sigma\notin\mathcal{L}\).6 The effective capacitance of \(\gamma\) in \(\mathcal{K}\) is bounded above by \(\mathcal{C}_{\gamma}(\mathcal{L},\mathcal{K})=O\left(n_{0}\cdot\lambda_{\min}^{-1}(L_{d-1}[\mathcal{L}\cup\{\sigma\}])\right)\)._
Footnote 6: The theorem holds whether or not \(\sigma\in\mathcal{K}\).
Proof.: We can express the constraints of a \(\gamma\)-potential \(p\) in the following set of linear equations:
\[\begin{bmatrix}\delta[\mathcal{L}]\\ \gamma^{T}\end{bmatrix}p=\begin{bmatrix}0\\ \vdots\\ 0\\ 1\end{bmatrix}\]
To simplify notation, let \(C=\begin{bmatrix}\delta[\mathcal{L}]^{T}&\gamma\end{bmatrix}^{T}\) and \(b=\begin{bmatrix}0&0&\cdots&1\end{bmatrix}^{T}\).
We consider the smallest vector \(p\) which satisfies these equations, which is \(p=C^{+}b\). Because \(\gamma=\partial\sigma\), we can see that \(C=\delta[\mathcal{L}\cup\{\sigma\}]\). Therefore, \(\|p\|=O(\|C^{+}b\|)=O(\sigma_{\min}^{-1}(\delta[\mathcal{L}\cup\{\sigma\}]))\), where \(\sigma_{\min}(\delta[\mathcal{L}\cup\{\sigma\}])\) is the smallest non-zero singular value of \(\delta[\mathcal{L}\cup\{\sigma\}]\). However, we know that \(\sigma_{\min}=\sqrt{\lambda_{\min}(L_{d-1}^{up}[\mathcal{L}\cup\{\sigma\}])}\in\Omega(\sqrt{\lambda_{\min}(L_{d-1}[\mathcal{L}\cup\{\sigma\}])})\). Therefore, \(\|p\|\in O\left(\sqrt{\lambda_{\min}^{-1}(L_{d-1}[\mathcal{L}\cup\{\sigma\}])}\right)\).
We now want to bound the potential energy of \(p\). Using Lemma 4.5, we can bound \(\|\delta[\mathcal{K}]p\|^{2}\in O(n_{0}\cdot\lambda_{\min}^{-1}(L_{d-1}[\mathcal{L }\cup\{\sigma\}]))\).
#### 4.3.4 Connecting Effective Resistance and Capacitance to Witness Sizes.
Given a string \(x\in\{0,1\}^{n_{d}}\), we show in the following two lemmas that \(w_{+}(x,\mathcal{P}_{\mathcal{K}})=\mathcal{R}_{\gamma}(\mathcal{K}(x))\) and \(w_{-}(x,\mathcal{P}_{\mathcal{K}})=\mathcal{C}_{\gamma}(\mathcal{K}(x), \mathcal{K})\). The proofs are simple calculations following from the definitions of effective resistance and capacitance.
**Lemma 4.7**.: _Let \(x\in\{0,1\}^{n_{d}}\) be a positive instance. There is a bijection between positive witnesses \(|w\rangle\) for \(x\) and unit \(\gamma\)-flows \(f\) in \(\mathcal{K}(x)\). Moreover, the positive witness size is equal to the effective resistance of \(\gamma\) in \(\mathcal{K}(x)\); that is, \(w_{+}(x,\mathcal{P}_{\mathcal{K}})=\mathcal{R}_{\gamma}(\mathcal{K}(x))\)._
Proof.: Let \(|w_{+}\rangle\in C_{d}(\mathcal{K})\) be a positive witness for \(x\), so \(\partial_{d}\sqrt{W}|w_{+}\rangle=\gamma\). We construct a unit \(\gamma\)-flow \(f\) in \(\mathcal{K}(x)\) by \(f=\sqrt{W}|w_{+}\rangle\); \(f\) is indeed a unit \(\gamma\)-flow as \(\partial_{d}f=\partial_{d}\sqrt{W}|w_{+}\rangle=\gamma\). Moreover, \(|w_{+}\rangle=W^{-1/2}|f\rangle\). The flow energy of \(f\) is
\[\mathsf{J}(f) =\langle f|W^{-1}|f\rangle\] \[=\langle W^{-1/2}f|W^{-1/2}f\rangle\] \[=\langle w_{+}|w_{+}\rangle\] \[=\||w_{+}\rangle\|^{2}.\]
Hence, the flow energy of \(f\) equals the witness size of \(x\).
Conversely, let \(f\) be a unit \(\gamma\)-flow in \(\mathcal{K}(x)\) and define the positive witness for \(x\) as \(|w_{+}\rangle=W^{-1/2}|f\rangle\). The same computation in the above paragraph shows that the flow energy of \(f\) equals the positive witness size of \(x\).
**Lemma 4.8**.: _Let \(x\in\{0,1\}^{n_{d}}\) be a negative instance. There is a bijection between negative witnesses \(\langle w_{-}|\) for \(x\) and unit \(\gamma\)-potentials \(p\) in \(\mathcal{K}(x)\). Moreover, the negative witness size is equal to the effective capacitance of \(\gamma\) in \(\mathcal{K}(x)\); that is, \(w_{-}(x,\mathcal{P}_{\mathcal{K}})=\mathcal{C}_{\gamma}(\mathcal{K}(x),\mathcal{K})\)._
Proof.: Let \(\langle w_{-}|\) be a negative witness for \(x\). As \(\langle w_{-}|\) is a linear function from \(C_{d-1}(\mathcal{K})\) to \(\mathbb{R}\), we may view it as a \((d-1)\)-chain \(p^{T}=\langle w_{-}|\). Since \(\langle w_{-}|\gamma\rangle=1\), we have \(p^{T}\gamma=1\). To show that \(p\) is a unit \(\gamma\)-potential, we must show that the coboundary of \(p\) is zero in \(\mathcal{K}(x)\). By the definition of a negative witness, we have
\[0 =\langle w_{-}|\partial_{d}\sqrt{W}\Pi_{\mathcal{K}(x)}\] \[=\langle p|\partial_{d}\sqrt{W}\Pi_{\mathcal{K}(x)}\] \[=\langle\delta_{d-1}(p)|\sqrt{W}\Pi_{\mathcal{K}(x)}.\]
Since \(\sqrt{W}\) is a diagonal matrix and \(\Pi_{\mathcal{K}(x)}\) restricts the coboundary to the subcomplex \(\mathcal{K}(x)\), we see that \(\langle\delta_{d-1}(p)|\sigma\rangle=0\) for any \(\sigma\in\mathcal{K}(x)_{d}\). To show that the witness size of \(\langle w_{-}|\) is equal to the potential energy of \(p\), we have
\[\|\langle w_{-}|\partial_{d}\sqrt{W}\|^{2} =\langle p\partial_{d}\sqrt{W}|p\partial_{d}\sqrt{W}\rangle\] \[=\langle\sqrt{W}\delta_{d-1}(p)|\sqrt{W}\delta_{d-1}(p)\rangle\] \[=\sum_{\sigma\in\mathcal{K}_{d}}\langle\delta_{d-1}(p)|\sigma\rangle^{2}w(\sigma)\] \[=\mathcal{J}(p).\]
Conversely, let \(p\) be a unit \(\gamma\)-potential for \(\mathcal{K}(x)\); we construct a negative witness for \(x\) by setting \(\langle w_{-}|\coloneqq p^{T}\). Since the coboundary of \(p\) is zero in \(\mathcal{K}(x)\), we have \(\langle\delta_{d-1}(p)|\sigma\rangle=0\) for each \(\sigma\in\mathcal{K}(x)_{d}\), which implies \(\langle w_{-}|\partial_{d}\sqrt{W}\Pi_{\mathcal{K}(x)}=0\) by the reasoning in the previous paragraph. Also by the previous paragraph, the potential energy of \(p\) is equal to the negative witness size of \(\langle w_{-}|\), which concludes the proof.
From these two lemmas we obtain the main theorem of the section, which bounds the quantum query complexity of testing whether \(\gamma\) is null-homologous.
**Theorem 4.9**.: _Given a \(d\)-dimensional simplicial complex \(\mathcal{K}\), a \((d-1)\)-dimensional cycle \(\gamma\) that is null-homologous in \(\mathcal{K}\), and a \(d\)-dimensional subcomplex \(\mathcal{K}(x)\subseteq\mathcal{K}\), there exists a quantum algorithm deciding whether or not \(\gamma\) is null-homologous in \(\mathcal{K}(x)\) with quantum query complexity \(O\left(\sqrt{\mathcal{R}_{\max}(\gamma)\mathcal{C}_{\max}(\gamma)}\right)\), where \(\mathcal{R}_{\max}(\gamma)\) is the maximum finite effective resistance \(\mathcal{R}_{\gamma}(\mathcal{L})\) in any subcomplex \(\mathcal{L}\subset\mathcal{K}\), and \(\mathcal{C}_{\max}(\gamma)\) is the maximum finite effective capacitance \(\mathcal{C}_{\gamma}(\mathcal{L},\mathcal{K})\) in any subcomplex \(\mathcal{L}\subset\mathcal{K}\)._
Proof.: By Theorem 4.1, the span program \(\mathcal{P}_{\mathcal{K}}\) can be converted into a quantum algorithm whose query complexity is \(O\left(\sqrt{W_{+}(f,\mathcal{P}_{\mathcal{K}})W_{-}(f,\mathcal{P}_{\mathcal{ K}})}\right)\) where \(W_{+}(f,\mathcal{P}_{\mathcal{K}})=\max_{x\in f^{-1}(1)}\mathcal{R}_{\gamma}( \mathcal{K}(x))=\mathcal{R}_{\max}(\gamma)\) and \(W_{-}(f,\mathcal{P}_{\mathcal{K}})=\max_{x\in f^{-1}(0)}\mathcal{C}_{\gamma}( \mathcal{K}(x),\mathcal{K})=\mathcal{C}_{\max}(\gamma)\).
### Time Efficient Implementations of the Span Program.
We have given bounds on the query complexity of null-homology testing; however, this does not imply a bound on the time complexity of evaluating this span program, as the query complexity does not account for the work outside of the oracle calls. In Appendix D, we describe the details of an implementation of this algorithm. In one special case, we are able to analyze the time complexity of the algorithm; we describe this special case below.
There are two obstacles to a time-efficient implementation of the span program: the weights and the input cycle \(\gamma\). The weights on the \(d\)-simplices make it difficult to implement the matrix \(\partial\sqrt{W}\), as the weights on the simplices can be arbitrary real numbers. The input cycle \(\gamma\) is difficult to create on a quantum computer as the entries of \(\gamma\) can also be arbitrary real numbers.
Accordingly, we can give a quantum algorithm of bounded time complexity in one particular instance: when \(\mathcal{K}\) is unweighted and \(\gamma\) is the boundary of a \(d\)-simplex. (We do not require the \(d\)-simplex to actually appear in the complex.) While this is only a special case of the generic null-homology testing algorithm, this is the only case we need for the incremental algorithm for computing Betti numbers (Algorithm 1). The time complexity of this case is given in the following theorem.
**Theorem 4.10**.: _Let \(\mathcal{K}\) be an unweighted simplicial complex with \(n_{0}\) vertices, let \(\gamma\in C_{d-1}(\mathcal{K})\) be a null-homologous cycle in \(\mathcal{K}\), and let \(\mathcal{K}(x)\subset\mathcal{K}\) be a subcomplex. Furthermore, assume that \(\gamma\) is the boundary of a \(d\)-simplex. There is a quantum algorithm for deciding if \(\gamma\) is null-homologous in \(\mathcal{K}(x)\) that runs in time_
\[\tilde{O}\left(\sqrt{\frac{\mathcal{R}_{\max}(\gamma)\mathcal{C}_{\max}( \gamma)}{\tilde{\lambda}_{\min}}}n_{0}+\sqrt{\mathcal{R}_{\gamma}(\mathcal{K} )n_{0}}\right),\]
_where \(\mathcal{R}_{\max}(\gamma)\) is the maximum finite effective resistance \(\mathcal{R}_{\gamma}(\mathcal{L})\) of \(\gamma\) in any subcomplex \(\mathcal{L}\subset\mathcal{K}\), \(\mathcal{C}_{\max}(\gamma)\) is the maximum finite effective capacitance \(\mathcal{C}_{\gamma}(\mathcal{L},\mathcal{K})\) of \(\gamma\) in any subcomplex \(\mathcal{L}\subset\mathcal{K}\), and \(\tilde{\lambda}_{\min}\) is the spectral gap of the normalized up-Laplacian \(\tilde{L}_{d-1}^{up}[\mathcal{K}]\)._
### Runtime of the Quantum Incremental Algorithm
In the previous section, we saw an implementation of an algorithm for testing whether the boundary of a \(d\)-simplex is null-homologous. Combined with the framework of the Incremental Algorithm (Algorithm 1), this allows us to compute the \(d\)th Betti number.
**Theorem 1.1**.: _Let \(\mathcal{K}\) be a simplicial complex. There is a quantum algorithm for computing the \(d\)th Betti number \(\beta_{d}\) of \(\mathcal{K}\) in time_
\[\tilde{O}\left(\left(\sqrt{\frac{\mathcal{R}_{\max}\mathcal{C}_{\max}}{\tilde {\lambda}_{\min}}}n_{0}+\sqrt{\mathcal{R}_{\max}n_{0}}\right)(n_{d}+n_{d+1}) \right),\]
_where_
* \(n_{i}\) _is the number of_ \(i\)_-simplices of_ \(\mathcal{K}\)_._
* \(\mathcal{R}_{\max}\) _is the maximum finite effective resistance_ \(\mathcal{R}_{\partial\sigma}(\mathcal{L})\) _of the boundary of any_ \(d\)_- or_ \((d+1)\)_-simplex_ \(\sigma\in\mathcal{K}\) _in any subcomplex_ \(\mathcal{L}\subset\mathcal{K}\)_._
* \(\mathcal{C}_{\max}\) _is the maximum finite effective capacitance_ \(\mathcal{C}_{\partial\sigma}(\mathcal{L},\mathcal{K})\) _of the boundary of any_ \(d\)_- or_ \((d+1)\)_-simplex_ \(\sigma\in\mathcal{K}\) _in any subcomplex_ \(\mathcal{L}\subset\mathcal{K}\)_._
* \(\tilde{\lambda}_{\min}\) _is the minimum spectral gap of the normalized up Laplacians_ \(\tilde{L}_{d-1}^{\,up}[\mathcal{K}]\) _and_ \(\tilde{L}_{d}^{\,up}[\mathcal{K}]\)_._
Proof.: The Incremental Algorithm (Algorithm 1) incrementally adds each \(d\) and \((d+1)\)-simplex \(\sigma\) to the simplicial complex and checks if the cycle \(\partial\sigma\) is null-homologous. We can use the span-program algorithm of Theorem 4.10 to check if \(\partial\sigma\) is null-homologous. The theorem follows by using this algorithm for each of the \((n_{d}+n_{d+1})\)\(d\)- and \((d+1)\)-simplices.
### Comparison with Existing Algorithms.
In this section, we compare our algorithm to existing algorithms for QTDA. This presentation specifically compares our algorithm to the LGZ algorithm [48], but most of these ideas also hold for other existing QTDA algorithms.
Input. Our algorithm makes different assumptions about how the simplicial complex is stored compared to previous algorithms. We assume we have a list of simplices in the simplicial complex; this is required for the incremental algorithm as we must iteratively add the simplices and test if their boundaries are null-homologous. Compare this to existing quantum algorithms, which assume we have a way of checking if a simplex is included in the simplicial complex.
Our algorithm assumes we have a _list oracle_ that can return the simplices in the simplicial complex:
\[\mathcal{O}_{list}:|i\rangle|0\rangle\rightarrow|i\rangle|\sigma_{i}\rangle,\]
where \(\sigma_{i}\) is the \(i\)th \(d\)-simplex of our simplicial complex.
Compare this to the _membership oracle_ used in other QTDA algorithms that can check whether a simplex is in the simplicial complex:
\[\mathcal{O}_{memb}:|\sigma_{i}\rangle|j\rangle\rightarrow|\sigma_{i}\rangle|j \oplus b_{i}\rangle,\]
where \(b_{i}\) is a bit indicating if \(\sigma_{i}\in\mathcal{K}_{d}\).
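To make the two access models concrete, consider a toy \(1\)-dimensional complex (our own illustrative example, not from the constructions above) with \(1\)-simplices \(\sigma_{0}=\{a,b\}\) and \(\sigma_{1}=\{b,c\}\). The two oracles then act as

\[\mathcal{O}_{list}:|1\rangle|0\rangle\rightarrow|1\rangle|\{b,c\}\rangle,\qquad\mathcal{O}_{memb}:|\{a,c\}\rangle|0\rangle\rightarrow|\{a,c\}\rangle|0\rangle,\]

where the second map leaves the flag register unchanged because \(\{a,c\}\notin\mathcal{K}_{1}\), i.e. \(b_{i}=0\) for that simplex.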
These oracles come with different trade-offs. The oracle \(\mathcal{O}_{memb}\) does not require computing the set of simplices in advance, while \(\mathcal{O}_{list}\) does. However, algorithms that use the membership oracle \(\mathcal{O}_{memb}\) pay for this in the time it takes to compute a uniform superposition of the \(d\)-simplices, a costly operation contributing a factor of \(\sqrt{\zeta_{d}^{-1}}\) to the runtime, where \(\zeta_{d}=n_{d}/\binom{n_{0}}{d+1}\). Thus, our algorithm is better suited for _sparse_ simplicial complexes--complexes where \(n_{d}\ll\binom{n_{0}}{d+1}\) and where the list of simplices can be computed efficiently--a family of complexes where existing QTDA algorithms perform poorly; see the section "Runtime" below for more discussion.
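As a rough illustration of this density penalty (with numbers chosen purely for concreteness, not taken from the paper), take \(d=2\), \(n_{0}=100\), and \(n_{2}=10^{3}\):

\[\zeta_{2}=\frac{n_{2}}{\binom{n_{0}}{3}}=\frac{10^{3}}{161700}\approx 6.2\times 10^{-3},\qquad\sqrt{\zeta_{2}^{-1}}\approx 12.7,\]

so even this mildly sparse complex already costs a nontrivial multiplicative factor, and for \(n_{d}\) polynomial in \(n_{0}\) the factor \(\sqrt{\zeta_{d}^{-1}}\) grows like \(n_{0}^{\Theta(d)}\).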
Output. The LGZ algorithm estimates the \(d\)th Betti number up to an additive factor by returning a value \(\chi_{d}\) such that \(\left|\chi_{d}-\frac{\beta_{d}}{\dim\mathcal{C}_{d}(\mathcal{K})}\right|\leq\epsilon\); the problem of computing \(\chi_{d}\) has been termed _Betti number estimation_. Our algorithm instead returns the Betti number \(\beta_{d}\) exactly.
Runtime. To compare our algorithm to existing quantum algorithms, we bound the runtime of our algorithm with respect to the spectral gap of the combinatorial Laplacian. Note that while we can bound the runtime of our algorithm by the inverse of the spectral gap, this bound is not necessarily tight.
**Corollary 4.11**.: _Let \(\mathcal{K}\) be a simplicial complex with \(n_{i}\)\(i\)-simplices. There is a quantum algorithm for computing the \(d\)th Betti number \(\beta_{d}\) in time_
\[\tilde{O}\left(\Lambda_{\min}^{-3/2}n_{0}^{5/2}\cdot(n_{d}+n_{d+1})\right)\]
_where \(\Lambda_{\min}\) is the minimum spectral gap of \(L_{d}[\mathcal{L}]\) over all subcomplexes \(\mathcal{L}\subset\mathcal{K}\)._
Proof.: This follows from Theorem 1.1 by applying the bounds of Lemma 4.4, Theorem 4.6, and Lemma 2.5 to bound \(\mathcal{R}_{\max}\), \(\mathcal{C}_{\max}\), and \(\frac{1}{\lambda_{\min}}\) respectively. The bounds on the effective resistance of Lemma 4.4 only apply to unit vectors; since \(\|\partial\sigma\|=\sqrt{d+1}\), we get \(\mathcal{R}_{\max}(\partial\sigma)\leq(d+1)\Lambda_{\min}^{-1}\leq n_{0}\Lambda_{\min}^{-1}\), which accounts for one factor of \(n_{0}\).
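To see how the exponents assemble, here is a sketch of the arithmetic, assuming (as the cited lemmas are used here) that \(\mathcal{R}_{\max},\mathcal{C}_{\max}\in O(n_{0}\Lambda_{\min}^{-1})\) and \(\tilde{\lambda}_{\min}\in\Omega(\Lambda_{\min}/n_{0})\):

\[\sqrt{\frac{\mathcal{R}_{\max}\mathcal{C}_{\max}}{\tilde{\lambda}_{\min}}}\,n_{0}\in O\left(\sqrt{\frac{n_{0}\Lambda_{\min}^{-1}\cdot n_{0}\Lambda_{\min}^{-1}}{\Lambda_{\min}/n_{0}}}\,n_{0}\right)=O\left(\Lambda_{\min}^{-3/2}\,n_{0}^{5/2}\right),\]

which dominates the \(\sqrt{\mathcal{R}_{\max}n_{0}}\) term of Theorem 1.1 since \(\Lambda_{\min}\leq n_{0}\).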
Compare this to the LGZ algorithm, which runs in time
\[O\left(\epsilon^{-2}\lambda_{\min}^{-1}n_{0}^{4}\sqrt{\zeta_{d}^{-1}}\right)\]
where \(n_{0}\) is the number of vertices, \(\epsilon\) is the error term, \(\lambda_{\min}\) is the spectral gap of the combinatorial Laplacian, and \(\zeta_{d}\) is a density term given by
\[\zeta_{d}=\frac{n_{d}}{\binom{n_{0}}{d+1}}.\]
The density term is the ratio of the number of \(d\)-simplices in \(\mathcal{K}\) to the number of \(d\)-simplices in the complete complex on \(n_{0}\) vertices, which may be exponentially small. For example, when \(\mathcal{K}\) is sparse, meaning that the number of \(d\)-simplices is polynomial in the number of vertices, the density may be exponentially small with respect to \(d\). Specifically, if \(\zeta_{d}=\Omega\left(n_{0}^{O(1)}/n_{0}^{d+1}\right)=\Omega\left(1/n_{0}^{O(d+1)}\right)\), then the runtime of LGZ is
\[O\left(\epsilon^{-2}\lambda_{\min}^{-1}n_{0}^{O(d+1)}\right).\]
Compare this to our algorithm, which in this case has runtime
\[\tilde{O}\left(\Lambda_{\min}^{-3/2}n_{0}^{O(1)}\right).\]
In this case, our algorithm has a better asymptotic dependence on the size of the complex as it avoids the factor of \(\binom{n_{0}}{d+1}\). This factor of \(\binom{n_{0}}{d+1}\) shows up in many of the alternatives to the LGZ algorithm, so our algorithm has a more favorable dependence on \(n_{0}\) compared to these algorithms as well. Additionally, we note that the term \(\Lambda_{\min}\) in our algorithm and \(\lambda_{\min}\) in the LGZ algorithm are similar but not directly comparable. See the following section.
Effective Resistance and Capacitance vs. Spectral Gap. Our algorithm is parameterized by the maximum effective resistance and capacitance over all subcomplexes of a simplicial complex and the square root of the inverse of the spectral gap of the simplicial complex, whereas previous QTDA algorithms are parameterized only by the inverse of the spectral gap of the simplicial complex. Although for a _fixed_ complex, effective resistance and capacitance are upper bounded by the inverse of the spectral gap, the fact that our algorithm is parameterized by the maximum effective resistance and capacitance over _all_ subcomplexes means that the runtime of our algorithm is not entirely comparable to the runtime of existing QTDA algorithms. It is possible that there are complexes where the effective resistance and capacitance in subcomplexes are significantly lower than the inverse of the spectral gap of the entire complex, and complexes where the effective resistance or capacitance of a cycle in a subcomplex is larger than the inverse of the spectral gap of the entire complex. The complete complex is an example of the second case, as it has the maximal possible spectral gap of \(n_{0}\).
Randomized Order for the Incremental Algorithm. Building on the previous point, while our algorithm is parameterized by the maximum effective resistance and capacitance in various subcomplexes, our algorithm can also incrementally add the simplices _in any order_. This is potentially beneficial as a simplex may have smaller effective resistance or capacitance in one order than another. Of course, we likely will not know in advance whether or not a particular order of the simplices results in lesser or greater resistance and capacitance. However, we may still be able to use this fact to our advantage, as we could run our algorithm multiple times with different orders to gain confidence that our Betti number computations are accurate, which is not the case with previous QTDA algorithms.
## 5 Bounds on Effective Resistance and Capacitance.
In this section, we provide upper bounds on the resistance and capacitance of a cycle \(\gamma\) in a simplicial complex \(\mathcal{K}\). Throughout this section, all simplicial complexes are **unweighted**.
Our upper bounds are polynomial in the number of \(d\)-simplices and the cardinality of the torsion subgroup of the relative homology groups. In particular, our bounds on resistance and capacitance are dependent on the maximum cardinality of the torsion subgroup of the relative homology group \(H_{d-1}(\mathcal{L},\mathcal{L}_{0},\mathbb{Z})\), where \(\mathcal{L}\subset\mathcal{K}\) is a \(d\)-dimensional subcomplex and \(\mathcal{L}_{0}\subset\mathcal{L}\) is a \((d\!-\!1)\)-dimensional subcomplex. In the worst case, our upper bounds are exponential.
In Theorem 5.9 we provide an example of a simplicial complex containing a cycle \(\gamma\) whose effective resistance is exponential in the number of simplices in the complex. It is important to reiterate that our bounds are in terms of the torsion of the relative homology groups, not the torsion of the (non-relative) homology groups. The simplicial complex we provide has no torsion in its homology groups, only torsion in its relative homology groups.
### Upper Bounds
Our upper bounds rely on a change of basis on the boundary matrix called the Smith normal form, which reveals information about the torsion subgroup of \(H_{d-1}(\mathcal{K},\mathbb{Z})\). We state the normal form theorem below.
**Theorem 5.1** (Munkres, Chapter 1 Section 11 [54]).: _There are bases for \(C_{d}(\mathcal{K})\) and \(C_{d-1}(\mathcal{K})\) such that the matrix for the boundary operator \(\partial_{d}\colon C_{d}(\mathcal{K},\mathbb{Z})\to C_{d-1}(\mathcal{K}, \mathbb{Z})\) is in **Smith normal form**, i.e._
\[\tilde{\partial}_{d}=\begin{bmatrix}D&0\\ 0&0\end{bmatrix}\]
_where \(D\) is a diagonal matrix with positive integer entries \(d_{1},\ldots,d_{m}\) such that each \(d_{i}\) divides \(d_{i+1}\) and each \(0\) is a zero matrix of appropriate dimensionality. The normal form of \(\partial_{d}\) satisfies the following properties:_
1. _The entries_ \(d_{1},\ldots,d_{m}\) _correspond to the torsion coefficients of_ \(H_{d-1}(\mathcal{K},\mathbb{Z})\cong\mathbb{Z}^{\beta_{d-1}}\oplus\mathbb{Z}_{d_{1}}\oplus\cdots\oplus\mathbb{Z}_{d_{m}}\) \((\)_where_ \(\mathbb{Z}_{1}=0\)\()\)_,_
2. _The number of zero columns is equal to the dimension of_ \(\ker(\partial_{d})\)_._
_Moreover, the boundary matrix \(\partial\) in the standard basis can be transformed to \(\tilde{\partial}\) by elementary row and column operations. If \(\partial\) is square, these operations multiply \(\det\partial\) by \(\pm 1\)._
Using Theorem 5.1, we obtain an upper bound on the determinants of the square submatrices of the boundary matrix \(\partial_{d}[\mathcal{K}]\) in terms of the _relative homology groups_ of \(\mathcal{K}\). Let \(\mathcal{L}\) be a \(d\)-dimensional subcomplex of \(\mathcal{K}\), and let \(\mathcal{L}_{0}\) be a \((d-1)\)-dimensional subcomplex of \(\mathcal{L}\). The _relative boundary matrix_ \(\partial_{d}[\mathcal{L},\mathcal{L}_{0}]\) is the submatrix of \(\partial_{d}\) obtained by including the columns of the \(d\)-simplices in \(\mathcal{L}\) and excluding the rows of the \((d-1)\)-simplices in \(\mathcal{L}_{0}\). With the relative boundary matrices, one can define the _relative homology groups_ as \(H_{d}(\mathcal{L},\mathcal{L}_{0},\mathbb{Z})=\ker\partial_{d}[\mathcal{L},\mathcal{L}_{0}]/\operatorname{im}\partial_{d+1}[\mathcal{L},\mathcal{L}_{0}]\). More information on the relative boundary matrix can be found in [14]. We denote the cardinality of the torsion subgroup of the relative homology group \(H_{d-1}(\mathcal{L},\mathcal{L}_{0},\mathbb{Z})\) by \(\mathcal{T}(\mathcal{L},\mathcal{L}_{0})\). Similarly, we denote the maximum \(\mathcal{T}(\mathcal{L},\mathcal{L}_{0})\) over all relative homology groups as \(\mathcal{T}_{\max}(\mathcal{K})\).
**Lemma 5.2**.: _Let \(\partial_{d}[\mathcal{L},\mathcal{L}_{0}]\) be a \(k\times k\) square submatrix of \(\partial_{d}\) constructed by including columns for the \(d\)-simplices in \(\mathcal{L}\) and excluding rows for the \((d-1)\)-simplices in \(\mathcal{L}_{0}\). The magnitude of the determinant of \(\partial_{d}[\mathcal{L},\mathcal{L}_{0}]\) is bounded above by the cardinality of the torsion subgroup of \(H_{d-1}(\mathcal{L},\mathcal{L}_{0},\mathbb{Z})\), i.e_
\[|\det\left(\partial_{d}[\mathcal{L},\mathcal{L}_{0}]\right)|\leq\mathcal{T}( \mathcal{L},\mathcal{L}_{0}).\]
Proof.: Without loss of generality, we assume that \(\det(\partial_{d}[\mathcal{L},\mathcal{L}_{0}])\neq 0\); if \(\det(\partial_{d}[\mathcal{L},\mathcal{L}_{0}])=0\), the bound is trivial. Since \(\partial_{d}[\mathcal{L},\mathcal{L}_{0}]\) is a non-singular square matrix, its normal form \(\tilde{\partial}_{d}[\mathcal{L},\mathcal{L}_{0}]\) is a diagonal matrix \(D=\operatorname{diag}(d_{1},\ldots,d_{k})\). The determinant is equal to \(\pm\prod_{i=1}^{k}d_{i}\) and by Theorem 5.1 the torsion subgroup of \(H_{d-1}(\mathcal{L},\mathcal{L}_{0})\) is \(\mathbb{Z}_{d_{1}}\oplus\cdots\oplus\mathbb{Z}_{d_{k}}\) which has cardinality \(\mathcal{T}(\mathcal{L},\mathcal{L}_{0})=\prod_{i=1}^{k}d_{i}\).
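As a toy numerical illustration of Lemma 5.2 (a small integer matrix of our own choosing, not a boundary matrix appearing in this paper), consider

\[B=\begin{bmatrix}1&1&0\\ 0&1&1\\ 1&0&1\end{bmatrix}\rightsquigarrow\tilde{B}=\begin{bmatrix}1&0&0\\ 0&1&0\\ 0&0&2\end{bmatrix},\qquad|\det B|=1\cdot 1\cdot 2=2,\]

so if \(B\) were a relative boundary matrix, the corresponding relative homology group would have torsion subgroup \(\mathbb{Z}_{2}\), whose cardinality is exactly \(|\det B|\).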
#### 5.1.1 Upper Bounds on Effective Resistance
We are now ready to upper bound the effective resistance of a cycle in a simplicial complex.
**Theorem 5.3**.: _Let \(\mathcal{K}\) be a \(d\)-dimensional simplicial complex and \(\gamma\) a unit-length null-homologous \((d-1)\)-cycle in \(\mathcal{K}\). Let \(n=\min\{n_{d-1},n_{d}\}\). The effective resistance of \(\gamma\) is bounded above by \(\mathcal{R}_{\gamma}(\mathcal{K})\in O\left(n^{2}\cdot\mathcal{T}_{\max}( \mathcal{K})^{2}\right)\)._
Proof.: First, we remove \(d\)-simplices from \(\mathcal{K}\) to create a new complex \(\mathcal{L}\) such that \(\ker(\partial_{d}[\mathcal{L}])=0\) and \(\operatorname{im}\partial_{d}[\mathcal{K}]=\operatorname{im}\partial_{d}[ \mathcal{L}]\). Theorem B.3 proves that removing \(d\)-simplices only increases the effective resistance, so \(\mathcal{R}_{\gamma}(\mathcal{K})\leq\mathcal{R}_{\gamma}(\mathcal{L})\). As \(\ker(\partial_{d}[\mathcal{L}])=0\), there is a unique unit \(\gamma\)-flow \(f\in C_{d-1}(\mathcal{L})\) which implies \(\mathcal{R}_{\gamma}(\mathcal{L})=\|f\|^{2}\). Let \(n\leq n_{d}\) denote the number of \(d\)-simplices in \(\mathcal{L}\).
The matrix \(\partial_{d}[\mathcal{L}]\) has full column rank, so we can find a non-singular \(n\times n\) square submatrix of \(\partial_{d}[\mathcal{L}]\); call this submatrix \(B\). Let \(\mathcal{L}_{0}\) be the \((d-1)\)-dimensional subcomplex that contains the \((d-1)\)-simplices corresponding to rows excluded from \(B\); \(B\) is the relative boundary matrix \(\partial_{d}[\mathcal{L},\mathcal{L}_{0}]\). We have that \(Bf=c\), where \(c\) is the restriction of \(\gamma\) to the rows of \(B\). Observe that \(\|c\|\leq\|\gamma\|=1\).
We will apply Cramer's rule to upper bound the size of \(f\). By Cramer's rule we have the equality
\[f(\sigma)=\frac{\det(B_{\sigma,c})}{\det(B)}\]
where \(B_{\sigma,c}\) is the matrix obtained by replacing the column of \(B\) indexed by \(\sigma\) with the vector \(c\). Since \(\det(B)\) is integral, \(|\det(B)|\geq 1\), so we drop the denominator and focus on the inequality \(|f(\sigma)|\leq|\det(B_{\sigma,c})|\). We bound \(|\det(B_{\sigma,c})|\) by its cofactor expansion,
\[|\det(B_{\sigma,c})| =\left|\sum_{i=1}^{n}(-1)^{i}\cdot c_{i}\cdot\det(B_{\sigma,c}^{c,i})\right|\] \[\leq\sum_{i=1}^{n}|c_{i}|\cdot\mathcal{T}_{\max}(\mathcal{K})\] \[=\|c\|_{1}\cdot\mathcal{T}_{\max}(\mathcal{K})\] \[=O\left(\sqrt{n}\cdot\mathcal{T}_{\max}(\mathcal{K})\right)\]
where \(B_{\sigma,c}^{c,i}\) denotes the submatrix obtained by removing the column \(c\) and removing the \(i\)th row and \(c_{i}\) denotes the \(i\)th component of \(c\). The first inequality comes from Lemma 5.2, as \(B_{\sigma,c}^{c,i}\) is the relative boundary matrix \(\partial_{d}[\mathcal{L}\setminus\{\sigma\},\mathcal{L}_{0}\cup\sigma_{i}]\), where \(\sigma_{i}\) is the \((d-1)\)-simplex corresponding to the \(i\)th row of \(B\). The factor of \(\sqrt{n}\) comes from the fact that \(\|c\|_{1}\leq\sqrt{n}\|c\|_{2}\) and \(\|c\|_{2}\leq 1\). Finally, we compute the flow energy of \(f\) as
\[\mathsf{J}(f) =\sum_{\sigma\in\mathcal{L}_{d}}f(\sigma)^{2}\] \[\leq\sum_{i=1}^{n}n\cdot\mathcal{T}_{\max}(\mathcal{K})^{2}\] \[=O\left(n^{2}\cdot\mathcal{T}_{\max}(\mathcal{K})^{2}\right).\]
The effective resistance of \(\gamma\) is the flow energy of \(f\), so the result follows.
If \(\mathcal{L}\subset\mathcal{K}\), then the boundary matrix \(\partial_{d}[\mathcal{L}]\) is a submatrix of \(\partial_{d}[\mathcal{K}]\). In particular, \(\mathcal{T}_{\max}(\mathcal{L})\leq\mathcal{T}_{\max}(\mathcal{K})\). Therefore, the proof of Theorem 5.3 gives an upper bound on the effective resistance for any subcomplex \(\mathcal{L}\subset\mathcal{K}\).
**Corollary 5.4**.: _Let \(\mathcal{L}\subset\mathcal{K}\) be a \(d\)-dimensional simplicial complex and \(\gamma\) a null-homologous \((d-1)\)-cycle in \(\mathcal{L}\). Let \(n=\min\{n_{d-1}[\mathcal{L}],n_{d}[\mathcal{L}]\}\). The effective resistance of \(\gamma\) in \(\mathcal{L}\) is bounded above by \(\mathcal{R}_{\gamma}(\mathcal{L})=O\left(n^{2}\cdot\mathcal{T}_{\max}( \mathcal{K})^{2}\right)\)._
In Section 5.1.3, we give an upper bound on relative torsion, which implies an upper bound on the effective resistance purely in terms of the size of the complex.
#### 5.1.2 Upper Bounds on Capacitance.
We now provide an upper bound for the effective capacitance of a cycle. While our upper bound on the effective resistance depends only on the norm of the cycle, our upper bound on the capacitance does not. Therefore, we consider the special case where \(\gamma\) is the boundary of a \(d\)-simplex. This is a natural assumption, as these are exactly the type of cycles considered in the incremental algorithm for computing Betti numbers (Algorithm 1). While we only prove this special case, we note that our proof could be adapted to bound the effective capacitance of a cycle whose entries have constant upper and lower bounds.
**Theorem 5.5**.: _Let \(\mathcal{L}\subset\mathcal{K}\) be \(d\)-dimensional simplicial complexes. Let \(\gamma\in C_{d-1}(\mathcal{L})\) be a \((d-1)\)-cycle that is null-homologous in \(\mathcal{K}\) but not in \(\mathcal{L}\). Let \(n=\min\{n_{d-1},n_{d}\}\). Assume that \(\gamma=\partial\sigma\) for a \(d\)-simplex \(\sigma\notin\mathcal{L}\). The effective capacitance of \(\gamma\) in \(\mathcal{K}\) is bounded above by \(\mathcal{C}_{\gamma}(\mathcal{L},\mathcal{K})\in O\left(n\cdot n_{0}\cdot \mathcal{T}_{\max}(\mathcal{K})^{2}\right)\)._
Proof.: Let \(p\) be a \(\gamma\)-potential. We upper bound the potential energy of \(p\). By definition, \(\delta[\mathcal{L}]p=0\) and \(\gamma^{T}p=1\). We can express these constraints as the linear system
\[\begin{bmatrix}\delta[\mathcal{L}]\\ \gamma^{T}\end{bmatrix}p=\begin{bmatrix}0\\ 0\\ \vdots\\ 1\end{bmatrix}\]
We first remove linearly-dependent columns from this linear system until the system has full column rank. Columns of the matrix are indexed by \((d-1)\)-simplices of \(\mathcal{L}\), and rows are indexed by \(d\)-simplices of \(\mathcal{L}\). Removing columns from \(\delta[\mathcal{L}]\) changes it to the relative coboundary matrix \(\delta[\mathcal{L},\mathcal{L}_{0}]\), where \(\mathcal{L}_{0}\) is the \((d-1)\)-subcomplex corresponding to the columns that were removed. Removing linearly-dependent columns does not change the image of the system of equations, so there is still a solution \(r\), i.e.
\[\begin{bmatrix}\delta[\mathcal{L},\mathcal{L}_{0}]\\ c^{T}\end{bmatrix}r=\begin{bmatrix}0\\ 0\\ \vdots\\ 1\end{bmatrix}\]
where \(c\) is the subvector of \(\gamma\) after removing the columns. The vector \(r\) is not a \(\gamma\)-potential as it is a vector in \(C_{d-1}(\mathcal{L},\mathcal{L}_{0})\), not \(C_{d-1}(\mathcal{L})\). However, we can extend \(r\) to be a \(\gamma\)-potential by adding zeros in the entries indexed by \(\mathcal{L}_{0}\). Adding zero-valued entries preserves the length of \(r\).
We now want to remove rows from this matrix so that it has full row rank. Topologically, removing rows corresponds to removing \(d\)-simplices from the complex \(\mathcal{L}\) to create a new complex \(\mathcal{L}_{1}\). Note that we must always include the row \(c\) to have full row rank; otherwise, \(r\) would be a non-zero vector in the kernel of this system, meaning the system does not have full rank. Removing these rows gives the linear system
\[\begin{bmatrix}\delta[\mathcal{L}_{1},\mathcal{L}_{0}]\\ c\end{bmatrix}r=\begin{bmatrix}0\\ 0\\ \vdots\\ 1\end{bmatrix}.\]
Let \(C=\begin{bmatrix}\delta[\mathcal{L}_{1},\mathcal{L}_{0}]^{T}&c^{T}\end{bmatrix}^{T}\) and \(b=\begin{bmatrix}0&0&\cdots&1\end{bmatrix}^{T}\). Note that \(C\) is a square matrix of size (say) \(m\times m\), where \(m\leq n+1\).
We use Cramer's rule to bound the size of \(\|r\|\). By Cramer's rule, \(r_{i}\), the \(i\)th entry of \(r\), is
\[r_{i}=\frac{\det(C_{i,b})}{\det(C)}.\]
where \(C_{i,b}\) is the matrix obtained by replacing the \(i\)th column with \(b\).
We first lower bound \(|\det{(C)}|\). As \(C\) is a full-rank integral matrix, then \(|\det{(C)}|\geq 1\). We now upper bound \(|\det(C_{i,b})|\). We calculate \(\det(C_{i,b})\) with the cofactor expansion on the column replaced by \(b\). As \(b\) has \(1\) in its last entry and \(0\)s elsewhere, the cofactor expansion gives \(\det(C_{i,b})=\pm 1\cdot\det(C_{i,b}^{i,c})\) where \(C_{i,b}^{i,c}\) is the matrix where we dropped the \(i\)th column and the row \(c\) from \(C_{i,b}\). The matrix \(C_{i,b}^{i,c}\) is a square submatrix of \(\delta[\mathcal{K}]\), so we can bound \(|\det(C_{i,b})|\leq\mathcal{T}_{\max}(\mathcal{K})\). Thus, \(r_{i}=\det(C_{i,b})/\det(C)\leq\mathcal{T}_{\max}(\mathcal{K})\) and
\[\|r\| =\sqrt{\sum_{i=1}^{m}r_{i}^{2}}\] \[\leq\sqrt{m\cdot\mathcal{T}_{\max}(\mathcal{K})^{2}}\] \[\leq\sqrt{n\cdot\mathcal{T}_{\max}(\mathcal{K})^{2}}\] \[=\sqrt{n}\cdot\mathcal{T}_{\max}(\mathcal{K})\]
The potential energy of \(r\) is \(\|\delta[\mathcal{K}]r\|^{2}\). We can use Lemma 4.5 to obtain the bound \(\|\delta[\mathcal{K}]r\|^{2}=O\left(n\cdot n_{0}\cdot\mathcal{T}_{\max}( \mathcal{K})^{2}\right)\).
#### 5.1.3 Upper Bound on Relative Torsion.
To conclude this section, we provide an upper bound on \(\mathcal{T}_{\max}(\mathcal{K})\).
**Lemma 5.6**.: _Let \(\mathcal{K}\) be a simplicial complex. Let \(n=\min\{n_{d-1},n_{d}\}\). Then the maximum cardinality of any \((d-1)\)-dimensional relative torsion subgroup of \(\mathcal{K}\) is \(\mathcal{T}_{\max}(\mathcal{K})\in O((\sqrt{d+1})^{n})\)._
Proof.: By the proof of Lemma 5.2, the quantity \(\mathcal{T}_{\max}(\mathcal{K})\) is the absolute value of the determinant of some submatrix of \(\partial_{d}\). We therefore bound the determinant of such a submatrix. We prove this bound using _Hadamard's Inequality_: the determinant of an \(m\times m\) matrix \(B\) is upper-bounded by the product of the norms of its columns.
Consider a square, \(m\times m\) submatrix \(B\) of \(\partial_{d}\). We know that \(m\leq n\). Moreover, any column of \(B\) has norm bounded above by \(\sqrt{d+1}\). This bound follows from the fact that each column of \(\partial_{d}\) has norm exactly \(\sqrt{d+1}\); each column has \(d+1\) nonzero entries, each of which is \(\pm 1\). The bound of the lemma follows by Hadamard's Inequality.
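For example (our own arithmetic, simply instantiating the lemma), with \(d=2\) and a submatrix of size \(m=10\), Hadamard's Inequality gives

\[|\det B|\leq\prod_{j=1}^{10}\|B_{\cdot j}\|\leq(\sqrt{3})^{10}=3^{5}=243,\]

and in general \(|\det B|\leq(\sqrt{d+1})^{m}\leq(\sqrt{d+1})^{n}\).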
Lemma 5.6 immediately implies the following corollaries of Theorem 5.3 and Theorem 5.5.
**Corollary 5.7**.: _Let \(\mathcal{K}\) be a \(d\)-dimensional simplicial complex and \(\gamma\) a unit-length null-homologous \((d{-}1)\)-cycle in \(\mathcal{K}\). Let \(n=\min\{n_{d-1},n_{d}\}\). The effective resistance of \(\gamma\) is bounded above by \(\mathcal{R}_{\gamma}(\mathcal{K})\in O\left(n^{2}(d+1)^{n}\right)\)._
**Corollary 5.8**.: _Let \(\mathcal{L}\subset\mathcal{K}\) be \(d\)-dimensional simplicial complexes. Let \(\gamma\in C_{d-1}(\mathcal{L})\) be a \((d-1)\)-cycle that is null-homologous in \(\mathcal{K}\) but not in \(\mathcal{L}\). Let \(n=\min\{n_{d-1},n_{d}\}\). Assume that \(\gamma=\partial\sigma\) for a \(d\)-simplex \(\sigma\notin\mathcal{L}_{d}\). The effective capacitance of \(\gamma\) in \(\mathcal{K}\) is bounded above by \(\mathcal{C}_{\gamma}(\mathcal{L},\mathcal{K})\in O\left(n\cdot n_{0}\cdot(d+1) ^{n}\right)\)._
### Lower Bounds.
#### 5.2.1 Lower Bounds on Effective Resistance
At the end of the previous section, we gave an exponential upper bound on the effective resistance of a \((d-1)\)-cycle in a simplicial complex (Corollary 5.7). In this section, we describe a \(d\)-dimensional simplicial complex \(\mathcal{B}_{d}^{n}\) with a \((d-1)\)-cycle \(\gamma\) with exponentially-large effective resistance with respect to the size of the complex.
**Theorem 5.9**.: _Let \(d\), \(n\) be positive integers. There is a constant \(c_{d}\geq 1\) that depends only on \(d\) and a \(d\)-dimensional simplicial complex \(\mathcal{B}_{d}^{n}\) with \(n_{d}\in\Theta((d+1)^{3}n)\)\(d\)-simplices and a unit-length null-homologous cycle \(\gamma\in C_{d-1}(\mathcal{B}_{d}^{n})\) such that \(\mathcal{R}_{\gamma}(\mathcal{B}_{d}^{n})\in\Theta(c_{d}^{n_{d}})\)._
The building block. Our simplicial complex will be obtained by gluing together multiple instances of the same "building block" \(B_{d}\). A formal description of \(B_{d}\) is given in Appendix E; here, we give an intuitive description of the complex. Let \(\Delta^{d}\) denote the closure of the \(d\)-simplex \(\sigma=\{v_{0},\ldots,v_{d}\}\), and let \(\partial\Delta^{d}\) denote the \((d-1)\)-dimensional simplicial complex \(\Delta^{d}\setminus\{\sigma\}\). Our construction starts with a triangulation of the space \(\partial\Delta^{d}\times[0,1]\) that we call the _stellar prism_. The "bottom copy" \(\partial\Delta^{d}\times\{0\}\) is triangulated like the original complex \(\partial\Delta^{d}\), and the "top copy" \(\partial\Delta^{d}\times\{1\}\) is triangulated with the _stellar subdivision_, the subdivision that adds a vertex to the center of each \((d-1)\)-simplex. See Figure 3. The relevant property of this triangulation is that the bottom copy has \(d+1\) \((d-1)\)-simplices, and the top copy has \(d\cdot(d+1)\) \((d-1)\)-simplices. The _building block_ \(B_{d}\) is obtained from this triangulation by identifying the vertex in the center of each \((d-1)\)-simplex \(\tau\times\{1\}\) with the unique vertex in \(\sigma\times\{1\}\setminus\tau\times\{1\}\), the unique vertex in the _set_ \(\sigma\times\{1\}\) that is not a vertex of the simplex \(\tau\times\{1\}\). See Figure 4.
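For concreteness, here is the case \(d=2\) (a small worked instance of the counts above): \(\partial\Delta^{2}\) is the boundary of a triangle with \(d+1=3\) edges, the stellar subdivision places one new vertex in the interior of each edge, and so

\[\underbrace{d+1}_{\text{bottom }(d-1)\text{-simplices}}=3,\qquad\underbrace{d\cdot(d+1)}_{\text{top }(d-1)\text{-simplices}}=6,\]

with the identification gluing each new midpoint vertex to the vertex of \(\sigma\times\{1\}\) opposite its edge.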
When we identify the vertices, each \((d-1)\)-simplex in the top of the stellar prism is replaced by one of the \((d-1)\)-dimensional faces of \(\sigma\times\{1\}\). Moreover, this replacement is \(d\)-to-\(1\), meaning each face of \(\sigma\times\{1\}\) replaces a \((d-1)\)-simplex exactly \(d\) times, or informally, each \((d-1)\)-simplex "appears \(d\) times" in the top copy of \(B_{d}\). Of course, this is not literally true, as a simplicial complex can only contain a single copy of each simplex. However, something to this effect is true. Namely, there is a \(d\)-chain \(f\in C_{d}(B_{d})\) whose boundary assigns value \(\pm 1\) to each \((d-1)\)-simplex in the bottom copy and value \(\pm d\) to each \((d-1)\)-simplex in the top copy. The key properties of the building block \(B_{d}\) are summarized in the following lemma.
Figure 3: Left: Stellar subdivision of a triangle. Middle and Right: Top and bottom view of the stellar prism of a triangle with the tetrahedra pushed apart.

**Lemma 5.10**.: _Let \(\sigma=\{v_{0},\ldots,v_{d}\}\) be a set. There is a \(d\)-dimensional simplicial complex \(B_{d}\) with vertices \(\sigma\times\{0,1\}\) such that_

1. \(B_{d}\) _has_ \(\Theta((d+1)^{3})\) \(d\)_-simplices,_

2. _there is a_ \(d\)_-chain_ \(f\in C_{d}(B_{d})\) _such that_

   1. \(\partial f=\partial(\sigma\times\{0\})+d\cdot\partial(\sigma\times\{1\})\)_,_

   2. \(\|f\|^{2}\in\Omega((d+1)^{3})\)_._
Note that neither of the simplices \(\sigma\times\{0\}\) or \(\sigma\times\{1\}\) is in \(B_{d}\); however, all of their faces are in the complex, so the boundaries of these simplices are well-defined.
The total complex. The complex \(\mathcal{B}_{d}^{n}\) is obtained by gluing together \(n\) copies of \(B_{d}\). We describe this gluing inductively on \(n\). The vertices of \(\mathcal{B}_{d}^{n}\) are \(\sigma\times\{0,\ldots,n\}\). The base case \(\mathcal{B}_{d}^{0}\) is the complete complex on the vertices \(\sigma\times\{0\}\). Inductively, the complex \(\mathcal{B}_{d}^{n}\) is obtained from \(\mathcal{B}_{d}^{n-1}\) by identifying the vertices \(\sigma\times\{n-1\}\) of \(\mathcal{B}_{d}^{n-1}\) with the vertices \(\sigma\times\{1\}\) of a copy of \(B_{d}\). We will denote this copy of \(B_{d}\) as \(B_{d}^{n}\) and the vertices \(\sigma\times\{0\}\) of \(B_{d}^{n}\) as \(\sigma\times\{n\}\). See Figure 4.
The key property of \(\mathcal{B}_{d}^{n}\) is that it has an exponentially-large chain with constant-sized boundary.
**Lemma 5.11**.: _Let \(d\geq 1\) and \(n\geq 1\). There is a \(d\)-chain \(y_{n}\in C_{d}(\mathcal{B}_{d}^{n})\) such that_
1. \(\|y_{n}\|^{2}\in\Omega(d^{2n}(d+1))\)__
2. \(\|\partial y_{n}\|^{2}=d+1\)__
Proof.: We construct this chain by induction on \(n\). The chain \(y_{n}\) will have \(\partial y_{n}=\partial(\sigma\times\{n\})\). For the base case \(n=0\), the chain \(y_{0}=\sigma\times\{0\}\) clearly has this property.
For the inductive case, recall from Lemma 5.10 that there is a \(d\)-chain \(f\in C_{d}(B_{d})\) such that \(\partial f=\partial(\sigma\times\{0\})+d\cdot\partial(\sigma\times\{1\})\). Let \(f_{n}\) denote this chain in \(B_{d}^{n}\). We define the chain \(y_{n}:=f_{n}-d\cdot y_{n-1}\). We now verify that \(y_{n}\) has boundary \(\partial(\sigma\times\{n\})\).
\[\partial y_{n}= \partial f_{n}-d\cdot\partial y_{n-1}\] \[= \partial(\sigma\times\{n\})+d\cdot\partial(\sigma\times\{n-1\})-d\cdot\partial(\sigma\times\{n-1\})\] \[= \partial(\sigma\times\{n\})\]
It is clear that \(\|\partial y_{n}\|^{2}=d+1\), so we just need to lower bound \(\|y_{n}\|^{2}\). We prove that \(\|y_{n}\|^{2}\in\Omega(d^{2n}(d+1))\) by induction. For the base case of \(n=1\), we know that \(\|y_{1}\|^{2}\in\Omega((d+1)^{3})\) by Lemma 5.10. For the inductive case, we can see that \(f_{n}\perp y_{n-1}\) as these chains are supported on disjoint sets of simplices. Therefore, \(\|y_{n}\|^{2}=\|f_{n}\|^{2}+d^{2}\cdot\|y_{n-1}\|^{2}\in\Omega(d^{2n}(d+1))\).
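Unrolling the recurrence makes the exponential growth explicit:

\[\|y_{n}\|^{2}=\|f_{n}\|^{2}+d^{2}\|y_{n-1}\|^{2}\geq d^{2}\|y_{n-1}\|^{2}\geq\cdots\geq d^{2(n-1)}\|y_{1}\|^{2}\in\Omega\left(d^{2n-2}(d+1)^{3}\right)\subseteq\Omega\left(d^{2n}(d+1)\right),\]

where the last containment uses \((d+1)^{3}\geq d^{2}(d+1)\).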
Lemma 5.11 shows that \(\mathcal{B}_{d}^{n}\) has an exponentially-large \(d\)-chain \(y_{n}\) with constant-sized boundary. If we can show that \(\ker\partial_{d}=0\), this will prove that \(\partial y_{n}\) has exponentially-large effective resistance, as \(y_{n}\) will be the _only_ \(d\)-chain with boundary \(\partial y_{n}\).
Figure 4: The complex \(\mathcal{B}_{2}^{3}\). Recall that the vertices of \(\mathcal{B}_{2}^{3}\) are of the form \((v_{i},j)\) for \(0\leq i\leq 2\) and \(0\leq j\leq 3\); colors in the figure denote the second coordinate of the vertices. Vertices with the same label are identified.
**Corollary 5.12**.: _The kernel of the boundary matrix \(\partial_{d}[\mathcal{B}^{n}_{d}]\) is trivial, i.e. \(\ker\partial_{d}[\mathcal{B}^{n}_{d}]=0\)._
Proof.: Lemma F.3 in Appendix F shows that \(\mathcal{B}^{n}_{d}\) collapses to a \((d-1)\)-dimensional subcomplex; call this subcomplex \(L\). This implies that \(\mathcal{B}^{n}_{d}\) is homotopy equivalent to \(L\), so in particular, the \(d\)-dimensional homology \(H_{d}(\mathcal{B}^{n}_{d})=0\). As \(\mathcal{B}^{n}_{d}\) is \(d\)-dimensional, then \(\operatorname{im}\partial_{d+1}[\mathcal{B}^{n}_{d}]=0\), so the only way that \(H_{d}(\mathcal{B}^{n}_{d})\) can equal \(0\) is if \(\ker\partial_{d}[\mathcal{B}^{n}_{d}]=0\).
Proof of Theorem 5.9.: Our cycle is the normalized cycle \(\gamma=\partial y_{n}/\|\partial y_{n}\|\) where \(y_{n}\) is the \(d\)-chain from Lemma 5.11. We know that \(\|y_{n}\|^{2}/\|\partial y_{n}\|^{2}\in\Omega(d^{2n})\). Moreover, we know that \(\ker\partial_{d}=0\) by Corollary 5.12. Therefore, we conclude that the effective resistance \(\mathcal{R}_{\gamma}(\mathcal{B}^{n}_{d})\in\Theta(d^{2n})\).
Now we need to restate this bound in terms of the number of \(d\)-simplices of \(\mathcal{B}^{n}_{d}\). Each copy of \(B_{d}\) has \(\Theta((d+1)^{3})\) \(d\)-simplices. Therefore, the entire complex \(\mathcal{B}^{n}_{d}\) has \(n_{d}=\Theta(n\cdot(d+1)^{3})\) \(d\)-simplices. If we substitute \(n_{d}\) into our bound, we find that the effective resistance is \(\Theta\left(d^{2n_{d}/(c\cdot(d+1)^{3})}\right)\) for some constant \(c\). Therefore, the constant in the theorem statement is \(c_{d}=d^{2/(c\cdot(d+1)^{3})}\).
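Explicitly, the change of variables behind this last step reads (for the constant \(c\) with \(n_{d}=c\cdot n\cdot(d+1)^{3}\)):

\[n=\frac{n_{d}}{c\,(d+1)^{3}}\quad\Longrightarrow\quad d^{2n}=\left(d^{2/(c\,(d+1)^{3})}\right)^{n_{d}}=c_{d}^{\,n_{d}}.\]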
Related Work. Variants of this construction are sometimes called the _Iterated Mapping Cylinder_ and have been used as a worst-case construction for other topological properties like torsion [56], homotopy [21], or embeddability [7, 22]. However, ours is the first work showing these complexes have a cycle with exponentially-large effective resistance (or exponentially-small spectral gap, as we will see later.) We were specifically inspired by the work of Newman [56].
However, our construction is more efficient than previous constructions of the Iterated Mapping Cylinder in terms of the number of \(d\)-simplices; this is why we dedicate several pages in the appendix to this construction. As an example, the building block in Newman's construction is the iterated suspension of the Mobius band, and the number of \(d\)-simplices in the suspension grows exponentially with the dimension \(d\). Comparatively, the number of \(d\)-simplices in our building block only grows polynomially with the dimension. Additionally, our construction has a cycle with \(\pm 1\) coefficients that is homologous to a cycle with \(\pm d\) coefficients; in contrast, previous works have a cycle with \(\pm 1\) coefficients that is homologous to a cycle with \(\pm 2\) coefficients. Both of these properties result in a larger constant \(c_{d}\) in Theorem 5.9. It is an open question if there is a simplicial complex with a cycle whose effective resistance exactly matches the constant of the lower bound, i.e. \(c_{d}\in\Theta(d+1)\).
#### 5.2.2 Lower Bounds on Capacitance.
In this section, we describe a pair of nested simplicial complexes that have a cycle with exponentially-large effective capacitance. This complex will be built from the same building block as the complex for lower bounding effective resistance, but the building blocks will be glued together in a slightly different way.
Recall that the building block is a simplicial complex \(B_{d}\) with vertices \(\sigma\times\{0,1\}\), where \(\sigma=\{v_{0},\ldots,v_{d}\}\) is a \(d\)-simplex. For a natural number \(n\), we recursively construct the simplicial complex \(\mathcal{Q}^{n}_{d}\). The simplicial complex \(\mathcal{Q}^{n}_{d}\) is obtained by gluing together \(n\) copies of \(B_{d}\). The vertices of \(\mathcal{Q}^{n}_{d}\) will be denoted \(\sigma\times\{0,\ldots,n\}\). The base case \(\mathcal{Q}^{0}_{d}\) is the complete complex on the vertices \(\sigma\times\{0\}\). Inductively, the complex \(\mathcal{Q}^{n}_{d}\) is obtained from \(\mathcal{Q}^{n-1}_{d}\) by identifying the vertices \(\sigma\times\{n-1\}\) of \(\mathcal{Q}^{n-1}_{d}\) with the vertices \(\sigma\times\{0\}\) of a copy of \(B_{d}\).7 We will denote this copy of \(B_{d}\) as \(B^{n}_{d}\) and the vertices \(\sigma\times\{1\}\) of \(B^{n}_{d}\) as \(\sigma\times\{n\}\). The simplicial complex \(\mathcal{P}^{n}_{d}\) is defined as \(\mathcal{Q}^{n}_{d}\) minus the simplex \(\sigma\times\{0\}\).
Footnote 7: It is worth comparing \(\mathcal{Q}^{n}_{d}\) to the simplicial complex \(\mathcal{B}^{n}_{d}\) from the bounds on effective resistance. Both complexes are obtained by identifying vertices of copies of the building block \(B_{d}\), but the identifications are made in different ways. When a building block is added to \(\mathcal{B}^{i}_{d}\) to form \(\mathcal{B}^{i+1}_{d}\), its "top" \(\sigma\times\{1\}\) is identified with \(\sigma\times\{i\}\); when a building block is added to \(\mathcal{Q}^{i}_{d}\) to form \(\mathcal{Q}^{i+1}_{d}\), its "bottom" \(\sigma\times\{0\}\) is identified with \(\sigma\times\{i\}\).
We will prove that the cycle \(\gamma=\partial(\sigma\times\{n\})\) has exponentially large capacitance in \(\mathcal{P}_{d}^{n}\subset\mathcal{Q}_{d}^{n}\). In order for \(\gamma\) to have finite capacitance, the cycle \(\gamma\) must be null-homologous in \(\mathcal{Q}_{d}^{n}\) but not null-homologous in \(\mathcal{P}_{d}^{n}\). We prove this in the following lemma.
**Lemma 5.13**.: _Let \(\mathcal{P}_{d}^{n}\) and \(\mathcal{Q}_{d}^{n}\) be as described above. Consider the chain \(\partial(\sigma\times\{n\})\in C_{d-1}(\mathcal{P}_{d}^{n})\)._
1. _The_ \((d-1)\)_-chain_ \(\partial(\sigma\times\{n\})+(-1)^{n-1}\frac{1}{d^{n}}\partial(\sigma\times\{0\})\) _is null-homologous in_ \(\mathcal{P}_{d}^{n}\)_._
2. _The_ \((d-1)\)_-chain_ \(\partial(\sigma\times\{n\})\) _is null-homologous in_ \(\mathcal{Q}_{d}^{n}\)_._

3. _The_ \((d-1)\)_-chain_ \(\partial(\sigma\times\{n\})\) _is not null-homologous in_ \(\mathcal{P}_{d}^{n}\)_._
Proof.: _Proof of Part (1)_ We will prove that the cycle \(\partial(\sigma\times\{i\})+(-1)^{i-1}\frac{1}{d^{i}}\partial(\sigma\times\{0\})\) is null-homologous in \(\mathcal{P}_{d}^{i}\) by induction on \(i\). Specifically, we will find a \(d\)-chain \(y_{i}\) such that \(\partial y_{i}=\partial(\sigma\times\{i\})+(-1)^{i-1}\frac{1}{d^{i}}\partial(\sigma\times\{0\})\). For each copy of the building block \(B_{d}^{i}\), let \(f_{i}\in C_{d}(B_{d}^{i})\) be the \(d\)-chain guaranteed by Lemma 5.10 such that \(\partial f_{i}=\partial(\sigma\times\{i-1\})+d\cdot\partial(\sigma\times\{i\})\). For the base case of \(i=1\), the \(d\)-chain is \(y_{1}=\frac{1}{d}f_{1}\). Inductively, we define the chain \(y_{i}=\frac{1}{d}f_{i}-\frac{1}{d}y_{i-1}\). We can verify that \(y_{i}\) has the claimed boundary as
\[\partial y_{i}= \frac{1}{d}\partial f_{i}-\frac{1}{d}\partial y_{i-1}\] \[= \partial(\sigma\times\{i\})+\frac{1}{d}\partial(\sigma\times\{i- 1\})-\frac{1}{d}\partial(\sigma\times\{i-1\})+(-1)^{i-1}\frac{1}{d^{i}} \partial(\sigma\times\{0\})\] \[= \partial(\sigma\times\{i\})+(-1)^{i-1}\frac{1}{d^{i}}\partial( \sigma\times\{0\})\]
_Proof of Part (2)_ As the simplex \((\sigma\times\{0\})\in\mathcal{Q}_{d}^{n}\), then the chain \(y_{n}+(-1)^{n}\frac{1}{d^{n}}\cdot(\sigma\times\{0\})\) obviously has boundary \(\partial(\sigma\times\{n\})\).
_Proof of Part (3)_ To prove that \(\partial(\sigma\times\{n\})\) is not null-homologous in \(\mathcal{P}_{d}^{n}\), we will use two lemmas we prove in the appendix. Lemma F.4 shows that \(\mathcal{P}_{d}^{n}\) collapses to a \((d-1)\)-dimensional subcomplex--call it \(L\)--containing the support of \(\partial(\sigma\times\{n\})\). Lemma F.2 shows that for nested complexes \(L\subset K\) such that \(K\) collapses to \(L\), a cycle is null-homologous in \(L\) if and only if it is null-homologous in \(K\). As \(\partial(\sigma\times\{n\})\) is not null-homologous in \(L\) (\(L\) is \((d-1)\)-dimensional, so \(\operatorname{im}\partial_{d}[L]=0\)), then \(\partial(\sigma\times\{n\})\) is not null-homologous in \(\mathcal{P}_{d}^{n}\) either.
**Theorem 5.14**.: _Let \(d\), \(n\) be positive integers. There is a pair of nested \(d\)-dimensional simplicial complexes \(\mathcal{P}_{d}^{n}\subset\mathcal{Q}_{d}^{n}\) with \(n_{d}\in\Theta((d+1)^{3}n)\)\(d\)-simplices, a unit-length null-homologous cycle \(\gamma\in C_{d-1}(\mathcal{Q}_{d}^{n})\), and a constant \(c_{d}\geq 1\) that depends only on \(d\) such that \(\mathcal{C}_{\gamma}(\mathcal{P}_{d}^{n},\mathcal{Q}_{d}^{n})\in\Theta(c_{d}^ {n_{d}})\)._
Proof.: The complexes \(\mathcal{P}_{d}^{n}\) and \(\mathcal{Q}_{d}^{n}\) are the complexes described in the preceding paragraphs. The cycle \(\gamma=\partial(\sigma\times\{n\})/\sqrt{d+1}\), where \(\sqrt{d+1}\) is a normalization factor. We must show that any unit \(\gamma\)-potential \(p\) has exponentially-large potential energy.
Let \(p\) be any unit \(\gamma\)-potential in \(\mathcal{P}_{d}^{n}\). We know that \(\gamma^{T}p=1\) and \(\delta_{d-1}[\mathcal{P}_{d}^{n}]p=0\). As \(\delta_{d-1}[\mathcal{P}_{d}^{n}]=\partial_{d}^{T}[\mathcal{P}_{d}^{n}]\), the second condition is equivalent to saying that \(p^{T}b=0\) for any vector \(b\in\operatorname{im}\partial_{d}[\mathcal{P}_{d}^{n}]\). Lemma 5.13 Part 1 proves that \(\partial(\sigma\times\{n\})+(-1)^{n-1}d^{-n}\partial(\sigma\times\{0\})\in\operatorname{im}\partial_{d}[\mathcal{P}_{d}^{n}]\), so the previous two facts imply \(p^{T}\big{(}\partial(\sigma\times\{n\})+(-1)^{n-1}d^{-n}\partial(\sigma\times\{0\})\big{)}=0\). As \(p^{T}\partial(\sigma\times\{n\})=\sqrt{d+1}\), we conclude that \(|p^{T}\partial(\sigma\times\{0\})|=d^{n}\sqrt{d+1}\). Moreover, as the only \(d\)-simplex in \(\mathcal{Q}_{d}^{n}\) that is not in \(\mathcal{P}_{d}^{n}\) is \(\sigma\times\{0\}\), the fact that \(\delta_{d-1}[\mathcal{P}_{d}^{n}]p=0\) implies that the potential energy \(\|\delta_{d-1}[\mathcal{Q}_{d}^{n}]p\|^{2}=\left(p^{T}\partial(\sigma\times\{0\})\right)^{2}=\Omega(d^{2n})\).
This shows that \(p\) has exponentially-large potential energy with respect to \(n\). By the same argument as in the proof of Theorem 5.9, we can show that \(p\) also has exponentially-large potential energy with respect to \(n_{d}\), but for a different (but constant) base of the exponent \(c_{d}\).
## 6 Bounds on the Spectral Gap
Our lower and upper bounds on effective resistance imply lower and upper bounds on the spectral gap of the combinatorial Laplacian. This is because _the spectral gap is the inverse of the maximum effective resistance of all unit-length, null-homologous cycles_, a fact we proved in Lemma 4.4. Therefore, a corollary of Theorem 5.9 is that the spectral gap of the combinatorial Laplacian can be exponentially-small in the worst-case. This resolves one of the most important open questions in the field of Quantum Topological Data Analysis and shows that Betti Number Estimation algorithms must run for an exponentially-long time to exactly compute Betti numbers.
### Exponentially-small spectral gap.
Based on this connection between the spectral gap and effective resistance of Lemmas 4.4 and 2.3, we can derive lower and upper bounds on the spectral gap of the \(d\)-combinatorial Laplacian as corollaries of the upper and lower bounds on the effective resistance (Corollary 5.7 and Theorem 5.9). While lower bounds on the spectral gap were previously known (see for example [22, Proof of Theorem 1.2]), one advantage of our proof is that it provides a necessary condition for large spectral gap, namely the existence of a subcomplex with exponentially-large relative torsion.
**Theorem 1.2**.: _Let \(K\) be a simplicial complex. Let \(n_{i}\) be the number of \(i\)-simplices of \(K\). Let \(n=\max\{\min\{n_{d-1},n_{d}\},\min\{n_{d},n_{d+1}\}\}\). Then the spectral gap \(\lambda_{\min}(L_{d}[K])\in\Omega\left(\frac{1}{n^{2}d^{n}}\right)\)._
**Theorem 6.1**.: _Let \(d\), \(n\geq 1\). There is a \(d\)-dimensional simplicial complex \(\mathcal{B}_{d}^{n}\) with \(n_{d}\in\Theta((d+1)^{3}\cdot n)\)\(d\)-simplices and a constant \(c_{d}\geq 1\) that depends only on \(d\) such that the spectral gaps of \(L_{d-1}[\mathcal{B}_{d}^{n}]\) and \(L_{d}[\mathcal{B}_{d}^{n}]\) are \(O(\frac{1}{c_{d}^{n_{d}}})\)._
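The derivation of Theorem 6.1 from Theorem 5.9 is essentially a one-line inversion via Lemma 4.4 (sketched here for \(L_{d-1}\); the \(L_{d}\) case is analogous): taking \(\gamma\) to be the cycle of Theorem 5.9,

\[\lambda_{\min}\left(L_{d-1}[\mathcal{B}_{d}^{n}]\right)\leq\frac{1}{\mathcal{R}_{\gamma}(\mathcal{B}_{d}^{n})}\in O\left(\frac{1}{c_{d}^{n_{d}}}\right).\]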
We remark that we can also derive a lower bound on the spectral gap in terms of relative torsion as a corollary to Theorem 5.3. This implies a bounded spectral gap for simplicial complexes with no relative torsion, as remarked upon by Friedman [23, Theorem 7.2] in the case of orientable \(d\)-manifolds.
### Many Small Eigenvalues.
Theorem 6.1 shows there is a simplicial complex with a single exponentially-small eigenvalue. It is natural to ask whether there is a bound on the number of very small eigenvalues a simplicial complex can have. This is relevant to QTDA algorithms that work by counting the number of eigenvalues smaller than a given threshold. Here, we provide a complex \(\mathcal{M}_{d}^{n}\) with a polynomial number of exponentially-small eigenvalues.
**Corollary 6.2**.: _Let \(n,d\geq 1\). There exists a simplicial complex \(\mathcal{M}_{d}^{n}\) with \(n_{d}\in\Theta((d+1)^{3}\cdot n)\)\(d\)-simplices and a constant \(c_{d}>1\) that depends only on \(d\) such that both \(L_{d-1}[\mathcal{M}_{d}^{n}]\) and \(L_{d}[\mathcal{M}_{d}^{n}]\) have \(\Omega(\sqrt{n_{d}})\) eigenvalues of size \(O(\frac{1}{c_{d}^{\sqrt{n_{d}}}})\)._
Proof of Corollary 6.2.: This follows from Part 2 of Lemma 2.2 relating the spectrum of a complex to the spectra of its connected components. We define the complex \(\mathcal{M}_{d}^{n}\) to be \(\sqrt{n}\) disjoint copies of the complex \(\mathcal{B}_{d}^{\sqrt{n}}\) of Theorem 6.1. The complex then has \(n_{d}\in\Theta((d+1)^{3}\cdot n)\) \(d\)-simplices, and each copy contributes an eigenvalue of size \(O(1/c_{d}^{\sqrt{n_{d}}})\), giving \(\Omega(\sqrt{n_{d}})\) such eigenvalues.
### Clique-Dense Complexes.
Existing QTDA algorithms perform best when the simplicial complex is clique-dense, meaning that the simplicial complex has close to the maximal number of \(d\)-simplices, i.e. \(n_{d}\approx\binom{n_{0}}{d+1}\). However, the simplicial complex we constructed in Theorem 6.1 is sparse: it only has \(n_{d}\in\Theta((d+1)^{3}\cdot n)\) \(d\)-simplices, which is linear in the number of vertices for fixed \(d\). Therefore, Theorem 6.1 does not rule out the possibility that clique-dense complexes avoid worst-case spectral gap.
However, in this section, we show that we can extend the construction of Theorem 6.1 to clique-dense complexes (at the expense of making the constant of the exponent smaller.) To do this, we use a probabilistic coloring argument of Newman [56] that reduces the number of vertices of a simplicial complex while preserving the number of \(d\)-simplices and the spectral gap of the Laplacian.
We begin with definitions. A _coloring_ of a simplicial complex \(\mathcal{K}\) is a map \(c\) on its vertices. The _pattern complex_ of a simplicial complex \(\mathcal{K}\) with coloring \(c\) is the simplicial complex \(\mathcal{K}/c=\{c(\sigma)\colon\sigma\in\mathcal{K}\}\); intuitively, the pattern complex of \(\mathcal{K}\) is the simplicial complex obtained by identifying all vertices of \(\mathcal{K}\) of the same color and identifying all simplices whose vertices have the same set of colors. While a simplex in \(\mathcal{K}\) may be mapped to a lower-dimensional simplex in the pattern complex if two of its vertices are the same color, we are only concerned with colorings where this does not happen. A _proper coloring_ of a \(d\)-dimensional simplicial complex \(\mathcal{K}\) is a coloring such that (1) the endpoints of each edge in \(\mathcal{K}\) are different colors and (2) \(c(\sigma)\neq c(\tau)\) for any distinct \(d\)-simplices \(\sigma,\tau\in\mathcal{K}\). Note that for a proper coloring, condition (1) guarantees that each simplex in \(\mathcal{K}\) corresponds to a simplex of the same dimension in \(\mathcal{K}/c\). Additionally, \(\mathcal{K}\) and \(\mathcal{K}/c\) have the same set of \((d-1)\)- and \(d\)-simplices up to recoloring. Proper colorings are relevant to our paper as they preserve the spectral gap.
**Lemma 6.3**.: _Let \(\mathcal{K}\) be a \(d\)-dimensional simplicial complex and let \(c\) be a proper coloring of \(\mathcal{K}\). Then the up Laplacians \(L_{d-1}^{up}[\mathcal{K}]\) and \(L_{d-1}^{up}[\mathcal{K}/c]\) have the same spectrum._
Proof.: This follows as \(\mathcal{K}\) and \(\mathcal{K}/c\) have the same set of \((d-1)\)- and \(d\)-simplices up to recoloring, so the boundary maps \(\partial_{d}[\mathcal{K}]\) and \(\partial_{d}[\mathcal{K}/c]\) are the same up to the signs on the simplices; however, the spectra of the up Laplacians \(L_{d-1}^{up}[\mathcal{K}]\) and \(L_{d-1}^{up}[\mathcal{K}/c]\) are unaffected by different orientations of the simplices ([24, Theorem 4.1.1]), so the lemma follows.
Newman's method also requires bounds on a generalized notion of degree. For a \(d\)-dimensional simplicial complex \(\mathcal{K}\) and natural numbers \(i<j\leq d\), define \(\deg_{j}(\sigma)\) to be the number of \(j\)-simplices incident to an \(i\)-simplex \(\sigma\), and define \(\Delta(\mathcal{K})\) to be the maximum of \(\deg_{j}(\sigma)\) over all simplices \(\sigma\in\mathcal{K}\) and all dimensions \(j\leq d\). Newman used a probabilistic argument to show that for a \(d\)-dimensional simplicial complex there is always a proper coloring with a bounded number of colors.
**Lemma 6.4** (Lemma 3, Newman [56]).: _Let \(\mathcal{K}\) be a \(d\)-dimensional simplicial complex with \(n_{0}\) vertices such that \(\Delta(\mathcal{K})\leq\Delta\). Then there is a proper coloring of \(\mathcal{K}\) with at most \(f(\Delta)\cdot\sqrt[d]{n_{0}}\) colors, for some function \(f\) depending only on \(\Delta\) and \(d\)._
This implies the following corollary of our bound on the spectral gap.
**Theorem 1.3**.: _Let \(d\), \(n\geq 1\). There are constants \(\kappa_{d},c_{d}\geq 1\) that depend only on \(d\) and a \(d\)-dimensional simplicial complex \(\mathcal{C}_{d}^{n}\) with \(\tilde{n}_{0}\) vertices and \(\Omega(\kappa_{d}\binom{\tilde{n}_{0}}{d})\) \(d\)-simplices such that the spectral gaps \(\lambda_{\min}(L_{d-1}[\mathcal{C}_{d}^{n}])\), \(\lambda_{\min}(L_{d}[\mathcal{C}_{d}^{n}])\in O(1/c_{d}^{n_{d}})\)._
Proof.: This is a corollary to Theorem 6.1, Lemma 6.3, and Lemma 6.4. The simplicial complex \(\mathcal{C}_{d}^{n}\) is a pattern complex of a coloring \(c\) of the complex \(\mathcal{B}_{d}^{n}\) from Theorem 6.1. This coloring \(c\) is the coloring guaranteed by Lemma 6.4. This coloring has \(f(d)\sqrt[d]{n_{0}}\) colors for some function \(f\); we can see this by bounding \(\Delta(\mathcal{B}_{d}^{n})\) by a function of \(d\). Examining its construction in Section 5.2, each simplex in \(\mathcal{B}_{d}^{n}\) is in at most two different building blocks. Each building block has \(2(d+1)\) vertices, so for any \(1\leq i<j\leq d\), an \(i\)-simplex in \(\mathcal{B}_{d}^{n}\) is incident to at most \(\binom{4d-i-1}{j-i}\) \(j\)-simplices. Thus, \(\Delta(\mathcal{B}_{d}^{n})\) is at most some function of \(d\). Therefore, there is a simplicial complex with \(\tilde{n}_{0}=f(d)\sqrt[d]{n_{0}}\) vertices and \(\Theta\left(\frac{(d+1)^{3}}{f(d)^{d}}\tilde{n}_{0}^{d}\right)\) \(d\)-simplices. As \(\binom{\tilde{n}_{0}}{d}\leq\left(\frac{e\tilde{n}_{0}}{d}\right)^{d}\), there are \(\Omega(\kappa_{d}\binom{\tilde{n}_{0}}{d})\) \(d\)-simplices in \(\mathcal{C}_{d}^{n}\) for some appropriate constant \(\kappa_{d}=\Omega\left(\frac{(d+1)^{3}}{f(d)^{d}e^{d}}\right)\).
### Variants of the Laplacian.
We finish this section by showing how our results imply upper and lower bounds on the spectral gap of several variants of the Laplacian.
#### 6.4.1 Boundary Matrix.
The QTDA algorithm of McArdle, Gilyen, and Berta [50] is not parameterized by the spectral gap of the combinatorial Laplacian; rather, it is parameterized by the spectral gap of the boundary matrices. However, the non-zero singular values of the \(d^{\text{th}}\) boundary matrix \(\partial_{d}\) are the square roots of the eigenvalues of the \((d-1)^{\text{st}}\) up Laplacian \(L_{d-1}^{up}\) as \(L_{d-1}^{up}=\partial_{d}\partial_{d}^{T}\). Therefore, Theorem 1.2 and Theorem 6.1 imply exponential upper and lower bounds on the spectral gap of the boundary matrix.
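In symbols (a standard linear-algebra identity, recorded here for completeness): if \(\partial_{d}=U\Sigma V^{T}\) is a singular value decomposition, then

\[L_{d-1}^{up}=\partial_{d}\partial_{d}^{T}=U\Sigma\Sigma^{T}U^{T},\qquad\sigma_{i}(\partial_{d})=\sqrt{\lambda_{i}(L_{d-1}^{up})},\]

so an exponentially-small spectral gap for \(L_{d-1}^{up}\) translates into an exponentially-small spectral gap for \(\partial_{d}\), since the square root of an exponentially-small quantity is still exponentially small.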
#### 6.4.2 Normalized Laplacian.
We now show that the normalized up Laplacian \(\tilde{L}_{d}^{up}\) can also have an exponentially-small spectral gap. While the eigenvalues of the unnormalized \(d^{\text{th}}\) up Laplacian \(L_{d}^{up}\) are in the range \([0,\,n_{0}]\), the eigenvalues of the normalized \(d^{\text{th}}\) up Laplacian are in the range \([0,\,d+2]\) [35, Theorem 3.2.i]. As the normalized up Laplacian has a constant upper bound on its eigenvalues, it is reasonable to suspect that it also has a constant lower bound on its non-zero eigenvalues. Corollary 6.5 shows this is not the case.
**Corollary 6.5**.: _Let \(d\), \(n\geq 1\). There is a \(d\)-dimensional simplicial complex \(\mathcal{B}_{d}^{n}\) with \(n_{d}\in\Theta(\operatorname{poly}(d)\cdot n)\) \(d\)-simplices and a constant \(c_{d}\geq 1\) that depends only on \(d\) such that the spectral gap of the normalized up Laplacian is \(\lambda_{\min}(\tilde{L}_{d}^{up})\in O(\frac{1}{c_{d}^{n_{d}}})\)._
Proof.: This follows from Theorem 6.1 and Lemma 2.5. The statement follows as \(d_{\min}[\mathcal{B}_{d}^{n}]=1\).
#### 6.4.3 Persistent Laplacian.
Recently, Wang, Nguyen, and Wei [67] introduced the _persistent Laplacian_ of simplicial filtrations as a generalization of the combinatorial Laplacian. The spectral gap of the persistent Laplacian has since appeared as a parameter of quantum algorithms for computing persistent Betti numbers [33], so lower bounding it is also of interest to QTDA. Memoli, Wan, and Wang [52, Theorem SM5.8] prove that persistent Laplacians preserve the effective resistance of cycles; therefore, the bounds on the spectral gap of the combinatorial Laplacian also apply to the persistent Laplacian by Lemma 4.4.
## 7 Conclusion and Open Questions.
In this paper, we propose a new span-program-based quantum algorithm for computing Betti numbers. This algorithm is a novel approach to QTDA that is more similar to classical incremental algorithms for computing Betti numbers than previous QTDA algorithms. Unfortunately, we show that, in the worst case, the span-program-based algorithm takes exponential time due to cycles with exponentially-large effective resistance or effective capacitance. However, as a corollary to exponentially-large effective resistance, we prove that the spectral gap of the combinatorial Laplacian can be exponentially small. This proves that all known QTDA algorithms also require exponential time in the worst case. Below we discuss some of the questions left open by our work.
Incremental Quantum Algorithm for Persistent Betti Numbers. Our algorithm incrementally computes the Betti number of a simplicial complex. While the classical algorithm for computing _persistent_ Betti numbers is incremental [69], our algorithm is unable to perform persistent pairing. In other words, our algorithm can identify when a homology class dies, but it cannot identify when that homology class was born. It is an open question whether our algorithm can be adapted to compute the persistent Betti numbers of a simplicial complex. There are quantum algorithms for computing persistent Betti numbers [33, 50], but these algorithms are not incremental.
Lower Bounds or Expectation of the Spectral Gap. Theorem 6.1 shows that the spectral gap of the combinatorial Laplacian can be exponentially small. However, it is an open question how common these sorts of worst-case complexes are. While there are exact or expected lower bounds for certain families of simplicial complexes [4, 23, 28, 44, 45, 46, 63, 68], it is still unknown what the expected spectral gap is, or whether there are lower bounds on the spectral gap, for all simplicial complexes, or for families of simplicial complexes of interest like Vietoris-Rips complexes.
Cheeger Inequalities and Implications of Exponentially-Small Spectral Gaps. The existence of simplicial complexes with exponentially-small spectral gap implies that existing QTDA algorithms cannot exactly compute Betti numbers without running for an exponentially long time; however, they can solve the related problem of _Approximate Betti Number Estimation_[30] of counting the number of eigenvalues of the Laplacian smaller than a given threshold. It remains an open question, though, how useful approximate Betti number estimation is in practice.
A potential interpretation for approximate Betti number estimation could come in the form of a higher-dimensional Cheeger inequality. The _Cheeger inequality_ in graphs relates the smallest eigenvalue(s) of the graph Laplacian to a value called the _Cheeger constant_ that measures the existence of (multi-way) sparse graph cuts [11, 43]. Intuitively, if the Hodge Theorem (Theorem 2.1) says that the graph Laplacian has more than one zero eigenvalue if and only if the graph is disconnected, then the Cheeger inequality says that it has small non-zero eigenvalues if and only if it is "almost" disconnected.
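For reference, the classical graph statement can be made precise as follows (quoted here in one common normalization, for the normalized graph Laplacian; see [11, 43]): if \(\lambda_{2}\) denotes the smallest non-zero eigenvalue and \(h(G)\) the Cheeger constant of a connected graph \(G\), then

\[\frac{\lambda_{2}}{2}\;\leq\;h(G)\;\leq\;\sqrt{2\lambda_{2}}.\]

In particular, \(\lambda_{2}\) is small exactly when some sparse cut exists, which is the sense in which small non-zero eigenvalues certify that the graph is "almost" disconnected.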
However, higher-dimensional generalizations of the Cheeger inequality remain elusive. Ideally, a higher-dimensional Cheeger inequality would say something similar: a simplicial complex "almost has non-trivial \(d\)-homology" or "has a sparse cut" if and only if the \(d^{\text{th}}\) combinatorial Laplacian has small non-zero eigenvalues. One hurdle is that it is not clear how to generalize the notion of "sparse cut" to higher dimensions. While there have been several definitions proposed for a Cheeger constant for higher-dimensional Laplacians [25, 47, 53, 58], one or both sides of a Cheeger inequality have failed for these constants [27, 28, 58, 64]. Our work provides another counterexample to these
Cheeger inequalities; the spectral gap of our worst-case complexes is exponentially small, but the proposed notions of Cheeger constant cannot be.
A recent paper presents a two-sided Cheeger inequality [39] that connects the spectral gap of the combinatorial Laplacian to a Cheeger constant based on the 1-norm of chains. However, this Cheeger inequality does not carry the interpretation of the graph Cheeger inequality, namely that simplicial complexes with small eigenvalues "almost have non-trivial homology". It remains an open question if such a higher-dimensional Cheeger inequality exists.
## Acknowledgements.
This work was supported by NSF grants CCF-1816442 and CCF-1617951.
|
2305.11068 | ORKG-Leaderboards: A Systematic Workflow for Mining Leaderboards as a
Knowledge Graph | The purpose of this work is to describe the Orkg-Leaderboard software
designed to extract leaderboards defined as Task-Dataset-Metric tuples
automatically from large collections of empirical research papers in Artificial
Intelligence (AI). The software can support both the main workflows of
scholarly publishing, viz. as LaTeX files or as PDF files. Furthermore, the
system is integrated with the Open Research Knowledge Graph (ORKG) platform,
which fosters the machine-actionable publishing of scholarly findings. Thus the
system output, when integrated within the ORKG's supported Semantic Web
infrastructure of representing machine-actionable 'resources' on the Web,
enables: 1) broadly, the integration of empirical results of researchers across
the world, thus enabling transparency in empirical research with the potential
to also being complete contingent on the underlying data source(s) of
publications; and 2) specifically, enables researchers to track the progress in
AI with an overview of the state-of-the-art (SOTA) across the most common AI
tasks and their corresponding datasets via dynamic ORKG frontend views
leveraging tables and visualization charts over the machine-actionable data.
Our best model achieves performances above 90% F1 on the \textit{leaderboard}
extraction task, thus proving Orkg-Leaderboards a practically viable tool for
real-world usage. Going forward, in a sense, Orkg-Leaderboards transforms the
leaderboard extraction task to an automated digitalization task, which has
been, for a long time in the community, a crowdsourced endeavor. | Salomon Kabongo, Jennifer D'Souza, Sören Auer | 2023-05-10T13:19:18Z | http://arxiv.org/abs/2305.11068v1 | # ORKG-Leaderboards: A Systematic Workflow for Mining Leaderboards as a Knowledge Graph
###### Abstract
The purpose of this work is to describe the orkg-Leaderboard software designed to extract _leaderboards_ defined as _Task-Dataset-Metric_ tuples automatically from large collections of empirical research papers in Artificial Intelligence (AI). The software can support both the main workflows of scholarly publishing, viz. as LaTeX files or as PDF files. Furthermore, the system is integrated with the Open Research Knowledge Graph (ORKG) platform, which fosters the machine-actionable publishing of scholarly findings. Thus the system output, when integrated within the ORKG's supported Semantic Web infrastructure of representing machine-actionable 'resources' on the Web, enables: 1) broadly, the integration of empirical results of researchers across the world, thus enabling transparency in empirical research with the potential to also being complete contingent on the underlying data source(s) of publications; and 2) specifically, enables researchers to track the progress in AI with an overview of the state-of-the-art (SOTA) across the most common AI tasks and their corresponding datasets via dynamic ORKG frontend views leveraging tables and visualization charts over the machine-actionable data. Our best model achieves performances above 90% F1 on the _leaderboard_ extraction task, thus proving orkg-Leaderboards a practically viable tool for real-world usage.
Going forward, in a sense, ORKG-Leaderboards transforms the _leaderboard_ extraction task to an automated digitalization task, which has been, for a long time in the community, a crowdsourced endeavor.
Table mining, Information extraction, Scholarly text mining, Neural machine learning, Semantic networks, Knowledge graphs.
## 1 Introduction
Shared tasks--a long-standing practice in the Natural Language Processing (NLP) community--are competitions to which researchers or teams of researchers submit systems that address a specific _Task_, evaluated based on a predefined _Metric_[1]. Seen as "drivers of progress" for empirical research, they attract diverse participating groups from both academia and industry, and are harnessed as test-beds for new emerging shared tasks on under-researched and under-resourced topics [2]. Examples of long-standing Shared Tasks include the Conference and Labs of the Evaluation Forum (CLEF)1, the shared tasks organized at the Conference on Natural Language Learning (CoNLL)2, the International Workshop on Semantic Evaluation (SEMEVAL)3, or the biomedical domain-specific BioNLP Shared Task Series [3] and the Critical Assessment of Information Extraction in Biology (BioCreative) 4. Being inherently competitive, Shared Tasks offer as a main outcome _Leaderboards_ that publish participating system rankings.
Footnote 1: [http://www.clef-initiative.eu/](http://www.clef-initiative.eu/)
Footnote 2: [https://www.signl.org/confl](https://www.signl.org/confl)
Footnote 3: [https://semeval.github.io/](https://semeval.github.io/)
Footnote 4: [https://biocreative.bioinformatics.udel.edu/tasks/](https://biocreative.bioinformatics.udel.edu/tasks/)
Inspired by Shared Tasks, the _Leaderboards_ construct of progress trackers is simultaneously taken up for the recording of results in the field of empirical Artificial Intelligence (AI) at large. Here the information is made available via the traditional scholarly publishing flow as PDFs and preprints, unlike in Shared Tasks, where the community is confined to a known list of researchers, so that tracking the dataset creators and the individual systems applied is less cumbersome: they can be found within the list of researchers that sign up to organize or participate in the task. On the other hand, general publishing avenues bespeak a deluge of peer-reviewed scholarly publications [4] and PDF preprints ahead (or even instead) of peer-reviewed publications [5]. This high-volume publication trend problem is only compounded by the diversity in empirical AI research, where _Leaderboards_ can potentially be searched and tracked on research problems in various fields such as Computer Vision, Time Series Analysis, Games, Software engineering, Graphs, Medicine, Speech, Audio processing, Adversarial learning, etc. Thus the problem of obtaining complete _Leaderboard_ representations of empirical research seems tedious, if not completely insurmountable.
Regardless of the setup, i.e. from Shared Tasks or empirical AI research, another problem in the current methodology is the information representation of _Leaderboards_, which is often via Github repositories, shared task websites, or researchers' personal websites. Some well-known websites that exist to this end are: PapersWithCode (PwC) [6]5, NLP-Progress [7], AI-metrics [8], SQuaD explorer [9], Reddit SOTA [10]. The problem with leveraging websites for storing _Leaderboards_ is the resulting rich data's lack of machine-actionability and integrability. In other words, unstructured, non-machine-actionable information from scholarly articles is converted to semi-structured information on the websites which still unfortunately remains non-machine-actionable. In the broader context of scholarly knowledge, the FAIR guiding principles for scientific data management and stewardship [11] identify general guidelines for making data and metadata machine-actionable by making them maximally Findable, Accessible, Interoperable, and Reusable for machines and humans alike. Semantic Web technologies such as the W3C recommendations Resource Description Framework (RDF) and Web Ontology Language (OWL) are the most widely-accepted choice for implementing the FAIR guiding principles [12]. In this context, the Open Research Knowledge Graph (ORKG) [13] [https://orkg.org/](https://orkg.org/) as a next-generation library for digitalized scholarly knowledge publishing presents a framework fitted with the necessary Semantic Web technologies to enable the encoding of _Leaderboards_ as FAIR, machine-actionable data. By adopting semantic standards, _Leaderboards_, comprising not just _Task-Dataset-Metric_ but also related information such as code links, pre-trained models, and so on, can be made machine-actionable and consequently queryable. This would directly address the problems of lacking transparency and integration of various results identified in current methods of recording empirical research [1; 2; 14].
This work, taking note of the two main problems around _Leaderboard_ construction, i.e. _information capture_ and _information representation_, proposes solutions to address them directly. First, regarding information capture, we recognize, due to the overwhelming volume of data, now more than ever, that it is of paramount importance to empower scientists with automated methods to generate the _Leaderboards_ oversight. The community could greatly benefit from an automatic system that can generate a _Leaderboard_ as a _Task-Dataset-Metric_ tuple over large collections of scholarly publications, both covering empirical AI at large and encapsulating Shared Tasks specifically. Thus, we empirically tackle the _Leaderboard_ knowledge mining machine learning (ML) task via a detailed set of evaluations involving large datasets for the two main publishing workflows, i.e. as LaTeX source and PDF, with several ML models. For this purpose, we extend the experimental settings from our prior work [15] by adding support for information extraction from the LaTeX code source and by comparing empirical evaluations on longer input sequences (beyond 512 tokens) for both XLNet and BigBird [16]. Our ultimate goal with this study is to
help the Digital Library (DL) stakeholders to select the optimal tool to implement knowledge-based scientific information flows w.r.t. _Leaderboards_. To this end, we evaluate four state-of-the-art transformer models, viz. BERT, SciBERT, XLNet, and BigBird, each of which has its own unique strengths. Second, regarding information representation, the orkg-Leaderboards workflow is integrated in the knowledge-graph-based DL infrastructure of the ORKG [13]. Thus the resulting data will be made machine-actionable and served via the dynamic ORKG Frontend views 6 and further queryable via structured queries over the larger scholarly KG using SPARQL7.
Footnote 6: [https://orkg.org/benchmarks](https://orkg.org/benchmarks)
Footnote 7: [https://orkg.org/triplestore](https://orkg.org/triplestore) or [https://orkg.org/sparql/](https://orkg.org/sparql/)
In summary, the contributions of our work are:
1. we construct a large empirical corpus containing over 4,000 scholarly articles and 1,548 _leaderboards_ TDM triples for the development of text mining systems;
2. we empirically evaluate four different transformer models and leverage the best model, i.e. orkg-Leaderboards\({}_{XLNet}\), for the ORKG benchmarks curation platform;
3. we produce a pipeline that works with both the raw PDF and the LaTeX code source of a research publication;
4. we extend our previous work [15] by empirically investigating our approach with longer inputs beyond the traditional 512-token sequence length limit of BERT-based models, and add support for both mainstream forms of research publication, PDF and LaTeX code source;
5. in a comprehensive empirical evaluation of orkg-Leaderboards for both the LaTeX- and PDF-based pipelines, we obtain around 93% micro and 92% macro F1 scores, which outperform existing systems by over 20 points.
To the best of our knowledge, the orkg-Leaderboards system obtains state-of-the-art results for the _Leaderboard_ extraction defined as _Task-Dataset-Metric_ triples extraction from empirical AI research articles handling both LaTeX and PDF formats. Thus orkg-Leaderboards can be readily leveraged within KG-based DLs and be used to comprehensively construct _Leaderboards_ with more concepts beyond the TDM triples. To facilitate further research, our data8 and code9 are made publicly available.
Footnote 8: [https://doi.org/10.5281/zenodo.7419877](https://doi.org/10.5281/zenodo.7419877)
Footnote 9: [https://github.com/Kabongosalomon/task-dataset-metric-nli-extraction/tree/latex](https://github.com/Kabongosalomon/task-dataset-metric-nli-extraction/tree/latex)
## 2 Definitions
This section defines the central concepts in the _Task-Dataset-Metric_ extraction schema of orkg-Leaderboards. Furthermore, the semantic concepts used in the information representation for the data in the ORKG are defined.
#### Task.
It is a natural language mention phrase of the theme of the investigation in a scholarly article. Alternatively referred to as research problem [17] or focus [18]. An article can address one or more tasks. _Task_ mentions being often found in the article Title, Abstract, Introduction, or Results tables and discussion. E.g., question answering, image classification, drug discovery, etc.
#### Dataset.
A mention phrase of the dataset that encapsulates a particular _Task_ used in the machine learning experiments reported in the respective empirical scholarly articles. An article can report experiments on one or more datasets. _Dataset_ mentions are found in similar places in the article as _Task_ mentions. E.g., HIV dataset10, MNIST [19], Freebase 15K [20], etc.
Footnote 10: [https://wiki.nci.nih.gov/display/NCIDTPdata/AIDS+Antiviral+Screen+Data](https://wiki.nci.nih.gov/display/NCIDTPdata/AIDS+Antiviral+Screen+Data)
#### Metric.
Phrasal mentions of the standard of measurement11 used to evaluate and track the performance of machine learning models optimizing a _Dataset_ objective based on a _Task_. An article can report performance evaluations on one or more metrics. _Metrics_ are generally found in Results tables and discussion sections in scholarly articles. E.g., BLEU (bilingual evaluation understudy) [21] used to evaluate "machine translation" tasks, F-measure [22] used widely in "classification" tasks, MRR (mean reciprocal rank) [23] used to evaluate the correct ordering of a list of possible responses in "information retrieval" or "question answering" tasks, etc.
Footnote 11: [https://www.merriam-webster.com/dictionary/metric](https://www.merriam-webster.com/dictionary/metric)
#### Benchmark.
ORKG _Benchmarks_ ([https://orkg.org/benchmarks](https://orkg.org/benchmarks)) organize the state-of-the-art empirical research within ORKG _research fields_12 and are powered in part by automated information extraction supported by the orkg-Leaderboards software within a human-in-the-loop curation model. A benchmark per research field is fully described in terms of the following elements: research problem or _Task_, _Dataset_, _Metric_, _Model_, and _Code_. E.g., a specific instance of an ORKG Benchmark 13 on the "Language Modelling" _Task_, evaluated on the "WikiText-2" _Dataset_, evaluated by "Validation perplexity" _Metric_ with a listing of various reported Models with respective Model scores.
Footnote 12: [https://orkg.org/benchmark/R121022/problem/R120872](https://orkg.org/benchmark/R121022/problem/R120872)
Footnote 13: [https://www.merriam-webster.com/dictionary/metric](https://www.merriam-webster.com/dictionary/metric)
#### Leaderboard.
It is a dynamically computed trend-line chart on the respective ORKG Benchmark pages, leveraging their underlying machine-actionable data from the Knowledge Graph. Thus, _Leaderboards_ depict the performance trend-line of models developed over time based on specific evaluation _Metrics_.
## 3 Related Work
There is a wealth of research in the NLP community on specifying a collection of extraction targets as a unified information-encapsulating unit from scholarly publications. The two main related lines of work that are at the forefront are: 1) extracting instructional scientific content that captures the experimental process [24; 25; 26; 27; 28]; and 2) extracting terminology as named entity recognition objectives [18; 29; 30; 31; 32] to generally obtain a concise representation of the scholarly article which also includes the _Leaderboard_ information unit [33; 34; 35].
Starting with the capture of the experimental process, [24] proposed an AI-based clustering method for the automatic semantification of bioassays based on the specification of the BAO ontology14. In [26], they annotate wet lab protocols, covering a large spectrum of experimental biology w.r.t. lab procedures and their attributes including materials, instruments, and devices used to perform specific actions, in a prespecified machine-readable format as opposed to the ad-hoc documentation norm. Within scholarly articles, such instructions are typically published in the Materials and Method section in Biology and Chemistry fields. Similarly, in [25; 27], to facilitate machine learning models for automatic extraction of materials syntheses reactions and procedures from text, they present datasets of synthesis procedures annotated with semantic structure by domain experts in Materials Science. The types of information captured include synthesis operations (i.e. predicates), and the materials, conditions, apparatus, and other entities participating in each synthesis step.
Footnote 14: [https://github.com/BioAssayOntology/BAO](https://github.com/BioAssayOntology/BAO)
In terms of extracting terminology to obtain a concise representation of the article, an early dataset called the FTD corpus [18] defined _focus_, _technique_, and _domain_ entity types which were leveraged to examine the influence between research communities. Another dataset, the ACL RD-TEC corpus [29] identified seven conceptual classes for terms in the full-text of scholarly publications in Computational Linguistics, viz. _Technology and Method_, _Tool and Library_, _Language Resource_, _Language Product_, _Models_, _Measures and Measurements_, and _Other_ to generate terminology lists. Similarly, terminology mining is the task of scientific keyphrase extraction. Extracting keyphrases is an important task in publishing platforms as they help recommend articles to readers, highlight missing citations to authors, identify potential reviewers for submissions, and analyze research trends over time. Scientific keyphrases, in particular, of type _Processes_, _Tasks_ and _Materials_ were the focus of the SemEval17 corpus annotations [30] which included full-text articles in Computer Science, Material Sciences, and Physics. The SciERC corpus [31] provided a resource of annotated abstracts in Artificial Intelligence which annotations for six concepts, viz. _Task_, _Method_, _Metric_, _Material_, _Other-Scientific Term_, and _Generic_ to facilitate the downstream task of generating a searchable KG of these entities. On the other hand, the
STEM-ECR corpus [32], notable for its multidisciplinarity, included 10 different STEM domains annotated with four generic concept types, viz. _Process_, _Method_, _Material_, and _Data_, that mapped across all domains, and further with terms grounded in the real world via Wikipedia/Wiktionary links. Finally, several works have recently emerged targeting the task of Leaderboard extraction, with the TDM-IE pioneering work [33] also addressing the much harder _Score_ element as an extraction target. Later works attempted the document-level information extraction task by defining explicit relations _evaluatedOn_ between _Task_ and _Dataset_ elements and _evaluatedBy_ between _Task_ and _Metric_ [34; 35]. In contrast, in our prior ORKG-TDM system [15] and in this present extended ORKG-Leaderboards experimental report, we attempt the _Task-Dataset-Metric_ tuple extraction objective assuming implicitly encoded relations. This simplifies the pipelined entity and relation extraction objectives as a single tuple inference task operating over the entire document. Nevertheless, [34; 35] also defined coreference relations between similar term mentions, which can be leveraged complementarily in our work to enrich the respective _Task-Dataset-Metric_ mentions.
## 4 The ORKG-Leaderboards Task Dataset
### Task Definition
The _Leaderboard_ extraction task addressed in ORKG-Leaderboards can be formalized as follows. Let \(p\) be a paper in the collection \(P\). Each \(p\) is annotated with at least one triple \((t_{i},d_{j},m_{k})\) where \(t_{i}\) is the \(i^{th}\)_Task_ defined, \(d_{j}\) the \(j^{th}\)_Dataset_ that encapsulates _Task_\(t_{i}\), and \(m_{k}\) is the \(k^{th}\) evaluation _Metric_ used to evaluate a system performance on a _Task_'s _Dataset_. While each paper has a varying number of _Task-Dataset-Metric_ triples, they occur at an average of roughly 4 triples per paper.
In the supervised inference task, the input data instance corresponds to the pair: a paper \(p\) represented as the DocTAET context feature \(p_{DocTAET}\) and its _Task-Dataset-Metric_ triple \((t,d,m)\). The inference data instance, then, is \((c,\ [(t,d,m),p_{DocTAET}])\) where \(c\in\{true,false\}\) is the inference label. Thus, specifically, our _Leaderboard_ extraction problem is formulated as a natural language inference task between the DocTAET context feature \(p_{DocTAET}\) and the \((t,d,m)\) triple annotation. \((t,d,m)\) is \(true\) if it is among the paper's _Task-Dataset-Metric_ triples, where they are implicitly assumed to be related, otherwise \(false\). The \(false\) instances are artificially created by a random selection of inapplicable \((t,d,m)\) annotations from other papers. Cumulatively, _Leaderboard_ construction is a multi-label, multi-class inference problem.
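As a concrete illustration of how such inference instances can be assembled, the sketch below pairs a paper's DocTAET context with its \(true\) triples and with randomly drawn inapplicable triples. This is our own minimal reconstruction: the `doctaet`/`triples` field names, the `" ; "` hypothesis separator, and the flat data layout are illustrative assumptions, not the authors' released data format.

```python
import random

def make_nli_instances(paper, corpus_triples, n_false=10):
    """Build (premise, hypothesis, label) instances for one paper.

    paper:          dict with a 'doctaet' context string and its true
                    (task, dataset, metric) tuples under 'triples'
    corpus_triples: all distinct (task, dataset, metric) tuples in the corpus
    n_false:        number of artificial false instances per paper
    """
    premise = paper["doctaet"]
    instances = [(premise, " ; ".join(t), True) for t in paper["triples"]]
    # False instances: inapplicable triples randomly drawn from other papers.
    candidates = [t for t in corpus_triples if t not in paper["triples"]]
    for t in random.sample(candidates, min(n_false, len(candidates))):
        instances.append((premise, " ; ".join(t), False))
    return instances
```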
#### DocTAET Context Feature
The DocTAET context feature representation [33] selects only the parts of a paper where the _Task-Dataset-Metric_ mentions are most likely to be found. While the _Leaderboard_ extraction task is applicable to the full scholarly paper content, feeding a machine learning model with the full article is disadvantageous, since the model would be fed a large chunk of text that is mostly noise, redundant to the extraction task. Consequently, an inference model fed with large amounts of noise as contextual input cannot generalize well. Instead, the DocTAET feature was designed to heuristically select only those parts of an article that are more likely to contain _Task-Dataset-Metric_ mentions as true contextual information signals. Specifically, as informative contextual input to the machine learning model, DocTAET captures sentences from four specific places in the article that are most likely to contain _Task-Dataset-Metric_ mentions, viz. the **D**ocument **T**itle, **A**bstract, first few lines of the **E**xperimental setup section and **T**able content and captions.
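The DocTAET assembly itself is a short heuristic pass over a TEI XML file. The sketch below is our own minimal reconstruction of such a heuristic, not the authors' released script: the element paths, the "experiment" heading keyword, and GROBID's `<figure type="table">` encoding are assumptions about the input format.

```python
import xml.etree.ElementTree as ET

TEI = {"tei": "http://www.tei-c.org/ns/1.0"}

def doctaet(tei_path, setup_sentences=5):
    """Heuristic DocTAET context: Title + Abstract + the first few sentences
    of the experimental-setup section + table captions and content."""
    root = ET.parse(tei_path).getroot()
    text = lambda e: " ".join("".join(e.itertext()).split())

    title = [text(e) for e in root.findall(".//tei:titleStmt/tei:title", TEI)]
    abstract = [text(e) for e in root.findall(".//tei:abstract", TEI)]
    # First few sentences of sections whose heading mentions "experiment".
    setup = []
    for div in root.findall(".//tei:div", TEI):
        head = div.find("tei:head", TEI)
        if head is not None and "experiment" in text(head).lower():
            setup.append(". ".join(text(div).split(". ")[:setup_sentences]))
    # GROBID-style TEI encodes tables as <figure type="table">.
    tables = [text(f) for f in root.findall(".//tei:figure[@type='table']", TEI)]
    return " ".join(title + abstract + setup + tables)
```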
### Task Dataset
To facilitate supervised system development for the extraction of _Leaderboards_ from scholarly articles, we built an empirical corpus that encapsulates the task. _Leaderboard_ extraction is essentially an inference task over the document. To alleviate the otherwise time-consuming and expensive corpus annotation task involving expert annotators, we leverage distant supervision from the available crowdsourced metadata in the PwC ([https://paperswithcode.com/](https://paperswithcode.com/)) KB. In the remainder of this section, we explain our corpus creation and annotation process.
#### 4.2.1 Scholarly Papers and Metadata from the PwC Knowledge Base.
We created a new corpus as a collection of scholarly papers with their _Task-Dataset-Metric_ triple annotations for evaluating the _Leaderboards_ extraction task inspired by the original IBM science result extractor [33] corpus. The collection of scholarly articles for defining our _Leaderboard_ extraction objective is obtained from the publicly available crowdsourced leaderboards PwC. It predominantly represents articles in the Natural Language Processing and Computer Vision domains, among other AI domains such as Robotics, Graphs, Reasoning, etc. Thus, the corpus is representative of empirical AI research. The original downloaded collection (timestamp 2021-05-10 at 12:30:21)15 was pre-processed to be ready for analysis. While we use the same method here as the science result extractor, our corpus is different in terms of both labels and size, i.e. the number of papers, as many more _Leaderboards_ have been crowdsourced and added to PwC since the original work. Furthermore, as an extension to our previous work [15] on this theme, based on the two main scholarly publishing workflows, i.e. as LaTeX or PDF, two corresponding variants of our corpus are created and their models respectively developed.
Recently, publishers have been increasingly encouraging paper authors to provide the supporting LaTeX files accompanying the corresponding PDF article. The advantage of having the LaTeX source files is that they contain the original article in plain-text format and thus result in cleaner data in downstream analysis tasks. Our prior orkg-TDM [15] model was finetuned only on the parsed plain-text output of PDF articles, wherein the plain text was scraped from the PDF, which results in partial information loss. Thus, in this work, we modify our previous workflow: we tune one model on LaTeX source files as input data, given the increasing impetus for authors to also submit the LaTeX source code, and a second model, following our previous work, on plain text scraped from PDF articles.
1. _LaTeX pre-processed corpus._ To obtain the LaTeX sources, we queried arXiv based on the paper titles from the 5,361 articles of our original corpus leveraged to develop orkg-TDM [15]. As a result, LaTeX sources for roughly 79% of the papers from the training and test datasets in our original work were obtained. Thus the training set size was reduced from 3,753 papers in the original work to 2,951 papers in this work with corresponding LaTeX sources. Similarly, the test set size was reduced from 1,608 papers in the original work to 1,258 papers in this work for which LaTeX sources could be obtained. Thus the total size of our corpus was reduced from 5,361 papers to 4,209 papers. Once the LaTeX sources were respectively gathered for the training and test sets, the data had to undergo one additional step of preprocessing. With the help of pandoc16, the LaTeX files were converted into XML TEI17 markup format files. This is the required input for the heuristics-based script that produces the DocTAET feature. Thus the resulting XML files were then fed as input to the DocTAET feature extraction script. The pipeline to reproduce this process is released in our code repository 18. Footnote 16: [https://pandoc.org/](https://pandoc.org/)
2. _PDF pre-processed corpus._ For the 4,209 papers with LaTeX sources, we created an equivalent corpus but this time using the PDF files. This is the second experimental corpus variant of this work. To convert PDF to plain text, following along the lines of our previous work [15], the GROBID parser [36] was applied. The resulting files in XML TEI markup format were then fed into the DocTAET feature extraction script, similar to the LaTeX document processing workflow; a minimal sketch of both conversion steps follows this list.
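Both conversion steps rest on standard tooling: pandoc ships a TEI writer, and GROBID exposes its full-text parser as a REST service. The sketch below shows one plausible way to drive them from Python; the local GROBID URL is a deployment assumption, and this is not the authors' released pipeline script.

```python
import subprocess
import requests

def latex_to_tei(tex_path, out_path):
    """Convert a LaTeX source file to TEI XML with pandoc's TEI writer."""
    subprocess.run(["pandoc", tex_path, "-f", "latex", "-t", "tei",
                    "-s", "-o", out_path], check=True)

def pdf_to_tei(pdf_path, grobid_url="http://localhost:8070"):
    """Parse a PDF into TEI XML via a locally running GROBID server."""
    with open(pdf_path, "rb") as f:
        resp = requests.post(f"{grobid_url}/api/processFulltextDocument",
                             files={"input": f})
    resp.raise_for_status()
    return resp.text
```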
#### Task-Dataset-Metric Annotations
Since the two corpus variants used in the empirical investigations in this work are a subset of the corpus in our earlier work [15], the 4,209 papers in our present corpus, regardless of the variant, i.e. LaTeX or PDF, retained their originally obtained _Task-Dataset-Metric_ labels via distant labeling supervision on the PwC knowledge base (KB).
### Task Dataset Statistics
Our overall corpus statistics are shown in Table 1. The column "Ours-Prior" reports the dataset statistics of our prior work [15] for comparison purposes. The column "Ours-Present" reports the dataset statistics of the subset corpus used in the empirical investigations reported in this paper. The corpus size is the same for both the LaTeX and PDF corpus variants. In all, our corpus contains 4,208 papers, split into 2,946 papers as training data and 1,262 papers as test data. There were 1,724 unique TDM-triples overall. Note that since the test labels were a subset of the training labels, the unique labels overall can be considered as those in the training data. Table 1 also shows the distinct _Tasks_, _Datasets_, and _Metrics_ in the last three rows. Our corpus contains 262 _Tasks_ defined on 853 _Datasets_ and evaluated by 528 _Metrics_. This is significantly larger than the original corpus which had 18 _Tasks_ defined on 44 _Datasets_ and evaluated by 31 _Metrics_.
#### DocTAET Context Feature Statistics
Figure 1 shows in detail the variance of the DocTAET Context Feature over three datasets proposed for _Leaderboard_ extraction as _Task-Dataset-Metric_ triples: 1) Figure 1(a) for the dataset from the pioneering science result extractor system [33]; 2) Figure 1(b) for the dataset from our prior ORKG-TDM work [15]; 3) Figure 1(c) and Figure 1(d) for the dataset in our present paper from the Grobid and LaTeX workflows, respectively (column "Ours-Present" in Table 1).
| | Ours-Prior (Train) | Ours-Prior (Test) | Ours-Present (Train) | Ours-Present (Test) | Original (Train) | Original (Test) |
| --- | --- | --- | --- | --- | --- | --- |
| Papers | 3,753 | 1,608 | 2,946 | 1,262 | 170 | 167 |
| "unknown" annotations | 922 | 380 | 2,359 | 992 | 46 | 45 |
| Total TDM-triples | 11,724 | 5,060 | 9,614 | 4,096 | 327 | 294 |
| Avg. number of TDM-triples per paper | 4.1 | 4.1 | 4.3 | 4.2 | 2.64 | 2.41 |
| Distinct TDM-triples | 1,806 | 1,548 | 1,668 | 1,377 | 78 | 78 |
| Distinct _Tasks_ | 288 | 252 | 262 | 228 | 18 | 18 |
| Distinct _Datasets_ | 908 | 798 | 853 | 714 | 44 | 44 |
| Distinct _Metrics_ | 550 | 469 | 528 | 434 | 31 | 31 |

Table 1: Ours-Prior [15] vs. Ours-Present vs. the original science result extractor [33] corpora statistics. The "unknown" labels were assigned to papers with no TDM-triples after the label filtering stage.
Both the prior datasets, i.e., the original science result extractor dataset [33] and the ORKG-TDM dataset [15], followed the Grobid processing workflow and reported roughly the same average length of the DocTAET feature. This reflects the consistency preserved in the method of computing the DocTAET feature of between 300 and 400 tokens. Note the ORKG-TDM corpus was significantly larger than the original science result extractor corpus; hence their DocTAET feature length statistics do not match exactly.
In our present paper, as reported earlier, we use a subset of papers from the ORKG-TDM dataset for which the corresponding LaTeX sources could be obtained to ensure similar experimental settings between the Grobid and LaTeX processing workflows. This is why the DocTAET feature length statistics between the ORKG-TDM dataset (Figure 1(b)) and our present dataset in the Grobid processing workflow (Figure 1(c)) do not match exactly. Still, we see that they are roughly in similar ranges. Finally, of particular interest are the DocTAET feature length statistics obtained from the LaTeX processing workflow introduced in this work (Figure 1(d)). Since cleaner plain-text output could be obtained from the LaTeX processing workflow, the corresponding DocTAET feature lengths in many of the papers were longer than in all the rest of the datasets considered, which operated in the Grobid processing workflow over PDFs.

Figure 1: DocTAET feature length of papers in the original science result extractor dataset [33] (Figure 1(a)), the dataset used in our prior ORKG-TDM experiments [15] (Figure 1(b)), the dataset from the Grobid workflow in our present work (Figure 1(c)), and the dataset from the LaTeX workflow in our present work (Figure 1(d)).
## 5 The ORKG-Leaderboards System
This section depicts the overall end-to-end ORKG-Leaderboards, including details on the deep learning models used in our Natural Language Inference (NLI) task formulation.
### Workflow
The overall ORKG-Leaderboards workflow as depicted in Figure 2 includes the following steps:
1. A user provides the article input as either the main '.tex' file or a PDF file.
2. If the input is provided as a '.tex' file, the pandoc script is applied to convert the LaTeX to the corresponding XML TEI marked-up format.
Figure 2: The ORKG-Leaderboards end-to-end system workflow in the context of the Open Research Knowledge Graph (ORKG) digital library [https://orkg.org/](https://orkg.org/)
3. Alternatively, if the input is provided as a PDF file, the Grobid parser is applied to obtain the corresponding scraped plain text in the XML TEI marked-up format.
4. Once the XML TEI marked-up files are obtained, the DocTAET feature extraction script is applied to obtain the paper context representations.
5. Furthermore, if in the training phase, the collection of papers in the training set is assigned their respective \(true\) _Task-Dataset-Metric_ labels and a random set of \(false\) _Task-Dataset-Metric_ labels.
6. Otherwise, if in the test phase, the query paper is assigned all the _Task-Dataset-Metric_ inference targets as candidate labels.
7. On the one hand, for the training phase, for each of the input file formats, i.e., '.tex' or PDF, an optimal inference model is trained by testing four transformer model variants, viz. BERT, SciBERT, XLNet, and BigBird.
8. On the other hand, for the test phase, depending on the input file format, i.e., '.tex' or PDF, the corresponding trained optimal model is applied to the query instance.
9. Finally, after the test phase, the predicted _Task-Dataset-Metric_ tuples are integrated into the ORKG.
### Leaderboards Natural Language Inference (NLI)
To support _Leaderboard_ inference [33], we employ deep transfer learning modeling architectures that rely on a recently popularized neural architecture - the transformer [37]. Transformers are arguably the most important architecture for natural language processing (NLP) today since they have shown and continue to show impressive results in several NLP tasks [38]. Owing to the self-attention mechanism in these models, they can be fine-tuned on many downstream tasks. These models have thus crucially popularized the transfer learning paradigm in NLP. We investigate four transformer-based model variants for _leaderboard_ extraction in a Natural Language Inference configuration.
Natural language inference (NLI), generally, is the task of determining whether a "hypothesis" is true (entailment), false (contradiction), or undetermined (neutral) given a "premise" [39]. For _leaderboard_ extraction, the slightly adapted NLI task is to determine that the (_task_, _dataset_, _metric_) "hypothesis" is true (entailed) or false (not entailed) for a paper given the "premise" as the DocTAET context feature representation of the paper.
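In code, this adapted NLI setup is ordinary sequence-pair classification. A minimal sketch with the Hugging Face Transformers library follows; the checkpoint name, the literal DocTAET string, and the ' ; ' serialization of the triple are illustrative assumptions, and the classification head only becomes meaningful after fine-tuning on the corpus of Section 4.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "xlnet-base-cased"  # any of the four discussed checkpoints fits here
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

premise = ("Attention Is All You Need. We propose the Transformer ... "
           "Table 2: BLEU scores on WMT 2014 English-German ...")  # DocTAET (truncated)
hypothesis = "machine translation ; WMT 2014 English-German ; BLEU"

# Truncate only the premise so the candidate triple is never cut off.
enc = tokenizer(premise, hypothesis, truncation="only_first",
                max_length=2000, return_tensors="pt")
with torch.no_grad():
    probs = model(**enc).logits.softmax(dim=-1)
# Index 1 is taken here, by convention, as the "triple holds" class.
print(f"P(triple holds for this paper) = {probs[0, 1]:.3f}")
```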
Currently, there exist several transformer-based models. In our experiments, we investigated four core models: three variants of BERT, i.e., the vanilla BERT [38], scientific BERT (SciBERT) [40], and BigBird [16]. We also tried a different type of transformer model than BERT called XLNet [41], which employs Transformer-XL as the backbone model. Next, we briefly describe the four variants we use.
BERT Models
BERT (i.e., Bidirectional Encoder Representations from Transformers) is a bidirectional autoencoder (AE) language model. As a pre-trained language representation built on the deep neural technology of transformers, it provides NLP practitioners with high-quality language features from text data simply out of the box and thus improves performance on many NLP tasks. These models return contextualized word embeddings that can be directly employed as features for downstream tasks [42].
The first BERT model we employ is BERT\({}_{base}\) (12 layers, 12 attention heads, and 110 million parameters), which was pre-trained on billions of words from the BooksCorpus (800M words) and the English Wikipedia (2,500M words).
The second BERT model we employ is the pre-trained scientific BERT called SciBERT [40]. SciBERT was pretrained on a large corpus of scientific text. In particular, the pre-training corpus is a random sample of 1.14M papers from Semantic Scholar19 consisting of full texts of 18% of the papers from the computer science domain and 82% from the broad biomedical field. We used the uncased variants of both BERT\({}_{base}\) and SciBERT.
Footnote 19: [https://semanticscholar.org](https://semanticscholar.org)
XLNet

XLNet is an autoregressive (AR) language model [41] that enables learning bidirectional contexts using Permutation Language Modeling (PLM). This is unlike BERT's Masked Language Modeling (MLM) strategy. Thus in PLM, all tokens are predicted but in random order, whereas in MLM, only the masked (15%) tokens are predicted. This is also in contrast to the traditional language models, where all tokens are predicted in sequential order instead of randomly. Random order prediction helps the model to learn bidirectional relationships and, therefore, better handle dependencies and relations between words. In addition, it uses Transformer-XL [43] as the base architecture, which models long contexts, unlike the BERT models with contexts limited to 512 tokens. Since only cased models are available for XLNet, we used the cased XLNet\({}_{base}\) (12 layers, 12 attention heads, and 110 million parameters).
BigBird
BigBird is a sparse-attention-based transformer that extends Transformer based models, such as BERT, to much longer sequences. Moreover, BigBird comes along with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle [16]. BigBird takes inspiration from graph sparsification methods by relaxing the need for the attention to fully attend to all the input tokens. Formally the model first builds a set of \(g\) global tokens attending on all parts of the sequence, then all tokens attend to a set of \(w\) local neighboring tokens, and finally, all tokens attend to a set
of \(r\) random tokens. The empirical configuration explained in the last paragraph leads to a high-performing attention mechanism scaling to much longer sequence lengths (8x) [16].
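As a schematic illustration of this three-part pattern, the toy sketch below assembles a boolean attention mask with \(g\) global tokens, a local window of \(w\) neighbors per side, and \(r\) random attendees per token. The parameter values are arbitrary, and the real model uses blocked, hardware-friendly variants of this pattern rather than a dense matrix.

```python
import numpy as np

def bigbird_style_mask(seq_len, g=2, w=3, r=2, seed=0):
    """Toy BigBird-style sparse attention mask (True = attention allowed)."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    mask[:g, :] = mask[:, :g] = True        # g global tokens see / are seen by all
    for i in range(seq_len):
        lo, hi = max(0, i - w), min(seq_len, i + w + 1)
        mask[i, lo:hi] = True               # local window of w neighbors per side
        mask[i, rng.choice(seq_len, size=r, replace=False)] = True  # r random tokens
    return mask

print(bigbird_style_mask(10).sum(), "of", 10 * 10, "attention pairs kept")
```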
## 6 ORKG-Leaderboards System Experiments
### Experimental Setup
#### Parameter Tuning
We use the Hugging Face Transformers library20 with its BERT variants and XLNet implementations. In addition to the standard fine-tuned setup for NLI, the transformer models were trained with a learning rate of \(1e^{-5}\) for 14 epochs, and used the \(AdamW\) optimizer with a weight decay of 0 for _bias_, _gamma_, and _beta_ parameters and 0.01 for the others. Our models' hyperparameter details can be found in our code repository online.21
Footnote 20: [https://github.com/huggingface/transformers](https://github.com/huggingface/transformers)
Footnote 21: [https://github.com/Kabongosalomon/task-dataset-metric-nli-extraction/blob/main/train_tdm.py](https://github.com/Kabongosalomon/task-dataset-metric-nli-extraction/blob/main/train_tdm.py)
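The weight-decay grouping described above corresponds to a standard PyTorch parameter-group setup. Below is a minimal sketch of such a configuration; the checkpoint name is a placeholder for whichever of the four models is being tuned, and the exact group definitions in the released training script may differ.

```python
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # two NLI labels: true / false

# Weight decay of 0.01 for most weights, 0 for bias/gamma/beta parameters.
no_decay = ("bias", "gamma", "beta")
grouped_params = [
    {"params": [p for n, p in model.named_parameters()
                if not any(nd in n for nd in no_decay)], "weight_decay": 0.01},
    {"params": [p for n, p in model.named_parameters()
                if any(nd in n for nd in no_decay)], "weight_decay": 0.0},
]
optimizer = torch.optim.AdamW(grouped_params, lr=1e-5)
```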
In addition, we introduced a task-specific parameter that was crucial in obtaining optimal task performance from the models: the number of \(false\) triples per paper. This parameter controls the discriminatory ability of the model. The original science result extractor system [33] considered \(n-t\) _false_ instances for each paper, where \(n\) was the number of distinct triples overall and \(t\) was the number of \(true\) _leaderboard_ triples per paper. This approach would not generalize to our larger corpus with over 1,724 distinct triples. In other words, considering that each paper had on average 4 _true_ triples, each paper would receive a vastly larger set of _false_ triples, which would strongly bias the classifier learning toward only _false_ inferences. Thus, we tuned this parameter over the set {10, 50, 100}, where at each experiment run the chosen value was fixed for all papers.
Finally, we imposed an artificial trimming of the DocTAET feature to account for BERT and SciBERT's maximum token length of 512. For this, the token lengths of the experimental setup and table info were initially truncated to roughly 150 tokens, after which the DocTAET feature is trimmed at the right to 512 tokens. In contrast, XLNet and BigBird are specifically designed to handle longer contexts of unbounded length. Nevertheless, to optimize for training speed, we incorporated a context length of 2000 tokens.
#### Evaluation
Similar to our prior work [15], all experiments are performed via two-fold cross-validation. Within the two-fold experimental settings, we report macro- and micro-averaged precision, recall, and F1 scores for our _Leaderboard_ extraction task on the test dataset. The macro scores capture the averaged class-level task evaluations, whereas the micro scores represent fine-grained instance-level task evaluations.
Further, the macro and micro evaluation metrics for the overall task have two evaluation settings: 1) considers papers with _Task-Dataset-Metric_ and
papers with "unknown" in the metric computations; and 2) only papers with _Task-Dataset-Metric_ are considered while the papers with "unknown" are excluded. In general, we focus on the model performances in the first evaluation setting as it directly emulates the real-world application setting that includes papers that do not report empirical research and therefore for which the _Leaderboard_ model does not apply. In the second setting, however, the reader still can gain insights into the model performances when given only papers with _Leaderboards_.
### Experimental Results
In this section, we discuss new experimental findings shown in Table 2 v1, Table 2 v2, Table 3 v1, and Table 3 v2 with respect to four research questions elicited as **RQ1**, **RQ2**, **RQ3**, and **RQ4**, respectively.
workflow still performs worse than the model from the Grobid workflow in which case we can conclude that longer contexts regardless of whether they are from a clean source or noisy source are difficult to generalize from, or 2) the model from the LaTeX workflow indeed begins to outperform the model from the Grobid workflow in which case we can safely conclude that for the transformer models to generalize on longer contexts a much larger training dataset is needed. We relegate these further detailed experiments to future work.
RQ3: Which insights can be gleaned from the BERT and SciBERT models operating on shorter context lengths of 512 tokens versus the more advanced models, viz. XLNet and BigBird, operating on longer context lengths of 2000 tokens?
We observed that BERT and SciBERT models show lower performance compared to the XLNet transformer model operating on 2000 tokens. We hypothesized this to be expected behavior, since the longer contextual information can capture richer signals for the model to learn from, which are highly likely to be lost when imposing the 512-token limit. Contrary to this intuition, however, the BigBird model with the longer context is not able to outperform BERT and SciBERT. We suspect the specific attention mechanism in the BigBird model [16] needs further examination over a much larger dataset to conclude whether it is ineffective for the _Task-Dataset-Metric_ extraction task compared to other transformer-based models.
RQ4: Which of the three _Leaderboard Task-Dataset-Metric_ concepts are easy or challenging to extract?
As a fine-grained examination of our best model, i.e. orkg-Leaderboards\({}_{\text{XLNet}}\), we examined its performance for extracting each of the three concepts \((Task,Dataset,Metric)\) separately. These results are shown in Table 3 v1 and Table 3 v2. From the results, we observe that _Task_ is the easiest concept to extract, followed by _Metric_, and then _Dataset_. We ascribe the low performance for extracting the _Dataset_ concept to the variability in its naming seen across papers even when referring to the same real-world entity. For example, the real-world dataset entity 'CIFAR-10' is labeled as 'CIFAR-10, 4000 Labels' in some papers and 'CIFAR-10, 250 Labels' in others. This phenomenon is less prevalent for the _Task_ and _Metric_ concepts. For example, the _Task_ 'Question Answering' is rarely referenced differently across papers addressing the task. Similarly, for _Metric_, 'accuracy', as an example, has very few variations.
| **Entity** | Macro P | Macro R | Macro F\({}_{1}\) | Micro P | Micro R | Micro F\({}_{1}\) |
| --- | --- | --- | --- | --- | --- | --- |
| TDM | 91.9 | 94.4 | 92.0 | 94.9 | 91.2 | 93.0 |
| Task | 94.3 | 97.2 | 95.0 | 96.8 | 95.9 | 96.4 |
| Dataset | 93.8 | 96.7 | 94.4 | 96.2 | 95.4 | 95.8 |
| Metric | 93.7 | 96.9 | 94.4 | 96.0 | 95.3 | 95.6 |

Table 3: **v2**: Performance of our best model, i.e. orkg-Leaderboards\({}_{\text{XLNet}}\), for _Task_, _Dataset_, and _Metric_ concept extraction of the _leaderboard_ for the global workflow
**Average Evaluation Across 2-fold**

| | Ma-P | Ma-R | Ma-F1 | Mi-P | Mi-R | Mi-F1 |
| --- | --- | --- | --- | --- | --- | --- |
| orkg-Leaderboards\({}_{\text{BERT}}\) | **93.5** | 94.2 | **92.8** | **96.0** | 90.0 | 92.9 |
| orkg-Leaderboards\({}_{\text{SciBERT}}\) | 91.7 | 93.9 | 91.6 | 94.6 | 88.6 | 91.5 |
| orkg-Leaderboards\({}_{\text{XLNet}}\) | 91.9 | **94.4** | 92.0 | 94.9 | **91.2** | **93.0** |
| orkg-Leaderboards\({}_{\text{BigBird}}\) | 90.7 | 91.6 | 89.7 | 94.6 | 87.2 | 90.7 |

**Average Evaluation Across 2-fold (without "Unknown" annotation)**

| | Ma-P | Ma-R | Ma-F1 | Mi-P | Mi-R | Mi-F1 |
| --- | --- | --- | --- | --- | --- | --- |
| orkg-Leaderboards\({}_{\text{BERT}}\) | **91.2** | 92.3 | **90.6** | **95.4** | 88.0 | 91.5 |
| orkg-Leaderboards\({}_{\text{SciBERT}}\) | 89.4 | 91.7 | 89.2 | 93.7 | 86.0 | 89.7 |
| orkg-Leaderboards\({}_{\text{XLNet}}\) | 89.5 | **92.4** | 89.8 | 94.2 | **89.4** | **91.7** |
| orkg-Leaderboards\({}_{\text{BigBird}}\) | 87.5 | 88.7 | 86.6 | 93.6 | 85.3 | 89.3 |

Table 2: **v2**: BERT\({}_{512}\), SciBERT\({}_{512}\), XLNet\({}_{2000}\) and BigBird\({}_{2000}\) results, based on DocTAET from LaTeX code source. Ma: macro-averaged; Mi: micro-averaged.
| **Entity** | Macro P | Macro R | Macro F\({}_{1}\) | Micro P | Micro R | Micro F\({}_{1}\) |
| --- | --- | --- | --- | --- | --- | --- |
| TDM | 93.1 | 96.4 | 93.7 | 95.1 | 94.6 | 94.8 |
| Task | 94.3 | 97.2 | 95.0 | 96.8 | 95.9 | 96.4 |
| Dataset | 93.8 | 96.7 | 94.4 | 96.2 | 95.4 | 95.8 |
| Metric | 93.7 | 96.9 | 94.4 | 96.0 | 95.3 | 95.6 |

Table 3: **v1**: Performance of our best model, i.e. orkg-Leaderboards\({}_{\text{XLNet}}\), for _Task_, _Dataset_, and _Metric_ concept extraction of the _leaderboard_ for the global workflow
## 7 Integrating ORKG-Leaderboards in the Open Research Knowledge Graph
In this era of the publications deluge worldwide [4; 5; 44], researchers are faced with a critical dilemma: _How to stay on track with the past and the current rapidly evolving research progress?_ With this work, our main aim is to propose a solution to this problem. And with the orkg-Leaderboards software, we have concretely made advances toward our aim in the domain of empirical AI research. Furthermore, with the software integrated into the next-generation digitalized publishing platform, viz. [https://orkg.org/](https://orkg.org/), the machine-actionable _Task-Dataset-Metric_ data represented as a Knowledge Graph with the help of the Semantic Web's RDF language makes the information skimmable for the scientific community. This is achieved via the dynamic Frontend views of the ORKG Benchmarks feature [https://orkg.org/benchmarks](https://orkg.org/benchmarks). This is illustrated via Figure 3. On the left side of Figure 3 is shown the traditional PDF-based paper format. Highlighted within the view are the Task, Dataset, and Metric phrases. As evident, the phrases are mentioned in several places in the paper. Thus, in this traditional model of publishing via non-machine-actionable PDFs, a researcher interested in this critical information would need to scan the full paper content. They are then faced with the intense cognitive burden of repeating such a task over a large collection of articles. On the right side of Figure 3, a dynamic ORKG Frontend view of the same information is presented, built over machine-actionable, semantically represented RDF information of the Task, Dataset, and Metric elements. To generate such a view, the orkg-Leaderboards software would simply be applied on a large collection of articles either in LaTeX or PDF format, and the resulting _Task-Dataset-Metric_ tuples uploaded in the ORKG. Note, however, that orkg-Leaderboards does not attempt extraction of the _Score_ element. We observed from some preliminary experiments that the _Score_ element poses a particularly hard extraction target. This is owing to the fact that the underlying contextual data supporting _Score_ extraction is especially noisy: clean table data extraction from PDFs is a challenging problem in the research community that would need to be addressed first to develop promising _Score_ extractors. Nevertheless, in the context of this missing data in the ORKG Benchmarks pages, its human-in-the-loop curation model is relied on. In such a setting, article authors whose _Task-Dataset-Metric_ model information has been automatically extracted into the KG can simply edit in their corresponding model scores in the graph. Thus, as concretely shown on the right screen of Figure 3, empirical results are made skimmable and easy to browse for researchers interested in gaining an overview of empirical research progress, via a ranked list of papers proposing models and a performance progress trend chart computed over time.

Figure 3: A contrastive view of _Task-Dataset-Metric_ information in the traditional PDF format of publishing as non-machine-actionable data (on the left) versus as machine-actionable data with _Task-Dataset-Metric_ annotations obtained from orkg-Leaderboards and integrated in the next-generation scholarly knowledge platform as the ORKG Benchmarks view (on the right).
Although the experiments of our study targeted empirical AI research, we are confident that the approach is transferable to similar scholarly knowledge extraction tasks in other domains. For example, in Chemistry or Materials Science, experimentally observed properties of substances or materials under certain conditions could be obtained from various papers.
## 8 Conclusion and Future Work
In this work we experimented with the empirical construction of _Leaderboards_, using four recent transformer-based models (BERT, SciBERT, XLNet, BigBird) that have achieved state-of-the-art performance in several tasks and domains in the literature. Leveraging the two main streams of information acquisition used in scholarly communication, i.e., PDF and LaTeX, our work publishes two models to accurately extract _Task_, _Dataset_, and _Metric_ entities from an empirical AI research publication. As a next step, we will therefore extend the current (task, dataset, metric) triples model with additional concepts that are suitable candidates for a _Leaderboard_, such as scores or code URLs. We also envision the task-dataset-metric extraction approach to be transferable to other domains (such as materials science, engineering simulations, etc.). Our ultimate target is to create a comprehensive structured knowledge graph tracking scientific progress in various scientific domains, which can be leveraged for novel machine-assistance measures in scholarly communication, such as question answering, faceted exploration, and contribution correlation tracing.
### Acknowledgments
This work was co-funded by the Federal Ministry of Education and Research (BMBF) of Germany for the project LeibnizKILabor (grant no. 01DD20003), BMBF project SCINEXT (GA ID: 01IS22070), NFDI4DataScience (grant no. 460234259) and by the European Research Council for the project ScienceGRAPH (Grant agreement ID: 819536).
|
2306.10626 | Exclusive $η_c$ production from small-$x$ evolved Odderon at a
electron-ion collider | We compute exclusive $\eta_c$ production in high energy electron-nucleon and
electron-nucleus collisions that is sensitive to the Odderon. In perturbative
QCD the Odderon is a $C$-odd color singlet consisting of at least three
$t$-channel gluons exchanged with the target. By using the Color Glass
Condensate effective theory our result describes the Odderon exchange at the
high collision energies that would be reached at a future electron-ion
collider. The Odderon distribution is evolved to small-$x$ using the
Balitsky-Kovchegov evolution equation with running coupling corrections. We
find that while at low momentum transfers $t$ the cross section off a proton is
dominated by the Primakoff process, the Odderon becomes relevant at larger
momentum transfers of $|t|\geq1.5$ GeV$^2$. We point that the Odderon could
also be extracted at low-$t$ using neutron targets since the Primakoff
component is strongly suppressed. In the case of nuclear targets, the Odderon
cross section becomes enhanced thanks to the mass number of the nuclear target.
The gluon saturation effect induces a shift in the diffractive pattern with
respect to the Primakoff process that could be used as a signal for the
Odderon. | Sanjin Benić, Davor Horvatić, Abhiram Kaushik, Eric Andreas Vivoda | 2023-06-18T19:30:47Z | http://arxiv.org/abs/2306.10626v2 | # Exclusive \(\eta_{c}\) production from small-\(x\) evolved Odderon at an electron-ion collider
###### Abstract
We compute exclusive \(\eta_{c}\) production in high energy electron-nucleon and electron-nucleus collisions that is sensitive to the Odderon. In perturbative QCD the Odderon is a \(C\)-odd color singlet consisting of at least three \(t\)-channel gluons exchanged with the target. By using the Color Glass Condensate effective theory our result describes the Odderon exchange at the high collision energies that would be reached at a future electron-ion collider. The Odderon distribution is evolved to small-\(x\) using the Balitsky-Kovchegov evolution equation with running coupling corrections. We find that while at low momentum transfers \(t\) the cross section off a proton is dominated by the Primakoff process, the Odderon becomes relevant at larger momentum transfers of \(|t|\geq 1.5\) GeV\({}^{2}\). We point out that the Odderon could also be extracted at low-\(t\) using neutron targets since the Primakoff component is strongly suppressed. In the case of nuclear targets, the Odderon cross section becomes enhanced thanks to the mass number of the nuclear target. The gluon saturation effect induces a shift in the diffractive pattern with respect to the Primakoff process that could be used as a signal for the Odderon.
+
Footnote †: preprint: ZTF-EP-23-03
## I Introduction and motivation
The Odderon was suggested 50 years ago [1; 2] as the \(C\)-odd (\(C=-1\)) partner of the \(C\)-even (\(C=+1\)) Pomeron in mediating a \(t\)-channel colorless exchange in elastic hadronic cross sections. The original idea [3] to measure the Odderon through a difference in \(pp\) vs \(p\bar{p}\) elastic cross sections brought much excitement recently [4] thanks to the precise \(pp\) measurement by the TOTEM collaboration [5] at collision energies close to the \(p\bar{p}\) D0 Tevatron data [6]. On the other hand, considering elastic hadronic cross sections makes it difficult to understand the Odderon in the context of perturbative QCD.
As opposed to \(pp\) collisions, \(ep\) collisions provide a cleaner environment to extract the Odderon, particularly in the exclusive production of particles with a fixed \(C\)-parity. A prominent example here is \(\eta_{c}\) production [7; 8; 9; 10; 11; 12; 13; 14; 15; 16], where the heavy charm quarks ensure that the process is sensitive to the gluons in the target. With the \(C\)-parity of \(\eta_{c}\) being \(C=+1\) and that of the emitted photon being \(C=-1\), the amplitude becomes directly proportional to the Odderon. \(\eta_{c}\) thus plays a role analogous to that of \(J/\psi\) production in the case of the Pomeron. Unlike \(J/\psi\), which has been extensively measured at HERA, there is no measurement of exclusive \(\eta_{c}\) production so far. This would hopefully change with the high luminosities feasible at the upcoming Electron-Ion Colliders (EIC) [17; 18; 19] (or even with the LHC in the ultra-peripheral mode [20]) and is therefore a motivation for our work.
The high collision energies that will be reached at the EIC can offer unique insights into the small-\(x\) component of the target wavefunction (\(x\) represents the parton momentum fraction) where the gluon density is large according to the effective theory of the Color Glass Condensate (CGC) [21; 22; 23; 24]. Within the framework of CGC, the Odderon is the imaginary part of the dipole distribution [25; 26]
\[\mathcal{O}(\mathbf{x}_{\perp},\mathbf{y}_{\perp})\equiv\frac{1}{2\mathrm{i}N_{c}} \mathrm{tr}\left\langle V^{\dagger}(\mathbf{x}_{\perp})V(\mathbf{y}_{\perp})-V^{ \dagger}(\mathbf{y}_{\perp})V(\mathbf{x}_{\perp})\right\rangle\,, \tag{1}\]
with the trace taken in the fundamental representation. The Wilson line \(V(\mathbf{x}_{\perp})\) is defined in Sec. II below, in Eq. (7). The small-\(x\) evolution of the Odderon is given by the imaginary part of the Balitsky-Kovchegov (BK) equation for the dipole [25; 26; 27]. Indeed, one of our main goals is to numerically solve the coupled Pomeron-Odderon BK system for the case of the proton and for nuclear targets. Whereas in the linear regime the Odderon and the Pomeron evolve independently, the non-linearity of the BK equations alters the Odderon significantly when the dipole size is of the order of the inverse of the saturation scale \(Q_{S}\)[25; 26; 27; 28; 29].
From a theoretical perspective, the difficulty in computing the \(\eta_{c}\) cross section comes from the uncertainty in the magnitude of the Odderon. While earlier works on \(\eta_{c}\) production [7; 8; 9; 10] suggest a differential photo-production cross section in the range of \(10^{2}\) pb/GeV\({}^{2}\), more recent computations [15] indicate that the cross section would be somewhat smaller, of the order of \(10^{2}\) fb/GeV\({}^{2}\), and therefore overshadowed by the large background due to the Primakoff process in the low-\(|t|\) region. This could be circumvented by considering instead neutron targets, for which the low-\(|t|\) Coulomb tail is absent, allowing the Odderon to be probed even at low-\(|t|\). These studies so far have focused on the Odderon in the dilute regime where \(x\) is moderate and the gluon density is not too large. Theoretical computations of the \(\eta_{c}\) cross sections in the case of a dense proton or a nuclear target are so far unexplored and constitute another of our motivations.
In Sec. II we undertake the computation of the amplitude for \(\eta_{c}\) production in the CGC formalism. In Sec. III we solve the coupled Pomeron-Odderon BK system numerically using the kernel with running-coupling corrections and in the approximation where the impact parameter is treated as an external parameter [30]. For the Pomeron initial condition we are using a fit to the HERA data (supplemented by optical Glauber in the case of nuclei) [30]. For the Odderon initial condition in the case of nucleon targets we consider a recent computation in the light-cone non-perturbative quark model by Dumitru, Mantysaari and Paatelainen [31]. In the case of nuclear targets we rely on a small-\(x\) action with a cubic term in the random color sources [32]. Sec. IV is devoted to the numerical results for the exclusive \(\eta_{c}\) photo-production for the proton and the nuclear targets. Our main findings, laid out in the concluding Sec. V, are as follows. Probing the Odderon using proton targets requires rather high momentum transfers \(|t|\gtrsim 1-3\) GeV\({}^{2}\) to access the region where the Primakoff background is subdominant. In the case of neutron targets we find the Primakoff contribution to be negligible, allowing, in principle, the extraction of the Odderon even at low-\(|t|\). For nuclear targets the Odderon (Primakoff) cross section becomes enhanced roughly as \(\sim A^{2}\) (\(\sim Z^{2}\)), where \(A\) (\(Z\)) stand for the mass (atomic) number. The diffractive pattern in the Odderon cross section gets shifted by a few percent in comparison to the Primakoff cross section. This could serve as a distinctive signature of the Odderon.
## II The cross section for exclusive \(\eta_{c}\) production in the CGC framework
The amplitude and the cross section for exclusive \(\eta_{c}\) production \(\gamma^{*}(q)p(P)\to\eta_{c}(\Delta)p(P^{\prime})\) have recently been computed using light-cone wave functions at leading twist for the Odderon in [15]. For earlier works see [7; 9]. While some of the results from [15] carry over to our computation, we find it worthwhile to quickly go over the derivation of the amplitude starting from the CGC framework [21; 22; 23; 24; 33] in momentum space, also taking into account the all-order multiple scatterings on a target, that is, a dense proton or a nucleus. The cross section is computed in the frame where the target is moving along the light-cone minus coordinate, so that its momentum is \(P^{\mu}=(P^{+},0,\mathbf{0}_{\perp})\), and that of the virtual photon is \(q^{\mu}=(q^{+},q^{-},\mathbf{0}_{\perp})\)1. As for the kinematic variables of the process, we denote with \(t\) the momentum transfer
Footnote 1: We are using light-cone variables: for a general vector \(x^{\mu}=(x^{0},x^{1},x^{2},x^{3})=(x^{+},x^{-},\mathbf{x}_{\perp})\) we have \(x^{\pm}=(x^{0}\pm x^{3})/\sqrt{2}\). Furthermore, we adhere to the following conventions: \(\epsilon_{0123}=+1=-\epsilon^{0123}=\epsilon^{+-12}\) with \(\gamma^{5}={\rm i}\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}\).
\[t\equiv(P-P^{\prime})^{2}=-\frac{\mathbf{\Delta}_{\perp}^{2}}{1-x}\,, \tag{2}\]
where \(x\) is the momentum fraction carried by the exchanged Odderon
\[x\equiv\frac{(P-P^{\prime})\cdot q}{P\cdot q}=\frac{Q^{2}+M_{\mathcal{P}}^{2}- t}{W^{2}+Q^{2}}\,, \tag{3}\]
and \(W^{2}=(q+P)^{2}\) is the invariant mass of the \(\gamma^{*}\)-target system. We have \(q^{2}=-Q^{2}\) as the photon virtuality, \(P^{2}=P^{\prime 2}=0\) and \(\Delta^{2}=M_{\mathcal{P}}^{2}\) is the squared mass of the produced \(\eta_{c}\) particle.
Figure 1: Feynman diagram for the exclusive \(\eta_{c}\) production amplitude \(\gamma^{*}(q)p(P)\to\eta_{c}(\Delta)p(P^{\prime})\). The crosses where the vertical gluon lines attach to the \(q\bar{q}\) state represent the effective CGC vertex (6).
### The Odderon contribution
The amplitude for exclusive \(\eta_{c}\) production can be written in complete analogy to that for \(J/\psi\) production - for a very clear recent exposition see for example [34]. We follow closely the notation used in [34] and write the CGC amplitude for \(\eta_{c}\) production as
\[\mathcal{S}_{\lambda}=eq_{c}\int_{ll^{\prime}}\mathrm{Tr}\left[S(l)\not{\epsilon }(\lambda,q)S(l-q)\tau(l-q,l^{\prime}-\Delta)S(l^{\prime}-\Delta)(\mathrm{i} \gamma_{5})S(l^{\prime})\tau(l^{\prime},l)\right]\,, \tag{4}\]
where \(q_{c}=2/3\) is the charge of the charm quark in units of \(e=\sqrt{4\pi\alpha}\), \(\alpha=1/137\) with \(l\) and \(l^{\prime}\) representing the charm quark momenta as in Fig. 1. We work in the \(A^{-}=0\) gauge where the virtual photon polarization vector \(\epsilon^{\mu}(\lambda,q)\) is given as \(\epsilon^{\mu}(0,q)=(Q/q^{-},0,\mathbf{0}_{\perp})\), \(\epsilon^{\mu}(\lambda=\pm 1,q)=(0,0,\mathbf{\epsilon}_{\perp}^{\lambda})=(0,0,1, \lambda\mathrm{i})/\sqrt{2}\) and
\[S(l)=\frac{\mathrm{i}(l+m_{c})}{l^{2}-m_{c}^{2}+\mathrm{i}\epsilon}\,, \tag{5}\]
is the charm quark propagator with mass \(m_{c}\). We use \((\mathrm{i}\gamma_{5})\) as the Dirac structure for the vertex for \(\eta_{c}\) production [15; 7], for the moment treating the \(\eta_{c}\) wave function in perturbation theory. For the phenomenological computation this will be replaced with a non-perturbative model \(\eta_{c}\) light-cone wave function [35; 15], see Eq. (18) below. Inserting the effective CGC vertex [36; 37] (see also [38]),
\[\tau(p,p^{\prime})=(2\pi)\delta(p^{-}-p^{\prime-})\gamma^{-}\mathrm{sgn}(p^{- })\int_{\mathbf{z}_{\perp}}\mathrm{e}^{-\mathrm{i}(\mathbf{p}_{\perp}-\mathbf{p}_{\perp}^ {\prime})\cdot\mathbf{z}_{\perp}}V^{\mathrm{sgn}(p^{-})}(\mathbf{z}_{\perp})\,, \tag{6}\]
where
\[V(\mathbf{z}_{\perp})=\mathcal{P}\exp\left[-\mathrm{i}g\int_{-\infty}^{\infty} \mathrm{d}y^{-}\frac{1}{\mathbf{\partial}_{\perp}^{2}}\rho^{a}(y^{-},\mathbf{z}_{\perp })t^{a}\right]\,, \tag{7}\]
with \(\rho^{a}(y^{-},\mathbf{z}_{\perp})\) being the classical color source in the target, the amplitude becomes
\[\begin{split}\mathcal{S}_{\lambda}&=-eq_{c}(2\pi) \delta(q^{-}-\Delta^{-})\int_{ll^{\prime}}(2\pi)\delta(l^{-}-l^{\prime-}) \theta(l^{-})\theta(q^{-}-l^{-})\int_{\mathbf{x}_{\perp}\mathbf{y}_{\perp}}\mathrm{e} ^{-\mathrm{i}(\mathbf{l}_{\perp}^{\prime}-\mathbf{l}_{\perp})\cdot\mathbf{x}_{\perp}} \mathrm{e}^{-\mathrm{i}(\mathbf{l}_{\perp}-\mathbf{l}_{\perp}^{\prime}+\mathbf{\Delta}_{ \perp})\cdot\mathbf{y}_{\perp}}\\ &\times\mathrm{tr}\left[V(\mathbf{x}_{\perp})V^{\dagger}(\mathbf{y}_{ \perp})\right]\mathrm{tr}\left[S(l)\not{\epsilon}(\lambda,q)S(l-q)\gamma^{-}S( l^{\prime}-\Delta)(\mathrm{i}\gamma_{5})S(l^{\prime})\gamma^{-}\right]\,,\end{split} \tag{8}\]
where the \(\theta\)-functions are dictated by the singularities of the quark propagators in the complex \(l^{+}\) and \(l^{\prime+}\) plane.
We can conveniently project out the Odderon by considering a diagram with the fermion flow in the opposite direction. Of course, with an appropriate change of integration variables this simply gives back (8). Utilizing instead the \(C\)-parity transformation only on the Dirac part, the resulting trace has a sign opposite to (8). Combining the two contributions we arrive at the (color averaged) amplitude
\[\langle\mathcal{S}_{\lambda}\rangle=-\langle\mathcal{M}_{\lambda}\rangle\,(2 \pi)\delta(q^{-}-\Delta^{-})\,, \tag{9}\]
where the amplitude \(\langle\mathcal{M}_{\lambda}\rangle\) is
\[\begin{split}\langle\mathcal{M}_{\lambda}\rangle&= eq_{c}\int_{\mathbf{r}_{\perp}}\int_{ll^{\prime}}(2\pi)\delta(l^{-}-l^{\prime-}) \theta(l^{-})\theta(q^{-}-l^{-})\mathrm{e}^{-\mathrm{i}(\mathbf{l}_{\perp}^{\prime} -\mathbf{l}_{\perp}-\frac{1}{2}\mathbf{\Delta}_{\perp})\cdot\mathbf{r}_{\perp}}\\ &\times(-\mathrm{i}N_{c})\mathcal{O}(\mathbf{r}_{\perp},\mathbf{\Delta}_{ \perp})\mathrm{tr}\left[S(l)\not{\epsilon}(\lambda,q)S(l-q)\gamma^{-}S(l^{ \prime}-\Delta)(\mathrm{i}\gamma_{5})S(l^{\prime})\gamma^{-}\right]\,.\end{split} \tag{10}\]
with the Odderon distribution
\[\mathcal{O}(\mathbf{r}_{\perp},\mathbf{\Delta}_{\perp})=\int_{\mathbf{b}_{\perp}}\mathrm{e }^{-\mathrm{i}\mathbf{\Delta}_{\perp}\cdot\mathbf{b}_{\perp}}\mathcal{O}(\mathbf{r}_{ \perp},\mathbf{b}_{\perp})\,, \tag{11}\]
explicitly projected out. We have used \(\mathbf{r}_{\perp}=\mathbf{x}_{\perp}-\mathbf{y}_{\perp}\), \(\mathbf{b}_{\perp}=(\mathbf{x}_{\perp}+\mathbf{y}_{\perp})/2\) and \(\mathcal{O}(\mathbf{r}_{\perp},\mathbf{b}_{\perp})\equiv\mathcal{O}\left(\mathbf{b}_{ \perp}+\frac{\mathbf{r}_{\perp}}{2},\mathbf{b}_{\perp}-\frac{\mathbf{r}_{\perp}}{2}\right)\) for short.
It is convenient to further separate out the Odderon distribution from the rest as
\[\langle\mathcal{M}_{\lambda}\rangle=(2q^{-})\mathrm{i}N_{c}\int_{\mathbf{r}_{ \perp}}\mathcal{O}(\mathbf{r}_{\perp},\mathbf{\Delta}_{\perp})\mathcal{A}_{\lambda}( \mathbf{r}_{\perp},\mathbf{\Delta}_{\perp})\,, \tag{12}\]
where the reduced amplitude \({\cal A}_{\lambda}(\mathbf{r}_{\perp},\mathbf{\Delta}_{\perp})\) (after light-cone \(l^{+}\) and \(l^{\prime+}\) integrals) is given as
\[{\cal A}_{\lambda}(\mathbf{r}_{\perp},\mathbf{\Delta}_{\perp})=eq_{c}\int_{z}\int_{\mathbf{l}_{\perp}\mathbf{l}_{\perp}^{\prime}}\frac{\mathrm{e}^{\mathrm{i}(\mathbf{l}_{\perp}-\mathbf{l}_{\perp}^{\prime}+\frac{1}{2}\mathbf{\Delta}_{\perp})\cdot\mathbf{r}_{\perp}}A_{\lambda}(l,l^{\prime})}{(\mathbf{l}_{\perp}^{2}+\varepsilon^{2})\left((\mathbf{l}_{\perp}^{\prime}-z\mathbf{\Delta}_{\perp})^{2}+\varepsilon^{\prime 2}\right)}\,, \tag{13}\]
and
\[A_{\lambda}(l,l^{\prime})=\frac{\mathrm{i}}{(2q^{-})^{2}}\mathrm{tr}\left[(\not{l}+m_{c})\not{\epsilon}(\lambda,q)(\not{l}-\not{q}+m_{c})\gamma^{-}(\not{l}^{\prime}-\not{\Delta}+m_{c})\gamma_{5}(\not{l}^{\prime}+m_{c})\gamma^{-}\right]\,. \tag{14}\]
We have used the following abbreviations: \(z\equiv l^{\prime-}/q^{-}\), \(\varepsilon\equiv\sqrt{m_{c}^{2}+z\bar{z}Q^{2}}\) and \(\varepsilon^{\prime}\equiv\sqrt{m_{c}^{2}+z\bar{z}Q^{\prime 2}}\) with \(\bar{z}=1-z\) and
\[\int_{z}\equiv\int\frac{\mathrm{d}z}{4\pi}\,. \tag{15}\]
Computing the Dirac trace in (14) we find
\[A_{\lambda}(l,l^{\prime})=2m_{c}\epsilon^{+-ij}\epsilon^{\lambda}_{\perp i}(l_ {\perp}-l_{\perp}^{\prime}+z\Delta_{\perp})_{j}\,. \tag{16}\]
The result (16) is proportional to \(m_{c}\) because the Dirac trace contains 4 vertices and 3 fermion propagators in addition to \(\gamma_{5}\). Intuitively, when the photon splits into a \(q\bar{q}\) pair their spins are aligned, and not flipped by the eikonal interaction with the target. In order for the \(q\bar{q}\) to combine into a spinless meson after the collision, we need a spin flip, and this is provided by \(m_{c}\). As another consequence of the eikonal interaction, we find that the longitudinal photon \(\lambda=0\) decouples, as already noticed in [7; 15] and in a related process in [39].
After computing the \(\mathbf{l}_{\perp}\) and \(\mathbf{l}_{\perp}^{\prime}\) integrals we find
\[{\cal A}_{\lambda}(\mathbf{r}_{\perp},\mathbf{\Delta}_{\perp}) =eq_{c}\lambda\mathrm{e}^{\mathrm{i}\lambda\phi_{r}}\int_{z} \mathrm{e}^{-\mathrm{i}\mathbf{\delta}_{\perp}\cdot\mathbf{r}_{\perp}}(-1)\frac{\sqrt{ 2}m_{c}}{2\pi}\frac{1}{z\bar{z}}\left[K_{0}(\varepsilon r_{\perp})\partial_{r _{\perp}}\phi_{\mathcal{P}}(z,r_{\perp})-\varepsilon K_{1}(\varepsilon r_{ \perp})\phi_{\mathcal{P}}(z,r_{\perp})\right] \tag{17}\] \[\equiv eq_{c}\lambda\mathrm{e}^{\mathrm{i}\lambda\phi_{r}}\int_{z} \mathrm{e}^{-\mathrm{i}\mathbf{\delta}_{\perp}\cdot\mathbf{r}_{\perp}}{\cal A}(r_{ \perp})\,,\]
where \(\mathbf{\delta}_{\perp}\equiv\frac{1}{2}(z-\bar{z})\mathbf{\Delta}_{\perp}\) is the off-forward phase [40] and we have separated out the \(\lambda\) and \(\mathbf{\Delta}_{\perp}\) independent part of the reduced amplitude as \({\cal A}(r_{\perp})\). We have also introduced the standard replacement [35]\(K_{0}(\varepsilon^{\prime}r_{\perp})/(2\pi)\to\phi_{\mathcal{P}}(z,r_{\perp})\) to write the amplitude in terms of the \(\eta_{c}\) meson light-cone wave function \(\phi_{\mathcal{P}}(z,r_{\perp})\)[15; 35]. In the numerical computations we are using a "Boosted Gaussian" ansatz from [15]
\[\phi_{\mathcal{P}}(z,r_{\perp})={\cal N}_{P}z\bar{z}\exp\left(-\frac{m_{c}^{2}{ \cal R}_{\mathcal{P}}^{2}}{8z\bar{z}}-\frac{2z\bar{z}r_{\perp}^{2}}{{\cal R}_{ \mathcal{P}}^{2}}+\frac{1}{2}m_{c}^{2}{\cal R}_{\mathcal{P}}^{2}\right)\,, \tag{18}\]
with \({\cal N}_{\mathcal{P}}=0.547\), \({\cal R}_{\mathcal{P}}^{2}=2.48\) GeV\({}^{-2}\) and \(m_{c}=1.4\) GeV [15]. The integrand in (17) can be understood as a \(\gamma^{*}-\eta_{c}\) wave function overlap. Our result differs from (48) in [15], obtained using the light-cone wave function approach, by a relative sign between the two terms in the square bracket. Ref. [15] uses the \(\gamma^{*}\) wave function from [35]; however, this is known to be incorrect, see e.g. [41]. Using instead the \(\gamma^{*}\) wave function from [41; 42] we have explicitly confirmed the result in (17).
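As a concrete illustration of Eqs. (17)-(18), the following minimal Python sketch evaluates the Boosted Gaussian wave function and the \(z\)-dependent overlap integrand. All function names are our own and SciPy is used for the Bessel functions; this is an illustration under the stated parameter values, not the code used for the results in this paper.

```python
import numpy as np
from scipy.special import k0, k1

# Parameters of the "Boosted Gaussian" ansatz, Eq. (18), in GeV units [15]
N_P, R_P2, m_c = 0.547, 2.48, 1.4   # R_P2 = R_P^2 in GeV^-2

def phi_P(z, r):
    """eta_c light-cone wave function phi_P(z, r_perp) of Eq. (18)."""
    zbar = 1.0 - z
    return N_P * z * zbar * np.exp(-m_c**2 * R_P2 / (8.0 * z * zbar)
                                   - 2.0 * z * zbar * r**2 / R_P2
                                   + 0.5 * m_c**2 * R_P2)

def dphi_P_dr(z, r):
    # the Gaussian r-dependence makes the radial derivative analytic
    return phi_P(z, r) * (-4.0 * z * (1.0 - z) * r / R_P2)

def A_reduced(z, r, Q2):
    """Integrand A(r_perp) of Eq. (17); it depends on z through epsilon
    and phi_P even though the notation suppresses it."""
    zbar = 1.0 - z
    eps = np.sqrt(m_c**2 + z * zbar * Q2)
    overlap = k0(eps * r) * dphi_P_dr(z, r) - eps * k1(eps * r) * phi_P(z, r)
    return -np.sqrt(2.0) * m_c / (2.0 * np.pi) * overlap / (z * zbar)
```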
It is useful to parametrize the Odderon distribution by the Fourier series
\[{\cal O}(\mathbf{r}_{\perp},\mathbf{b}_{\perp})=2\sum_{k=0}^{\infty}{\cal O}_{2k+1}(r_{ \perp},b_{\perp})\cos((2k+1)\phi_{rb})\,, \tag{19}\]
where \(\phi_{rb}\equiv\phi_{r}-\phi_{b}\). We calculate \({\cal O}_{2k+1}(r_{\perp},b_{\perp})\) as
\[{\cal O}_{2k+1}(r_{\perp},b_{\perp})=\frac{1}{2\pi}\int_{0}^{2\pi}\mathrm{d} \phi_{rb}{\cal O}(\mathbf{r}_{\perp},\mathbf{b}_{\perp})\cos((2k+1)\phi_{rb})\,. \tag{20}\]
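Given any model for \(\mathcal{O}(\mathbf{r}_{\perp},\mathbf{b}_{\perp})\) on a grid, the projection (20) is a one-line angular quadrature. A minimal sketch (naming conventions are ours):

```python
import numpy as np

def fourier_moment(O_func, r, b, k=0, n_phi=256):
    """Odd cosine moment O_{2k+1}(r, b) of Eq. (20). O_func(r, b, phi_rb)
    is any model for O(r_perp, b_perp); for equally spaced samples of a
    periodic function the mean equals (1/2pi) times the phi integral."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    return np.mean(O_func(r, b, phi) * np.cos((2 * k + 1) * phi))
```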
We will consider its Fourier transform (11) \({\cal O}(\mathbf{r}_{\perp},\mathbf{\Delta}_{\perp})\) and expand it in Fourier series
\[{\cal O}(\mathbf{r}_{\perp},\mathbf{\Delta}_{\perp})=2\sum_{k=0}^{\infty}{\cal O}_{2k+1}( r_{\perp},\Delta_{\perp})\cos((2k+1)\phi_{r\Delta})\,, \tag{21}\]
where
\[{\cal O}_{2k+1}(r_{\perp},\Delta_{\perp})=-2\pi{\rm i}(-1)^{k}\int_{0}^{\infty }b_{\perp}{\rm d}b_{\perp}{\cal O}_{2k+1}(r_{\perp},b_{\perp})J_{2k+1}(\Delta_ {\perp}b_{\perp})\,. \tag{22}\]
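For orientation, a schematic numerical version of the transform (22) is given below; a fixed-cutoff adaptive quadrature is a crude stand-in for the Ogata quadrature used for this transform in Sec. IV, and the convention of stripping the overall phase is our own choice to keep the quadratures real.

```python
import numpy as np
from scipy.special import jv
from scipy.integrate import quad

def moment_Delta(O_moment_b, r, Delta, k=0, b_max=60.0):
    """Real part of the Hankel-type transform in Eq. (22); the overall
    phase -2*pi*1j*(-1)**k is left for the caller so all quadratures
    stay real. b_max is an upper cutoff in GeV^-1."""
    val, _ = quad(lambda b: b * O_moment_b(r, b) * jv(2 * k + 1, Delta * b),
                  0.0, b_max, limit=200)
    return val
```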
With this parametrization the amplitude (12) can be found in the following form
\[\langle{\cal M}_{\lambda}\rangle=q^{-}\lambda{\rm e}^{{\rm i}\lambda\phi_{\Delta}}\langle{\cal M}\rangle\,, \tag{23}\]
where we have conveniently factored out the polarization independent amplitude \(\langle{\cal M}\rangle\) as
\[\langle{\cal M}\rangle=8\pi{\rm i}eq_{c}N_{c}\sum_{k=0}^{\infty}(-1)^{k}\int_{z}\int_{0}^{\infty}r_{\perp}{\rm d}r_{\perp}{\cal O}_{2k+1}(r_{\perp},\Delta_{\perp}){\cal A}(r_{\perp})\left[J_{2k}(r_{\perp}\delta_{\perp})-\frac{2k+1}{r_{\perp}\delta_{\perp}}J_{2k+1}(r_{\perp}\delta_{\perp})\right]\,. \tag{24}\]
This is the result that will be used in the numerical computations in Sec. IV where we will be keeping only the lowest \(k=0\) mode. The photo-production cross section is obtained as
\[\frac{{\rm d}\sigma}{{\rm d}|t|}=\frac{1}{16\pi}\left|\langle{\cal M}\rangle \right|^{2}\,. \tag{25}\]
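A schematic assembly of (24)-(25) for the dominant \(k=0\) mode, reusing `A_reduced` and the real-transform convention from the sketches above, could read as follows; nested adaptive quadrature is used for transparency rather than speed, and all interfaces are our own.

```python
import numpy as np
from scipy.special import j0, j1
from scipy.integrate import quad

alpha_em, q_c, N_c = 1.0 / 137.0, 2.0 / 3.0, 3
e_ch = np.sqrt(4.0 * np.pi * alpha_em)

def amplitude_k0(O1_real, Delta, Q2, r_max=40.0):
    """<M> of Eq. (24) truncated to the k=0 mode. O1_real(r) is the real
    transform from the previous sketch; combining its -2*pi*1j phase with
    the explicit 8*pi*1j of Eq. (24) leaves an overall real 16*pi^2."""
    def z_integrand(z):
        delta = 0.5 * abs(1.0 - 2.0 * z) * Delta     # |delta_perp| of Eq. (17)
        def r_integrand(r):
            x = r * delta
            bracket = j0(x) - (j1(x) / x if x > 1e-12 else 0.5)  # J1(x)/x -> 1/2
            return r * O1_real(r) * A_reduced(z, r, Q2) * bracket
        val, _ = quad(r_integrand, 1e-6, r_max, limit=200)
        return val
    zint, _ = quad(z_integrand, 1e-4, 1.0 - 1e-4, limit=100)
    return 16.0 * np.pi**2 * e_ch * q_c * N_c * zint / (4.0 * np.pi)  # int_z = dz/(4 pi)

def dsigma_dt(M):
    # Eq. (25), in GeV^-4; use 1 GeV^-2 = 0.3894 mb to convert to mb/GeV^2
    return M**2 / (16.0 * np.pi)
```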
It is instructive to provide an estimate of (25) at leading twist. In Appendix A we have performed a model computation of the Odderon distribution and more details can be found in Sec. III.1. Restricting to the first non-trivial Fourier mode we find
\[{\cal O}_{1}(r_{\perp},\Delta_{\perp})\simeq-\frac{3\pi{\rm i}}{8}\frac{C_{3F }}{N_{c}}\alpha_{S}^{3}r_{\perp}^{3}A\Delta_{\perp}T_{A}(\Delta_{\perp})\,, \tag{26}\]
where \(T_{A}(\Delta_{\perp})\) is the Fourier transform of the transverse profile of the target \(T_{A}(\mathbf{b}_{\perp})\), see (44) below. \(C_{3F}\) is defined in (48). Taking the limit \(m_{c}\to\infty\) the cross section (25) is obtained in the following form
\[\frac{{\rm d}\sigma}{{\rm d}|t|}\simeq\frac{9\pi q_{c}^{2}\alpha\alpha_{S}^{6} A^{2}C_{3F}^{2}{\cal R}_{\cal P}^{2}(0)}{4N_{c}m_{c}^{5}}\frac{|t|T_{A}^{2}( \sqrt{|t|})}{m_{c}^{4}}\,, \tag{27}\]
and so the Odderon cross section gets enhanced by \(\sim A^{2}\) in case of nuclear targets. To get this result we have used [15; 43]
\[N_{c}\int_{z}\frac{\phi_{P}(z,0)}{z\bar{z}}=\sqrt{\frac{N_{c}}{32\pi m_{c}^{3} }}{\cal R}_{\cal P}(0)\,, \tag{28}\]
where \({\cal R}_{\cal P}(0)\) is the radial wave function at the origin.
### The Primakoff process
The Primakoff process corresponds to a situation where an odd number of photons, instead of gluons, is exchanged with the target. Intuitively, we would expect the Primakoff effect to be most important in the region \(\mathbf{\Delta}_{\perp}\simeq 0\) due to the long-range Coulomb tail of the charged target. As in the previous Sec. II.1 we can consider the eikonal approximation for the target interaction, with photons instead of gluons in the Wilson lines [44; 45; 46]. We thus write
\[2{\rm i}\Omega(\mathbf{r}_{\perp},\mathbf{b}_{\perp})\equiv U^{\dagger}(\mathbf{x}_{ \perp})U(\mathbf{y}_{\perp})-U^{\dagger}(\mathbf{y}_{\perp})U(\mathbf{x}_{\perp})\,, \tag{29}\]
in place of \({\cal O}(\mathbf{r}_{\perp},\mathbf{b}_{\perp})\). Here
\[U(\mathbf{x}_{\perp})=\exp\left[-\frac{{\rm i}e^{2}q_{c}ZT_{A}(\mathbf{x}_{\perp})}{ \mathbf{\partial}_{\perp}^{2}}\right]=\exp\left[4\pi{\rm i}q_{c}Z\alpha\int_{\mathbf{ k}_{\perp}}\frac{T_{A}(k_{\perp})}{\mathbf{k}_{\perp}^{2}}{\rm e}^{{\rm i}\mathbf{k}_{ \perp}\cdot\mathbf{x}_{\perp}}\right]\,, \tag{30}\]
is a Wilson line accounting for multiple scattering on the electromagnetic field of the target \(-ZeT_{A}(\mathbf{x}_{\perp})/\mathbf{\partial}_{\perp}^{2}\) [44; 45; 46]. Here the transverse charge density is given as \(ZT_{A}(\mathbf{x}_{\perp})\). Because of the \(\alpha\) suppression we ignore multiple scatterings and expand the phase to the first nontrivial order. Passing to the variable \(\mathbf{\Delta}_{\perp}\) instead of \(\mathbf{b}_{\perp}\) we have
\[\Omega(\mathbf{r}_{\perp},\mathbf{\Delta}_{\perp})=-8\pi\mathrm{i}q_{c}Z\alpha\sin \left(\frac{\mathbf{\Delta}_{\perp}\cdot\mathbf{r}_{\perp}}{2}\right)\frac{T_{A}(\Delta _{\perp})}{\mathbf{\Delta}_{\perp}^{2}}\,, \tag{31}\]
which is the same as Eq. (22) in [15] up to a factor due to the difference in the definition. We also obtain the Fourier moments as
\[\Omega_{2k+1}(r_{\perp},\Delta_{\perp})=\frac{1}{2\pi}\int_{0}^{2\pi}\mathrm{ d}\phi_{r\Delta}\Omega(\mathbf{r}_{\perp},\mathbf{\Delta}_{\perp})\cos((2k+1)\phi_{r \Delta})=-8\pi\mathrm{i}q_{c}Z\alpha(-1)^{k}J_{2k+1}\left(\frac{r_{\perp} \Delta_{\perp}}{2}\right)\frac{T_{A}(\Delta_{\perp})}{\mathbf{\Delta}_{\perp}^{2} }\,, \tag{32}\]
that are to be used directly in (24).
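Since (32) is in closed form, the Primakoff moments require no numerical transform. A minimal sketch in the same real-valued convention as the Odderon sketches above (the interface is ours):

```python
import numpy as np
from scipy.special import jv

def primakoff_moment(r, Delta, Z, T_A, k=0, alpha_em=1.0 / 137.0, q_c=2.0 / 3.0):
    """Omega_{2k+1}(r, Delta) of Eq. (32), with the overall -1j stripped
    in the same convention as the Odderon sketches, so it can be fed
    directly into Eq. (24) in place of O_{2k+1}. T_A(Delta) is the
    charge form factor of the target."""
    return (8.0 * np.pi * q_c * Z * alpha_em * (-1) ** k
            * jv(2 * k + 1, 0.5 * r * Delta) * T_A(Delta) / Delta**2)
```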
At this point it is useful to obtain an estimate in the \(m_{c}\to\infty\) limit, similar to what was done for the Odderon in (27). We get
\[\frac{\mathrm{d}\sigma}{\mathrm{d}|t|}\simeq\frac{\pi q_{c}^{4}\alpha^{3}Z^{2} N_{c}\mathcal{R}_{\mathcal{P}}^{2}(0)}{m_{c}^{5}|t|}T_{A}^{2}(\sqrt{|t|})\,, \tag{33}\]
which displays the characteristic \(1/t\) Coulomb behavior in contrast to the Odderon case (27) where we have instead a suppression factor \(|t|/m_{c}^{4}\). Note that \(T_{A}(\Delta_{\perp})\) is nothing but the electromagnetic charge form factor from the Rosenbluth formula [47].
In order to evaluate the Primakoff cross section numerically we must specify the profile function \(T_{A}(\mathbf{b}_{\perp})\). For the proton and neutron targets we are using a recent determination of charge form factors from [48]. For a nucleus we use a Woods-Saxon distribution, see (44) below. In this work we do not attempt to differentiate between the nuclear electromagnetic distribution and the strong interaction distribution of a nucleus, although in principle they could be different, see [49; 50].
## III Numerical solutions of the Odderon evolution at small-\(x\)
Denoting the dipole distribution in the fundamental representation as
\[\mathcal{D}(\mathbf{r}_{\perp},\mathbf{b}_{\perp})\equiv\frac{1}{N_{c}}\operatorname{ tr}\left\langle V^{\dagger}\left(\mathbf{b}_{\perp}+\frac{\mathbf{r}_{\perp}}{2} \right)V\left(\mathbf{b}_{\perp}-\frac{\mathbf{r}_{\perp}}{2}\right)\right\rangle\,, \tag{34}\]
the fully impact parameter dependent BK equation reads [51; 52]
\[\frac{\partial\mathcal{D}(\mathbf{r}_{\perp},\mathbf{b}_{\perp})}{\partial Y}=\frac{ \alpha_{S}N_{c}}{2\pi^{2}}\int_{\mathbf{r}_{\perp\perp}}\frac{\mathbf{r}_{\perp}^{2}} {\mathbf{r}_{\perp\perp}^{2}\mathbf{r}_{\perp\perp}^{2}}\left[\mathcal{D}(\mathbf{r}_{1 \perp},\mathbf{b}_{1\perp})\mathcal{D}(\mathbf{r}_{2\perp},\mathbf{b}_{2\perp})-\mathcal{D }(\mathbf{r}_{\perp},\mathbf{b}_{\perp})\right]\,, \tag{35}\]
where \(\mathbf{r}_{2\perp}=\mathbf{r}_{\perp}-\mathbf{r}_{1\perp}\). In general, we have \(\mathbf{b}_{1\perp}=\mathbf{b}_{\perp}+(\mathbf{r}_{\perp}-\mathbf{r}_{1\perp})/2\) and \(\mathbf{b}_{2\perp}=\mathbf{b}_{\perp}-\mathbf{r}_{1\perp}/2\) and so (35) is non-local in \(\mathbf{b}_{\perp}\). Solutions of (35) lead to unphysically large Coulomb tails in \(\mathbf{b}_{\perp}\) originating from a lack of confining interactions in the BK kernel [53]. This issue has been addressed [54; 55; 56; 57], at different levels of sophistication, by various modifications of the kernel in the infrared. In this work we make no attempt to address this difficult problem and instead resort to a local approximation \(\mathbf{b}_{1\perp}\to\mathbf{b}_{\perp}\) and \(\mathbf{b}_{2\perp}\to\mathbf{b}_{\perp}\) used in [30] (see also a discussion in [58]) where the \(\mathbf{b}_{\perp}\)-dependence effectively becomes an external parameter.
Splitting the dipole into Pomeron and Odderon pieces as \(\mathcal{D}(\mathbf{r}_{\perp},\mathbf{b}_{\perp})=1-\mathcal{N}(\mathbf{r}_{\perp},\mathbf{b} _{\perp})+\mathrm{i}\mathcal{O}(\mathbf{r}_{\perp},\mathbf{b}_{\perp})\) leads to [25; 26]
\[\begin{split}\frac{\partial\mathcal{N}(\mathbf{r}_{\perp},\mathbf{b}_{ \perp})}{\partial Y}=\int_{\mathbf{r}_{1\perp}}\mathcal{K}_{\mathrm{Bal}}(\mathbf{r}_{ \perp},\mathbf{r}_{1\perp},\mathbf{r}_{2\perp})\big{[}&\mathcal{N}(\mathbf{r}_ {1\perp},\mathbf{b}_{\perp})+\mathcal{N}(\mathbf{r}_{2\perp},\mathbf{b}_{\perp})- \mathcal{N}(\mathbf{r}_{\perp},\mathbf{b}_{\perp})\\ &+\mathcal{N}(\mathbf{r}_{1\perp},\mathbf{b}_{\perp})\mathcal{N}(\mathbf{r}_ {2\perp},\mathbf{b}_{\perp})-\mathcal{O}(\mathbf{r}_{1\perp},\mathbf{b}_{\perp})\mathcal{O }(\mathbf{r}_{2\perp},\mathbf{b}_{\perp})\big{]}\,,\end{split} \tag{36}\]
\[\begin{split}\frac{\partial\mathcal{O}(\mathbf{r}_{\perp},\mathbf{b}_{ \perp})}{\partial Y}=\int_{\mathbf{r}_{1\perp}}\mathcal{K}_{\mathrm{Bal}}(\mathbf{r}_ {\perp},\mathbf{r}_{1\perp},\mathbf{r}_{2\perp})\big{[}&\mathcal{O}(\mathbf{r}_ {1\perp},\mathbf{b}_{\perp})+\mathcal{O}(\mathbf{r}_{2\perp},\mathbf{b}_{\perp})-\mathcal{ O}(\mathbf{r}_{\perp},\mathbf{b}_{\perp})\\ &-\mathcal{N}(\mathbf{r}_{1\perp},\mathbf{b}_{\perp})\mathcal{O}(\mathbf{r}_ {2\perp},\mathbf{b}_{\perp})-\mathcal{O}(\mathbf{r}_{1\perp},\mathbf{b}_{\perp})\mathcal{N}( \mathbf{r}_{2\perp},\mathbf{b}_{\perp})\big{]}\,.\end{split} \tag{37}\]
In the above Eqs. (36), (37) we have replaced the conventional BK kernel with the running-coupling kernel (according to Balitsky's prescription) [59]
\[\frac{\alpha_{S}N_{c}}{2\pi^{2}}\frac{\mathbf{r}_{\perp}^{2}}{\mathbf{r}_{1\perp}^{2}\bm {r}_{2\perp}^{2}}\to\mathcal{K}_{\rm Bal}(\mathbf{r}_{\perp},\mathbf{r}_{1\perp},\mathbf{r }_{2\perp})=\frac{\alpha_{S}(\mathbf{r}_{\perp}^{2})N_{c}}{2\pi^{2}}\left[\frac{1}{ \mathbf{r}_{1\perp}^{2}}\left(\frac{\alpha_{S}(\mathbf{r}_{1\perp}^{2})}{\alpha_{S}( \mathbf{r}_{2\perp}^{2})}-1\right)+\frac{\mathbf{r}_{\perp}^{2}}{\mathbf{r}_{1\perp}^{2}\bm {r}_{2\perp}^{2}}+\frac{1}{\mathbf{r}_{2\perp}^{2}}\left(\frac{\alpha_{S}(\mathbf{r}_{ 2\perp}^{2})}{\alpha_{S}(\mathbf{r}_{1\perp}^{2})}-1\right)\right]\,, \tag{38}\]
that will be used in our numerical computations. Here
\[\alpha_{S}(\mathbf{r}_{\perp}^{2})=\frac{12\pi}{(33-2N_{f})\log\left(\frac{4C^{2} }{\mathbf{r}_{\perp}^{2}\Lambda_{\rm QCD}^{2}}+\hat{a}\right)}\,, \tag{39}\]
with [30]\(N_{f}=3\), \(C^{2}=7.2\), \(\Lambda_{\rm QCD}=0.241\) GeV and \(\hat{a}\) is a parameter determined by the condition \(\lim_{\mathbf{r}_{\perp}^{2}\to\infty}\alpha_{S}(\mathbf{r}_{\perp}^{2})=\alpha_{\rm fr }\) where \(\alpha_{\rm fr}=0.7\).
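Note that the freezing condition fixes \(\hat{a}\) in closed form: taking \(\mathbf{r}_{\perp}^{2}\to\infty\) in (39) gives \(\alpha_{\rm fr}=12\pi/[(33-2N_{f})\log\hat{a}]\), i.e. \(\hat{a}=\exp[12\pi/((33-2N_{f})\alpha_{\rm fr})]\). A short Python sketch of (38)-(39) follows; the function names are our own.

```python
import numpy as np

N_f, C2, Lambda2, alpha_fr, N_c = 3, 7.2, 0.241**2, 0.7, 3
beta0 = 33 - 2 * N_f
# freezing condition lim_{r^2 -> inf} alpha_S = alpha_fr, in closed form
a_hat = np.exp(12.0 * np.pi / (beta0 * alpha_fr))

def alpha_s(r2):
    """Coordinate-space running coupling of Eq. (39); r2 in GeV^-2."""
    return 12.0 * np.pi / (beta0 * np.log(4.0 * C2 / (r2 * Lambda2) + a_hat))

def K_bal(r, r1, r2):
    """Running-coupling kernel of Eq. (38), Balitsky prescription, as a
    function of the parent and daughter dipole magnitudes."""
    a, a1, a2 = alpha_s(r**2), alpha_s(r1**2), alpha_s(r2**2)
    return (a * N_c / (2.0 * np.pi**2)) * ((a1 / a2 - 1.0) / r1**2
                                           + r**2 / (r1**2 * r2**2)
                                           + (a2 / a1 - 1.0) / r2**2)
```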
A similar system of equations was solved in [27; 28; 29; 60; 61], but the \(b_{\perp}\) dependence was not addressed. Nevertheless, some generic conclusions from these works also apply to our computations. Thanks to the non-linearity of the BK equation (35), the Pomeron and the Odderon do not evolve separately. Only in the small-\(r_{\perp}\) limit, where \(\mathcal{N}(\mathbf{r}_{\perp},\mathbf{b}_{\perp})\to 0\), can the nonlinear terms in (37) be neglected and the system decoupled. When this happens, the first two terms in the square bracket in (37) cancel each other and the Odderon becomes exponentially suppressed in rapidity [25; 28; 29]. In contrast, in the large \(r_{\perp}\) region, where \(\mathcal{N}(\mathbf{r}_{\perp},\mathbf{b}_{\perp})\to 1\), the nonlinear terms play an important role in cancelling the first and the second term in the square bracket in (37), again causing an exponential suppression [25; 27; 28; 29]. Such a lack of geometric scaling seems to be a general feature not only of the Odderon but also of higher dipole moments in general [56].
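To make the structure of (36)-(37) concrete, the sketch below evaluates both right-hand sides at a single parent dipole using the kernel defined above. A uniform polar quadrature is used for brevity; a production code must treat the \(1/\mathbf{r}_{1\perp}^{2}\), \(1/\mathbf{r}_{2\perp}^{2}\) endpoint regions with care, since the singularities cancel between real and virtual terms. The interface is our own, not the code used for the results in this paper.

```python
import numpy as np

def bk_rhs_point(N, O, r_vec, r1_grid, phi_grid):
    """Right-hand sides of the coupled system (36)-(37) at one parent
    dipole r_vec and fixed b_perp. N and O are callables on 2D vectors."""
    r = np.linalg.norm(r_vec)
    Np, Op = N(r_vec), O(r_vec)           # parent dipole values
    dN = dO = 0.0
    for r1 in r1_grid:
        for phi in phi_grid:
            r1_vec = r1 * np.array([np.cos(phi), np.sin(phi)])
            r2_vec = r_vec - r1_vec
            w = r1 * K_bal(r, r1, np.linalg.norm(r2_vec))  # d^2r1 = r1 dr1 dphi
            N1, N2, O1, O2 = N(r1_vec), N(r2_vec), O(r1_vec), O(r2_vec)
            dN += w * (N1 + N2 - Np + N1 * N2 - O1 * O2)
            dO += w * (O1 + O2 - Op - N1 * O2 - O1 * N2)
    dr1, dphi = r1_grid[1] - r1_grid[0], phi_grid[1] - phi_grid[0]
    return dN * dr1 * dphi, dO * dr1 * dphi
```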
### Initial conditions
For the initial conditions for the Pomeron we use a fit to HERA data from Ref. [30]. Therein, the Pomeron for the proton is modelled as
\[\mathcal{N}(\mathbf{r}_{\perp},\mathbf{b}_{\perp})=1-\exp\left[-\frac{1}{4}\mathbf{r}_{ \perp}^{2}Q_{0,p}^{2}(\mathbf{r}_{\perp},\mathbf{b}_{\perp})\right]\,, \tag{40}\]
where
\[Q_{0,p}^{2}(\mathbf{r}_{\perp},\mathbf{b}_{\perp})\equiv T_{p}(\mathbf{b}_{\perp})\frac{ \sigma_{0}}{2}Q_{S,0}^{2}\log\left(\frac{1}{r_{\perp}\Lambda_{\rm QCD}}+e_{c }{\rm e}\right)\,, \tag{41}\]
\[T_{p}(\mathbf{b}_{\perp})=\frac{1}{\pi R_{p}^{2}}e^{-\mathbf{b}_{\perp}^{2}/R_{p}^{2} }\,. \tag{42}\]
where \(R_{p}\) is fixed by the relation \(\pi R_{p}^{2}=\sigma_{0}/2=4\pi B_{p}\). In a recent work by Dumitru, Mantysaari and Paatelainen [31] the Odderon for a proton target was calculated starting from quark light-cone wavefunctions at NLO. We refer to this as the DMP model and employ it in our numerical computations.
In case of a nucleus we use again the results from Ref. [30], with the Pomeron distribution given as in (40) but with
\[Q_{0,A}^{2}(\mathbf{r}_{\perp},\mathbf{b}_{\perp})\equiv AT_{A}(\mathbf{b}_{\perp})\frac{ \sigma_{0}}{2}Q_{S,0}^{2}\log\left(\frac{1}{r_{\perp}\Lambda_{\rm QCD}}+e_{c }{\rm e}\right)\,, \tag{43}\]
in place of \(Q_{0,p}^{2}(\mathbf{r}_{\perp},\mathbf{b}_{\perp})\). \(T_{A}(\mathbf{b}_{\perp})\) is the transverse profile of a nuclear target. The parameters in (41) are given as \(Q_{S,0}^{2}=0.06\) GeV\({}^{2}\), \(e_{c}=18.9\) and \(\frac{\sigma_{0}}{2}=16.36\) mb [30]. \(T_{A}(\mathbf{b}_{\perp})\) is obtained by integrating the Woods-Saxon distribution [30]
\[T_{A}(\mathbf{b}_{\perp})=\int_{-\infty}^{\infty}{\rm d}z\frac{n_{A}}{1+\exp\left[ \frac{\sqrt{\mathbf{b}_{\perp}^{2}+z^{2}}-R_{A}}{d}\right]}\,, \tag{44}\]
which is normalized to unity \(\int_{\mathbf{b}_{\perp}}T_{A}(\mathbf{b}_{\perp})=1\). This fixes \(n_{A}\) as \(-8\pi n_{A}d^{3}{\rm Li}_{3}(-{\rm e}^{R_{A}/d})=1\). Here \(d=0.54\) fm, \(R_{A}=1.12A^{1/3}-0.86A^{-1/3}\) fm [30]. These Woods-Saxon parameters are numerically very close to the fit values from [62].
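A self-contained sketch of the normalized profile (44), using the closed-form normalization quoted above; the helper names and the fm-to-GeV\({}^{-1}\) conversion are our own choices.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expit        # expit(x) = 1/(1+e^-x), overflow-safe
from mpmath import polylog

fm_to_GeVinv = 1.0 / 0.1973            # 1 fm = 5.068 GeV^-1

def woods_saxon_profile(A):
    """Normalized transverse profile T_A(b) of Eq. (44); b in GeV^-1.
    n_A follows from -8*pi*n_A*d^3*Li_3(-exp(R_A/d)) = 1."""
    d = 0.54 * fm_to_GeVinv
    R_A = (1.12 * A ** (1.0 / 3.0) - 0.86 * A ** (-1.0 / 3.0)) * fm_to_GeVinv
    n_A = 1.0 / (-8.0 * np.pi * d**3 * float(polylog(3, -np.exp(R_A / d))))
    def T_A(b):
        # Fermi function written via expit to avoid overflow at large radii
        integrand = lambda z: n_A * expit((R_A - np.sqrt(b**2 + z**2)) / d)
        val, _ = quad(integrand, 0.0, np.inf)
        return 2.0 * val               # the z-integrand is even in z
    return T_A
```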
The initial condition of the Odderon for a nuclear target is based on the Jeon-Venugopalan (JV) model [32], which involves a cubic term added to the standard McLerran-Venugopalan small-\(x\) functional
\[W[\rho]=\exp\left[-\int_{\mathbf{x}_{\perp}}\left(\frac{\delta_{ab}\rho^{a}(\mathbf{x}_{ \perp})\rho^{b}(\mathbf{x}_{\perp})}{2\mu^{2}}-\frac{d_{abc}\rho^{a}(\mathbf{x}_{\perp })\rho^{b}(\mathbf{x}_{\perp})\rho^{c}(\mathbf{x}_{\perp})}{\kappa}\right)\right]\,, \tag{45}\]
where
\[\mu^{2}=\frac{g^{2}}{2}\frac{A}{\pi R_{A}^{2}}\,,\qquad\kappa=g^{3}N_{c}\frac{ A^{2}}{(\pi R_{A}^{2})^{2}}\,. \tag{46}\]
In [32] (see also [25]), it was found that the Odderon distribution from the above functional takes the following form
\[\mathcal{O}(\mathbf{x}_{\perp},\mathbf{y}_{\perp})=-g^{3}C_{3F}\frac{\mu^{6}}{\kappa} \Theta(\mathbf{x}_{\perp},\mathbf{y}_{\perp})\exp\left[-\frac{g^{2}C_{F}\mu^{2}}{2} \Gamma(\mathbf{x}_{\perp},\mathbf{y}_{\perp})\right]\,, \tag{47}\]
where
\[C_{F}=\frac{N_{c}^{2}-1}{2N_{c}}\,,\qquad C_{3F}=\frac{(N_{c}^{2}-1)(N_{c}^{2 }-4)}{4N_{c}^{2}}\,, \tag{48}\]
and
\[\begin{split}\Gamma(\mathbf{x}_{\perp},\mathbf{y}_{\perp})& =(\pi R_{A}^{2})\int_{\mathbf{x}_{\perp}}T_{A}(\mathbf{z}_{\perp})\left[ G(\mathbf{x}_{\perp}-\mathbf{z}_{\perp})-G(\mathbf{y}_{\perp}-\mathbf{z}_{\perp})\right]^{2}\,, \\ \Theta(\mathbf{x}_{\perp},\mathbf{y}_{\perp})&=(\pi R_{A}^{2 })\int_{\mathbf{z}_{\perp}}T_{A}(\mathbf{z}_{\perp})\left[G(\mathbf{x}_{\perp}-\mathbf{z}_{ \perp})-G(\mathbf{y}_{\perp}-\mathbf{z}_{\perp})\right]^{3}\,,\end{split} \tag{49}\]
where \(G(\mathbf{x}_{\perp}-\mathbf{z}_{\perp})\) is a 2D Green function (A2) and we have inserted the target profile \(T_{A}(\mathbf{b}_{\perp})\), see the discussion in the Appendix A. Eq. (47) can be interpreted as a single perturbative Odderon with any number of perturbative Pomeron insertions. Starting from (47) we deduce the following result for the Odderon initial condition
\[\mathcal{O}(\mathbf{r}_{\perp},\mathbf{b}_{\perp})=\frac{\lambda}{8}\left[R_{A}\frac{ \mathrm{d}T_{A}(\mathbf{b}_{\perp})}{\mathrm{d}b_{\perp}}A^{2/3}\frac{\sigma_{0}} {2}\right]Q_{S,0}^{3}A^{1/2}r_{\perp}^{3}(\hat{\mathbf{r}}_{\perp}\cdot\hat{\mathbf{b }}_{\perp})\log\left(\frac{1}{r_{\perp}\Lambda_{\mathrm{QCD}}}+e_{c}\mathrm{e }\right)\exp\left[-\frac{1}{4}r_{\perp}^{2}Q_{0,A}^{2}(\mathbf{r}_{\perp},\mathbf{b}_ {\perp})\right]\,, \tag{50}\]
where in the JV model we would have
\[\lambda_{\mathrm{JV}}=-\frac{3}{16}\frac{N_{c}^{2}-4}{(N_{c}^{2}-1)^{2}}\frac {Q_{S,0}^{3}A^{1/2}R_{A}^{3}}{\alpha_{S}^{3}A^{2}}\,. \tag{51}\]
The details of the computation leading to (50) are given in the Appendix A.
### Numerical solutions
The system of BK equations (36)-(37) was solved on a \((r_{\perp},b_{\perp},\phi_{rb})\) grid, where \(\phi_{rb}=\phi_{r}-\phi_{b}\). As mentioned earlier, we consider \(b_{\perp}\) as an external parameter and solve the BK equation for each value of \(b_{\perp}\) separately. The integral over \(\mathbf{r}_{1\perp}\) in the equations (36) and (37) is evaluated over a lattice in \((r_{\perp},\phi_{rb})\) using adaptive cubature [63; 64]. The lattice is equally spaced in \(\log r_{\perp}\) from \(r_{\perp}=10^{-6}\ \mathrm{GeV}^{-1}\) to \(10^{4}\ \mathrm{GeV}^{-1}\) with \(n_{r_{\perp}}=500\) lattice points and in \(\phi_{rb}\) from \(\phi_{rb}=0\) to \(2\pi\) with \(n_{\phi_{rb}}=100\) lattice points. For each value of \(b_{\perp}\), the equations (36) and (37) together represent a system of \(2\times n_{r_{\perp}}\times n_{\phi_{rb}}\) coupled differential equations for the values of the Pomeron and the Odderon over the grid. This system of differential equations is solved using a three-step third-order Adams-Bashforth method with a step size in rapidity \(\Delta Y=0.1\) for up to \(Y=5\). The first two timesteps required to initiate the Adams-Bashforth method were obtained using Ralston's second-order method. We have validated our numerical treatment of the BK system in two ways. First, since we have adopted our parametrization of the Pomeron from [30], we have checked that our results for the BK-evolved dipole amplitude in the proton and in nuclei agree with [30]. Second, we checked that we fully reproduce the results for the BK evolution of the spin-dependent Odderon presented in [29]. We additionally tested several different methods for solving the BK system (including the Euler method, a range of Adams-Bashforth methods, and the fourth-order Runge-Kutta method) and found the third-order Adams-Bashforth method to be optimal.
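For readers who wish to reproduce the time stepping, a schematic Python version of this scheme is given below, assuming `rhs(y)` evaluates the discretized right-hand sides of (36)-(37), e.g. via the sketch in the previous subsection. The interface is our own.

```python
import numpy as np

def evolve_bk(y0, rhs, dY=0.1, Y_max=5.0):
    """Rapidity evolution as in Sec. III.B: three-step third-order
    Adams-Bashforth, initialized with Ralston's second-order method.
    y0 is the flattened (Pomeron, Odderon) grid; rhs(y) returns dy/dY."""
    y = np.array(y0, dtype=float)
    hist = []                              # stored rhs evaluations
    for _ in range(int(round(Y_max / dY))):
        f = rhs(y)
        hist.append(f)
        if len(hist) < 3:                  # startup: Ralston RK2
            k2 = rhs(y + (2.0 / 3.0) * dY * f)
            y = y + dY * (0.25 * f + 0.75 * k2)
        else:                              # AB3: h*(23 f_n - 16 f_{n-1} + 5 f_{n-2})/12
            y = y + dY * (23.0 * hist[-1] - 16.0 * hist[-2] + 5.0 * hist[-3]) / 12.0
            hist.pop(0)
    return y
```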
At this point we make a comment about the angular dependence. The Pomeron initial condition (40) is independent of \(\phi_{rb}\), while the \(\cos(\phi_{rb})\) moment in the Odderon initial condition (50) will generate a \(\cos(2\phi_{rb})\) moment in the Pomeron through the \(\sim\mathcal{O}^{2}\) term in (36). In principle, this further backreacts onto the Odderon through the \(\sim\mathcal{NO}\) pieces, generating a higher \(\cos(3\phi_{rb})\) moment in the Odderon. However, in our numerical computation we find that already the \(\cos(2\phi_{rb})\) term is numerically tiny, in support of similar findings reported in [27; 29]3. For this reason, in the following results we will discuss the Odderon solution only in the context of its dominant \(\mathcal{O}_{1}(r_{\perp},b_{\perp})\) moment.
Footnote 3: In particular, this also implies the HERA fit [30] of the Pomeron does not get affected by the presence of the Odderon in the BK equation.
On Fig. 2 we show the first Odderon moment \(\mathcal{O}_{1}(r_{\perp},b_{\perp})\) for the proton target, using the DMP model as the initial condition, as a function of \(r_{\perp}\) for several finite values of \(b_{\perp}\). Going from the full line at the initial condition \(x=10^{-2}\), the Odderon is severely affected in magnitude when evolving to smaller \(x\), as can be seen from the thin dashed curve where \(x=10^{-3}\) and the thin dotted curve where \(x=10^{-4}\), verifying numerically the lack of geometric scaling for the Odderon. Moving on to the \(b_{\perp}\) dependence, the left plot on Fig. 3 shows \(\mathcal{O}_{1}(r_{\perp},b_{\perp})\) as a function of \(b_{\perp}\) with \(r_{\perp}\) fixed and for different values of \(x\). For illustrative purposes we plot on the right the result for the proton target as obtained in the JV model. Interestingly, while the DMP model Odderon is peaked within the proton, the JV model Odderon is peaked at higher \(b_{\perp}\) due to the \(\sim dT_{p}/db_{\perp}\) term.

Figure 2: The first Fourier moment \(O_{1}(r_{\perp},b_{\perp})\) of the Odderon distribution of the proton in the DMP model as a function of \(r_{\perp}\) for different values of \(x\) and at the impact parameters \(b_{\perp}=0.6\) fm and \(0.4\) fm.

Figure 3: Left: the first Fourier moment \(O_{1}(r_{\perp},b_{\perp})\) of the Odderon distribution of the proton in the DMP model as a function of \(b_{\perp}\) for different values of \(x\). Right: same quantity, but in the JV model.
Comparing the results in the DMP and the JV models, we can quantify some of the model uncertainties concerning the magnitude of the Odderon. For this purpose we take the absolute ratio of the \(\eta_{c}\) production amplitudes in the DMP and the JV models in the case of the nucleon target. In the limit \(\Delta_{\perp}\to 0\), and for \(Q^{2}=1\) GeV\({}^{2}\), we find \(\langle\mathcal{M}\rangle_{p,\,\mathrm{DMP}}/\langle\mathcal{M}\rangle_{p,\, \mathrm{JV}}\to 0.026\). On the other hand, an upper bound on the Odderon is imposed by the group theory constraint [65; 28]
\[(4-3\mathcal{N}(\mathbf{r}_{\perp},\mathbf{b}_{\perp}))\,\mathcal{N}^{3}(\mathbf{r}_{ \perp},\mathbf{b}_{\perp})-6\left(6-6\mathcal{N}(\mathbf{r}_{\perp},\mathbf{b}_{\perp})+ \mathcal{N}^{2}(\mathbf{r}_{\perp},\mathbf{b}_{\perp})\right)\mathcal{O}^{2}(\mathbf{r}_{ \perp},\mathbf{b}_{\perp})-3\mathcal{O}^{4}(\mathbf{r}_{\perp},\mathbf{b}_{\perp})\geq 0\,. \tag{52}\]
In the small-\(r_{\perp}\) limit this simplifies to \(\mathcal{O}^{2}(\mathbf{r}_{\perp},\mathbf{b}_{\perp})\leq\mathcal{N}^{3}(\mathbf{r}_{\perp},\mathbf{b}_{\perp})/9\)[28]. We have checked that the DMP model satisfies this bound. Using the JV initial condition for nuclei we can quantify (52) as a bound on the magnitude of \(\lambda\), and numerically we find that the model coupling is somewhat below the bound, namely
\[\lambda_{\mathrm{max}}^{197}=1.143\lambda_{\mathrm{JV}}^{197}\,,\qquad\lambda _{\mathrm{max}}^{63}=1.553\lambda_{\mathrm{JV}}^{63}\,,\qquad\lambda_{\mathrm{ max}}^{27}=2.26\lambda_{\mathrm{JV}}^{27}\,. \tag{53}\]
where \(\lambda_{\mathrm{JV}}\) is given by (51) and the superscript refers to the atomic number for different species of nuclei. We have checked that (52) is satisfied for all \(r_{\perp}\) and \(b_{\perp}\), where for the latter, we considered the domain for which the nuclear saturation scale is above the minimum bias saturation scale of the proton. We will thus consider \(\lambda\) up to \(\lambda_{\mathrm{max}}\). For orientation purposes, the lowest coupling we consider for nuclei will be given as \(\lambda=0.026\lambda_{\mathrm{JV}}\), where the proportionality factor \(0.026\) is fixed by the DMP vs. JV amplitude ratio for the proton target discussed above.
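The bound (52) and the extraction of \(\lambda_{\rm max}\) are straightforward to script. Since the left-hand side of (52) decreases monotonically with \(|\lambda|\), a bisection suffices; the sketch below assumes the grids hold the full angular-dependent distributions, and the interface is our own.

```python
import numpy as np

def bound_lhs(N, O):
    """Left-hand side of the group theory constraint (52); values >= 0
    mean the (N, O) pair is allowed. For small N this reduces to
    O^2 <= N^3 / 9."""
    return ((4.0 - 3.0 * N) * N**3
            - 6.0 * (6.0 - 6.0 * N + N**2) * O**2
            - 3.0 * O**4)

def lambda_max(N_grid, O_grid, lam_hi=10.0, tol=1e-4):
    """Bisection for the largest rescaling lambda such that lam * O_grid
    still satisfies (52) everywhere on the grid."""
    lam_lo = 0.0
    while lam_hi - lam_lo > tol:
        lam = 0.5 * (lam_lo + lam_hi)
        if np.all(bound_lhs(N_grid, lam * O_grid) >= 0.0):
            lam_lo = lam
        else:
            lam_hi = lam
    return lam_lo
```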
Finally, on Fig. 4 we show the results for the \(b_{\perp}\) dependence of \(\mathcal{O}_{1}(r_{\perp},b_{\perp})\) for the nuclear targets: Au (left), Cu (center) and Al (right), using the JV model. Evolving to smaller values of \(x\), the peak in the Odderon distribution drops in magnitude but also shifts to slightly larger \(b_{\perp}\). This will have an interesting consequence for the diffractive pattern of the cross section, as we explain in the following Section IV.
## IV Numerical results for the cross section
In this Section we show the results of the numerical computation of the photoproduction cross section for the exclusive processes \(\gamma^{*}p\to\eta_{c}p\), \(\gamma^{*}n\to\eta_{c}n\) and \(\gamma^{*}A\to\eta_{c}A\), where we consider the Au, Cu and Al nuclei. The numerical
Figure 4: The first Fourier moment \(O_{1}(r_{\perp},b_{\perp})\) of the Odderon distribution of the nuclei in the JV model as a function of \(b_{\perp}\) for different values of \(x\). Left plot is for the Au, center is for Cu and right is for Al nuclei.
computation of the cross section (25) is based on the amplitude for the Odderon contribution given by (24). To compute the Primakoff cross section we use the same Eq. (24) with the replacement \(\mathcal{O}_{2k+1}(r_{\perp},\Delta_{\perp})\to\Omega_{2k+1}(r_{\perp},\Delta_{ \perp})\), where \(\Omega_{2k+1}(r_{\perp},\Delta_{\perp})\) is given by (32). In all the computations considered, we restrict to the lowest \(k=0\) Fourier moment of the amplitude. We have explicitly checked that the contributions from the higher moments are strongly suppressed both in the case of the Odderon and the Primakoff contributions relative to the \(k=0\) case. For the Fourier transform in the impact parameter \(\mathbf{b}_{\perp}\) we used the Ogata quadrature method [66].
We first discuss the numerical results for exclusive \(\gamma^{*}p\to\eta_{c}p\) photoproduction. Fig. 5 shows the cross section as a function of \(|t|\) for several values of \(x\) and \(Q^{2}\). The computation is performed using the DMP model. The result shows a rather small \(|t|\)-slope of the cross section. This is a generic feature of the quark based approach, as the three gluons in the Odderon can couple to three different quarks, leaving the proton intact even at relatively large momentum transfer [7; 67]. The Primakoff cross section overwhelms the Odderon cross section at small \(|t|\), but this gets reversed for \(|t|\gtrsim 1.5\) GeV\({}^{2}\) thanks to the small \(|t|\)-slope of the Odderon cross section.

Figure 5: \(|t|\) dependence of the \(\gamma^{*}p\to\eta_{c}p\) cross section with the DMP model. The contribution from the Primakoff process is shown separately.

Figure 6: \(|t|\) dependence of the \(\gamma^{*}n\to\eta_{c}n\) cross section with the DMP model. The contribution from the Primakoff process is shown separately.
The small-\(x\) evolution reduces the Odderon cross section by roughly an order of magnitude when going from \(x\sim 10^{-2}\) to \(x\sim 10^{-4}\). However, it is still above the Primakoff background for \(|t|\gtrsim 2\)-3 GeV\({}^{2}\), with the \(|t|\)-slope remaining roughly the same. Our conclusion for proton targets is thus similar to that of [15], where the computation was performed at moderate \(x\sim 0.1\). The Odderon extraction from collisions on the proton target would thus require measurements of the cross section at potentially large momentum transfers even when \(x\) is small, \(x\lesssim 0.01\). For neutron targets the Primakoff cross section is only a very small contribution and the Odderon can be probed even at low \(|t|\) and/or low \(x\), see Fig. 6.
On Fig. 7 we show the numerical results for the \(\gamma^{*}A\to\eta_{c}A\) cross section for Au (left), Cu (center) and Al (right) targets. The Odderon coupling \(\lambda\) is set to the maximal value allowed by the group theory constraint (53). The Odderon (Primakoff) cross section becomes enhanced by the mass (atomic) number of the target. For example, using the maximal coupling allowed by the group theory constraint (\(\lambda=\lambda_{\rm max}\)), the Odderon cross section can reach up to about 10 nb/GeV\({}^{2}\) for Au. Taking instead \(\lambda=0.026\lambda_{\rm JV}\) (the factor 0.026 is determined by the DMP vs JV amplitude ratio) as the lowest estimate leads to \(\sim 5\) pb/GeV\({}^{2}\).
Both the Odderon and the Primakoff contributions show characteristic diffractive patterns that are mostly of a geometric origin. However, it is clearly visible that the diffractive pattern for the Odderon cross section is altered compared to the Primakoff case: the diffractive dips are shifted to smaller \(|t|\) even for the initial condition, and the shift becomes more pronounced as \(x\) gets smaller or \(|t|\) gets larger. To understand this result, notice that according to the leading twist estimates in (27) and (33) the Odderon and the Primakoff cross sections behave as \({\rm d}\sigma/{\rm d}|t|\propto|t|T_{A}^{2}(\sqrt{|t|})\) and \({\rm d}\sigma/{\rm d}|t|\propto T_{A}^{2}(\sqrt{|t|})/|t|\), respectively. We are led to the conclusion that the shift of the diffractive pattern when comparing the Odderon and the Primakoff cross section is a consequence of multiple scatterings in the Odderon amplitude. This finds additional support in the evolution to smaller \(x\) where, as a consequence of the growth of the saturation scale, multiple scattering effects become increasingly important, acting to further increase the shift.
Considering the total cross section, where the Odderon and the Primakoff contributions must be added coherently, the relative sign between the two amplitudes determines whether they interfere constructively or destructively. In our computation this is controlled by the sign of the Odderon coupling parameter \(\lambda\). Using the JV model the sign is negative, see (51). Thanks to the \({\rm d}T_{A}/{\rm d}b_{\perp}\) term, this gives a positive \({\cal O}_{1}(r_{\perp},b_{\perp})\) overall, see Fig. 4. For comparison, the DMP model computation for proton targets [31] also yields a positive \({\cal O}_{1}(r_{\perp},b_{\perp})\), see Fig. 3. While a positive \({\cal O}_{1}(r_{\perp},b_{\perp})\) seems to be preferred by model computations, on Fig. 8 we compute the total cross section considering both signs of \({\cal O}_{1}(r_{\perp},b_{\perp})\) (or, equivalently, \(\lambda\)). For \({\cal O}_{1}(r_{\perp},b_{\perp})>0\) (\(\lambda<0\)) the results are given on the left panel of Fig. 8. In this case the interference of the Odderon and Primakoff amplitudes is mostly constructive. Our result demonstrates that the multiple scattering effect in the Odderon amplitude, which shifts the diffractive pattern relative to the Primakoff component, can leave its trace also in the total cross section depending on the magnitude of the Odderon. On the right panel of Fig. 8, the opposite case of \(\mathcal{O}_{1}(r_{\perp},b_{\perp})<0\) (\(\lambda>0\)) is displayed. The two amplitudes are now out of phase and interfere destructively, resulting in a severe distortion of the diffractive pattern in the total cross section in comparison to the Primakoff contribution only. We conclude that in both cases the known Primakoff diffractive dips could be filled in the total cross section. This could be used as a signal of the Odderon from exclusive \(\eta_{c}\) production off nuclear targets. Considering different nuclear species could be a valuable tool in verifying this suggestion.

Figure 7: The \(\gamma^{*}A\to\eta_{c}A\) cross section for three different targets: Au (left), Cu (center) and Al (right). The Odderon coupling is fixed to the maximal value allowed by the group theory constraint (53).
## V Conclusion
In this work we have computed exclusive \(\eta_{c}\) production in \(ep\) and \(eA\) collisions as a potential probe of the Odderon. Our computation relies on the CGC formalism, where the effect of multiple scatterings off a dense target at small-\(x\) is taken into account explicitly. We have numerically solved the BK evolution equation in impact parameter \(b_{\perp}\) and dipole size \(r_{\perp}\) for the coupled Pomeron-Odderon system. The numerical results demonstrate a rapid drop of the Odderon under evolution, in line with the results in the literature [25; 28; 29].
Due to a large Primakoff background we find that isolating the Odderon component of the cross section for the proton target requires relatively large momentum transfers: \(|t|\gtrsim 1.5\)-\(3\) GeV\({}^{2}\) for \(x\sim 10^{-2}-10^{-4}\). On a qualitative level this is rather similar to the conclusions drawn in the previous works [7; 8; 9; 10; 15]. A new result is that the \(|t|\)-slope is not altered by small-\(x\) evolution, although the cross section does decrease in magnitude. Exclusive scattering off a neutron leads to a negligible Primakoff component and represents a new opportunity to probe the Odderon at low \(|t|\). In practice this could be done using deuteron or He\({}^{3}\) targets with spectator proton tagging in the near forward direction, see for example [68; 69].
For the nuclear targets we have found that the saturation effects in the Odderon distribution distort the diffractive pattern in comparison to the Primakoff process. The effect is a few percent in magnitude and accumulates for smaller \(x\) and/or larger momentum transfers. Depending on the coupling of the Odderon, it is possible that the diffractive dips of the Primakoff process get filled by the Odderon component of the cross section. Such a distortion of the diffractive pattern in comparison to the known nuclear charge form factors might be a new way to measure the Odderon component in the nuclear wave function.
As our final remark, we wish to clearly state that the actual experimental measurement of the Odderon component
Figure 8: The \(\gamma^{*}\mathrm{Au}\to\eta_{c}\mathrm{Au}\) cross section for three considered values of the odderon coupling up to the maximal value allowed by the group theory constraint (52). On the left (right) panel the sign of the Odderon coupling parameter is chosen as \(\lambda<0\) (\(\lambda>0\)). The purple curves stand for the total cross section, with individual line styles representing different values of \(\lambda\).
of the exclusive \(\eta_{c}\) cross section is certainly challenging. Firstly, the Odderon itself is small, and so the cross section with proton (or neutron) targets tends to be low (\(\sim 10^{2}\) fb/GeV\({}^{2}\)). This could be circumvented by considering nuclear targets instead, as the Odderon cross section is enhanced roughly as \(\sim A^{2}\). With the maximal Odderon coupling allowed by the group theory constraint the cross section can be in the range of nb/GeV\({}^{2}\). Secondly, the branching ratio for \(\eta_{c}\) to charged hadrons is only a few percent [70], with a serious background from the feed-down of \(J/\psi\) subsequently decaying as \(J/\psi\to\eta_{c}\gamma\) with the \(\gamma\) undetected [71; 7; 14]. Nevertheless, \(\eta_{c}\) has been measured through its hadronic channel in \(e^{+}e^{-}\) by BABAR [72], and so such difficulties might be overcome at the EIC as well. Measuring at least the Primakoff component seems to be a feasible starting point [16]. In any case, we consider the conclusions drawn from our results to be rather generic, and expect them to hold also for other quarkonium states or light mesons.
###### Acknowledgements.
S. B., A. K. and E. A. V. are supported by the Croatian Science Foundation (HRZZ) no. 5332 (UIP-2019-04). S. B. thanks Adrian Dumitru and Leszek Motyka for stimulating discussions. A. K would like to thank Brookhaven National Lab, where part of this work was performed, for their warm hospitality.
## Appendix A Initial condition for the Odderon
In Ref. [32] (see also [73]) Jeon and Venugopalan used a model functional with quadratic and cubic interactions (45) in order to find the Odderon operator. The parameters \(\mu^{2}\) and \(\kappa\) (46) were treated as constants. In order to include the impact parameter dependence we assume that \(\mu^{2}\) and \(\kappa\) have a transverse profile \(T_{A}(\mathbf{x}_{\perp})\) with \(\int_{\mathbf{x}_{\perp}}T_{A}(\mathbf{x}_{\perp})=1\), such that the average couplings are given by (46). In this case we are led to a straightforward generalization of (34) in [32], given by (47). We note in passing that in the Gaussian approximation the Pomeron \(\mathcal{N}(\mathbf{x}_{\perp},\mathbf{y}_{\perp})\) takes the form
\[\mathcal{N}(\mathbf{x}_{\perp},\mathbf{y}_{\perp})=1-\exp\left[-\frac{g^{2}C_{F}\mu^{ 2}}{2}\Gamma(\mathbf{x}_{\perp},\mathbf{y}_{\perp})\right]\,. \tag{100}\]
Inserting (46) into (47), with \(T_{A}(\mathbf{z}_{\perp})\to\frac{1}{\pi R_{A}^{2}}\), formally recovers (34) in [32]. The 2D Green function \(G(\mathbf{x}_{\perp}-\mathbf{y}_{\perp})\) in (47) and (49) is explicitly given as
\[G(\mathbf{x}_{\perp}-\mathbf{y}_{\perp})=\int_{\mathbf{k}_{\perp}}\frac{\mathrm{e}^{- \mathrm{i}\mathbf{k}_{\perp}\cdot(\mathbf{x}_{\perp}-\mathbf{y}_{\perp})}}{\mathbf{k}_{\perp} ^{2}+m^{2}}\,, \tag{101}\]
with \(m\) an IR cutoff. Eq. (47) is the starting point for deriving the initial condition for the Odderon. Its derivation essentially rests on the assumption that the cubic (\(\rho^{3}\)) term in (45) is parametrically suppressed as \(A^{-1/6}\)[32; 74] for a large nucleus, as compared to the quadratic (\(\rho^{2}\)) term [32], and so (47) is obtained by expanding to first order in \(\rho^{3}/\kappa\) while resumming \(\rho^{2}/\mu^{2}\) to all orders. In the following we compute \(\Gamma(\mathbf{x}_{\perp},\mathbf{y}_{\perp})\) and \(\Theta(\mathbf{x}_{\perp},\mathbf{y}_{\perp})\). Going to momentum space we have
\[\begin{split}\Gamma(\mathbf{x}_{\perp},\mathbf{y}_{\perp})& =(\pi R_{A}^{2})\int_{\mathbf{p}_{\perp}\mathbf{k}_{\perp}}T_{A}(\mathbf{p} _{\perp})\mathrm{e}^{-\mathrm{i}\mathbf{p}_{\perp}\cdot\mathbf{b}_{\perp}}\frac{1}{\bm {k}_{\perp}^{2}+m^{2}}\frac{1}{(\mathbf{k}_{\perp}-\mathbf{p}_{\perp})^{2}+m^{2}}\\ &\times\left[\mathrm{e}^{-\mathrm{i}\mathbf{k}_{\perp}\cdot\frac{\mathbf{ r}_{\perp}}{2}}-\mathrm{e}^{\mathrm{i}\mathbf{k}_{\perp}\cdot\frac{\mathbf{r}_{\perp}}{2}} \right]\left[\mathrm{e}^{-\mathrm{i}(\mathbf{p}_{\perp}-\mathbf{k}_{\perp})\cdot\frac{ \mathbf{r}_{\perp}}{2}}-\mathrm{e}^{\mathrm{i}(\mathbf{p}_{\perp}-\mathbf{k}_{\perp}) \cdot\frac{\mathbf{r}_{\perp}}{2}}\right]\,,\end{split} \tag{102}\]
where we used \(\mathbf{r}_{\perp}=\mathbf{x}_{\perp}-\mathbf{y}_{\perp}\) and \(\mathbf{b}_{\perp}=(\mathbf{x}_{\perp}+\mathbf{y}_{\perp})/2\). Assuming \(p_{\perp}\) is small (\(b_{\perp}\) is large) and expanding the phase around small \(\mathbf{r}_{\perp}\) we have
\[\begin{split}\Gamma(\mathbf{x}_{\perp},\mathbf{y}_{\perp})&\simeq(\pi R_{A}^{2})T_{A}(\mathbf{b}_{\perp})\int_{\mathbf{k}_{\perp}}\frac{(\mathbf{k}_{\perp}\cdot\mathbf{r}_{\perp})^{2}}{(\mathbf{k}_{\perp}^{2}+m^{2})^{2}}\simeq(\pi R_{A}^{2})\frac{T_{A}(\mathbf{b}_{\perp})}{4\pi}\mathbf{r}_{\perp}^{2}\int_{0}^{\Lambda}k_{\perp}\mathrm{d}k_{\perp}\frac{\mathbf{k}_{\perp}^{2}}{(\mathbf{k}_{\perp}^{2}+m^{2})^{2}}\\ &\simeq(\pi R_{A}^{2})\frac{T_{A}(\mathbf{b}_{\perp})}{4\pi}\mathbf{r}_{\perp}^{2}\log\left(\frac{1}{r_{\perp}m}+\mathrm{e}\right)\,,\end{split} \tag{103}\]
where in the last equality we extracted the leading log and the UV cutoff is placed on the \(k_{\perp}\) integral as \(\Lambda\propto 1/r_{\perp}\). Using (103) in the argument of the exponential in (100), the result coincides with [30] with the conventional definition
\[Q_{S}^{2}\equiv\frac{C_{F}g^{2}\mu^{2}}{2\pi}\,. \tag{104}\]
For \(\Theta(\mathbf{x}_{\perp},\mathbf{y}_{\perp})\) we similarly have
\[\begin{split}\Theta(\mathbf{x}_{\perp},\mathbf{y}_{\perp})& \simeq(\pi R_{A}^{2})\mathrm{i}\int_{\mathbf{p}_{\perp}\mathbf{k}_{\perp}\mathbf{k}_{ \perp}^{\prime}}T(\mathbf{p}_{\perp})\mathrm{e}^{-\mathrm{i}\mathbf{p}_{\perp}\cdot\bm {b}_{\perp}}(\mathbf{k}_{\perp}\cdot\mathbf{r}_{\perp})(\mathbf{k}_{\perp}^{\prime}\cdot \mathbf{r}_{\perp})(\mathbf{p}_{\perp}-\mathbf{k}_{\perp}-\mathbf{k}_{\perp}^{\prime})\cdot \mathbf{r}_{\perp}\frac{1}{\mathbf{k}_{\perp}^{2}+m^{2}}\\ &\frac{1}{\mathbf{k}_{\perp}^{\prime 2}+m^{2}}\frac{1}{(\mathbf{p}_{ \perp}-\mathbf{k}_{\perp}-\mathbf{k}_{\perp}^{\prime})^{2}+m^{2}}\,,\end{split} \tag{10}\]
where we already expanded for \(\mathbf{r}_{\perp}\to 0\). Assuming also small \(\mathbf{p}_{\perp}\) we have
\[\begin{split}\frac{(\mathbf{p}_{\perp}-\mathbf{k}_{\perp}-\mathbf{k}_{\perp}^ {\prime})\cdot\mathbf{r}_{\perp}}{(\mathbf{p}_{\perp}-\mathbf{k}_{\perp}-\mathbf{k}_{\perp}^{ \prime})^{2}+m^{2}}&\simeq\frac{(\mathbf{p}_{\perp}\cdot\mathbf{r}_{ \perp})}{(\mathbf{k}_{\perp}+\mathbf{k}_{\perp}^{\prime})^{2}+m^{2}}-\frac{2\left((\bm {k}_{\perp}+\mathbf{k}_{\perp}^{\prime})\cdot\mathbf{r}_{\perp}\right)\left((\mathbf{k}_{ \perp}+\mathbf{k}_{\perp}^{\prime})\cdot\mathbf{p}_{\perp}\right)}{\left[(\mathbf{k}_{ \perp}+\mathbf{k}_{\perp}^{\prime})^{2}+m^{2}\right]^{2}}\,.\end{split} \tag{11}\]
The zeroth order term above vanishes by rotation invariance. Using the second term we perform the angular integrals
\[\begin{split}&\int_{0}^{2\pi}\frac{\mathrm{d}\phi}{2\pi}\int_{0}^{2\pi}\frac{\mathrm{d}\phi^{\prime}}{2\pi}\frac{(\mathbf{k}_{\perp}\cdot\mathbf{r}_{\perp})(\mathbf{k}_{\perp}^{\prime}\cdot\mathbf{r}_{\perp})(\mathbf{p}_{\perp}-\mathbf{k}_{\perp}-\mathbf{k}_{\perp}^{\prime})\cdot\mathbf{r}_{\perp}}{(\mathbf{p}_{\perp}-\mathbf{k}_{\perp}-\mathbf{k}_{\perp}^{\prime})^{2}+m^{2}}\\ &\simeq-\frac{3}{2}(\mathbf{p}_{\perp}\cdot\mathbf{r}_{\perp})\mathbf{r}_{\perp}^{2}\frac{\mathbf{k}_{\perp}^{2}\mathbf{k}_{\perp}^{\prime 2}m^{2}}{\left[(\mathbf{k}_{\perp}^{2}+\mathbf{k}_{\perp}^{\prime 2}+m^{2})^{2}-4\mathbf{k}_{\perp}^{2}\mathbf{k}_{\perp}^{\prime 2}\right]^{3/2}}\equiv(\mathbf{p}_{\perp}\cdot\mathbf{r}_{\perp})\mathbf{r}_{\perp}^{2}\mathcal{J}(k_{\perp},k_{\perp}^{\prime})\,.\end{split} \tag{12}\]
Integrating further over \(k_{\perp}^{\prime}\) leads to
\[\begin{split}\frac{1}{2\pi}\int_{0}^{\infty}&\frac{ \mathcal{J}(k_{\perp},k_{\perp}^{\prime})k_{\perp}^{\prime}\mathrm{d}k_{ \perp}^{\prime}}{\mathbf{k}_{\perp}^{\prime 2}+m^{2}}\\ &=-\frac{3}{16\pi}\frac{\mathbf{k}_{\perp}^{2}+2m^{2}}{\mathbf{k}_{\perp}^ {2}+4m^{2}}-\frac{3}{16\pi}\frac{m^{4}}{k_{\perp}(\mathbf{k}_{\perp}^{2}+4m^{2})^{ 3/2}}\log\left[\frac{(\mathbf{k}_{\perp}^{2}+2m^{2})\left(\mathbf{k}_{\perp}^{2}+2m^{ 2}-k_{\perp}\sqrt{\mathbf{k}_{\perp}^{2}+4m^{2}}\right)-2m^{2}}{\left(\mathbf{k}_{ \perp}^{2}+2m^{2}\right)\left(\mathbf{k}_{\perp}^{2}+2m^{2}+k_{\perp}\sqrt{\mathbf{k}_ {\perp}^{2}+4m^{2}}\right)-2m^{2}}\right]\,.\end{split} \tag{13}\]
For the final integration over \(k_{\perp}\) we are only interested in extracting the leading log. We can drop the second term in (13) as it vanishes in the limit \(m\to 0\). Focusing on the first term, we eventually find
\[\frac{1}{2\pi}\int_{0}^{\Lambda}\frac{k_{\perp}\mathrm{d}k_{\perp}}{\mathbf{k}_{ \perp}^{2}+m^{2}}\int_{0}^{\infty}\frac{\mathcal{J}(k_{\perp},k_{\perp}^{ \prime})k_{\perp}^{\prime}\mathrm{d}k_{\perp}^{\prime}}{\mathbf{k}_{\perp}^{\prime 2}+m^{2}} \simeq-\frac{3}{32\pi^{2}}\int_{0}^{\Lambda}\frac{k_{\perp}\mathrm{d}k_{ \perp}}{\mathbf{k}_{\perp}^{2}+m^{2}}\frac{\mathbf{k}_{\perp}^{2}+2m^{2}}{\mathbf{k}_{ \perp}^{2}+4m^{2}}\simeq-\frac{3}{32\pi^{2}}\log\left(\frac{1}{r_{\perp}m}+ \mathrm{e}\right)\,. \tag{14}\]
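As a quick numerical sanity check of the leading-log extraction in (14), the \(k_{\perp}\) integral (dropping the \(-3/32\pi^{2}\) prefactor) can be compared against \(\log(\Lambda/m)\). A minimal Python sketch, with purely illustrative values for the IR mass and UV cutoff:

```python
import numpy as np
from scipy.integrate import quad

def integrand(k, m):
    # k (k^2 + 2 m^2) / [(k^2 + m^2)(k^2 + 4 m^2)], cf. the first term kept in (14)
    return k * (k**2 + 2*m**2) / ((k**2 + m**2) * (k**2 + 4*m**2))

m, Lam = 0.2, 1.0e3          # illustrative IR cutoff and UV cutoff (Lam ~ 1/r_perp)
val, _ = quad(integrand, 0.0, Lam, args=(m,))
print(val, np.log(Lam / m))  # the two agree up to non-logarithmic terms
```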
In total we have
\[\Theta(\mathbf{x}_{\perp},\mathbf{y}_{\perp})=r_{\perp}^{3}(\hat{\mathbf{r}}_{\perp}\cdot \hat{\mathbf{b}}_{\perp})(\pi R_{A}^{2})\frac{\mathrm{d}T_{A}(\mathbf{b}_{\perp})}{ \mathrm{d}b_{\perp}}\frac{3}{32\pi^{2}}\log\left(\frac{1}{r_{\perp}m}+ \mathrm{e}\right)\,. \tag{15}\]
Using (46) the prefactor in (47) is
\[-g^{3}C_{3F}\frac{\mu^{6}}{\kappa}=-\frac{\pi^{2}}{4}\frac{1}{\alpha_{S}^{3}} \frac{N_{c}^{2}-4}{(N_{c}^{2}-1)^{2}}\frac{R_{A}^{4}}{A^{2}}Q_{S}^{6}\,. \tag{16}\]
Combining everything leads to
\[\begin{split}\mathcal{O}(\mathbf{r}_{\perp},\mathbf{b}_{\perp})& =-\frac{3}{128}\frac{N_{c}^{2}-4}{(N_{c}^{2}-1)^{2}}\frac{Q_{S}^{3}R_ {A}^{3}}{\alpha_{S}^{3}A^{2}}\left(R_{A}\frac{\mathrm{d}T_{A}(\mathbf{b}_{\perp})}{ \mathrm{d}b_{\perp}}(\pi R_{A}^{2})\right)(Q_{S}^{3}r_{\perp}^{3})(\hat{\mathbf{r} }_{\perp}\cdot\hat{\mathbf{b}}_{\perp})\log\left(\frac{1}{r_{\perp}m}+\mathrm{e}\right) \\ &\times\exp\left[-\frac{1}{4}Q_{S}^{2}\mathbf{r}_{\perp}^{2}(\pi R_{A }^{2})T_{A}(\mathbf{b}_{\perp})\log\left(\frac{1}{r_{\perp}m}+\mathrm{e}\right) \right]\,.\end{split} \tag{17}\]
A rather similar expression, which also involves the derivative of the transverse profile function, was found in [75]; see also [76]. This expression is usually given in terms of a single transverse coordinate integral that can be solved [77] to obtain the \(\mathcal{O}(\mathbf{r}_{\perp},\mathbf{b}_{\perp})\sim r_{\perp}^{3}\) behavior.
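For orientation, Eq. (17) is straightforward to evaluate numerically once a transverse profile is specified. The sketch below is a purely illustrative transcription that assumes a Gaussian \(T_{A}(\mathbf{b}_{\perp})\) normalized to unity and toy parameter values; none of these numerical choices come from the text above.

```python
import numpy as np

Nc, alpha_s, A_nuc = 3, 0.3, 197          # toy values for N_c, alpha_S, A
QS, RA, m = 1.0, 6.0, 0.2                 # saturation scale, radius, IR cutoff (toy units)

T_A  = lambda b: np.exp(-b**2 / (2*RA**2)) / (2*np.pi*RA**2)  # normalized Gaussian profile
dT_A = lambda b: -b / RA**2 * T_A(b)                          # dT_A/db

def odderon(r, b, cos_rb=1.0):
    """Illustrative evaluation of O(r_perp, b_perp), Eq. (17)."""
    L = np.log(1.0/(r*m) + np.e)
    pref = -3/128 * (Nc**2 - 4)/(Nc**2 - 1)**2 * (QS*RA)**3 / (alpha_s**3 * A_nuc**2)
    geom = RA * dT_A(b) * (np.pi*RA**2) * (QS*r)**3 * cos_rb * L
    return pref * geom * np.exp(-0.25 * QS**2 * r**2 * (np.pi*RA**2) * T_A(b) * L)

print(odderon(r=0.5, b=6.0))   # ~r^3 growth at small r, exponential damping at large Q_S r
```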
|
2305.09627 | Addressing computational challenges in physical system simulations with
machine learning | In this paper, we present a machine learning-based data generator framework
tailored to aid researchers who utilize simulations to examine various physical
systems or processes. High computational costs and the resulting limited data
often pose significant challenges to gaining insights into these systems or
processes. Our approach involves a two-step process: initially, we train a
supervised predictive model using a limited simulated dataset to predict
simulation outcomes. Subsequently, a reinforcement learning agent is trained to
generate accurate, simulation-like data by leveraging the supervised model.
With this framework, researchers can generate more accurate data and know the
outcomes without running high computational simulations, which enables them to
explore the parameter space more efficiently and gain deeper insights into
physical systems or processes. We demonstrate the effectiveness of the proposed
framework by applying it to two case studies, one focusing on earthquake
rupture physics and the other on new material development. | Sabber Ahamed, Md Mesbah Uddin | 2023-05-16T17:31:50Z | http://arxiv.org/abs/2305.09627v1 | # Addressing computational challenges in physical system simulations with machine learning
###### Abstract
In this paper, we present a machine learning-based data generator framework tailored to aid researchers who utilize simulations to examine various physical systems or processes. High computational costs and the resulting limited data often pose significant challenges to gaining insights into these systems or processes. Our approach involves a two-step process: initially, we train a supervised predictive model using a limited simulated dataset to predict simulation outcomes. Subsequently, a reinforcement learning agent is trained to generate accurate, simulation-like data by leveraging the supervised model. With this framework, researchers can generate more accurate data and know the outcomes without running computationally expensive simulations, which enables them to explore the parameter space more efficiently and gain deeper insights into physical systems or processes. In this paper, we demonstrate the effectiveness of the proposed framework by applying it to two case studies: one focusing on earthquake rupture physics and the other on new material development.
## 1 Introduction
Understanding physical systems or processes often involves the use of computational simulations, a method that can be both time-consuming and computationally expensive. For instance, simulating weather and climate dynamics [1], modeling complex biological systems like protein folding [2], or understanding the earthquake rupture process [3] or seismic activity [4, 5] requires significant computational resources. These constraints can severely limit the amount of data available for research and impact our ability to gain deep insights into these systems or processes.
In response to these challenges, we propose a novel machine learning-based generator framework designed to enhance the efficiency of data generation in studies that rely on simulations. This framework operates in a two-step approach. Initially, a supervised predictive model is trained using a small-scale simulated dataset with varying input parameters. This predictive model, functioning as a surrogate for the original physical simulations, is then used to train a reinforcement learning (RL) agent [6]. The RL agent, guided by the feedback from the predictive model, learns to generate more accurate, simulation-like data.
The key advantage of this framework is that it facilitates the generation of larger quantities of data without the need for further computationally expensive simulations. This allows researchers to explore the parameter space more efficiently, thereby gaining deeper insights into the physical systems or processes under study. Furthermore, by employing a reinforcement learning approach, our framework can continually adapt and improve over time, offering the potential for even greater accuracy and efficiency in future data generation.
In this paper, we demonstrate the effectiveness of the proposed framework by applying it to two case studies in material science and geodynamics, thereby highlighting its versatility and broad applicability across various domains of physical sciences.
## 2 Generator Framework
The proposed machine learning framework consists of two main components: (1) a supervised predictive model and (2) a reinforcement learning agent. These interdependent components work together to generate simulation-like data and predict the outcomes of simulations.
### 2.1 Overview
First, we generate a simulated dataset with varying input parameters (\(X_{s}\)). We then train a supervised predictive model, \(f(X_{s})\), to predict the simulation outcomes (\(y_{s}\)). In the second step, we train a reinforcement learning agent with policy \(\pi_{\theta}(a|s)\) using the supervised model to produce more accurate, simulation-like data, where \(a\) represents the action (generated data \(X_{g}\)) and \(s\) is the random input parameter (\(s\sim\mathcal{N}\)) for the agent. \(\theta\) in the policy \(\pi_{\theta}(a|s)\) represents the parameters of the policy.
The agent's objective is to learn a policy that maximizes the outcome predicted by the supervised model. The reinforcement learning agent achieves this by adjusting the policy parameters \(\theta\) to increase the probability of actions that lead to higher predicted outcomes [6]. In the following subsections, we describe each component, along with the underlying models and methods.
### 2.2 Supervised Model: Predicting the Outcome of Simulations
Our framework's core is the supervised predictive model that predicts simulation outcomes (\(y_{s}\)) based on given input parameters (\(X_{s}\)). It functions as a map, approximating the genuine relationship between the inputs and outcomes, and can be applied to both classification and regression tasks.
The model's performance significantly influences the subsequent reinforcement learning agent's effectiveness. A well-performing model is vital for generating simulation-like data, as the agent bases its data generation process on the model's predictions. If the model is inaccurate, it could misdirect the agent, leading to less realistic data. Therefore, a robust and accurate model is crucial in our framework, impacting the quality of the generated data and the framework's overall efficiency.

Figure 1: The schematic diagram shows how the framework works
### 2.3 RL Agent: Generating Simulation-like Data
In our framework, the reinforcement learning agent learns how to generate data from random inputs (Figure-1). This process is not arbitrary; it is guided by the supervised model, which ensures the generated parameters are contextually relevant. Through iterative learning and exploration, the agent develops the ability to manipulate these random inputs in a way that maximizes or minimizes the outcome predicted by the supervised model. This learning process enables the agent to produce data that closely aligns with the complex dynamics of the physical system or process under study.
#### 2.3.1 Reward Function
The reward function, typically denoted as \(R(s,a)\), is a vital component of the reinforcement learning (RL) process [7, 8, 9]. Here, \(s\) represents the current state and \(a\) represents the action taken. In our context, the reward function is designed to incentivize the RL agent to generate high-quality input parameter combinations that lead to desirable simulation outcomes.
The design of the reward function can vary depending on the specific objectives of the simulation [7]. For instance, if the aim is to minimize the outcome predicted by the supervised model, denoted as \(f(X_{s})\), the reward function can be structured to give higher rewards for actions resulting in lower predicted outcomes. Conversely, if the aim is to maximize the outcome, the reward function can be designed to provide greater rewards for actions that result in higher predicted outcomes.
This flexibility in defining the reward function allows our RL agent to adapt to various objectives, making our framework versatile and capable of generating high-quality data for a broad range of tasks [6, 10, 11, 12].
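To make this concrete, a minimal reward function along these lines might look as follows. Here `predict_outcome` stands for the trained supervised model \(f(X_{s})\), and the bounds, penalty magnitudes, and positivity check are hypothetical choices rather than values from the case studies:

```python
import numpy as np

def reward(params, predict_outcome, maximize=True):
    """R(s, a): score a generated parameter vector with the supervised surrogate."""
    if np.any(params[-2:] <= 0):        # e.g. width/height must stay positive
        return -10.0                    # strongly penalize implausible parameters
    y = predict_outcome(params)         # surrogate prediction f(X_s)
    if not 0.0 <= y <= 1.0:             # outside the valid outcome range
        return -1.0
    return float(y) if maximize else 1.0 - float(y)
```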
#### 2.3.2 Policy Learning for Data Generation
In our proposed framework, the reinforcement learning (RL) agent learns a policy, denoted as \(\pi_{\theta}(a|s)\), that determines the likelihood of choosing a particular action \(a\) (or set of input parameters), given the current state \(s\). Here, \(\theta\) denotes the parameters of the policy. The goal of the RL agent is to learn an optimal policy, denoted as \(\pi_{\theta}^{*}(a|s)\), that maximizes the expected cumulative reward, thereby leading to the generation of accurate, simulation-like data.
The learning process is guided by the Bellman equation for policy iteration [6]:
\[V^{\pi}(s)=\sum_{a}\pi_{\theta}(a|s)\left[R(s,a)+\gamma\sum_{s^{\prime}}P(s^{ \prime}|s,a)V^{\pi}(s^{\prime})\right] \tag{1}\]
In this equation, \(V^{\pi}(s)\) represents the value of state \(s\) under policy \(\pi\), \(R(s,a)\) is the reward obtained by performing action \(a\) in state \(s\), \(P(s^{\prime}|s,a)\) denotes the probability of transitioning to state \(s^{\prime}\) from state \(s\) after taking action \(a\), and \(\gamma\) is the discount factor, which determines the present value of future rewards.
Through iterative updates of the value function and the policy, guided by the Bellman equation, the RL agent gradually learns the optimal policy. This learned policy can generate the best actions for each state, leading to the generation of high-quality, simulation-like data. For a more detailed discussion of the reinforcement learning process, please refer to [6].
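For concreteness, the policy-evaluation step implied by Eq. (1) can be written out for a small discrete MDP. The transition tensor `P`, reward matrix `R`, and policy `pi` below are toy placeholders, not quantities from the case studies:

```python
import numpy as np

def policy_value(pi, R, P, gamma=0.95, tol=1e-8):
    """Iterate V(s) = sum_a pi(a|s) [R(s,a) + gamma sum_s' P(s'|s,a) V(s')]."""
    n_states, _ = R.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * np.einsum('sap,p->sa', P, V)  # P has shape (s, a, s')
        V_new = np.sum(pi * Q, axis=1)                # average over pi(a|s)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
```

Alternating this evaluation with a greedy policy update with respect to `Q` yields standard policy iteration.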
## 3 Case Studies
In this section, we demonstrate the application and effectiveness of our framework through two case studies. The first case study focuses on the field of earthquake rupture physics, a complex and data-scarce area that benefits significantly from our approach. The second focuses on the field of material science, where our framework can be used to optimize processing conditions for developing high-performance materials.
### 3.1 Earthquake Rupture Physics
Simulating dynamic earthquake rupture propagation poses significant challenges due to uncertainties in fault slip physics, stress conditions, and frictional properties. Numerical simulations, while essential in understanding rupture physics, are highly dependent on initial parameters, which are difficult to optimize given the vast parameter space. As a result, researchers often resort to simplifying assumptions or trial-and-error methods to generate simulations, which can overlook complexities and are computationally expensive [13, 14, 15, 16].
#### 3.1.1 Rupture simulations and data processing
In this example, we used 2000 simulated earthquake ruptures created by [17] to predict whether an earthquake can break through a fault with geometric heterogeneity (Figure-2). The domain is 32 km long and 24 km wide. An open-source C++ and Python-based library, fdfault [18], was used to generate the ruptures.
In each simulation, eight parameters were varied: x and y components of normal stress (\(\sigma_{xx}\) and \(\sigma_{yy}\)), shear stress (\(\sigma_{xy}\)), dynamic friction coefficient, friction drop (\(\mu_{s}-\mu_{d}\)), critical slip distance (\(d_{c}\)), and width and height of the geometric heterogeneity at the center.
Figure 2: A zoomed view of the two-dimensional fault geometry (not to scale). The domain is 32 km long along the strike of the fault and 24 km wide across the fault. The rupture starts to nucleate 10 km to the left of the barrier and propagates from the hypocenter towards the barrier.
#### 3.1.2 Supervised Model to Predict Earthquake Rupture
We used LightGBM [19], a gradient boosting decision tree-based algorithm, to train a supervised machine learning model to predict earthquake rupture outcomes. This training was conducted on a dataset of 1600 simulation instances, with an additional 400 simulations used for model validation. The target variable for prediction was the binary rupture outcome, coded as '1' when the rupture successfully breaks through the barrier and '0' when it does not.
To enhance the model's predictive capabilities, we created additional features. These included the ratio of width to height, the difference in normal stresses (\(\sigma_{xx}-\sigma_{yy}\)), and the friction product (\(\mu_{d}-sdrop\)), among others. These features were derived from the original parameters to capture more complex relationships in the data.
The model was then trained using the expanded feature set and evaluated on the test data. The model performed quite well, achieving a ROC-AUC score of 0.8991 and a macro F1 score of 0.8266. The confusion matrix showed a good balance between sensitivity and specificity, with 100 true positives, 239 true negatives, 33 false positives, and 28 false negatives.
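A minimal sketch of this training and evaluation step, assuming the 2000 simulations are stored in a table with the column names used below (the file name, column names, and hyperparameters are illustrative, not taken from the original study):

```python
import lightgbm as lgb
import pandas as pd
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("ruptures.csv")                   # hypothetical table of 2000 runs
df["aspect"] = df["width"] / df["height"]          # engineered features, as in the text
df["stress_diff"] = df["sxx"] - df["syy"]

features = ["sxx", "syy", "sxy", "mu_d", "sdrop", "dc",
            "width", "height", "aspect", "stress_diff"]
X_tr, X_te, y_tr, y_te = train_test_split(
    df[features], df["breaks_barrier"], test_size=0.2, random_state=0)

clf = lgb.LGBMClassifier(n_estimators=500, learning_rate=0.05)
clf.fit(X_tr, y_tr)
print("ROC-AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```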
#### 3.1.3 Reinforcement Learning to Generate Rupture Parameters
We used Stable-Baselines3 [20], a reinforcement learning library, to train an RL agent. The agent's goal was to create rupture parameters that would produce a rupture outcome mirroring that of the training data. We trained the RL agent using the Proximal Policy Optimization (PPO) algorithm [21], a commonly preferred method due to its effective balance between sample complexity and computational demand.
The reward function is crucial in this setup. We used the supervised model to guide the RL agent by providing feedback on the quality of the generated parameters. The generated parameters are used to predict an earthquake rupture outcome with the supervised model. If the predicted outcome is out of the valid range, or if the generated parameters are physically implausible (for example, if the height or width is negative), a negative reward is given. Otherwise, a positive reward is granted, encouraging the RL agent to generate similar parameter combinations in the future. This mechanism ensures the RL agent learns to produce plausible and high-quality data over time, enhancing the effectiveness of our earthquake rupture prediction framework.
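A condensed sketch of such a setup using the Gymnasium API that Stable-Baselines3 expects. The one-step episode structure, bounds, and plausibility check are simplifications for illustration, and `surrogate` is a stand-in for the trained LightGBM model:

```python
import gymnasium as gym
import numpy as np
from stable_baselines3 import PPO

class RuptureEnv(gym.Env):
    """One-step environment: the action is a candidate 8-parameter vector."""
    def __init__(self, surrogate):
        self.surrogate = surrogate
        self.observation_space = gym.spaces.Box(-5, 5, shape=(8,), dtype=np.float32)
        self.action_space = gym.spaces.Box(-5, 5, shape=(8,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.state = self.np_random.standard_normal(8).astype(np.float32)
        return self.state, {}

    def step(self, action):
        y = self.surrogate(action)                               # predicted rupture outcome
        plausible = (0.0 <= y <= 1.0) and action[-2:].min() > 0  # e.g. width/height > 0
        reward = float(y) if plausible else -1.0
        return self.state, reward, True, False, {}               # episode ends after one step

model = PPO("MlpPolicy", RuptureEnv(surrogate=lambda a: 0.5), verbose=0)
model.learn(total_timesteps=50_000)
```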
#### 3.1.4 Results and Insights
We used our reinforcement learning (RL) agent to generate 5000 data points, from which we selected those that fall within the prediction range of the supervised model. As illustrated in Figure-3, the data produced by the RL agent spans a broader and more nuanced spectrum of values between 0 and 1, in contrast to the training data, which comprises only 0s and 1s. This wider range of outcomes provides more detailed insight into the complex rupture process and enhances the predictive model's performance by offering varied and comprehensive training data. Thus, the RL agent's role in data generation is pivotal to improving the overall understanding and prediction of earthquake ruptures.
Figure-4 presents a 2D scatter plot demonstrating the correlation between height and width, as well as the normal stresses, within both the training and generated datasets. The generated data, a product of the RL agent's environment, displays a normal distribution. This is attributed to the state initialization and action space definition in the RL environment: the state is regularly refreshed with values from a standard normal distribution, and a standard normal transformation is applied to the data, which ensures the normal distribution of the generated outcome. This approach allows the RL agent to explore a wide array of possible states, thereby producing a diverse set of generated data.
Figure 3: Histogram plot representing rupture outcomes. (a) displays the distribution of outcomes in the original training data, which are confined to the binary labels 0 and 1, signifying specific classes of rupture. (b) illustrates the outcomes generated by the reinforcement learning agent, which are spread continuously over the range from 0 to 1.
Figure 4: 2D scatter plots of height versus width and of the normal stress components for the training and generated datasets.

Using the diverse generated data and parameter space provided by the RL agent, we were able to establish a range of plausible values for each parameter, defined by the minimum and maximum generated values. Utilizing this information, we used Bayesian optimization to understand the individual impacts of various parameters on earthquake rupture. During the optimization process, each trial involved selecting a combination
of parameters within these defined ranges. The objective was to find the combination that maximized the rupture outcome. By conducting this optimization over 1000 trials, we systematically explored the parameter space to identify the combinations that lead to the highest rupture outcomes.
The results, illustrated by the contour plots (Figure-5), underscored the sensitivity of the rupture outcome to both geometric heterogeneities and stress conditions. Notably, the rupture outcome demonstrated a strong dependence on the height and width of the geometric heterogeneity, and a stronger sensitivity to normal stress than to shear stress. Through this optimization method, which leverages the diverse data generated by the RL agent, we obtained a comprehensive understanding of the intricate dynamics within the rupture process.
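One possible implementation of this loop uses Optuna, whose default sampler performs Bayesian-style optimization. The bounds `lo` and `hi` would in practice come from the minimum and maximum values of the RL-generated data; here they are placeholders, and `surrogate` again stands in for the trained supervised model:

```python
import optuna

surrogate = lambda p: 0.0        # stand-in for the trained supervised model
names = ["sxx", "syy", "sxy", "mu_d", "sdrop", "dc", "width", "height"]
lo = {n: -1.0 for n in names}    # placeholder bounds; in practice the min/max
hi = {n:  1.0 for n in names}    # of the RL-generated data for each parameter

def objective(trial):
    params = [trial.suggest_float(n, lo[n], hi[n]) for n in names]
    return surrogate(params)     # predicted rupture outcome to maximize

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=1000)
print(study.best_params)
```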
### 3.2 Material Science
Material design involves searching for the best solutions by exploring the design space using material composition and hierarchical structure. Molecular dynamics (MD) simulations have been employed to understand mechanical properties and design materials at the atomic and molecular level [22, 23, 24], but their computational and time constraints limit their application to a few nanoscale simulations, failing to provide a comprehensive understanding. This approach also overlooks the mechanical behavior of materials at larger scales, which is relevant to their applications. To overcome this limitation, machine learning models can generate new datasets, offering a more thorough comprehension of material behavior across various size scales.
#### 3.2.1 Simulations and data processing
In this example, we used 18 models with different geometrical and loading parameters to predict the scratching load, normal load, and friction coefficient. Multilayer samples were used, comprising alternating layers of ceramic and metal (Figure-6). The bottom metal layer had a fixed thickness of 6 nm and acted as an elastic foundation for the layers above, mimicking the behavior of ceramic/metal nanolaminates. The width of each multilayer was chosen to minimize strains and boundary effects, and periodic boundary conditions were applied to the side faces. The thickness of the metallic and ceramic layers was varied to investigate their impact on the mechanical and tribological properties of the samples. Nano-indentation was performed using a rigid spherical indenter with penetration depths ranging from 3 nm to 7 nm, ensuring sufficient penetration of the metallic layers. For a penetration depth of 7 nm, the minimum indenter radius was set to 10 nm (as a note, the indenter radius varied from 5 nm to 40 nm depending on the penetration depth). The indenter speed was set to 100 m/s for nano-indentation and 250 m/s for nano-scratching, with a scratching length of 20 nm.
#### 3.2.2 Supervised Learning to Generate Material Data
We also used LightGBM [19] to develop a supervised learning model to predict the frictional coefficient along the scratching distance. The model was trained on 23 of these models, validated on 5, and tested on the remaining 7 models. To create a unique identifier for each model, we combined the model layers with the indenter radius and penetration depth. Then, for each model, we collected the frictional coefficient at different scratching data points.
The performance of the supervised learning model was robust, with an R-squared value of 0.9581, a Mean Squared Error of 0.0030, a Root Mean Squared Error of 0.0550, and a Mean Absolute Error of 0.0455 on the test data.
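A sketch of how such a per-model split and regression fit might look, assuming the scratching data are stored per point with a `model_id` column identifying each simulation (the file, column names, split sizes, and hyperparameters are illustrative):

```python
import lightgbm as lgb
import pandas as pd
from sklearn.metrics import r2_score

df = pd.read_csv("scratch_data.csv")       # hypothetical per-point dataset
ids = df["model_id"].unique()
train_ids, val_ids, test_ids = ids[:23], ids[23:28], ids[28:]

def subset(id_set):
    part = df[df["model_id"].isin(id_set)]
    return part.drop(columns=["model_id", "friction"]), part["friction"]

X_tr, y_tr = subset(train_ids)
X_te, y_te = subset(test_ids)
reg = lgb.LGBMRegressor(n_estimators=800, learning_rate=0.03)
reg.fit(X_tr, y_tr)
print("R^2:", r2_score(y_te, reg.predict(X_te)))
```

Splitting by whole models, rather than by individual data points, avoids leaking points from the same scratch trace across the train and test sets.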
Figure-7 displays the predicted frictional coefficient over the course of scratching for various test models. Despite the limited size of the training dataset, the model appears to predict well across the test simulations. However, the figure also indicates a difficulty in accurately predicting models with a larger indenter radius, as exemplified by the model where the indenter radius is 20 nm.

Figure 6: Schematic diagram illustrating a simulation cell used for nano-indentation and scratching. The layers are numbered 1-4 from top to bottom. Only the thickness of layer 2 was varied.
## 4 Limitations
Our approach, while innovative, is not without limitations. The quality of the generated data and the subsequent insights is tied to the quality of the training data provided to the RL agent. Any biases or gaps in these data could impact the efficacy of the RL agent's learning. Similarly, the supervised learning model's effectiveness relies on the richness and diversity of its training data. The current reward function design, although functional, may oversimplify the problem and limit the RL agent's ability to learn complex relationships or adapt to nuanced goals. Additionally, despite the RL agent's broad exploration capabilities, it is still constrained by the defined action and state spaces. Future work should aim to address these limitations, thereby enhancing the robustness and versatility of this framework.
## 5 Conclusion and Future Work
The present work has demonstrated the innovative use of reinforcement learning in generating a diverse parameter space. The designed custom environment and reward system have proven effective in generating data that both supplements and extends the available training data, providing a broader perspective on the problem at hand.

Figure 7: Predicted frictional coefficient along the scratching distance for the test simulations of different models. Each unique model name, like '6_2_9_1_d7_r10', encodes key properties: '6_2_9_1' corresponds to the thickness of each material layer, 'd7' specifies a 7 nm indenter depth, and 'r10' represents a 10 nm indenter radius.
Furthermore, the methodologies developed in this study have the potential to be applied across a range of other domains. The use of reinforcement learning and optimization techniques can be instrumental in exploring and understanding a plethora of complex systems. This work, thus, lays a solid foundation for harnessing the power of reinforcement learning and optimization in a broad spectrum of applications, opening up numerous exciting avenues for future research.
|
2306.17149 | The recent gravitational wave observation by pulsar timing arrays and
primordial black holes: the importance of non-gaussianities | We study whether the signal seen by pulsar timing arrays (PTAs) may originate
from gravitational waves (GWs) induced by large primordial perturbations. Such
perturbations may be accompanied by a sizeable primordial black hole (PBH)
abundance. We improve existing analyses and show that PBH overproduction
disfavors Gaussian scenarios for scalar-induced GWs at 2{\sigma} and
single-field inflationary scenarios, accounting for non-Gaussianity, at
3{\sigma} as the explanation of the most constraining NANOGrav 15-year data.
This tension can be relaxed in models where non-Gaussianites suppress the PBH
abundance. On the flip side, the PTA data does not constrain the abundance of
PBHs. | Gabriele Franciolini, Antonio Junior Iovino, Ville Vaskonen, Hardi Veermae | 2023-06-29T17:51:16Z | http://arxiv.org/abs/2306.17149v3 | # The recent gravitational wave observation by pulsar timing arrays
###### Abstract
We study whether the signal seen by pulsar timing arrays (PTAs) may originate from gravitational waves (GWs) induced by large primordial perturbations. Such perturbations may be accompanied by a sizeable primordial black hole (PBH) abundance. We improve existing analyses and show that PBH overproduction disfavors Gaussian scenarios for scalar-induced GWs at \(2\sigma\) and single-field inflationary scenarios, accounting for non-Gaussianity, at \(3\sigma\) as the explanation of the most constraining NANOGrav 15-year data. This tension can be relaxed in models where non-Gaussianites suppress the PBH abundance. On the flip side, the PTA data does not constrain the abundance of PBHs.
**Introduction -** The observation of a common spectrum process in the NANOGrav 12.5-year data [1] sparked significant scientific interest and led to numerous interpretations of the signal as potential a stochastic gravitational wave background (SGWB) from cosmological sources, such as first order phase transitions [2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13], cosmic strings and domain walls [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26], or scalar-induced gravitational waves (SIGWs) generated from primordial fluctuations [27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46] (see also [47]). Consequently, observation of the common spectrum process was reported by other pulsar timing array (PTA) collaborations [48; 49; 50]. The recent PTA data release by the NANOGrav [51; 52], EPTA (in combination with InPTA) [53; 54; 55], PPTA [56; 57; 58] and CPTA [59] collaborations, shows evidence of a Hellings-Downs pattern in the angular correlations which is characteristic of gravitational waves (GW), with the most stringent constraints and largest statistical evidence arising from the NANOGrav 15-year data (NANOGrav15). The analysis of the NANOGrav 12.5 year data release suggested a nearly flat GW spectrum, \(\Omega_{\rm GW}\propto f^{(-1.5,0.5)}\) at \(1\sigma\), in a narrow range of frequencies around \(f=5.5\) nHz. In contrast, the recent 15-year data release finds a steeper slope, \(\Omega_{\rm GW}\propto f^{(1.3,2.4)}\) at \(1\sigma\) (see Fig. S2). Motivated by this finding, a new analysis is necessary to explore which SGWB formation mechanisms can lead to the generation of a signal consistent with these updated observations.
As reported by the NANOGrav collaboration [60], an astrophysical interpretation of the signal (i.e. as SGWB emitted by SMBH mergers) require either a large number of model parameters to be at the edges of expected values or a small number of them being notably different from standard expectations. For example, the naive \(\Omega\propto f^{2/3}\) scaling predicted for GW-driven supermassive black hole (SMBH) binaries is disfavoured at \(2\sigma\) by the latest NANOGrav data [60; 61]. However, environmental and statistical effects can lead to different predictions [60; 61; 62; 63; 64; 65; 66]. As reported by the NANOGrav collaboration, a cosmological explanation seems to provide a better fit to the data [61], even though, at the moment, an astrophysical origin cannot be ruled out.
In this letter, we consider the possibility that the recent PTA data can be explained by the SGWB associated with large curvature fluctuations generated during inflation. The SIGWs are produced by a second-order effect resulting from scalar perturbations re-entering the horizon after the end of inflation [67; 68; 69; 70; 71; 72; 73]. On top of SGWBs, sufficiently large curvature perturbations can lead to the formation of primordial black holes (PBH) at horizon re-entry [74; 75; 76] (see [77; 78] for recent reviews).
In general, PTA experiments are sensitive to frequencies of the SGWB associated with the production of PBHs near the stellar mass range. The possibility of PBHs constituting all dark matter (DM) is restricted in this mass range by optical lensing [79; 80; 81; 82; 83; 84] and GW observations [85; 86; 87; 88; 89; 90] and accretion [91; 92; 93; 94; 95; 96; 97]. However, the merger events involving binary PBHs can potentially account for some of the observed black hole mergers detected by LIGO/Virgo, provided they comprise \(\mathcal{O}(0.1\%)\) of DM [85; 86; 87; 88; 89; 90; 98; 99; 100; 99; 100; 98; 101]. Crucially, requiring no PBH overproduction strongly limits the maximum amplitude of the SIGW from this scenario, as we will see in detail.
Large primordial fluctuations are possible in a wide range of scenarios including single-field inflation with specific features in the inflaton's potential [102; 103; 104; 105; 106; 107; 108; 109; 110; 111; 112; 113; 114; 115; 116; 117; 118; 119; 120; 121; 122; 123; 124; 125; 126; 127], the most common being a quasi-inflection-point, hybrid inflation [128; 129; 130; 131; 132; 133] and models with spectator field, i.e., the curvaton [134; 135; 136; 137; 138; 139; 140; 133; 134; 139; 150; 135; 134; 136; 137; 138; 139; 151; 152; 153]. Even if the models generate similar peaks in the curvature power spectrum and thus also similar SIGW spectra, they may vary in the amount of non-Gaussianity (NG) which has a notable impact on the PBH abundance. We aim to extend the analysis reported by the NANOGrav collaboration [61] by performing a state-of-the-art estimate of the PBH abundance and, most importantly, by considering in detail the impact of NGs in various inflationary models predicting enhanced spectral features.
**Scalar-induced gravitational waves** - Scalar perturbations capable of inducing an observable SGWB and a sizeable PBH abundance must be strongly enhanced compared to the CMB fluctuations. In the following, we aim to be as model-independent as possible and assume ansätze for spectral peaks applicable to classes of models.
A typical class of spectral peaks encountered, for instance, in single-field inflation and curvaton models can be described by a _broken power-law_ (BPL)
\[\mathcal{P}_{\zeta}^{\rm BPL}(k)=A\frac{\left(\alpha+\beta\right)^{\gamma}}{ \left(\beta\left(k/k_{*}\right)^{-\alpha/\gamma}+\alpha\left(k/k_{*}\right)^{ \beta/\gamma}\right)^{\gamma}}, \tag{1}\]
where \(\alpha,\beta>0\) describe respectively the growth and decay of the spectrum around the peak. One typically has \(\alpha\lesssim 4\)[154]. The parameter \(\gamma\) characterizes the flatness of the peak. Additionally, in quasi-inflection-point models producing stellar-mass PBHs, we expect \(\beta\gtrsim 0.5\), while for curvaton models \(\beta\gtrsim 2\). Another broad class of spectra can be characterized by a _log-normal_ (LN) shape
\[\mathcal{P}_{\zeta}^{\rm LN}(k)=\frac{A}{\sqrt{2\pi}\Delta}\,\exp\left(-\frac {1}{2\Delta^{2}}\ln^{2}(k/k_{*})\right) \tag{2}\]
Such spectra appear, e.g., in a subset of hybrid inflation and curvaton models. We find, however, that our conclusions are only weakly dependent on the details of peak shape.
The present-day SIGW background emitted during radiation domination is gauge independent [155, 156, 157, 158] and possesses a spectrum
\[h^{2}\Omega_{\rm GW}(k)\!=\!\frac{h^{2}\Omega_{r}}{24}\!\left(\frac{g_{*}}{g_ {*}^{0}}\right)\!\left(\frac{g_{*s}}{g_{*s}^{0}}\right)^{-\frac{4}{3}}\!\! \mathcal{P}_{h}(k), \tag{3}\]
where \(g_{*s}\equiv g_{*s}\left(T_{k}\right)\) and \(g_{*}\equiv g_{*}\left(T_{k}\right)\) are the effective entropy and energy degrees of freedom (evaluated at the time of horizon crossing of mode \(k\); present-day values are denoted by the superscript \(0\)), while \(h^{2}\Omega_{r}=4.2\times 10^{-5}\) is the current radiation abundance. Each mode \(k\) crosses the horizon at the temperature \(T_{k}\) given by the relation
\[k\!=\!1.5\!\times\!10^{7}{\rm Mpc}^{-1}\left(\frac{g_{*}}{106.75}\right)^{ \frac{1}{2}}\!\!\left(\frac{g_{*s}}{106.75}\right)^{-\frac{1}{3}}\!\!\left( \frac{T_{k}}{{\rm GeV}}\right), \tag{4}\]
while corresponding to a current GW frequency
\[f=1.6\,{\rm nHz}\left(\frac{k}{10^{6}\,{\rm Mpc}^{-1}}\right). \tag{5}\]
The tensor mode power spectrum is [159, 160]
\[\mathcal{P}_{h}(k)= 4\int_{1}^{\infty}{\rm d}t\int_{0}^{1}{\rm d}s\left[\frac{(t^{2} -1)(1-s^{2})}{t^{2}-s^{2}}\right]^{2}\] \[\qquad\times\mathcal{I}_{t,s}^{2}\,\mathcal{P}_{\zeta}\left(k\frac {t-s}{2}\right)\mathcal{P}_{\zeta}\left(k\frac{t+s}{2}\right), \tag{6}\]
where the transfer function
\[\mathcal{I}_{t,s}^{2} =\frac{288(s^{2}+t^{2}-6)^{2}}{(t^{2}-s^{2})^{6}}\Bigg{[}\frac{ \pi^{2}}{4}(s^{2}+t^{2}-6)^{2}\Theta(t-\sqrt{3})\] \[+\left(t^{2}-s^{2}-\frac{1}{2}(s^{2}+t^{2}-6)\log\left|\frac{t^{ 2}-3}{3-s^{2}}\right|\right)^{2}\Bigg{]}. \tag{7}\]
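For reference, the double integral (6) with the kernel (7) can be evaluated directly. The sketch below does so for the log-normal spectrum (2) at the peak scale, with illustrative parameter values and a finite upper cutoff on \(t\); the degree-of-freedom factors of Eq. (3) are omitted. The integrand has an integrable logarithmic singularity at \(t=\sqrt{3}\), so quadrature warnings may appear.

```python
import numpy as np
from scipy.integrate import dblquad

A, Delta, kstar = 1e-2, 0.5, 1.0        # illustrative LN parameters (k in units of k_*)

def P_zeta(k):                          # log-normal spectrum, Eq. (2)
    return A / (np.sqrt(2*np.pi)*Delta) * np.exp(-np.log(k/kstar)**2 / (2*Delta**2))

def I2(t, s):                           # transfer function, Eq. (7)
    u = s**2 + t**2 - 6
    pre = 288 * u**2 / (t**2 - s**2)**6
    log_term = (t**2 - s**2 - 0.5*u*np.log(abs((t**2 - 3)/(3 - s**2))))**2
    return pre * (np.pi**2/4 * u**2 * (t > np.sqrt(3)) + log_term)

def P_h(k):                             # tensor power spectrum, Eq. (6)
    f = lambda s, t: ((t**2 - 1)*(1 - s**2)/(t**2 - s**2))**2 * I2(t, s) \
        * P_zeta(k*(t - s)/2) * P_zeta(k*(t + s)/2)
    val, _ = dblquad(f, 1, 50, 0, 1)    # finite cutoff in t; the LN support is narrow
    return 4 * val

print(P_h(1.0))                         # P_h at the spectral peak k = k_*
```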
In order to speed up the best-likelihood analysis, we assume perfect radiation domination and do not account for the variation of the sound speed during the QCD era (see, for example, [161, 162]), which also leads to specific imprints in the low-frequency tail of any cosmological SGWB [163]. On top of that, cosmic expansion may additionally be affected by unknown physics in the dark sector, which can, e.g., lead to a brief period of matter domination or kination [164, 165, 166, 167, 168, 169]. Both SIGW and PBH production can be strongly affected in such non-standard cosmologies [170, 171, 172, 173, 174, 175].
Eq. (6) neglects possible corrections due to primordial NGs. This is typically justified because, contrary to the PBH abundance, which is extremely sensitive to the tail of the distribution, the GW emission is mostly controlled by the characteristic amplitude of perturbations, and thus well captured by the leading order. In general, the computation of the SGWB is dominated by Eq. (6) and remains in the perturbative regime if \(A(3f_{\rm NL}/5)^{2}\ll 1\), where \(f_{\rm NL}\) is the coefficient in front of the quadratic piece of the expansion (see Eq. (11) below). For the type of NGs considered in this work, we always remain within this limit. Interestingly, however, both negative and positive \(f_{\rm NL}\) increase the SIGW abundance, with the next-to-leading order correction \(\Omega_{\rm GW}^{\rm NL}/\Omega_{\rm GW}\propto A(3f_{\rm NL}/5)^{2}\)[176, 177, 178, 179, 180, 181, 182, 183] (see also [184]). We leave the inclusion of these higher-order corrections for future work.
We perform a log-likelihood analysis of the NANOGrav15 and EPTA data, fitting, respectively, the posterior distributions of \(\Omega_{\rm GW}\) for the 14 frequency bins reported in Refs. [51, 52] and for the 9 frequency bins [53], including only the last 10.3 years of data. The results are shown in Figs. 1 and 2 for the BPL and LN scenarios, respectively. This analysis is simplified compared to the ones reported by the PTA collaborations, which fit the PTA time delay data, modelling pulsar-intrinsic noise as well as pulsar angular correlations. However, it provides fits consistent with the results of the NANOGrav [61] and EPTA [55] collaborations and thus suffices for the purposes of this letter. We neglect potential astrophysical foregrounds, by assuming that the signal arises purely from SIGWs. Around \(A=\mathcal{O}(1)\) or for flat low-\(k\) tails, the scenarios considered here are also constrained by CMB observations [185, 186]. However, these constraints tend to be less strict than PBH overproduction and we will neglect them here.
It is striking to see that the posterior distributions shown in Figs. 1 and 2 for both BPL and LN analyses indicate a rather weak dependence on the shape parameters, which are (\(\alpha\),\(\beta\),\(\gamma\)) and \(\Delta\), respectively, as
long as the spectra are sufficiently narrow in the IR, i.e. \(\alpha\gtrsim 1.1\) and \(\Delta\lesssim 2.1\) at \(2\sigma\). This is because the recent PTA data prefers blue-tilted spectra at frequencies below the SIGW peak around \(k_{*}\).
In the infrared limit (\(k\ll k_{*}\)), the SIGW spectrum asymptotes to (for details, see the SM)
\[\Omega_{\rm GW}(k\ll k_{*})\propto k^{3}(1+\tilde{A}\ln^{2}(k/\tilde{k}))\,, \tag{8}\]
where \(\tilde{A}\) and \(\tilde{k}=\mathcal{O}(k_{*})\) are parameters that depend mildly on the shape of the curvature power spectrum; see the Supplementary Material (SM) for more details. The asymptotic "causality" tail \(\Omega_{\rm GW}\propto k^{3}\) is too steep to fit NANOGrav15 well, being disfavoured by over \(3\sigma\). However, this tension may be relieved by QCD effects [163]. As a result, the region providing the best fit typically lies between the peak and the causality tail, at scales slightly below \(k_{*}\) where the spectral slope is milder. Such a milder dependence can be observed in the \(k_{*}-A\) panel of Figs. 1 and 2, where \(A\) in the \(1\sigma\) region scales roughly linearly with \(k_{*}\), indicating that \(\Omega_{\rm GW}\) has an approximately quadratic dependence on \(k\) in the frequency range relevant to PTA experiments. Additionally, since \(k_{*}\geq 2\times 10^{7}\,{\rm Mpc}^{-1}\) at \(2\sigma\), the peaks in the SIGW spectrum lie outside of the PTA frequency range. This can also be observed from Fig. S2.
**PBH abundance** - To properly compute the abundance of PBHs, two kinds of NGs need to be taken into account. Firstly, the relation between curvature and density perturbations in the long-wavelength approximation is intrinsically nonlinear [187, 188]
\[\delta(\vec{x},t)=-\frac{2}{3}\Phi\left(\frac{1}{aH}\right)^{2}e^{-2\zeta} \left[\nabla^{2}\zeta+\frac{1}{2}\partial_{t}\zeta\partial_{t}\zeta\right]\,, \tag{9}\]
where \(a\) denotes the scale factor, \(H\) the Hubble rate, and \(\Phi\) is related to the equation of state parameter \(w\) of the universe. For constant \(w\), \(\Phi=3(1+w)/(5+3w)\)[189]. We have dropped the explicit \(\vec{x}\) and \(t\) dependence for the sake of brevity.
Second, the curvature perturbation itself can be a non-Gaussian function \(\zeta=F(\zeta_{\rm G})\) of the Gaussian field \(\zeta_{\rm G}\). In _curvaton models_[198],
\[\zeta=\log\big{[}X(r_{\rm dec},\zeta_{\rm G})\big{]}, \tag{13}\]
where \(X(r_{\rm dec})\) is a function of \(r_{\rm dec}\) (see Eq. (S7) in the SM for details) which we take to be the free parameter in our analysis. Curvaton self-interactions may modify the NGs (see e.g. Refs. [199; 200]). We omit their contribution here and leave such investigation for future work.
We follow the prescription presented in Ref. [201] (see also [202]) based on threshold statistics on the compaction function \(\mathcal{C}\). The prescription improves upon the recent literature [203; 204; 205; 206; 207; 208; 209; 210; 211; 212; 213; 214; 215; 216; 217; 218; 219] by both including the non-linearities (NL) of Eq. (9) and the full primordial NG functional form (10) non-perturbatively.1 The total abundance of PBHs is given by the integral (see e.g. [115])
Footnote 1: We mention here that slight discrepancies remain between peak theory and threshold statistics (see, e.g., Refs. [205; 190; 200]). As the former approach provides slightly smaller amplitudes, our conclusions remain conservative.
\[\begin{split} f_{\rm PBH}&\equiv\frac{\Omega_{\rm PBH }}{\Omega_{\rm DM}}=\frac{1}{\Omega_{\rm DM}}\int\mathrm{d}\ln M_{H}\left(\frac{ M_{H}}{M_{\odot}}\right)^{-1/2}\\ &\times\Big{(}\frac{g_{*}}{106.75}\Big{)}^{\frac{3}{4}}\Big{(} \frac{g_{*s}}{106.75}\Big{)}^{-1}\left(\frac{\beta(M_{H})}{7.9\times 10^{-10}} \right)\,,\end{split} \tag{14}\]
where \(\Omega_{\rm DM}=0.264\) is the cold dark matter density of the universe and the horizon mass corresponds to the temperature
\[M_{H}(T_{k})=4.8\times 10^{-2}M_{\odot}\left(\frac{g_{*}}{106.75}\right)^{- \frac{1}{2}}\left(\frac{T_{k}}{\rm GeV}\right)^{-2}. \tag{15}\]
We compute the mass fraction \(\beta\) by integrating the joint probability distribution function \(P_{\rm G}\)
\[\beta=\int_{\mathcal{D}}\mathcal{K}(\mathcal{C}-\mathcal{C}_{\rm th})^{\gamma} \mathrm{P}_{\rm G}(\mathcal{C}_{\rm G},\zeta_{\rm G})\mathrm{d}\mathcal{C}_{ \rm G}\mathrm{d}\zeta_{\rm G}\,, \tag{16}\]
where the domain of integration is given by \(\mathcal{D}=\left\{\mathcal{C}(\mathcal{C}_{\rm G},\zeta_{\rm G})>\mathcal{C }_{\rm th}\ \wedge\ \mathcal{C}_{1}(\mathcal{C}_{\rm G},\zeta_{\rm G})<2\Phi\right\}\), and the compaction function \(\mathcal{C}=\mathcal{C}_{1}-\mathcal{C}_{1}^{2}/(4\Phi)\) can be built from the linear \(\mathcal{C}_{1}=\mathcal{C}_{\rm G}\,\mathrm{d}F/\mathrm{d}\zeta_{\rm G}\) component, that uses \(\mathcal{C}_{\rm G}=-2\Phi\,r\,\zeta_{\rm G}^{\prime}\). The Gaussian components are distributed as
\[P_{\rm G}\left(\mathcal{C}_{\rm G},\zeta_{\rm G}\right)=\frac{e^{\left[-\frac{1}{2(1-\gamma_{cr}^{2})}\left(\frac{\mathcal{C}_{\rm G}}{\sigma_{c}}-\frac{\gamma_{cr}\zeta_{\rm G}}{\sigma_{r}}\right)^{2}-\frac{\zeta_{\rm G}^{2}}{2\sigma_{r}^{2}}\right]}}{2\pi\sigma_{c}\sigma_{r}\sqrt{1-\gamma_{cr}^{2}}}. \tag{17}\]
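To make the prescription concrete, the mass fraction (16) can be estimated by Monte Carlo sampling of the bivariate Gaussian (17). The sketch below uses the quadratic ansatz \(\zeta=\zeta_{\rm G}+\tfrac{3}{5}f_{\rm NL}\zeta_{\rm G}^{2}\) for \(F\); the variances, threshold, and critical-collapse constants are placeholder inputs rather than outputs of the full pipeline described here:

```python
import numpy as np

def beta_MC(sig_c, sig_r, gam_cr, f_nl, Phi=2/3, C_th=0.5,
            K=4.0, gam=0.36, n=1_000_000, seed=1):
    """Monte Carlo estimate of Eq. (16) for zeta = zeta_G + (3/5) f_nl zeta_G^2."""
    rng = np.random.default_rng(seed)
    cov = [[sig_c**2, gam_cr*sig_c*sig_r],
           [gam_cr*sig_c*sig_r, sig_r**2]]
    C_G, z_G = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    dF = 1.0 + (6.0/5.0)*f_nl*z_G        # dF/dzeta_G for the quadratic ansatz
    C1 = C_G * dF                        # linear compaction C_1
    C = C1 - C1**2 / (4.0*Phi)           # full compaction
    sel = (C > C_th) & (C1 < 2.0*Phi)    # integration domain D of Eq. (16)
    return K * np.sum((C[sel] - C_th)**gam) / n
```

Comparing, e.g., `beta_MC(0.12, 0.12, 0.9, 0.0)` with `beta_MC(0.12, 0.12, 0.9, -2.0)` illustrates the suppression of the abundance by a negative \(f_{\rm NL}\) discussed below.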
The correlators are given by
\[\sigma_{c}^{2}=\frac{4\Phi^{2}}{9}\int_{0}^{\infty}\frac{\mathrm{ d}k}{k}\left(kr_{m}\right)^{4}W^{2}\left(k,r_{m}\right)P_{\zeta}^{T}\,, \tag{18a}\] \[\sigma_{cr}^{2}=\frac{2\Phi}{3}\!\int_{0}^{\infty}\!\!\frac{ \mathrm{d}k}{k}(kr_{m})^{2}W(k,r_{m})W_{s}(k,r_{m})P_{\zeta}^{T},\] (18b) \[\sigma_{r}^{2}=\int_{0}^{\infty}\frac{\mathrm{d}k}{k}W_{s}^{2} \left(k,r_{m}\right)P_{\zeta}^{T}\,, \tag{18c}\]
with \(P_{\zeta}^{T}=T^{2}\left(k,r_{m}\right)P_{\zeta}(k)\), and \(\gamma_{cr}\equiv\sigma_{cr}^{2}/\sigma_{c}\sigma_{r}\). We have defined \(W\left(k,r_{m}\right)\), \(W_{s}\left(k,r_{m}\right)\) and \(T\left(k,r_{m}\right)\) as the top-hat window function, the spherical-shell window function, and the radiation transfer function, computed assuming radiation domination [217]. 2
Footnote 2: The softening of the equation of state near the QCD transitions is expected to slightly affect the evolution of sub-horizon modes. Since this is mitigated by the window function that also smooths out sub-horizon modes, we neglect this effect here.
In this work, we have followed the prescription given in Ref. [221] to compute the values of the threshold \(\mathcal{C}_{\rm th}\) and the position of the maximum of the compaction function \(r_{m}\), which depend on the shape of the power spectrum. The presence of the QCD phase transitions is taken into account by considering that \(\gamma\left(M_{H}\right),\mathcal{K}\left(M_{H}\right),\mathcal{C}_{\rm th} \left(M_{H}\right)\) and \(\Phi\left(M_{H}\right)\) are functions of the horizon mass around \(M_{\rm PBH}=\mathcal{O}\left(M_{\odot}\right)\)[222; 90]. We give more details in the SM.
The effect of NGs is illustrated in Fig. 3 for a BPL model with \(\beta=3\), \(\alpha=4\), and \(\gamma=1\). We find this scenario to be one of the more conservative ones; that is, changing the shape parameters or switching to an LN shape would yield similar or less optimistic conclusions for SIGW explanations of the recent PTA data.
Fig. 3 shows that even in the absence of primordial NGs, the region avoiding overproduction of PBHs (black band and below) is excluded at over \(2\sigma\) by NANOGrav15, while EPTA is currently less constraining. This conclusion confirms the results obtained in Ref. [43] based on IPTA-DR2 data [50]. Existing constraints on the PBH abundance force \(A\) to fall at the lower edge of the colored band, and slightly strengthen this conclusion. For quasi-inflection-point models, the situation is more dire, as NGs tend to assist PBH production, which pushes the overproduction limit below the \(3\sigma\) region for NANOGrav15. Although both the slope and the NGs in the \(\beta=3\) case, shown in red, are quite large, reducing \(\beta\) cannot bring these models above the black band. All in all, we can conclude that constraints on the PBH abundance disfavor quasi-inflection-point models as a potential explanation for NANOGrav15.

Figure 3: PBH abundance for different NG models: non-linearities only (black), quasi-inflection-point models with \(\beta=3\) (red), curvaton models with \(r_{\rm dec}=0.9\) (blue) and negative \(f_{\rm NL}\) (cyan). We assume a BPL power spectrum (1) with \(\alpha=4\), \(\beta=3\) and \(\gamma=1\). The colored bands cover values of the PBH abundance in the range \(f_{\rm PBH}\in(1,10^{-3})\) from top to bottom. The green and purple posteriors come from Fig. 1, corresponding to NANOGrav15 and EPTA, respectively. The dashed line indicates an average PBH mass \(\langle m\rangle=M_{\odot}\).
On the other hand, the tension between SIGWs and NANOGrav15 can be alleviated in models in which NGs suppress the PBH abundance. This is demonstrated by the blue bands in Fig. 3, which correspond to \(f_{\rm PBH}\in(10^{-3},1)\) for curvaton models (13) with a large \(r_{\rm dec}\) and for the generic quadratic ansatz (11) with a large negative \(f_{\rm NL}\). It is important to stress that both cases displayed in Fig. 3 represent the most optimistic scenarios: increasing \(r_{\rm dec}\) above \(0.9\) would have an unnoticeable effect on \(f_{\rm PBH}\), and decreasing \(f_{\rm NL}\) below \(-2\) has a positive effect on PBH formation and would shift the lines away from the best-fit region. This is because sizeable _negative_ curvature fluctuations can still generate large fluctuations in the compaction and seed a sizeable abundance (16) (see the SM for further details).
The best-fit region for NANOGrav15 lies at scales \(k_{*}>10^{7}{\rm Mpc}^{-1}\) which corresponds to the production of sub-solar mass PBHs (see Fig. 3). Around \(k_{*}\approx 10^{7}{\rm Mpc}^{-1}\), small dents in the colored bands in Fig. 3 can be observed. These arise due to the effect of the QCD phase transition which promotes PBH formation. Thus, we find that the QCD-induced enhancement of \(f_{\rm PBH}\) in the parameter space relevant for NANOGrav15 tends to be negligible.
Although our \(f_{\rm PBH}\) estimates assume quite narrow curvature power spectra, we checked that our conclusions about PBH overproduction in single-field inflation persist also in the case of broad spectra (e.g. see the models in Refs. [23, 35, 45, 223] connecting PTA observations to asteroidal mass PBH dark matter).
As a last remark, limiting our analysis to the absence of NGs in the curvature perturbation field \(\zeta\), we have found that our results differ from those published by the NANOGrav collaboration [61]. These discrepancies arise because their analysis is subject to a few simplifications: the omission of critical collapse and of the nonlinear relationship between curvature perturbations and density contrast, the adoption of a different value for the threshold (independent of the curvature power spectrum), and the use of a Gaussian window function (which is incompatible with their choice of threshold [224]). Another minor limitation is that they disregard any corrections from the QCD equation of state, although we find that the result depends only minimally on this aspect.3
Footnote 3: _Note added:_ Similar and other simplifications were made in Ref. [225, 226, 227] which appeared briefly after the submission of this Letter.
**Conclusions and outlook -** The evidence for the Hellings-Downs angular correlation reported by the NANOGrav, EPTA, PPTA, and CPTA collaborations sets an important milestone in gravitational-wave astronomy. One of the most pressing challenges to follow is to determine the nature of the signal: is it astrophysical or cosmological?
In this Letter, we have analyzed the possibility that this signal may originate from GWs induced by high-amplitude primordial curvature perturbations. This scenario is accompanied by the production of a sizeable abundance of PBHs. Our findings demonstrate that PBH formation models featuring Gaussian primordial perturbations or positive NGs would overproduce PBHs, unless the amplitude of the spectrum is much smaller than required to explain the GW signal. For instance, most models relying on single-field inflation featuring an inflection point appear to be excluded at \(3\sigma\) as the sole explanation of the NANOGrav 15-year data. However, this tension can be alleviated in models where large negative NGs suppress the PBH abundance, for instance, curvaton scenarios with a large \(r_{\rm dec}\) and models exhibiting only a large negative \(f_{\rm NL}\). As a byproduct, however, we conclude that the PTA data does not impose constraints on the PBH abundance.
Several future steps should be taken to improve the analysis of this paper. For instance, it would be important to fully include the impact of NGs and the variation of the sound speed during the QCD era when calculating the present-day SIGW background, which poses a significant computational challenge. Beyond that, it would be important to include NG corrections to the threshold for collapse and to reduce the remaining uncertainties in the computation of the abundance. Finally, we expect that a comprehensive joint analysis involving all collaborations within the International Pulsar Timing Array (IPTA) framework will further strengthen the constraints discussed in this work.
_Acknowledgments -_ We thank V. De Luca, G. Ferrante, D. Racco, A. Riotto, F. Rompineve, A. Urbano and J. Urrutia for useful discussions. G.F. acknowledges the financial support provided under the European Union's H2020 ERC, Starting Grant agreement no. DarkGRA-757480 and under the MIUR PRIN programme, and support from the Amaldi Research Center funded by the MIUR program "Dipartimento di Eccellenza" (CUP: B81I18001170001). This work was supported by the EU Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-Curie Grant Agreement No. 101007855 and additional financial support provided by "Progetti per Avvio alla Ricerca - Tipo 2", protocol number AR2221816C515921. A.J.I. acknowledges the financial support provided under the "Progetti per Avvio alla Ricerca Tipo 1", protocol
number AR12218167D66D36, and the "Progetti di mobilita di studenti di dottorato di ricerca". The work of V.V. and H.V. was supported by European Regional Development Fund through the CoE program grant TK133 and by the Estonian Research Council grants PRG803 and PSG869. The work of V.V. has been partially supported by the European Union's Horizon Europe research and innovation program under the Marie Sklodowska-Curie grant agreement No. 101065736.
|
2305.19434 | Arbitrary Lagrangian-Eulerian finite element approximations for
axisymmetric two-phase flow | We analyze numerical approximations for axisymmetric two-phase flow in the
arbitrary Lagrangian-Eulerian (ALE) framework. We consider a parametric
formulation for the evolving fluid interface in terms of a one-dimensional
generating curve. For the two-phase Navier-Stokes equations, we introduce both
conservative and nonconservative ALE weak formulations in the 2d meridian
half-plane. Piecewise linear parametric elements are employed for discretizing
the moving interface, which is then coupled to a moving finite element
approximation of the bulk equations. This leads to a variety of ALE methods,
which enjoy either an equidistribution property or unconditional stability.
Furthermore, we adapt these introduced methods with the help of suitable
time-weighted discrete normals, so that the volume of the two phases is exactly
preserved on the discrete level. Numerical results for rising bubbles and
oscillating droplets are presented to show the efficiency and accuracy of these
introduced methods. | Harald Garcke, Robert Nürnberg, Quan Zhao | 2023-05-30T22:03:59Z | http://arxiv.org/abs/2305.19434v2 | # Arbitrary Lagrangian-Eulerian finite element approximations for axisymmetric two-phase flow
###### Abstract
We analyze numerical approximations for axisymmetric two-phase flow in the arbitrary Lagrangian-Eulerian (ALE) framework. We consider a parametric formulation for the evolving fluid interface in terms of a one-dimensional generating curve. For the two-phase Navier-Stokes equations, we introduce both conservative and nonconservative ALE weak formulations in the 2d meridian half-plane. Piecewise linear parametric elements are employed for discretizing the moving interface, which is then coupled to a moving finite element approximation of the bulk equations. This leads to a variety of ALE methods, which enjoy either an equidistribution property or unconditional stability. Furthermore, we adapt these introduced methods with the help of suitable time-weighted discrete normals, so that the volume of the two phases is exactly preserved on the discrete level. Numerical results for rising bubbles and oscillating droplets are presented to show the efficiency and accuracy of these introduced methods.
keywords: arbitrary Lagrangian-Eulerian, finite element method, energy stability, equidistribution, volume preservation
## 1 Introduction
Two-phase flows, and more generally multi-phase flows, occur in many natural phenomena and have wide applications in the oil and gas industries, engineering and scientific experiments. Numerical approximations of two-phase flow have been extensively studied in recent decades, and much effort has been devoted to accurate and efficient approximations of the evolving fluid interface. These include diffuse-interface methods [3; 21; 27; 33; 53], volume of fluid methods [35; 48; 50], level set methods [45; 46; 51; 55], and front-tracking methods [2; 5; 9; 10; 20; 24; 28; 47; 49; 56; 57; 59].
Among these front-tracking approximations, one of the most prominent methods is the moving fitted mesh approach, where the discrete interface that separates the two fluids remains fitted to the bulk mesh. In particular, the interface mesh is approximated by a lower-dimensional mesh and made up of faces of elements from the bulk mesh. This means that the bulk mesh needs to deform appropriately in time in order to match the evolving interface. The bulk equations are thus formulated in a moving frame of reference with a reference velocity which on the discrete level defines the movement of the bulk mesh. The natural way to employ this approach is the Lagrangian framework, where the bulk mesh velocity is simply prescribed according to the fluid velocity. However, this often leads to large distortions of the mesh due to the lack of control on the mesh/fluid velocity. A numerical method which allows for greater flexibility is the so-called arbitrary Lagrangian-Eulerian (ALE) method, where the reference velocity is largely arbitrary and usually independent of the fluid velocity for the interior points. The original ALE approach was introduced in [23; 34; 44] for hydrodynamic problems in the context of finite difference methods, and then generalized to free surface flows and fluid-structure interaction problems in the context of the finite element method, e.g., [14; 19; 37; 52]. The application of the ALE method to two-phase flow can be found in e.g., [2; 4; 20;
26; 28; 56]. The main advantage of the ALE approach is the possibility to accurately capture the jumps of physical quantities across the interface, which on the contrary can be a major concern in the unfitted mesh approach [18; 22; 32]. This flexibility of the moving reference frame allows for excellent approximations in the case of small deformations. Nevertheless, dynamic controls of the bulk and interface meshes are often necessary to prevent undesirable mesh distortions, especially when the interface exhibits strong deformations and topological changes.
The stability of ALE finite element methods was first analyzed in [43] for the convection-diffusion equation based on either a conservative or a nonconservative ALE formulation. In that paper, a geometric conservation law (GCL) condition was proposed to guarantee an unconditional stability estimate that does not depend on the velocity of the ALE reference. A further stability analysis of ALE methods with a variety of time discretizations was considered in [16]. As regards ALE methods for multi-phase flow, stability estimates can be found in e.g., [20; 28; 30; 39]. For example, in [39], energy stability was established based on a GCL-type approximation of a conservative ALE formulation in the context of single-phase flow. On assuming a divergence-free velocity of the ALE frame, an energy-stable ALE method of the nonconservative form was recently proposed in [20]. More recently, in [28], the authors devised two structure-preserving ALE approximations for two-phase incompressible flow in both the conservative and the nonconservative form, and the introduced methods were shown to satisfy unconditional stability and exact volume preservation on the fully discrete level.
Despite the abundance of numerical work for two-phase flow, the computation for the fully 3d problem remains a very challenging task. The difficulties stem not only from the large size of the problem but also the complicated mesh manipulations which are often required in the front-tracking methods, see [4; 10; 20]. Fortunately, in many situations the complex 3d problem can be reduced to a much simpler two-dimensional problem in the meridian half-plane provided that the considered flow satisfies rotational symmetry. In this axisymmetric setting the fluid interface can also be modeled by a one-dimensional generating curve, which dramatically reduces the computational complexity and troublesome work of mesh control. Existing numerical works for the axisymmetric two-phase flow can be found in Refs. [17; 25; 29; 31; 36; 41; 54]. Very recently unfitted finite element approximations were analyzed by the authors in [29]. In the current work, we aim to explore accurate and efficient numerical approximations for the axisymmetric two-phase flow in the ALE framework. In particular, the main results of this work are stated as follows.
* Based on our recent 2d/3d work in [28], we introduce appropriate conservative and nonconservative ALE approximations which enable the stability of the fluid kinetic energy.
* Inspired by the works in [11; 12], we discuss two possible approximations of the surface tension forces which lead to either an unconditional stability estimate or to an equidistribution property.
* Building on ideas in [6; 29; 40], we adapt the introduced methods to further achieve volume-preserving approximations with the help of suitable time-integrated interface normals on the discrete level.
The remainder of the paper is organized as follows. In Section 2 we discuss the axisymmetric setting for two-phase Navier-Stokes flow. Next, in Section 3, we introduce both the conservative and nonconservative ALE weak formulations for the axisymmetric flow and prove the volume preservation and energy stability for the weak solutions. A variety of finite element approximations of the introduced ALE formulations are then analyzed in Section 4. Subsequently, the solution method and numerical results for the introduced methods are presented in Section 5. Finally we draw some conclusions in Section 6.
## 2 The strong formulation
As shown in Fig. 1, we assume that the two-phase flow satisfies rotational symmetry with respect to the \(z\)-axis. We then consider the problem in a bounded domain \(\mathcal{R}\subset\mathbb{R}^{2}\) in the 2d meridian half-plane such that \(\mathcal{R}=\mathcal{R}_{+}(t)\cup\mathcal{R}_{-}(t)\cup\Gamma(t)\), where \(\mathcal{R}_{\pm}(t)\) are the cross-sections in the meridian half-plane that generate the domains of the two fluids upon rotation, and \(\Gamma(t)\) is the generating curve of the axisymmetric fluid interface \(\mathcal{S}(t)\). We further assume that there is no angular velocity and that the velocity components in the \(r,z\)-directions and the pressure are independent of the azimuthal angle. Thus we can introduce the velocity and pressure as variables in \(\mathcal{R}\)
\[\vec{u}(\cdot,t)=\left(u^{r}(\cdot,t),\,u^{z}(\cdot,t)\right)^{T}:\mathcal{R}\times[0,T]\rightarrow\mathbb{R}^{2}\qquad\text{and}\qquad p(\cdot,t):\mathcal{R}\times[0,T]\rightarrow\mathbb{R}.\]
We denote by \(\rho_{\pm}\) and \(\mu_{\pm}\) the densities and viscosities of the two fluids, and introduce
\[\nabla=\vec{e}_{1}\frac{\partial}{\partial r}+\vec{e}_{2}\frac{\partial}{\partial z},\qquad\underline{\underline{\mathbb{D}}}(\vec{u})=\frac{1}{2}\left[\nabla\vec{u}+(\nabla\vec{u})^{T}\right].\]
Then the axisymmetric Navier-Stokes equations in the two phases can be written as
\[\rho_{\pm}\Big{(}\partial_{t}\vec{u}+[\vec{u}\cdot\nabla]\vec{u}\Big{)}=-\nabla p+\frac{2}{r}\nabla\cdot[r\,\mu_{\pm}\,\underline{\underline{\mathbb{D}}}(\vec{u})]-\frac{2\mu_{\pm}(\vec{u}\cdot\vec{e}_{1})\vec{e}_{1}}{r^{2}}+\rho_{\pm}\vec{g}\qquad\text{in}\quad\mathcal{R}_{\pm}(t), \tag{2.1a}\]
\[\frac{1}{r}\nabla\cdot[r\vec{u}]=\frac{1}{r}\frac{\partial(r\,u^{r})}{\partial r}+\frac{\partial u^{z}}{\partial z}=0\qquad\text{in}\quad\mathcal{R}_{\pm}(t), \tag{2.1b}\]
where we denote \(\vec{g}=(g^{r},\ g^{z})^{T}\) as the body acceleration. Here (2.1a) and (2.1b) can be obtained via the axisymmetric reduction of the fully 3d governing equations for two-phase flow, where we refer the reader to e.g., [15; 29].
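For readers who want a quick sanity check of this reduction, the following SymPy sketch (an illustration added here, not part of the original derivation) verifies that the Cartesian divergence of a swirl-free axisymmetric field coincides with the expression in (2.1b); the profiles `u_r`, `u_z` are arbitrary sample choices.

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
rs = sp.Symbol('r', positive=True)

# sample swirl-free axisymmetric profiles u^r(r,z), u^z(r,z) (arbitrary choices)
ur = rs**2 * sp.exp(-z)
uz = sp.cos(z) * (1 + rs**2)

# axisymmetric divergence from (2.1b): (1/r) d(r u^r)/dr + d u^z/dz
div_axi = sp.diff(rs * ur, rs) / rs + sp.diff(uz, z)

# Cartesian check: u = u^r e_r + u^z e_z with r = sqrt(x^2 + y^2)
r = sp.sqrt(x**2 + y**2)
ux = (ur * x / rs).subs(rs, r)
uy = (ur * y / rs).subs(rs, r)
uzc = uz.subs(rs, r)
div3d = sp.diff(ux, x) + sp.diff(uy, y) + sp.diff(uzc, z)

print(sp.simplify(div3d - div_axi.subs(rs, r)))   # prints 0
```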
We further assume that the generating curve \(\Gamma(t)\) is an open curve with two end points which lie on the \(z\)-axis, so that the fluid interface \(\mathcal{S}(t)\) is a genus-0 surface without boundary. We introduce a parameterization of \(\Gamma(t)\) as
\[\vec{x}(\cdot,t):\mathbb{I}\to\mathbb{R}_{\geq 0}\times\mathbb{R}\qquad\text{ with}\quad\mathbb{I}=(0,1),\quad\partial\mathbb{I}=\{0,1\}.\]
The attachment of the two end points on the \(z\)-axis implies that
\[\vec{x}(\alpha,t)\cdot\vec{e}_{1} =0\quad\forall\alpha\in\partial\mathbb{I}, \tag{2.2a}\] \[\vec{x}_{\alpha}(\alpha,t)\cdot\vec{e}_{2} =0\quad\forall\alpha\in\partial\mathbb{I}, \tag{2.2b}\]
where (2.2b) is the axisymmetric condition meaning that the contact angle that \(\Gamma(t)\) makes with the \(z\)-axis is \(90^{\circ}\). We assume that \(|\vec{x}_{\alpha}(\cdot,t)|>0\) and \(\vec{x}(\cdot,t)\cdot\vec{e}_{1}>0\) in \(\mathbb{I}\), and the unit tangent and normal to the curve \(\Gamma(t)\) are defined as
\[\vec{\tau}(\alpha,t)=\vec{x}_{s}=|\vec{x}_{\alpha}|^{-1}\vec{x}_{\alpha},\qquad\vec{\nu}=-(\vec{\tau})^{\perp}, \tag{2.3}\]
where \(s\) is the arc length of \(\Gamma(t)\) with \(\partial_{s}=|\vec{x}_{\alpha}|^{-1}\,\partial_{\alpha}\), and \((\cdot)^{\perp}\) denotes a clockwise rotation of a vector by \(\frac{\pi}{2}\). Besides, we introduce the mean curvature of \(\mathcal{S}(t)\)
\[\varkappa=\kappa-(\vec{x}\cdot\vec{e}_{1})^{-1}(\vec{\nu}\cdot\vec{e}_{1}) \qquad\text{with}\quad\kappa\vec{\nu}=\vec{x}_{ss}, \tag{2.4}\]
where \(\kappa\) is the curvature of the generating curve \(\Gamma(t)\).
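As a concrete illustration (an added sketch, not from the original text), the definitions (2.3)-(2.4) can be verified symbolically for the generating semicircle of a sphere of radius \(R\). With the orientation chosen below the normal \(\vec{\nu}\) points away from the axis, and one obtains \(\kappa=-1/R\) and \(\varkappa=-2/R\), i.e. the mean curvature of a sphere in this sign convention; the opposite orientation flips both signs.

```python
import sympy as sp

alpha = sp.Symbol('alpha', positive=True)
R = sp.Symbol('R', positive=True)

# generating curve of a sphere of radius R: a semicircle in the (r,z) half-plane
xvec = sp.Matrix([R * sp.sin(sp.pi * alpha), R * sp.cos(sp.pi * alpha)])

xa = xvec.diff(alpha)
na = sp.simplify(sp.sqrt(xa.dot(xa)))     # |x_alpha| = pi*R
tau = xa / na                             # unit tangent, eq. (2.3)
nu = -sp.Matrix([tau[1], -tau[0]])        # nu = -(tau)^perp, clockwise rotation
xss = tau.diff(alpha) / na                # x_ss = d tau / ds

kappa = sp.simplify(xss.dot(nu))          # curvature of the generating curve
vk = sp.simplify(kappa - nu[0] / xvec[0]) # mean curvature, eq. (2.4)
print(kappa, vk)                          # -1/R, -2/R
```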
We have interface conditions on \(\mathcal{S}(t)\), which lead to
\[[\vec{u}]_{-}^{+}=\vec{0},\qquad[2\,\mu_{\pm}\,\underline{\underline{\mathbb{D}}}(\vec{u})-p\,\underline{\underline{Id}}\,]_{-}^{+}\,\vec{\nu}=-\gamma\,\varkappa\,\vec{\nu},\qquad\vec{x}_{t}\cdot\vec{\nu}=\vec{u}\cdot\vec{\nu}\qquad\text{on}\quad\Gamma(t), \tag{2.5}\]
where \([\cdot]_{-}^{+}\) denotes the jump of the enclosed quantity from \(\mathcal{R}_{-}(t)\) to \(\mathcal{R}_{+}(t)\), \(\gamma\) is the surface tension of the fluid interface, and \(\underline{\underline{Id}}\in\mathbb{R}^{2\times 2}\) is the identity matrix.
Furthermore, the boundary of \(\mathcal{R}\) is given by \(\partial\mathcal{R}=\partial_{1}\mathcal{R}\cup\partial_{2}\mathcal{R}\cup\partial_{z}\mathcal{R}\), where \(\partial_{z}\mathcal{R}\) is the artificial boundary of \(\mathcal{R}\) on the \(z\)-axis. We impose a no-slip condition on \(\partial_{1}\mathcal{R}\) and a free-slip condition on \(\partial_{2}\mathcal{R}\) as
\[\vec{u}=\vec{0}\qquad\text{on}\quad\partial_{1}\mathcal{R}, \tag{2.6a}\]
\[\vec{u}\cdot\vec{n}=0,\quad\underline{\underline{\mathbb{D}}}(\vec{u})\,\vec{n}\cdot\vec{t}=0\qquad\text{on}\quad\partial_{2}\mathcal{R}, \tag{2.6b}\]
where \(\vec{n}\) and \(\vec{t}\) are the normal and tangent to \(\partial_{2}\mathcal{R}\). On the artificial boundary \(\partial_{z}\mathcal{R}\), we require
\[\vec{u}\cdot\vec{e}_{1}=0,\qquad\frac{\partial\vec{u}}{\partial r}\cdot\vec{ e}_{2}=0\qquad\text{on}\quad\partial_{z}\mathcal{R}. \tag{2.7}\]
Here the first condition in (2.7) is enforced on recalling the term \(-\frac{2\mu_{\pm}(\vec{u}\cdot\vec{e}_{1})\vec{e}_{1}}{r^{2}}\) in (2.1a), while the second condition results from the axisymmetry. It is not difficult to show that (2.7) is equivalent to the free-slip condition.
## 3 The ALE weak formulations
We introduce a family of ALE mappings \(\{\vec{\mathcal{A}}[t]\}_{t\in[0,T]}\) such that
\[\vec{\mathcal{A}}[t]:\mathcal{O}\to\mathcal{R},\qquad\vec{y}\mapsto\vec{ \mathcal{A}}[t](\vec{y})=\vec{x}(\vec{y},t)\quad\text{for all}\quad t\in[0,T], \quad\vec{y}\in\mathcal{O}. \tag{3.8}\]
We further assume that \(\vec{\mathcal{A}}[t]\in[W^{1,\infty}(\mathcal{O})]^{2}\) and \(\vec{\mathcal{A}}[t]^{-1}\in[W^{1,\infty}(\mathcal{R})]^{2}\), and define the mesh velocity
\[\vec{w}(\vec{x},t):=\left.\frac{\partial\vec{x}(\vec{y},t)}{\partial t} \right|_{\vec{y}=\vec{\mathcal{A}}[t]^{-1}(\vec{x})}\quad\text{for all}\quad t\in[0,T],\quad\vec{x}\in\mathcal{R}. \tag{3.9}\]
On the interface and boundary, the mesh velocity is required to satisfy
\[(\vec{w}-\vec{u})\cdot\vec{\nu}=0\quad\text{on}\quad\Gamma(t),\qquad(\vec{w}-\vec{u})\cdot\vec{n}=0\quad\text{on}\quad\partial\mathcal{R}. \tag{3.10}\]
For a vector field \(\vec{\varphi}:\mathcal{R}\times[0,T]\to\mathbb{R}^{2}\), we also introduce the derivative with respect to the moving ALE frame as
\[\partial_{t}^{\circ}\vec{\varphi}=\partial_{t}\vec{\varphi}+[\vec{w}\cdot \nabla]\vec{\varphi}. \tag{3.11}\]
We define \((\cdot,\cdot)\) as the \(L^{2}\)-inner product on \(\mathcal{R}\) and denote by
\[L^{2}_{a}(\mathcal{R}) :=\{\chi:(r^{a},\ \chi^{2})<+\infty\}\quad a\in\mathbb{Z},\] \[H^{1}_{a}(\mathcal{R}) :=\{\chi:\chi\in L^{2}_{a}(\mathcal{R}),\ \nabla\chi\in[L^{2}_{a}(\mathcal{R})]^{2}\}\]
the weighted \(L^{2}\) and \(H^{1}\) spaces over \(\mathcal{R}\), respectively. We then introduce the function spaces for the velocity and pressure as
\[\mathbb{U} :=\Big{\{}\vec{\chi}\in[H^{1}_{1}(\mathcal{R})]^{2}\ :\ (\vec{\chi} \cdot\vec{e}_{1})\in L^{2}_{-1}(\mathcal{R}),\ \vec{\chi}=\vec{0}\ \text{on}\ \partial_{1}\mathcal{R},\ \vec{\chi}\cdot\vec{n}=0\ \text{on}\ \partial_{2}\mathcal{R}\Big{\}}, \tag{3.12a}\] \[\mathbb{V} :=H^{1}(0,T;\,[L^{2}_{1}(\mathcal{R})]^{2})\cap L^{2}(0,T;\mathbb{ U}),\qquad\mathbb{P}:=\{\chi\in L^{2}_{1}(\mathcal{R}):(r,\chi)=0\}. \tag{3.12b}\]
Besides, we denote by \(\langle\cdot,\cdot\rangle\) the \(L^{2}\)-inner product on \(\mathbb{I}\), and define the function space
\[V_{\partial}=\Big{\{}\vec{\eta}\in[H^{1}(\mathbb{I})]^{2}\ :\ \vec{\eta}\cdot\vec{e}_{1}=0\ \text{on}\ \partial\mathbb{I}\Big{\}}.\]
### Nonconservative ALE formulations
Denote
\[\rho(\cdot,t)=\rho_{+}\mathcal{X}_{\mathcal{R}_{+}(t)}+\rho_{-}\mathcal{X}_{ \mathcal{R}_{-}(t)},\qquad\mu(\cdot,t)=\mu_{+}\,\mathcal{X}_{\mathcal{R}_{+}(t)}+ \mu_{-}\,\mathcal{X}_{\mathcal{R}_{-}(t)}, \tag{3.13}\]
where \(\mathcal{X}_{E}\) is the characteristic function of the set \(E\). For the pressure and viscous term in (2.1a), we take the inner product with \(\vec{\chi}\,r\) for \(\vec{\chi}\in\mathbb{U}\). This leads to
\[\left(-\nabla p,\;\vec{\chi}\,r\right)+\left(\nabla\cdot[2r\mu\,\underline{\underline{\mathbb{D}}}(\vec{u})],\;\vec{\chi}\right)-2(\mu\,r^{-1}\,[\vec{u}\cdot\vec{e}_{1}],\;[\vec{\chi}\cdot\vec{e}_{1}])\]
\[\qquad=\left(p,\;\nabla\cdot[r\,\vec{\chi}]\right)-2(\mu\,\underline{\underline{\mathbb{D}}}(\vec{u}),\;\underline{\underline{\mathbb{D}}}(\vec{\chi})\,r)-2(\mu\,r^{-1}\,[\vec{u}\cdot\vec{e}_{1}],\;[\vec{\chi}\cdot\vec{e}_{1}])+\gamma\int_{\Gamma(t)}(\vec{x}\cdot\vec{e}_{1})\,\varkappa\,\vec{\nu}\cdot\vec{\chi}\,\mathrm{d}s, \tag{3.14}\]
where we used integration by parts, the interface condition in (2.5), and the boundary conditions (2.6) and (2.7).
Denote by \((\cdot,\cdot)_{\mathcal{R}_{\pm}(t)}\) the \(L^{2}\)-inner products over \(\mathcal{R}_{\pm}(t)\), respectively. For the inertia term in (2.1a), on recalling (3.11) we have that
\[\left(\partial_{t}\vec{u}+[\vec{u}\cdot\nabla]\vec{u},\;\vec{\chi}\,r\right)_ {\mathcal{R}_{\pm}(t)}=\left(\partial_{t}^{\circ}\vec{u},\;\vec{\chi}\,r \right)_{\mathcal{R}_{\pm}(t)}+\left([\vec{u}-\vec{w}]\cdot\nabla\vec{u},\; \vec{\chi}\,r\right)_{\mathcal{R}_{\pm}(t)}\quad\forall\vec{\chi}\in[H^{1}( \mathcal{R})]^{2}. \tag{3.15}\]
We can rewrite
\[\left([\vec{u}-\vec{w}]\cdot\nabla\vec{u},\;\vec{\chi}\,r\right)_ {\mathcal{R}_{\pm}(t)} =\frac{1}{2}\left[\left(([\vec{u}-\vec{w}]\cdot\nabla)\vec{u},\; \vec{\chi}\,r\right)_{\mathcal{R}_{\pm}(t)}-\left(([\vec{u}-\vec{w}]\cdot \nabla)\vec{\chi},\;\vec{u}\,r\right)_{\mathcal{R}_{\pm}(t)}\right]+\frac{1}{ 2}(\vec{u}-\vec{w},\;\nabla(\vec{u}\cdot\vec{\chi})r)_{\mathcal{R}_{\pm}(t)}\] \[=\frac{1}{2}\left[\left(([\vec{u}-\vec{w}]\cdot\nabla)\vec{u},\; \vec{\chi}\,r\right)_{\mathcal{R}_{\pm}(t)}-\left(([\vec{u}-\vec{w}]\cdot \nabla)\vec{\chi},\;\vec{u}\,r\right)_{\mathcal{R}_{\pm}(t)}\right]+\frac{1}{ 2}(\nabla\cdot[r\,\vec{w}],\;\vec{u}\cdot\vec{\chi})_{\mathcal{R}_{\pm}(t)}, \tag{3.16}\]
where the last equality results from the divergence free condition in (2.1b) and the boundary condition in (3.10). We then multiply (3.15) and (3.16) with \(\rho_{\pm}\) and sum over the two phases. This yields that
\[\left(\rho\,(\partial_{t}\vec{u}+[\vec{u}\cdot\nabla]\vec{u}),\;\vec{\chi}\,r\right)=\left(\rho\,\partial_{t}^{\circ}\vec{u},\;\vec{\chi}\,r\right)+\frac{1}{2}\left(\rho\,\nabla\cdot[r\,\vec{w}],\;\vec{u}\cdot\vec{\chi}\right)+\mathcal{A}(\rho,\vec{u}-\vec{w};\vec{u},\vec{\chi}), \tag{3.17}\]
where \(\mathcal{A}(\rho,\vec{v};\vec{u},\vec{\chi})\) is the antisymmetric term
\[\mathcal{A}(\rho,\vec{v};\vec{u},\vec{\chi})=\frac{1}{2}\left[\left(\rho\,( \vec{v}\cdot\nabla)\vec{u},\;\vec{\chi}\,r\right)-\left(\rho\,(\vec{v}\cdot \nabla)\vec{\chi},\;\vec{u}\,r\right)\right].\]
It was shown in [11] that taking the inner product of (2.4) with \((\vec{x}\cdot\vec{e}_{1})\,\vec{\nu}\cdot\vec{\eta}\,|\vec{x}_{\alpha}|\) on \(\mathbb{I}\) with \(\vec{\eta}\in V_{\partial}\) leads to
\[\left\langle(\vec{x}\cdot\vec{e}_{1})\,\varkappa\,\vec{\nu},\;\vec{\eta}\,|\vec{x}_{\alpha}|\right\rangle+\left\langle\vec{\eta}\cdot\vec{e}_{1},\;|\vec{x}_{\alpha}|\right\rangle+\left\langle(\vec{x}\cdot\vec{e}_{1})\,\vec{x}_{\alpha},\;\vec{\eta}_{\alpha}\,|\vec{x}_{\alpha}|^{-1}\right\rangle=0\qquad\forall\;\vec{\eta}\in V_{\partial}. \tag{3.18}\]
Collecting the results in (3.14), (3.17) and (3.18), we then introduce the following ALE weak formulation in the nonconservative form. Let the initial velocity \(\vec{u}_{0}\in\mathbb{U}\) and the initial interface parameterization \(\vec{x}_{0}\in V_{\partial}\) be given. For \(t\in(0,T]\), we find \(\vec{u}(\cdot,t)\in\mathbb{V}\), \(p(\cdot,t)\in\mathbb{P}\), \(\vec{x}(\cdot,t)\in V_{\partial}\) and \(\varkappa(\cdot,t)\in L^{2}(\mathbb{I})\) such that
\[\left(\rho\,\partial_{t}^{\circ}\vec{u},\;\vec{\chi}\,r\right)+\frac{1}{2}\big{(}\rho\,\nabla\cdot[r\vec{w}],\;\vec{u}\cdot\vec{\chi}\big{)}+\mathcal{A}(\rho,\vec{u}-\vec{w};\vec{u},\vec{\chi})+2(\mu\,r^{-1}\,[\vec{u}\cdot\vec{e}_{1}],\;[\vec{\chi}\cdot\vec{e}_{1}])-\left(p,\;\nabla\cdot[r\vec{\chi}]\right)\]
\[\qquad+2\left(\mu\,\underline{\underline{\mathbb{D}}}(\vec{u}),\;\underline{\underline{\mathbb{D}}}(\vec{\chi})\,r\right)-\gamma\left\langle(\vec{x}\cdot\vec{e}_{1})\,\varkappa,\;\vec{\nu}\cdot\vec{\chi}\,|\vec{x}_{\alpha}|\right\rangle=\left(\rho\,r\,\vec{g},\,\vec{\chi}\right)\qquad\forall\vec{\chi}\in\mathbb{V}, \tag{3.19a}\]
\[\left(\nabla\cdot[r\,\vec{u}],\;q\right)=0\qquad\forall q\in\mathbb{P}, \tag{3.19b}\]
\[\left\langle(\vec{x}\cdot\vec{e}_{1})\,\vec{x}_{t}\cdot\vec{\nu},\;\zeta\,|\vec{x}_{\alpha}|\right\rangle-\left\langle(\vec{x}\cdot\vec{e}_{1})\,\vec{u}\cdot\vec{\nu},\;\zeta\,|\vec{x}_{\alpha}|\right\rangle=0\qquad\forall\zeta\in L^{2}(\mathbb{I}), \tag{3.19c}\]
\[\left\langle(\vec{x}\cdot\vec{e}_{1})\,\varkappa\,\vec{\nu},\;\vec{\eta}\,|\vec{x}_{\alpha}|\right\rangle+\left\langle\vec{\eta}\cdot\vec{e}_{1},\;|\vec{x}_{\alpha}|\right\rangle+\left\langle(\vec{x}\cdot\vec{e}_{1})\,\vec{x}_{\alpha},\;\vec{\eta}_{\alpha}\,|\vec{x}_{\alpha}|^{-1}\right\rangle=0\qquad\forall\vec{\eta}\in V_{\partial}. \tag{3.19d}\]
We note that (3.19a) results from (3.14) and (3.17), (3.19b) and (3.19c) are due to the incompressibility condition (2.1b) and the kinematic equation in (2.5), while (3.19d) is a direct result of (3.18).
For simplicity we denote \(\vec{x}(t)=\vec{x}(\cdot,t)\) and \(\vec{u}(t)=\vec{u}(\cdot,t)\). We then introduce \(A(\vec{x}(t))\) and \(\operatorname{vol}(\vec{x}(t))\) as the surface area and the enclosed volume of \(\mathcal{S}(t)\), respectively:
\[A(\vec{x}(t))=\int_{\mathcal{S}(t)}1\,\mathrm{d}A=2\pi\int_{\mathbb{I}}(\vec{x}\cdot\vec{e}_{1})\,|\vec{x}_{\alpha}|\,\mathrm{d}\alpha, \tag{3.20a}\]
\[\operatorname{vol}(\vec{x}(t))=\pi\int_{\Gamma(t)}(\vec{x}\cdot\vec{e}_{1})^{2}\,(\vec{\nu}\cdot\vec{e}_{1})\,\mathrm{d}s=\pi\int_{\mathbb{I}}(\vec{x}\cdot\vec{e}_{1})^{2}\,(\vec{\nu}\cdot\vec{e}_{1})\,|\vec{x}_{\alpha}|\,\mathrm{d}\alpha, \tag{3.20b}\]
where for (3.20b) the reader can refer to [11, (3.10)]. The total free energy of the system is given by
\[E(\rho,\vec{u}(t),\vec{x}(t))=\pi\int_{\mathcal{R}}\rho\,|\vec{u}|^{2}\,r\,\mathrm{d}r\,\mathrm{d}z+\gamma\,A(\vec{x}(t)). \tag{3.21}\]
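Before proceeding, a quick consistency check of (3.20) is instructive (an added SymPy sketch in the spirit of the semicircle example from §2, not part of the original text): for the generating semicircle of a sphere of radius \(R\), the integrals reproduce the surface area \(4\pi R^{2}\) and the enclosed volume \(\frac{4}{3}\pi R^{3}\).

```python
import sympy as sp

alpha, R = sp.Symbol('alpha', positive=True), sp.Symbol('R', positive=True)
xvec = sp.Matrix([R * sp.sin(sp.pi * alpha), R * sp.cos(sp.pi * alpha)])
na = sp.pi * R                                                  # |x_alpha|
nu = sp.Matrix([sp.sin(sp.pi * alpha), sp.cos(sp.pi * alpha)])  # outward normal

A = 2 * sp.pi * sp.integrate(xvec[0] * na, (alpha, 0, 1))             # (3.20a)
V = sp.pi * sp.integrate(xvec[0]**2 * nu[0] * na, (alpha, 0, 1))      # (3.20b)
print(sp.simplify(A), sp.simplify(V))   # 4*pi*R**2, 4*pi*R**3/3
```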
A direct calculation also yields that
\[\frac{\mathrm{d}}{\mathrm{d}t}A(\vec{x}(t))=2\pi\int_{\mathbb{I}}\left[(\vec{x}_{t}\cdot\vec{e}_{1})\,|\vec{x}_{\alpha}|+(\vec{x}\cdot\vec{e}_{1})\,(\vec{x}_{t})_{\alpha}\cdot\vec{x}_{\alpha}\,|\vec{x}_{\alpha}|^{-1}\right]\mathrm{d}\alpha, \tag{3.22a}\]
\[\frac{\mathrm{d}}{\mathrm{d}t}\operatorname{vol}(\vec{x}(t))=2\pi\int_{\mathbb{I}}(\vec{x}\cdot\vec{e}_{1})\,(\vec{x}_{t}\cdot\vec{\nu})\,|\vec{x}_{\alpha}|\,\mathrm{d}\alpha. \tag{3.22b}\]
Now choosing \(\vec{\chi}=\vec{u}\) in (3.19a), \(q=p\) in (3.19b), \(\zeta=\gamma\varkappa\) in (3.19c) and \(\vec{\eta}=\vec{x}_{t}\) in (3.19d), and combining these equations, yields that
\[\frac{1}{2\pi}\frac{\mathrm{d}}{\mathrm{d}t}E(\rho,\vec{u}(t),\vec{x}(t))=-2\left(\mu\,\underline{\underline{\mathbb{D}}}(\vec{u}),\,\underline{\underline{\mathbb{D}}}(\vec{u})\,r\right)-2\left(\mu\,r^{-1},\;[\vec{u}\cdot\vec{e}_{1}]^{2}\right)+\left(\rho\,r\,\vec{g},\;\vec{u}\right), \tag{3.23}\]
on recalling (3.22a) and (A.3). Besides, it is not difficult to show that
\[\frac{\mathrm{d}}{\mathrm{d}t}\operatorname{vol}(\vec{x}(t))=0, \tag{3.24}\]
which follows from choosing \(q=\mathcal{X}_{\mathcal{R}_{-}(t)}-\omega\) in (3.19b), with the constant \(\omega\) chosen such that \(q\in\mathbb{P}\), and \(\zeta=1\) in (3.19c), on recalling (3.22b). Moreover, on employing the curvature identity \(\kappa\,\vec{\nu}=\vec{x}_{ss}\) directly, an alternative nonconservative weak formulation, which we denote by (3.25), is obtained on replacing the curvature terms in (3.19a) and (3.19d) by \(\gamma\left\langle(\vec{x}\cdot\vec{e}_{1})\,\kappa-\vec{\nu}\cdot\vec{e}_{1},\;\vec{\nu}\cdot\vec{\chi}\,|\vec{x}_{\alpha}|\right\rangle\) and
\[\left\langle\kappa\,\vec{\nu},\;\vec{\eta}\,|\vec{x}_{\alpha}|\right\rangle+\left\langle\vec{x}_{\alpha},\;\vec{\eta}_{\alpha}\,|\vec{x}_{\alpha}|^{-1}\right\rangle=0\qquad\forall\vec{\eta}\in V_{\partial},\]
respectively, with (3.25a)-(3.25d) denoting the resulting system.
### Conservative ALE formulations
On recalling (A.2) and (3.17), it is not difficult to obtain
\[\left(\rho\,\partial_{t}^{\circ}\vec{u},\,\vec{\chi}\,r\right)+\frac{1}{2}\Big{(}\rho\,\nabla\cdot[r\vec{w}],\;\vec{u}\cdot\vec{\chi}\Big{)}=\frac{\mathrm{d}}{\mathrm{d}t}\Big{(}\rho\,\vec{u},\,\vec{\chi}\,r\Big{)}-\frac{1}{2}\Big{(}\rho\,\nabla\cdot[r\vec{w}],\;\vec{u}\cdot\vec{\chi}\Big{)}-\Big{(}\rho\,\partial_{t}^{\circ}\vec{\chi},\;\vec{u}\,r\Big{)}\quad\forall\vec{\chi}\in[H^{1}(\mathcal{R})]^{2}. \tag{3.26}\]
Then we could introduce an ALE weak formulation in the conservative form by modifying (3.19) slightly. Precisely, we replace (3.19a) with
\[\frac{\mathrm{d}}{\mathrm{d}t}\Big{(}\rho\,\vec{u},\,\vec{\chi}\,r\Big{)}-\frac{1}{2}\Big{(}\rho\,\nabla\cdot[r\vec{w}],\;\vec{u}\cdot\vec{\chi}\Big{)}-\Big{(}\rho\,\partial_{t}^{\circ}\vec{\chi},\;\vec{u}\,r\Big{)}+\mathcal{A}(\rho,\vec{u}-\vec{w};\vec{u},\vec{\chi})+2(\mu\,r^{-1}\,[\vec{u}\cdot\vec{e}_{1}],\;[\vec{\chi}\cdot\vec{e}_{1}])\]
\[\qquad-\big{(}p,\;\nabla\cdot[r\vec{\chi}]\big{)}+2(\mu\,\underline{\underline{\mathbb{D}}}(\vec{u}),\,\underline{\underline{\mathbb{D}}}(\vec{\chi})\,r)-\gamma\left\langle(\vec{x}\cdot\vec{e}_{1})\,\varkappa,\;\vec{\nu}\cdot\vec{\chi}\,|\vec{x}_{\alpha}|\right\rangle=\big{(}\rho\,r\,\vec{g},\,\vec{\chi}\big{)}\qquad\forall\vec{\chi}\in\mathbb{V}. \tag{3.27}\]
Similarly for (3.25), we could obtain a new formulation in the conservative form by replacing (3.25a) with
\[\frac{\mathrm{d}}{\mathrm{d}t}\Big{(}\rho\,\vec{u},\,\vec{\chi}\,r\Big{)}-\frac{1}{2}\Big{(}\rho\,\nabla\cdot[r\vec{w}],\;\vec{u}\cdot\vec{\chi}\Big{)}-\Big{(}\rho\,\partial_{t}^{\circ}\vec{\chi},\;\vec{u}\,r\Big{)}+\mathcal{A}(\rho,\vec{u}-\vec{w};\vec{u},\vec{\chi})+2(\mu\,r^{-1}\,[\vec{u}\cdot\vec{e}_{1}],\;[\vec{\chi}\cdot\vec{e}_{1}])\]
\[\qquad-\big{(}p,\;\nabla\cdot[r\vec{\chi}]\big{)}+2(\mu\,\underline{\underline{\mathbb{D}}}(\vec{u}),\,\underline{\underline{\mathbb{D}}}(\vec{\chi})\,r)-\gamma\left\langle(\vec{x}\cdot\vec{e}_{1})\,\kappa-\vec{\nu}\cdot\vec{e}_{1},\;\vec{\nu}\cdot\vec{\chi}\,|\vec{x}_{\alpha}|\right\rangle=\big{(}\rho\,r\,\vec{g},\,\vec{\chi}\big{)}\qquad\forall\vec{\chi}\in\mathbb{V}. \tag{3.28}\]
In a similar manner to (3.19) and (3.25), it is not difficult to prove the energy law (3.23) for the two conservative ALE weak formulations in view of (3.26). In fact, (3.27) can also be derived via an axisymmetric reduction of the 3d conservative ALE formulation in [28, (4.12)].
## 4 Finite element approximations
In this section, we propose ALE finite element approximations for the four weak formulations introduced in §3, and explore the properties of these approximating methods.
### The discretization
We employ a uniform partition of the time interval and the reference domain \(\mathbb{I}\) as follows
\[[0,T]=\cup_{m=1}^{M}[t_{m-1},t_{m}]\quad\text{with}\quad t_{m}=m\Delta t,\quad\Delta t=T/M,\]
\[\mathbb{I}=\cup_{j=1}^{J_{\Gamma}}I_{j}\quad\text{with}\quad I_{j}=[\alpha_{j-1},\alpha_{j}],\quad\alpha_{j}=j\,h,\quad h=1/J_{\Gamma}.\]
On the interface we introduce the finite element spaces
\[V^{h}:=\{\chi\in C(\overline{\mathbb{I}})\,:\,\chi|_{I_{j}}\in\mathcal{P}_{1}(I_{j}),\;j=1,\ldots,J_{\Gamma}\},\qquad V^{h}_{\partial}:=[V^{h}]^{2}\cap V_{\partial},\]
so that the discrete interface at time \(t_{m}\) is given by the polygonal curve \(\Gamma^{m}=\vec{X}^{m}(\mathbb{I})\) with \(\vec{X}^{m}\in V^{h}_{\partial}\). Moreover, let \(\mathcal{T}^{m}=\{o^{m}_{j}\}_{j=1}^{J_{\mathcal{R}}}\) be a regular triangulation of \(\mathcal{R}\) at time \(t_{m}\) with vertices \(Q^{m}=\{\vec{q}^{m}_{k}\}_{k=1}^{K}\). On \(\mathcal{T}^{m}\) we define
\[S^{m}_{k}:=\{\chi\in C(\overline{\mathcal{R}})\,:\,\chi|_{o^{m}_{j}}\in\mathcal{P}_{k}(o^{m}_{j}),\;j=1,\ldots,J_{\mathcal{R}}\},\quad k\geq 1,\qquad S^{m}_{0}:=\{\chi\in L^{2}(\mathcal{R})\,:\,\chi|_{o^{m}_{j}}\in\mathcal{P}_{0}(o^{m}_{j}),\;j=1,\ldots,J_{\mathcal{R}}\}, \tag{4.1}\]
where \(\mathcal{P}_{k}(o^{m}_{j})\) denotes the space of polynomials of degree \(k\) on \(o^{m}_{j}\).
We use the moving fitted finite element method, so that the discrete interface \(\Gamma^{m}\) is fitted to the triangulation of \(\mathcal{R}\), i.e. the line segments making up \(\Gamma^{m}\) are all edges of elements in \(\mathcal{T}^{m}\). Let \(\mathcal{R}^{m}_{-}\) be the interior region enclosed by \(\Gamma^{m}\) and \(\partial_{z}\mathcal{R}\), and \(\mathcal{R}^{m}_{+}\) be the exterior region. We then approximate \(\rho(\cdot,t)\) and \(\mu(\cdot,t)\) at time \(t_{m}\) with \(\rho^{m}\in S^{m}_{0}\) and \(\mu^{m}\in S^{m}_{0}\) such that
\[\rho^{m}=\rho_{-}\,\mathcal{X}_{\mathcal{R}^{m}_{-}}+\rho_{+}\,\mathcal{X}_{\mathcal{R}^{m}_{+}},\qquad\mu^{m}=\mu_{-}\,\mathcal{X}_{\mathcal{R}^{m}_{-}}+\mu_{+}\,\mathcal{X}_{\mathcal{R}^{m}_{+}}.\]
We denote by \(\mathbb{U}^{m}\) and \(\mathbb{P}^{m}\) the velocity and pressure approximation spaces, respectively. In the present work we consider the following element pair [9; 10; 28]
\[\text{P2-(P1+P0)}:\quad(\mathbb{U}^{m},\ \mathbb{P}^{m})=([S^{m}_{2}]^{2} \cap\mathbb{U},\ (S^{m}_{1}+S^{m}_{0})\cap\mathbb{P}), \tag{4.2}\]
which is able to capture the pressure jump across the interface.
### Discrete ALE mappings
In order to match the evolving polygonal curves \(\Gamma^{m}\), the bulk mesh \(\mathcal{T}^{m}\) needs to be constructed appropriately. For each \(m\geq 1\), we now assume that we are given the polygonal curve \(\Gamma^{m}=\vec{X}^{m}(\mathbb{I})\). We then construct \(\mathcal{T}^{m}\) based on \(\mathcal{T}^{m-1}\) by simply moving the vertices in \(Q^{m-1}\) according to the displacement vectors
\[\vec{q}^{m}_{k}=\vec{q}^{m-1}_{k}+\vec{\psi}^{m}(\vec{q}^{m-1}_{k}),\qquad 1\leq k\leq K,\quad 1\leq m\leq M, \tag{4.3}\]
while preserving the connectivity and topology of the bulk mesh. On introducing
\[\mathbb{Y}^{m-1}=\{\vec{\chi}\in[S^{m-1}_{1}]^{2}\,:\,\vec{\chi}\cdot\vec{n}=0\;\;\text{on}\;\;\partial\mathcal{R};\;\vec{\chi}=\vec{X}^{m}-\vec{X}^{m-1}\;\;\text{on}\;\;\Gamma^{m-1}\},\]
\[\mathbb{Y}^{m-1}_{0}=\{\vec{\chi}\in[S^{m-1}_{1}]^{2}\,:\,\vec{\chi}\cdot\vec{n}=0\;\;\text{on}\;\;\partial\mathcal{R};\;\vec{\chi}=\vec{0}\;\;\text{on}\;\;\Gamma^{m-1}\},\]
we solve for the displacement \(\vec{\psi}^{m}\in\mathbb{Y}^{m-1}\) via an elastic equation such that
\[2(\lambda^{m-1}\,\underline{\mathbb{D}}(\vec{\psi}^{m}),\ \underline{\mathbb{D}}(\vec{\chi}))+(\lambda^{m-1}\,\nabla\cdot\vec{\psi}^{m}, \ \nabla\cdot\vec{\chi})=0\qquad\forall\vec{\chi}\in\mathbb{Y}^{m-1}_{0}, \tag{4.4}\]
where we introduce
\[\lambda^{m-1}\big{|}_{o^{m-1}_{j}}=1+\frac{\max_{o\in\mathcal{T}^{m-1}}|o|-\min_{o\in\mathcal{T}^{m-1}}|o|}{|o^{m-1}_{j}|},\qquad j=1,\ldots,J_{\mathcal{R}},\]
to limit the distortion of small elements [42; 58].
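As a small illustration of how this scaling acts (our own sketch, with hypothetical element areas), the smallest element receives the largest coefficient and hence deforms the least under (4.4):

```python
import numpy as np

def stiffening_coefficient(areas):
    """Per-element coefficient lambda^{m-1} from the formula above:
    1 + (max area - min area) / (own area)."""
    areas = np.asarray(areas, dtype=float)
    return 1.0 + (areas.max() - areas.min()) / areas

# hypothetical element areas: the smallest element is stiffened the most
print(stiffening_coefficient([0.5, 0.1, 0.02]))   # [ 1.96  5.8  25. ]
```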
In view of (4.3), it is natural to introduce the approximation of the mesh velocity \(\vec{w}\) at time \(t_{m}\) as
\[\vec{W}^{m}(\vec{x}):=\sum_{k=1}^{K}\left(\frac{\vec{q}^{m}_{k}-\vec{q}^{m-1} _{k}}{\Delta t}\right)\phi^{m}_{k}(\vec{x}),\quad\vec{x}\in\mathcal{R}^{m}_{ \pm}, \tag{4.5}\]
where \(\phi^{m}_{k}(\cdot)\) is the nodal basis function of \(S^{m}_{1}\) at \(\vec{q}^{m}_{k}\). The corresponding discrete ALE mappings are given by
\[\vec{\mathcal{A}}^{m}[t](\vec{x}):=\vec{\operatorname{id}}-(t_{m}-t)\,\vec{W}^{m}(\vec{x})=\sum_{k=1}^{K}\left(\frac{t_{m}-t}{\Delta t}\,\vec{q}^{m-1}_{k}+\frac{t-t_{m-1}}{\Delta t}\,\vec{q}^{m}_{k}\right)\phi^{m}_{k}(\vec{x}),\quad\forall t\in[t_{m-1},\ t_{m}],\quad\vec{x}\in\mathcal{R}, \tag{4.6}\]
where \(\vec{\operatorname{id}}\) is the identity function on \(\mathcal{R}\). It is easy to see that \(\vec{\mathcal{A}}^{m}[t_{m}]\) is the identity map, and
\[\vec{\mathcal{A}}^{m}[t_{m-1}](\vec{x})=\vec{\operatorname{id}}-\Delta t\,\vec{W}^{m}(\vec{x})\quad\forall\vec{x}\in\mathcal{R}^{m}_{\pm},\quad\text{with}\quad\vec{\mathcal{A}}^{m}[t_{m-1}](\mathcal{R}^{m}_{\pm})=\mathcal{R}^{m-1}_{\pm}. \tag{4.7}\]
The Jacobian determinant of the linear map \(\vec{\mathcal{A}}^{m}[t_{m-1}]\) is given by
\[\mathcal{J}^{m}(\vec{x}):=\det\big{(}\underline{\underline{Id}}-\Delta t\,\nabla\vec{W}^{m}\big{)}=1-\Delta t\,\nabla\cdot\vec{W}^{m}+O(\Delta t^{2}),\qquad\vec{x}\in\mathcal{R}^{m}_{\pm}. \tag{4.8}\]
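Since \(\vec{W}^{m}\) is piecewise linear, \(\nabla\vec{W}^{m}\) is constant on each element and \(\mathcal{J}^{m}\) can be evaluated element-wise. A minimal numerical sketch (ours, with an arbitrary sample gradient) confirming the expansion in (4.8):

```python
import numpy as np

dt = 1.0e-2
gradW = np.array([[0.3, -0.1],
                  [0.2,  0.4]])   # sample element-wise constant gradient of W^m

J = np.linalg.det(np.eye(2) - dt * gradW)
print(J, 1.0 - dt * np.trace(gradW))   # the two values agree up to O(dt^2)
```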
We have the following lemma about the discrete ALE mappings.
**Lemma 4.1**.: _Suppose that \(\varphi\in L^{2}(\mathscr{R}^{m}_{\pm})\). Then it holds that_
\[\int_{\mathscr{R}^{m}_{\pm}}\varphi\left(r-\,\Delta t\,[\,\bar{W}^{m}\cdot \vec{e}_{1}]\right)\mathcal{J}^{m}\,\mathrm{d}r\mathrm{d}z=\int_{\mathscr{R}^{ m-1}_{\pm}}\varphi\circ\bar{\mathcal{A}}^{m}[t_{m-1}]^{-1}\,r\,\mathrm{d}r \mathrm{d}z. \tag{4.9}\]
_Moreover, let \(\mathscr{R}^{h}_{\pm}(t)=\bar{\mathcal{A}}^{m}[t](\mathscr{R}^{m}_{\pm})\) for \(t\in[t_{m-1},t_{m}]\). Then it holds_
\[\int_{\mathscr{R}^{m}_{\pm}}\varphi\,r\,\mathrm{d}r\mathrm{d}z- \int_{\mathscr{R}^{m-1}_{\pm}}\varphi\circ\bar{\mathcal{A}}^{m}[t_{m-1}]^{-1} \,r\,\mathrm{d}r\mathrm{d}z=\int_{t_{m-1}}^{t_{m}}\int_{\mathscr{R}^{h}_{\pm }(t)}\varphi\circ\bar{\mathcal{A}}^{m}[t]^{-1}\nabla\cdot[\,r\,\bar{W}^{m} \circ\bar{\mathcal{A}}^{m}[t]^{-1}]\,\mathrm{d}r\mathrm{d}z\mathrm{d}t. \tag{4.10}\]
Proof.: We recall the definition of \(\mathcal{J}^{m}\) in (4.8) as well as the fact that \(\vec{\mathcal{A}}^{m}[t_{m-1}]=\vec{\operatorname{id}}-\Delta t\,\vec{W}^{m}\) and \(\vec{\operatorname{id}}\cdot\vec{e}_{1}=r\). Then using the change of variables \(\vec{x}=\vec{\mathcal{A}}^{m}[t_{m-1}](\vec{y})\), it is straightforward to obtain
\[\int_{\mathscr{R}^{m}_{\pm}}\varphi\left(\bar{\mathcal{A}}^{m}[t_{m-1}]\cdot \vec{e}_{1}\right)\mathcal{J}^{m}\,\mathrm{d}r\mathrm{d}z=\int_{\mathscr{R}^{ m-1}_{\pm}}\varphi\circ\bar{\mathcal{A}}^{m}[t_{m-1}]^{-1}\,r\,\mathrm{d}r \mathrm{d}z,\]
which then implies (4.9).
It follows directly from (4.6) that
\[\partial_{t}^{\circ}(\vec{x}\circ\bar{\mathcal{A}}^{m}[t]^{-1})=\vec{0}\qquad \forall\vec{x}\in\mathbb{U}^{m}. \tag{4.11}\]
Then applying (A.1) to the domain \(\mathscr{R}^{h}_{\pm}(t)\) and using (4.11) yields that
\[\frac{\mathrm{d}}{\mathrm{d}t}\int_{\mathscr{R}^{h}_{\pm}(t)}\varphi\circ\bar {\mathcal{A}}^{m}[t]^{-1}\,r\,\mathrm{d}r\mathrm{d}z=\int_{\mathscr{R}^{h}_{ \pm}(t)}\varphi\circ\bar{\mathcal{A}}^{m}[t]^{-1}\nabla\cdot[\,r\,\bar{W}^{m} \circ\bar{\mathcal{A}}^{m}[t]^{-1}\,]\,\mathrm{d}r\mathrm{d}z, \tag{4.12}\]
which leads to (4.10) directly after integrating with respect to \(t\) from \(t_{m-1}\) to \(t_{m}\).
**Remark 4.2**.: _We note that (4.10) can be regarded as the axisymmetric analogue of the geometric conservation law that was introduced in [43, (3.22)]. Moreover, on applying the change of variables \(\vec{x}=\bar{\mathcal{A}}^{m}[t]^{-1}(\vec{y})\) to (4.12), we obtain_
\[\int_{\mathscr{R}^{h}_{\pm}(t)}\varphi\circ\bar{\mathcal{A}}^{m}[ t]^{-1}\nabla\cdot[\,r\,\bar{W}^{m}\circ\bar{\mathcal{A}}^{m}[t]^{-1}]\, \mathrm{d}r\mathrm{d}z=\frac{\mathrm{d}}{\mathrm{d}t}\int_{\mathscr{R}^{h}_{ \pm}(t)}\varphi\circ\bar{\mathcal{A}}^{m}[t]^{-1}\,r\,\mathrm{d}r\mathrm{d}z\] \[=\frac{\mathrm{d}}{\mathrm{d}t}\int_{\mathscr{R}^{m}_{\pm}} \varphi\det\underline{G}(\vec{x},t)\left(\bar{\mathcal{A}}^{m}[t]\cdot\vec{e}_ {1}\right)\mathrm{d}r\mathrm{d}z=\int_{\mathscr{R}^{m}_{\pm}}\varphi\frac{ \mathrm{d}}{\mathrm{d}t}\left[\det\underline{G}(\vec{x},t)\left(\bar{ \mathcal{A}}^{m}[t]\cdot\vec{e}_{1}\right)\right]\,\mathrm{d}r\mathrm{d}z, \tag{4.13}\]
_where \(\underline{\underline{G}}(\vec{x},t)=\frac{\partial\vec{\mathcal{A}}^{m}[t](\vec{x})}{\partial\vec{x}}\). On recalling (4.6), we see that the integrand in (4.13) is a polynomial of degree \(2\) in the variable \(t\). Thus the right hand side of (4.10) can be integrated exactly with respect to \(t\) via Simpson's rule._
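As the integrand is quadratic in \(t\), a single Simpson step reproduces the time integral exactly; a minimal check (an added sketch, with an arbitrary quadratic integrand):

```python
def simpson(f, a, b):
    # Simpson's rule on one interval: exact for polynomials of degree <= 3
    return (b - a) / 6.0 * (f(a) + 4.0 * f(0.5 * (a + b)) + f(b))

f = lambda t: 3.0 * t**2 - 2.0 * t + 1.0      # a degree-2 polynomial in t
a, b = 0.0, 0.1                                # one time step [t_{m-1}, t_m]
exact = (b**3 - a**3) - (b**2 - a**2) + (b - a)
print(abs(simpson(f, a, b) - exact))           # 0.0 up to round-off
```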
### Nonconservative ALE approximations
In the following, we denote by \(\vec{U}^{m}\), \(P^{m}\), \(\varkappa^{m}\) and \(\kappa^{m}\) the numerical approximations of \(\vec{u}(\cdot,t)\), \(p(\cdot,t)\), \(\varkappa(\cdot,t)\) and \(\kappa(\cdot,t)\) at time \(t_{m}\), respectively. We introduce
\[\vec{\chi}_{\mathcal{A}}=\vec{\chi}\circ\vec{\mathcal{A}}^{m}[t_{m-1}]\in\mathbb{U}^{m}\qquad\text{for}\quad\vec{\chi}\in\mathbb{U}^{m-1}. \tag{4.14}\]
We first consider an unconditionally stable approximation of the weak formulation (3.19) as follows. Let \(\vec{X}^{0}\in V^{h}_{\partial}\) with \(\Gamma^{0}:=\vec{X}^{0}(\mathbb{I})\) and \(\vec{U}^{0}\in\mathbb{U}^{0}\) be the approximations of the initial interface and velocity field, respectively. Moreover, we set \(\Gamma^{-1}=\Gamma^{0}\), \(\mathcal{R}^{-1}_{\pm}=\mathcal{R}^{0}_{\pm}\) with \(\vec{W}^{0}=\vec{0}\) and \(\mathcal{J}^{0}(\vec{x})=1\). Then for \(m\geq 0\), we seek \(\vec{U}^{m+1}\in\mathbb{U}^{m}\), \(P^{m+1}\in\mathbb{P}^{m}\), \(\vec{X}^{m+1}\in V^{h}_{\partial}\)
and \(\varkappa^{m+1}\in V^{h}\) such that
\[\left(\rho^{m}\,\frac{\vec{U}^{m+1}-\vec{U}^{m}_{\mathcal{A}}\sqrt{\big{(}1-\Delta t\,r^{-1}[\vec{W}^{m}\cdot\vec{e}_{1}]\big{)}\,\mathcal{J}^{m}}}{\Delta t},\ \vec{\chi}\,r\right)^{\diamond}+\mathcal{A}(\rho^{m},\vec{U}^{m}_{\mathcal{A}}-\vec{W}^{m};\vec{U}^{m+1},\vec{\chi})\]
\[\qquad+2\left(\mu^{m}\,r^{-1}[\vec{U}^{m+1}\cdot\vec{e}_{1}],\ [\vec{\chi}\cdot\vec{e}_{1}]\right)^{\diamond}+2\left(\mu^{m}\,r\,\underline{\underline{\mathbb{D}}}(\vec{U}^{m+1}),\ \underline{\underline{\mathbb{D}}}(\vec{\chi})\right)-\left(P^{m+1},\ \nabla\cdot[r\vec{\chi}]\right)\]
\[\qquad-\gamma\left\langle(\vec{X}^{m}\cdot\vec{e}_{1})\,\varkappa^{m+1},\ \vec{\nu}^{m}\cdot\vec{\chi}\,|\vec{X}^{m}_{\alpha}|\right\rangle=\left(\rho^{m}\,\vec{g},\ \vec{\chi}\,r\right)\qquad\forall\vec{\chi}\in\mathbb{U}^{m}, \tag{4.15a}\]
\[\left(\nabla\cdot[r\vec{U}^{m+1}],\ q\right)=0\qquad\forall q\in\mathbb{P}^{m}, \tag{4.15b}\]
\[\frac{1}{\Delta t}\Big{\langle}(\vec{X}^{m}\cdot\vec{e}_{1})\,(\vec{X}^{m+1}-\vec{X}^{m}),\ \zeta\,\vec{\nu}^{m}\,|\vec{X}^{m}_{\alpha}|\Big{\rangle}-\left\langle(\vec{X}^{m}\cdot\vec{e}_{1})\,\vec{U}^{m+1}\cdot\vec{\nu}^{m},\ \zeta\,|\vec{X}^{m}_{\alpha}|\right\rangle=0\qquad\forall\zeta\in V^{h}, \tag{4.15c}\]
\[\left\langle(\vec{X}^{m}\cdot\vec{e}_{1})\,\varkappa^{m+1},\ \vec{\eta}\cdot\vec{\nu}^{m}\,|\vec{X}^{m}_{\alpha}|\right\rangle+\left\langle\vec{\eta}\cdot\vec{e}_{1},\ |\vec{X}^{m+1}_{\alpha}|\right\rangle+\left\langle(\vec{X}^{m}\cdot\vec{e}_{1})\,\vec{X}^{m+1}_{\alpha},\ \vec{\eta}_{\alpha}\,|\vec{X}^{m}_{\alpha}|^{-1}\right\rangle=0\qquad\forall\vec{\eta}\in V^{h}_{\partial}, \tag{4.15d}\]
where \(\vec{U}^{m}_{\mathcal{A}}\) and \(\vec{W}^{m}\) are defined via (4.14) and (4.5), respectively, and we set \(\Gamma^{m+1}=\vec{X}^{m+1}(\mathbb{I})\) to construct the new bulk mesh through (4.3) and (4.4). Moreover, \((\cdot,\cdot)^{\diamond}\) represents an approximation of the inner product \((\cdot,\cdot)\) using a high-order Gauss quadrature rule that is exact for polynomials of degree at most \(5\).
Recalling (4.8) and using a Taylor expansion of the first term in (4.15a) yields
\[\left(\rho^{m}\,\frac{\vec{U}^{m+1}-\vec{U}^{m}_{\mathcal{A}}\sqrt{\big{(}1-\Delta t\,r^{-1}[\vec{W}^{m}\cdot\vec{e}_{1}]\big{)}\,\mathcal{J}^{m}}}{\Delta t},\ \vec{\chi}\,r\right)^{\diamond}\]
\[\qquad=\left(\rho^{m}\,\frac{\vec{U}^{m+1}-\vec{U}^{m}_{\mathcal{A}}\sqrt{\big{(}1-\Delta t\,r^{-1}[\vec{W}^{m}\cdot\vec{e}_{1}]\big{)}\big{(}1-\Delta t\,\nabla\cdot\vec{W}^{m}\big{)}+O(\Delta t^{2})}}{\Delta t},\ \vec{\chi}\,r\right)^{\diamond}\]
\[\qquad=\left(\rho^{m}\,\frac{\vec{U}^{m+1}-\vec{U}^{m}_{\mathcal{A}}}{\Delta t},\ \vec{\chi}\,r\right)^{\diamond}+\frac{1}{2}\Big{(}\rho^{m}\,[\vec{W}^{m}\cdot\vec{e}_{1}],\ \vec{U}^{m}_{\mathcal{A}}\cdot\vec{\chi}\Big{)}^{\diamond}+\frac{1}{2}\Big{(}\rho^{m}\,r\,\nabla\cdot\vec{W}^{m},\ \vec{U}^{m}_{\mathcal{A}}\cdot\vec{\chi}\Big{)}^{\diamond}+O(\Delta t)\]
\[\qquad=\left(\rho^{m}\,\frac{\vec{U}^{m+1}-\vec{U}^{m}_{\mathcal{A}}}{\Delta t},\ \vec{\chi}\,r\right)^{\diamond}+\frac{1}{2}\Big{(}\rho^{m}\,\nabla\cdot[r\vec{W}^{m}],\ \vec{U}^{m}_{\mathcal{A}}\cdot\vec{\chi}\Big{)}^{\diamond}+O(\Delta t), \tag{4.16}\]
which is a consistent temporal discretization of the first two terms in (3.19a). The special approximation in (4.16) allows for a stable discretization with a decreasing discrete kinetic energy. We also note that (4.15) is nonlinear due to the presence of \(|\vec{X}^{m+1}_{\alpha}|\) in (4.15d), which contributes to the stability of the interface energy. As a consequence, we have the following theorem for the introduced method (4.15), which mimics the energy stability (3.23) on the discrete level.
**Theorem 4.3**.: _Let \((\vec{U}^{m+1},\,P^{m+1},\vec{X}^{m+1},\,\varkappa^{m+1})\) be a solution to (4.15) for \(m=0,1,\cdots,M-1\). Then it holds that_
\[\frac{1}{2\pi}E(\rho^{k},\ \vec{U}^{k+1},\vec{X}^{k+1})+2\Delta t\sum_{m=0}^{k}\Big{(}\|\sqrt{\mu^{m}\,r^{-1}}\,(\vec{U}^{m+1}\cdot\vec{e}_{1})\|_{\diamond}^{2}+\|\sqrt{\mu^{m}\,r}\,\underline{\underline{\mathbb{D}}}(\vec{U}^{m+1})\|^{2}\Big{)}\]
\[\qquad\leq\frac{1}{2\pi}E(\rho^{0},\ \vec{U}^{0},\vec{X}^{0})+\Delta t\sum_{m=0}^{k}\big{(}\rho^{m}\,r\,\vec{g},\ \vec{U}^{m+1}\big{)},\qquad k=0,1,\cdots,M-1, \tag{4.17}\]
_where \(\|\cdot\|\) and \(\|\cdot\|_{\diamond}\) are the induced norms of the inner products \((\cdot,\cdot)\) and \((\cdot,\cdot)^{\diamond}\), respectively._
Proof.: Setting \(\vec{\chi}=\Delta t\,\vec{U}^{m+1}\) in (4.15a), \(q=P^{m+1}\) in (4.15b), \(\zeta=\Delta t\,\gamma\,\varkappa^{m+1}\) in (4.15c) and \(\vec{\eta}=\gamma\,(\vec{X}^{m+1}-\vec{X}^{m})\) in (4.15d), and combining these four equations, yields
\[\left(\rho^{m}\,\delta\vec{U}^{m},\ \vec{U}^{m+1}\,r\right)^{\diamond}+2\Delta t\left(\mu^{m}\,r^{-1}[\vec{U}^{m+1}\cdot\vec{e}_{1}],\ [\vec{U}^{m+1}\cdot\vec{e}_{1}]\right)^{\diamond}+2\Delta t\left(\mu^{m}\,r\,\underline{\underline{\mathbb{D}}}(\vec{U}^{m+1}),\ \underline{\underline{\mathbb{D}}}(\vec{U}^{m+1})\right)\]
\[\qquad+\gamma\Big{\langle}(\vec{X}^{m+1}-\vec{X}^{m})\cdot\vec{e}_{1},\ |\vec{X}^{m+1}_{\alpha}|\Big{\rangle}+\gamma\Big{\langle}(\vec{X}^{m}\cdot\vec{e}_{1})\,\vec{X}^{m+1}_{\alpha},\ (\vec{X}^{m+1}-\vec{X}^{m})_{\alpha}\,|\vec{X}^{m}_{\alpha}|^{-1}\Big{\rangle}=\Delta t\,\big{(}\rho^{m}\,r\,\vec{g},\ \vec{U}^{m+1}\big{)}, \tag{4.18}\]
where we denote \(\delta\vec{U}^{m}=\vec{U}^{m+1}-\vec{U}^{m}_{\mathcal{A}}\sqrt{\big{(}1-\Delta t\,r^{-1}[\vec{W}^{m}\cdot\vec{e}_{1}]\big{)}\,\mathcal{J}^{m}}\).
By the elementary inequality \(\vec{a}\cdot(\vec{a}-\vec{b})\geq|\vec{a}|\,(|\vec{a}|-|\vec{b}|)\geq|\vec{b}|\,(|\vec{a}|-|\vec{b}|)\), which follows from the Cauchy-Schwarz inequality and \((|\vec{a}|-|\vec{b}|)^{2}\geq 0\), we have
\[\left\langle(\vec{X}^{m+1}-\vec{X}^{m})\cdot\vec{e}_{1},\ |\vec{X}^{m+1}_{\alpha}|\right\rangle+\left\langle(\vec{X}^{m}\cdot\vec{e}_{1}) \vec{X}^{m+1}_{\alpha},\ (\vec{X}^{m+1}-\vec{X}^{m})_{\alpha}\,|\vec{X}^{m}_{\alpha}|^{-1}\right\rangle\] \[\geq\left\langle(\vec{X}^{m+1}-\vec{X}^{m})\cdot\vec{e}_{1},\ |\vec{X}^{m+1}_{\alpha}|\right\rangle+\left\langle(\vec{X}^{m}\cdot\vec{e}_{1}),\ |\vec{X}^{m+1}_{\alpha}|-|\vec{X}^{m}_{\alpha}|\right\rangle\] \[=\left\langle(\vec{X}^{m+1}\cdot\vec{e}_{1}),\ |\vec{X}^{m+1}_{\alpha}|\right\rangle-\left\langle(\vec{X}^{m}\cdot\vec{e}_{1} ),\ |\vec{X}^{m}_{\alpha}|\right\rangle=\frac{1}{2\pi}\left\{A(\vec{X}^{m+1})-A( \vec{X}^{m})\right\}, \tag{4.19}\]
where we invoke the definition of \(A(\vec{X}^{m})\) in (3.20a).
Let \((\cdot,\cdot)_{\mathcal{R}^{m}_{\pm}}\) denote the \(L^{2}\)-inner product over \(\mathcal{R}^{m}_{\pm}\). Moreover, we let \((\cdot,\cdot)^{\circ}_{\mathcal{R}^{m}_{\pm}}\) be an approximation of \((\cdot,\cdot)_{\mathcal{R}^{m}_{\pm}}\) using the prescribed high-order Gauss quadrature rule. Using the inequality \(2\vec{d}\cdot(\vec{d}-\vec{b})\geq|\vec{d}|^{2}-|\vec{b}|^{2}\) yields that
\[\left(\delta\vec{U}^{m},\ \vec{U}^{m+1}\,r\right)^{\circ}_{\mathcal{R}^{m}_{\pm}}\geq\frac{1}{2}\Big{(}|\vec{U}^{m+1}|^{2},\ r\Big{)}^{\circ}_{\mathcal{R}^{m}_{\pm}}-\frac{1}{2}\Big{(}|\vec{U}^{m}_{\mathcal{A}}|^{2}\left(r-\Delta t\,[\vec{W}^{m}\cdot\vec{e}_{1}]\right)\mathcal{J}^{m},\ 1\Big{)}^{\circ}_{\mathcal{R}^{m}_{\pm}}\]
\[\qquad=\frac{1}{2}\Big{(}|\vec{U}^{m+1}|^{2},\ r\Big{)}_{\mathcal{R}^{m}_{\pm}}-\frac{1}{2}\big{(}|\vec{U}^{m}|^{2},\ r\big{)}_{\mathcal{R}^{m-1}_{\pm}}, \tag{4.20}\]
where for the last equality we applied (4.9) with \(\varphi=|\vec{U}^{m}_{\mathcal{A}}|^{2}\), on noting that the quadrature rule is exact for the integrands at hand. We then multiply (4.20) with \(\rho_{\pm}\) and combine the two equations to give
\[\left(\rho^{m}\,\delta\vec{U}^{m},\ \vec{U}^{m+1}\,r\right)^{\circ}\geq\frac{1}{2} \Big{(}\rho^{m}\,|\vec{U}^{m+1}|^{2},\ r\Big{)}-\frac{1}{2}\Big{(}\rho^{m-1}\, |\vec{U}^{m}|^{2},\ r\Big{)}, \tag{4.21}\]
on recalling (4.2). Inserting (4.19) and (4.21) into (4.18), and recalling (3.21) leads to
\[\frac{1}{2\pi}E(\rho^{m},\vec{U}^{m+1},\vec{X}^{m+1})+2\Delta t\,\Big{(}\|\sqrt{\mu^{m}\,r^{-1}}\,(\vec{U}^{m+1}\cdot\vec{e}_{1})\|_{\diamond}^{2}+\|\sqrt{\mu^{m}\,r}\,\underline{\underline{\mathbb{D}}}(\vec{U}^{m+1})\|^{2}\Big{)}\]
\[\qquad\leq\frac{1}{2\pi}E(\rho^{m-1},\vec{U}^{m},\vec{X}^{m})+\Delta t\,(\rho^{m}\,r\,\vec{g},\ \vec{U}^{m+1}). \tag{4.22}\]
Summing (4.22) for \(m=0,\cdots,k\) yields (4.17) immediately on recalling \(\mathcal{R}^{-1}_{\pm}=\mathcal{R}^{0}_{\pm}\).
We next consider a linear approximation of (3.25) as follows. With the discrete initial data as before, for \(m\geq 0\), we find \(\vec{U}^{m+1}\in\mathbb{U}^{m}\), \(P^{m+1}\in\mathbb{P}^{m}\), \(\vec{X}^{m+1}\in V^{h}_{\partial}\) and \(\kappa^{m+1}\in V^{h}\) such that
\[\left(\rho^{m}\,\frac{\vec{U}^{m+1}-\vec{U}^{m}_{\mathcal{A}}\sqrt{\big{(}1-\Delta t\,r^{-1}[\vec{W}^{m}\cdot\vec{e}_{1}]\big{)}\,\mathcal{J}^{m}}}{\Delta t},\ \vec{\chi}\,r\right)^{\diamond}+\mathcal{A}(\rho^{m},\vec{U}^{m}_{\mathcal{A}}-\vec{W}^{m};\vec{U}^{m+1},\vec{\chi})\]
\[\qquad+2\left(\mu^{m}\,r^{-1}[\vec{U}^{m+1}\cdot\vec{e}_{1}],\ [\vec{\chi}\cdot\vec{e}_{1}]\right)^{\diamond}+2\left(\mu^{m}\,r\,\underline{\underline{\mathbb{D}}}(\vec{U}^{m+1}),\ \underline{\underline{\mathbb{D}}}(\vec{\chi})\right)-\left(P^{m+1},\ \nabla\cdot[r\vec{\chi}]\right)\]
\[\qquad-\gamma\left\langle(\vec{X}^{m}\cdot\vec{e}_{1})\,\kappa^{m+1}-\vec{\nu}^{m}\cdot\vec{e}_{1},\ \vec{\nu}^{m}\cdot\vec{\chi}\,|\vec{X}^{m}_{\alpha}|\right\rangle=\left(\rho^{m}\,\vec{g},\ \vec{\chi}\,r\right)\qquad\forall\vec{\chi}\in\mathbb{U}^{m}, \tag{4.23a}\]
\[\left(\nabla\cdot[r\vec{U}^{m+1}],\ q\right)=0\qquad\forall q\in\mathbb{P}^{m}, \tag{4.23b}\]
\[\frac{1}{\Delta t}\Big{\langle}(\vec{X}^{m}\cdot\vec{e}_{1})\,(\vec{X}^{m+1}-\vec{X}^{m}),\ \zeta\,\vec{\nu}^{m}\,|\vec{X}^{m}_{\alpha}|\Big{\rangle}-\left\langle(\vec{X}^{m}\cdot\vec{e}_{1})\,\vec{U}^{m+1}\cdot\vec{\nu}^{m},\ \zeta\,|\vec{X}^{m}_{\alpha}|\right\rangle=0\qquad\forall\zeta\in V^{h}, \tag{4.23c}\]
\[\left\langle\kappa^{m+1}\,\vec{\nu}^{m},\ \vec{\eta}\,|\vec{X}^{m}_{\alpha}|\right\rangle^{h}+\left\langle\vec{X}^{m+1}_{\alpha},\ \vec{\eta}_{\alpha}\,|\vec{X}^{m}_{\alpha}|^{-1}\right\rangle=0\qquad\forall\vec{\eta}\in V^{h}_{\partial}, \tag{4.23d}\]
where we introduced \(\langle\cdot,\cdot\rangle^{h}\) as the mass-lumped \(L^{2}\)-inner product over \(\mathbb{I}\), i.e.,
\[\left\langle\vec{v},\vec{u}\right\rangle^{h}=\frac{1}{2}\,h\sum_{j=1}^{J_{\Gamma}}\left[(\vec{v}\cdot\vec{u})(\alpha_{j}^{-})+(\vec{v}\cdot\vec{u})(\alpha_{j-1}^{+})\right]\quad\text{with}\quad g(\alpha_{j}^{\pm})=\lim_{\delta\searrow 0}\,g(\alpha_{j}\pm\delta),\]
for two piecewise continuous functions \(\vec{v}\) and \(\vec{u}\).
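For continuous piecewise linear functions the one-sided limits coincide with the nodal values, so \(\langle\cdot,\cdot\rangle^{h}\) reduces to the trapezoidal rule on each element; a minimal sketch (our own, for nodal vectors on the uniform grid):

```python
import numpy as np

def mass_lumped_ip(v, u, h):
    """Mass-lumped L2 inner product over I = (0,1) for continuous piecewise
    linear vector fields given by their nodal values of shape (J+1, 2)."""
    vu = np.sum(v * u, axis=1)                   # nodal values of v . u
    return 0.5 * h * np.sum(vu[:-1] + vu[1:])    # trapezoidal rule per element

J = 4
h = 1.0 / J
nodes = np.linspace(0.0, 1.0, J + 1)
v = np.stack([nodes, 1.0 - nodes], axis=1)
print(mass_lumped_ip(v, v, h))    # 0.6875, vs. the exact integral 2/3
```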
It does not seem possible to obtain a discrete energy stability result for the introduced method (4.23). Nevertheless, (4.23) gives rise to a system of linear equations, and the approximation in (4.23d) leads to equidistribution, meaning that the mesh points on \(\Gamma^{m}\) tend to be distributed evenly with respect to arc length. The interested reader is referred to [8; 13] for detailed discussions of this property.
### Conservative ALE approximations
Based on (3.27), we propose a conservative ALE method which satisfies an unconditional stability estimate as well. With the same discrete initial data as before, for \(m\geq 0\), we find \(\tilde{U}^{m+1}\in\mathbb{U}^{m}\), \(P^{m+1}\in\mathbb{P}^{m}\), \(\tilde{X}^{m+1}\in V^{h}_{\hat{\partial}}\) and \(\varkappa^{m+1}\in V^{h}\) such that
\[\frac{1}{\Delta t}\left[\left(\rho^{m}\,\vec{U}^{m+1},\ \vec{\chi}\,r\right)-\left(\rho^{m-1}\,\vec{U}^{m},\ \vec{\chi}\circ\vec{\mathcal{A}}^{m}[t_{m-1}]^{-1}\,r\right)\right]-\mathcal{B}(\rho^{m},\vec{W}^{m};\vec{U}^{m+1}\cdot\vec{\chi})+\mathcal{A}(\rho^{m},\vec{U}^{m}_{\mathcal{A}}-\vec{W}^{m};\vec{U}^{m+1},\vec{\chi})\]
\[\qquad+2\left(\mu^{m}\,r^{-1}[\vec{U}^{m+1}\cdot\vec{e}_{1}],\ [\vec{\chi}\cdot\vec{e}_{1}]\right)^{\diamond}+2\left(\mu^{m}\,r\,\underline{\underline{\mathbb{D}}}(\vec{U}^{m+1}),\ \underline{\underline{\mathbb{D}}}(\vec{\chi})\right)-\left(P^{m+1},\ \nabla\cdot[r\vec{\chi}]\right)\]
\[\qquad-\gamma\left\langle(\vec{X}^{m}\cdot\vec{e}_{1})\,\varkappa^{m+1},\ \vec{\nu}^{m}\cdot\vec{\chi}\,|\vec{X}^{m}_{\alpha}|\right\rangle=\left(\rho^{m}\,\vec{g},\ \vec{\chi}\,r\right)\qquad\forall\vec{\chi}\in\mathbb{U}^{m}, \tag{4.24a}\]
\[\left(\nabla\cdot[r\vec{U}^{m+1}],\ q\right)=0\qquad\forall q\in\mathbb{P}^{m}, \tag{4.24b}\]
\[\frac{1}{\Delta t}\Big{\langle}(\vec{X}^{m}\cdot\vec{e}_{1})\,(\vec{X}^{m+1}-\vec{X}^{m}),\ \zeta\,\vec{\nu}^{m}\,|\vec{X}^{m}_{\alpha}|\Big{\rangle}-\left\langle(\vec{X}^{m}\cdot\vec{e}_{1})\,\vec{U}^{m+1}\cdot\vec{\nu}^{m},\ \zeta\,|\vec{X}^{m}_{\alpha}|\right\rangle=0\qquad\forall\zeta\in V^{h}, \tag{4.24c}\]
\[\left\langle(\vec{X}^{m}\cdot\vec{e}_{1})\,\varkappa^{m+1},\ \vec{\eta}\cdot\vec{\nu}^{m}\,|\vec{X}^{m}_{\alpha}|\right\rangle+\left\langle\vec{\eta}\cdot\vec{e}_{1},\ |\vec{X}^{m+1}_{\alpha}|\right\rangle+\left\langle(\vec{X}^{m}\cdot\vec{e}_{1})\,\vec{X}^{m+1}_{\alpha},\ \vec{\eta}_{\alpha}\,|\vec{X}^{m}_{\alpha}|^{-1}\right\rangle=0\qquad\forall\vec{\eta}\in V^{h}_{\partial}, \tag{4.24d}\]
where we introduced the time-integrated term
\[\mathcal{B}(\rho^{m},\vec{W}^{m};\varphi)=\frac{1}{2\Delta t}\int_{t_{m-1}}^{t_{m}}\left(\rho^{m}\circ\vec{\mathcal{A}}^{m}[t]^{-1}\,\nabla\cdot[r\,\vec{W}^{m}\circ\vec{\mathcal{A}}^{m}[t]^{-1}],\ \varphi\circ\vec{\mathcal{A}}^{m}[t]^{-1}\right)\mathrm{d}t.\]
We have the following theorem for the introduced method (4.24).
**Theorem 4.4**.: _If \((\tilde{U}^{m+1},P^{m+1},\breve{X}^{m+1},\varkappa^{m+1})\) is a solution to (4.24) for \(m=0,\cdots,M-1\), then the energy stability estimate (4.17) holds._
Proof.: We choose \(\vec{\chi}=\Delta t\,\vec{U}^{m+1}\) in (4.24a), \(q=P^{m+1}\) in (4.24b), \(\zeta=\Delta t\,\gamma\,\varkappa^{m+1}\) in (4.24c) and \(\vec{\eta}=\gamma\,(\vec{X}^{m+1}-\vec{X}^{m})\) in (4.24d), and combine these equations. Then the proof is very similar to that of Theorem 4.3, and it remains to show the stability for the fluid kinetic energy
\[\left(\rho^{m}\,\vec{U}^{m+1},\ \vec{U}^{m+1}\,r\right)-\left(\rho^{m-1}\,\vec{U}^{m},\ \vec{U}^{m+1}\circ\vec{\mathcal{A}}^{m}[t_{m-1}]^{-1}\,r\right)-\Delta t\,\mathcal{B}(\rho^{m},\vec{W}^{m};|\vec{U}^{m+1}|^{2})\]
\[\qquad\geq\frac{1}{2}\Big{(}\rho^{m}\,|\vec{U}^{m+1}|^{2},\ r\Big{)}-\frac{1}{2}\Big{(}\rho^{m-1}\,|\vec{U}^{m}|^{2},\ r\Big{)}. \tag{4.25}\]
In view of the inequality \(2\,\vec{a}\cdot\vec{b}\leq|\vec{a}|^{2}+|\vec{b}|^{2}\), we have
\[\frac{1}{2}\Big{(}\rho^{m-1}|\tilde{U}^{m}|^{2},\;r\Big{)}+\frac{1}{2}\Big{(} \rho^{m-1}|\tilde{U}^{m+1}\circ\breve{\mathcal{A}}^{m}[t_{m-1}]^{-1}|^{2},\;r \Big{)}\geq\Big{(}\rho^{m-1}\,\tilde{U}^{m},\;\tilde{U}^{m+1}\circ\breve{ \mathcal{A}}^{m}[t_{m-1}]^{-1}\,r\Big{)}. \tag{4.26}\]
Moreover, choosing \(\varphi=|\tilde{U}^{m+1}|^{2}\) in (4.10), then multiplying (4.10) with \(\rho_{\pm}\) and adding the two equations leads to
\[\frac{1}{2}\Big{(}\rho^{m}\,|\tilde{U}^{m+1}|^{2},\;r\Big{)}-\frac{1}{2}\Big{(} \rho^{m-1}\,|\tilde{U}^{m+1}\circ\breve{\mathcal{A}}^{m}[t_{m-1}]^{-1}|^{2},\;r \Big{)}=\Delta t\,\mathcal{B}(\rho^{m},\,\breve{W}^{m};|\tilde{U}^{m+1}|^{2}). \tag{4.27}\]
Combining (4.26) and (4.27) then implies (4.25).
Based on (3.28) and (4.23), it is natural to introduce the following conservative ALE method that enjoys the property of equidistribution. With the same discrete initial data, for \(m\geq 0\), we find \(\vec{U}^{m+1}\in\mathbb{U}^{m}\), \(P^{m+1}\in\mathbb{P}^{m}\), \(\vec{X}^{m+1}\in V^{h}_{\partial}\) and \(\kappa^{m+1}\in V^{h}\) such that
\[\frac{1}{\Delta t}\left[\left(\rho^{m}\,\vec{U}^{m+1},\ \vec{\chi}\,r\right)-\left(\rho^{m-1}\,\vec{U}^{m},\ \vec{\chi}\circ\vec{\mathcal{A}}^{m}[t_{m-1}]^{-1}\,r\right)\right]-\mathcal{B}(\rho^{m},\vec{W}^{m};\vec{U}^{m+1}\cdot\vec{\chi})+\mathcal{A}(\rho^{m},\vec{U}^{m}_{\mathcal{A}}-\vec{W}^{m};\vec{U}^{m+1},\vec{\chi})\]
\[\qquad+2\left(\mu^{m}\,r^{-1}[\vec{U}^{m+1}\cdot\vec{e}_{1}],\ [\vec{\chi}\cdot\vec{e}_{1}]\right)^{\diamond}+2\left(\mu^{m}\,r\,\underline{\underline{\mathbb{D}}}(\vec{U}^{m+1}),\ \underline{\underline{\mathbb{D}}}(\vec{\chi})\right)-\left(P^{m+1},\ \nabla\cdot[r\vec{\chi}]\right)\]
\[\qquad-\gamma\left\langle(\vec{X}^{m}\cdot\vec{e}_{1})\,\kappa^{m+1}-\vec{\nu}^{m}\cdot\vec{e}_{1},\ \vec{\nu}^{m}\cdot\vec{\chi}\,|\vec{X}^{m}_{\alpha}|\right\rangle=\left(\rho^{m}\,\vec{g},\ \vec{\chi}\,r\right)\qquad\forall\vec{\chi}\in\mathbb{U}^{m}, \tag{4.28a}\]
\[\left(\nabla\cdot[r\vec{U}^{m+1}],\ q\right)=0\qquad\forall q\in\mathbb{P}^{m}, \tag{4.28b}\]
\[\frac{1}{\Delta t}\Big{\langle}(\vec{X}^{m}\cdot\vec{e}_{1})\,(\vec{X}^{m+1}-\vec{X}^{m}),\ \zeta\,\vec{\nu}^{m}\,|\vec{X}^{m}_{\alpha}|\Big{\rangle}-\left\langle(\vec{X}^{m}\cdot\vec{e}_{1})\,\vec{U}^{m+1}\cdot\vec{\nu}^{m},\ \zeta\,|\vec{X}^{m}_{\alpha}|\right\rangle=0\qquad\forall\zeta\in V^{h}, \tag{4.28c}\]
\[\left\langle\kappa^{m+1}\,\vec{\nu}^{m},\ \vec{\eta}\,|\vec{X}^{m}_{\alpha}|\right\rangle^{h}+\left\langle\vec{X}^{m+1}_{\alpha},\ \vec{\eta}_{\alpha}\,|\vec{X}^{m}_{\alpha}|^{-1}\right\rangle=0\qquad\forall\vec{\eta}\in V^{h}_{\partial}. \tag{4.28d}\]
### Volume-preserving approximations
We recall from §3 that it is possible to prove volume preservation for the weak formulations on choosing suitable test functions, see (3.24). However, for the ALE methods introduced in §4.3 and §4.4, the volume of the two discrete phases will in general not be exactly preserved. In fact, in order to enable exact volume preservation on the discrete level, it turns out that suitable time-weighted approximations of the interface normals are necessary [6, 7, 29]. We have a discrete analogue of (3.22b) as follows, and its proof can be found in [6, Lemma 3.1].
**Lemma 4.5**.: _Let \(\vec{X}^{m}\in V^{h}_{\partial}\) and \(\vec{X}^{m+1}\in V^{h}_{\partial}\), then it holds that for \(m=0,1,\cdots,M-1\)_
\[\operatorname{vol}(\vec{X}^{m+1})-\operatorname{vol}(\vec{X}^{m})=2\pi(\vec{X }^{m+1}-\vec{X}^{m},\ \vec{f}^{m+\frac{1}{2}}), \tag{4.29}\]
_where we introduced \(\vec{f}^{m+\frac{1}{2}}\in[L^{\infty}(\mathbb{I})]^{2}\) as an appropriate treatment of the quantity \(\vec{f}=(\vec{X}\cdot\vec{e}_{1})\,|\vec{X}_{\alpha}|\,\vec{\nu}\):_
\[\vec{f}^{m+\frac{1}{2}}=-\frac{1}{6}[2(\vec{X}^{m}\cdot\vec{e}_{1})\,\vec{X}^{m}_{\alpha}+2(\vec{X}^{m+1}\cdot\vec{e}_{1})\,\vec{X}^{m+1}_{\alpha}+(\vec{X}^{m}\cdot\vec{e}_{1})\,\vec{X}^{m+1}_{\alpha}+(\vec{X}^{m+1}\cdot\vec{e}_{1})\,\vec{X}^{m}_{\alpha}]^{\perp}. \tag{4.30}\]
Similarly to [29], we are now ready to adapt the methods (4.15) and (4.24) to achieve structure-preserving approximations, meaning that the volume preservation and energy stability are satisfied on the discrete level. This can be done easily by replacing (4.15c)-(4.15d) and (4.24c)-(4.24d) with
\[\frac{1}{\Delta t}\langle\vec{X}^{m+1}-\vec{X}^{m},\ \zeta\,\vec{f}^{m+\frac{1}{2}}\rangle-\langle(\vec{X}^{m}\cdot\vec{e}_{1})\tilde{U}^{m+1}\cdot\vec{\nu}^{m},\ \zeta\,|\vec{X}^{m}_{\alpha}|\rangle=0\qquad\forall\zeta\in V^{h}, \tag{4.31a}\] \[\langle\varkappa^{m+1},\ \vec{\eta}\cdot\vec{f}^{m+\frac{1}{2}}\rangle+\langle\vec{\eta}\cdot\vec{e}_{1},\ |\vec{X}^{m+1}_{\alpha}|\rangle+\langle(\vec{X}^{m}\cdot\vec{e}_{1})\vec{X}^{m+1}_{\alpha},\ \vec{\eta}_{\alpha}\,|\vec{X}^{m}_{\alpha}|^{-1}\rangle=0\qquad\forall\vec{\eta}\in V^{h}_{\partial}. \tag{4.31b}\]
We have the following theorem for the new adapted methods.
**Theorem 4.6**.: _Let \((\tilde{U}^{m+1},P^{m+1},\vec{X}^{m+1},\varkappa^{m+1})\) be a solution to the adapted methods of (4.15) or (4.24) by replacing (4.15c)-(4.15d) or (4.24c)-(4.24d) with (4.31a)-(4.31b). Then the energy stability estimate (4.17) holds. Moreover, it holds for \(m=0,\cdots,M-1\) that_
\[\operatorname{vol}(\vec{X}^{m+1})=\operatorname{vol}(\vec{X}^{m}). \tag{4.32}\]
Proof. The stability estimate can be established in a similar manner to Theorem 4.3 and Theorem 4.4. For the volume preservation, on recalling (4.2), we choose
\[q=(\mathcal{X}_{\mathcal{R}^{m}_{-}}-\omega^{m})\in\mathbb{P}^{m}\quad\text{with}\qquad\omega^{m}=\frac{\int_{\mathcal{R}^{m}_{-}}r\,\mathrm{d}r\mathrm{d}z}{\int_{\mathcal{R}}r\,\mathrm{d}r\mathrm{d}z}\]
in (4.15b) or (4.24b), where \(\mathcal{X}_{\mathcal{R}^{m}_{-}}\) denotes the characteristic function of \(\mathcal{R}^{m}_{-}\), and obtain
\[0 =(\nabla\cdot[r\tilde{U}^{m+1}],\ q)=\int_{\mathcal{R}^{m}_{-}}\nabla\cdot[r\tilde{U}^{m+1}]\,\mathrm{d}r\mathrm{d}z-\omega^{m}\int_{\mathcal{R}}\nabla\cdot[r\tilde{U}^{m+1}]\,\mathrm{d}r\mathrm{d}z\] \[=\int_{\Gamma^{m}}(\vec{X}^{m}\cdot\vec{e}_{1})(\tilde{U}^{m+1}\cdot\vec{\nu}^{m})\,\mathrm{d}s=\Big{\langle}\vec{X}^{m}\cdot\vec{e}_{1},\ \tilde{U}^{m+1}\cdot\vec{\nu}^{m}\,|\vec{X}^{m}_{\alpha}|\Big{\rangle}. \tag{4.33}\]
On the other hand, setting \(\zeta=\Delta t\) in (4.31a) yields that
\[\left\langle\vec{X}^{m+1}-\vec{X}^{m},\ \vec{f}^{m+\frac{1}{2}}\right\rangle=\Delta t\Big{\langle}\vec{X}^{m}\cdot\vec{e}_{1},\ \tilde{U}^{m+1}\cdot\vec{\nu}^{m}\,|\vec{X}^{m}_{\alpha}|\Big{\rangle}=0,\]
which implies (4.32) on recalling (4.33) and Lemma 4.5.
Similarly, we have the following theorem for the volume-preserving variants of the methods (4.23) and (4.28).
**Theorem 4.7**.: _Let \((\vec{U}^{m+1},P^{m+1},\vec{X}^{m+1},\kappa^{m+1})\) be a solution to the adapted method of (4.23) or (4.28), which is obtained by replacing (4.23c) or (4.28c) with (4.31a). Then it holds for \(m=0,\cdots,M-1\) that_
\[\mathrm{vol}(\vec{X}^{m+1})=\mathrm{vol}(\vec{X}^{m}). \tag{4.34}\]
## 5 Numerical results
### Solutions of the discrete systems
The introduced ALE methods in §4 are summarized in Table 1, where the adapted variants refer to the corresponding volume-preserving methods, indicated in the name of the scheme with a "V". We construct the discrete ALE mappings explicitly as described in §4.2, and the methods n-Equi\({}^{h}\) and c-Equi\({}^{h}\) lead to a system of linear equations, which can be written in matrix form as
\[\begin{pmatrix}\underline{B}_{\mathcal{R}}^{m}&\vec{C}_{\mathcal{R}}^{m}&-\gamma\vec{N}_{\Gamma,\mathcal{R}}^{m}&0\\ [\vec{C}_{\mathcal{R}}^{m}]^{T}&0&0&0\\ [\vec{N}_{\Gamma,\mathcal{R}}^{m}]^{T}&0&0&-\frac{1}{\Delta t}[\vec{N}_{\Gamma}^{m}]^{T}\\ 0&0&\vec{\mathcal{N}}_{\Gamma}^{m}&\underline{\underline{\Delta}}_{\Gamma}^{m}\end{pmatrix}\begin{pmatrix}\vec{U}^{m+1}\\ P^{m+1}\\ \kappa^{m+1}\\ \delta\vec{X}^{m+1}\end{pmatrix}=\begin{pmatrix}\vec{c}^{m}\\ 0\\ 0\\ -\underline{\underline{\Delta}}_{\Gamma}^{m}\vec{X}^{m}\end{pmatrix}, \tag{5.1}\]
where, with a slight abuse of notation, we denote by \((\vec{U}^{m+1},P^{m+1},\kappa^{m+1},\delta\vec{X}^{m+1})\) the coefficients of these finite element functions with respect to the standard bases of the corresponding finite element spaces, see [10, (5.1a)], and \(\delta\vec{X}^{m+1}=\vec{X}^{m+1}-\vec{X}^{m}\). Denote
\[\Xi_{\Gamma}^{m}:=\begin{pmatrix}0&-\frac{1}{\Delta t}[\vec{N}_{\Gamma}^{m}] ^{T}\\ \vec{\mathcal{N}}_{\Gamma}^{m}&\underline{\underline{\Delta}}_{\Gamma}^{m} \end{pmatrix}.\]
Applying the Schur complement approach to (5.1) gives rise to two linear subsystems [2, 10]
\[\begin{pmatrix}\underline{B}_{\mathcal{R}}^{m}+\gamma\begin{pmatrix}\vec{N}_{\Gamma,\mathcal{R}}^{m}&0\end{pmatrix}[\Xi_{\Gamma}^{m}]^{-1}\begin{pmatrix}[\vec{N}_{\Gamma,\mathcal{R}}^{m}]^{T}\\ 0\end{pmatrix}&\vec{C}_{\mathcal{R}}^{m}\\ [\vec{C}_{\mathcal{R}}^{m}]^{T}&0\end{pmatrix}\begin{pmatrix}\vec{U}^{m+1}\\ P^{m+1}\end{pmatrix}=\begin{pmatrix}\vec{c}^{m}-\gamma\begin{pmatrix}\vec{N}_{\Gamma,\mathcal{R}}^{m}&0\end{pmatrix}[\Xi_{\Gamma}^{m}]^{-1}\begin{pmatrix}0\\ \underline{\underline{\Delta}}_{\Gamma}^{m}\vec{X}^{m}\end{pmatrix}\\ 0\end{pmatrix}, \tag{5.2}\]
and
\[\begin{pmatrix}\kappa^{m+1}\\ \delta\vec{X}^{m+1}\end{pmatrix}=[\Xi_{\Gamma}^{m}]^{-1}\begin{pmatrix}-[\vec{N}_{\Gamma,\mathcal{R}}^{m}]^{T}\,\vec{U}^{m+1}\\ -\underline{\underline{\Delta}}_{\Gamma}^{m}\,\vec{X}^{m}\end{pmatrix}. \tag{5.3}\]
In practice, we solve (5.2) using the preconditioned GMRES method, and (5.3) with the help of a sparse LU factorization.
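To make the linear-algebra workflow concrete, the following is a minimal sketch of this Schur-complement strategy in Python with SciPy. All matrices are random placeholders standing in for the blocks of (5.1) (\(\underline{B}_{\mathcal{R}}^{m}\), \(\vec{N}_{\Gamma,\mathcal{R}}^{m}\), \(\Xi_{\Gamma}^{m}\), etc.); the sizes, densities and the value of \(\gamma\) are toy choices, not taken from the paper.

```python
# Sketch of the solve for (5.1): eliminate (kappa, delta X) via the Schur
# complement (5.2), solve the reduced system with GMRES, and recover the
# interface unknowns through a sparse LU factorization of Xi, cf. (5.3).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
nu, nk, gamma = 40, 10, 1.0                    # toy sizes and surface tension

B = 10.0 * sp.eye(nu) + sp.random(nu, nu, density=0.2, random_state=1)
N = sp.random(nu, nk, density=0.3, random_state=2)
R = sp.random(2 * nk, 2 * nk, density=0.2, random_state=3)
Xi = (R @ R.T + 5.0 * sp.eye(2 * nk)).tocsc()  # toy stand-in for Xi_Gamma^m
c = rng.standard_normal(nu)
AX = rng.standard_normal(nk)                   # stand-in for Delta_Gamma^m X^m

lu = spla.splu(Xi)                             # factorize Xi once per time step

def schur_mv(u):                               # action of B + gamma (N 0) Xi^{-1} (N^T; 0)
    kappa_part = lu.solve(np.concatenate([N.T @ u, np.zeros(nk)]))[:nk]
    return B @ u + gamma * (N @ kappa_part)

A_op = spla.LinearOperator((nu, nu), matvec=schur_mv)
rhs = c - gamma * (N @ lu.solve(np.concatenate([np.zeros(nk), AX]))[:nk])
u, info = spla.gmres(A_op, rhs, restart=50, maxiter=2000)      # system (5.2)
kx = lu.solve(np.concatenate([-(N.T @ u), -AX]))               # recovery (5.3)
print(info, np.linalg.norm(schur_mv(u) - rhs))                 # 0, small residual
```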
The nonlinear systems of equations resulting from the other introduced methods are solved via a lagged Picard-type iteration. For example, for the numerical method n-StabV\({}^{h}\), which is given by (4.15a)-(4.15b) and (4.31a)-(4.31b), the iterations at the \(m\)-th time step are given as follows. On setting \(\breve{X}^{m+1,0}=\breve{X}^{m}\), for \(\ell\geq 0\), we find
\(\tilde{U}^{m+1,\ell+1}\in\mathbb{U}^{m}\), \(P^{m+1,\ell+1}\in\mathbb{P}^{m}\), \(\varkappa^{m+1,\ell+1}\in V^{h}\) and \(\breve{X}^{m+1,\ell+1}\in V^{h}_{\partial}\) such that
\[\left(\rho^{m}\frac{\tilde{U}^{m+1,\ell+1}-\tilde{U}^{m}_{\mathcal{A}}\sqrt{\left(1-\,\Delta t\,r^{-1}[\tilde{W}^{m}\cdot\tilde{e}_{1}]\right)\mathcal{J}^{m}}}{\Delta t},\;\breve{\chi}\,r\right)^{\diamond}+\mathcal{A}(\rho^{m},\,\tilde{U}^{m}_{\mathcal{A}}-\tilde{W}^{m};\,\tilde{U}^{m+1,\ell+1},\breve{\chi})\] \[\qquad\qquad+2\left(\mu^{m}\,r^{-1}\,[\tilde{U}^{m+1,\ell+1}\cdot\tilde{e}_{1}],\;[\breve{\chi}\cdot\tilde{e}_{1}]\right)^{\diamond}+2\left(\mu^{m}\,r\,\underline{\mathbb{D}}(\tilde{U}^{m+1,\ell+1}),\;\underline{\mathbb{D}}(\breve{\chi})\right)-\left(P^{m+1,\ell+1},\;\nabla\cdot[r\breve{\chi}]\right)\] \[\qquad\qquad-\;\gamma\left((\breve{X}^{m}\cdot\tilde{e}_{1})\varkappa^{m+1,\ell+1},\;\breve{\nu}^{m}\cdot\breve{\chi}\,|\breve{X}^{m}_{\alpha}|\right)=\left(\rho^{m}\breve{g},\;\breve{\chi}\,r\right)\qquad\forall\breve{\chi}\in\mathbb{U}^{m}, \tag{5.4a}\] \[\left(\nabla\cdot[r\tilde{U}^{m+1,\ell+1}],\;q\right)=0\qquad\forall q\in\mathbb{P}^{m},\] (5.4b) \[\frac{1}{\Delta t}\langle\breve{X}^{m+1,\ell+1}-\breve{X}^{m},\;\zeta\,\breve{f}^{m+\frac{1}{2},\ell}\rangle-\langle(\breve{X}^{m}\cdot\tilde{e}_{1})\tilde{U}^{m+1,\ell+1}\cdot\breve{\nu}^{m},\;\zeta\,|\breve{X}^{m}_{\alpha}|\rangle=0\qquad\forall\zeta\in V^{h},\] (5.4c) \[\langle\varkappa^{m+1,\ell+1},\;\breve{\eta}\cdot\breve{f}^{m+\frac{1}{2},\ell}\rangle+\langle\breve{\eta}\cdot\tilde{e}_{1},\;|\breve{X}^{m+1,\ell}_{\alpha}|\rangle+\langle(\breve{X}^{m}\cdot\tilde{e}_{1})\breve{X}^{m+1,\ell+1}_{\alpha},\;\breve{\eta}_{\alpha}\,|\breve{X}^{m}_{\alpha}|^{-1}\rangle=0\qquad\forall\breve{\eta}\in V^{h}_{\partial}, \tag{5.4d}\]
where \(\breve{f}^{m+\frac{1}{2},\ell}\) is a lagged approximation which follows (4.30) directly, except that \(\breve{X}^{m+1}\) is replaced by \(\breve{X}^{m+1,\ell}\). The equations in (5.4) lead to a linear system which can be written in a form very similar to (5.1), and thus can be solved via the techniques introduced above. In fact, in the new linear system, \(\breve{\mathcal{N}}^{m}_{\Gamma}\) is replaced by \(\breve{N}^{m}_{\Gamma}\), while the right hand side receives an additional contribution from the second term in (5.4d). We repeat the above iterations for \(\ell=0,1,\cdots\) until the following condition is satisfied
\[\max_{1\leq j\leq I_{\Gamma}}|\breve{X}^{m+1,\ell+1}(\alpha_{j})-\breve{X}^{m +1,\ell}(\alpha_{j})|\leq\text{tol}, \tag{5.5}\]
where \(\text{tol}\) is the chosen numerical tolerance.
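The overall structure of one time step of this lagged Picard iteration, together with the stopping test (5.5), can be sketched as follows. The routine `solve_linearized` is a hypothetical stand-in for one solve of the linear system (5.4); it is not part of the paper's implementation.

```python
import numpy as np

def picard_step(X_m, solve_linearized, dt, tol=1e-8, max_iter=50):
    """Advance the interface nodes X_m (shape (J, 2)) by one time step."""
    X_prev = X_m.copy()                       # X^{m+1,0} = X^m
    for ell in range(max_iter):
        U, P, kappa, X_next = solve_linearized(X_m, X_prev, dt)
        # stopping criterion (5.5): max nodal displacement between iterates
        if np.max(np.linalg.norm(X_next - X_prev, axis=1)) <= tol:
            return U, P, kappa, X_next
        X_prev = X_next
    raise RuntimeError("Picard iteration did not converge")
```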
In the following, we test the introduced ALE methods on the experiments of a rising bubble and an oscillating droplet. Unless otherwise stated, we always choose \(\tilde{U}^{0}=\vec{0}\) and \(\text{tol}=10^{-8}\) in (5.5). We also introduce the discrete quantities
\[\alpha_{\min}|_{t_{m}}:=\min_{\sigma\in\mathcal{T}^{m}}\min_{\alpha\in\angle(\sigma)}\alpha,\qquad\Psi_{e}|_{t_{m}}:=\frac{\max_{j=1,\cdots,J_{\Gamma}}|\vec{X}^{m}(\alpha_{j})-\vec{X}^{m}(\alpha_{j-1})|}{\min_{j=1,\cdots,J_{\Gamma}}|\vec{X}^{m}(\alpha_{j})-\vec{X}^{m}(\alpha_{j-1})|},\qquad v_{\Delta}|_{t_{m}}:=\frac{\text{vol}(\vec{X}^{m})}{\text{vol}(\vec{X}^{0})}-1,\]
where \(\angle(\sigma)\) is the set of all the angles of the simplex \(\sigma\). Here \(\alpha_{\min}\) and \(\Psi_{e}\) measure the quality of the bulk mesh and interface mesh, respectively, and \(v_{\Delta}\) is the relative volume loss of the inner phase. In practice, we observe that the
\begin{table}
\begin{tabular}{c|c c c c} \hline & (4.15) & (4.23) & (4.24) & (4.28) \\ \hline original method & n-Stab\({}^{h}\) & n-Equi\({}^{h}\) & c-Stab\({}^{h}\) & c-Equi\({}^{h}\) \\ \hline unconditional stability & \(\checkmark\) & & \(\checkmark\) & \\ \hline equidistribution & & \(\checkmark\) & & \(\checkmark\) \\ \hline volume-preserving variant & n-StabV\({}^{h}\) & n-EquiV\({}^{h}\) & c-StabV\({}^{h}\) & c-EquiV\({}^{h}\) \\ \hline \end{tabular}
\end{table}
Table 1: Summary of the introduced ALE methods and their structure-preserving properties.
interface mesh is well preserved, especially for the ALE methods that enjoy the property of equidistribution. Besides, the moving mesh approach described in §4.2 in general works smoothly, except when the interface exhibits strong deformations. To avoid mesh deterioration, we regenerate the bulk mesh once the following condition is violated
\[\alpha_{\min}\geq\frac{\pi}{18}.\]
After the remeshing, we then need to appropriately interpolate the fluid velocity and the bulk mesh velocity to the newly generated mesh, so that the discrete ALE mappings in (4.6) are well defined.
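For reference, the three quality indicators and the remeshing test can be evaluated directly from the discrete data. The following is a minimal sketch (our own helper code, not the paper's implementation); the enclosed volume is computed by integrating \(\pi r^{2}\,\mathrm{d}z\) exactly over the piecewise-linear generating curve, assuming its endpoints lie on the \(z\)-axis.

```python
import numpy as np

def alpha_min(nodes, tris):
    """Smallest interior angle (radians) over a triangle mesh;
    the remeshing test above requires alpha_min >= pi/18."""
    def angles_at(p, q, r):                   # angle at vertex p
        u, v = q - p, r - p
        c = np.einsum('ij,ij->i', u, v) / (
            np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1))
        return np.arccos(np.clip(c, -1.0, 1.0))
    a, b, c = (nodes[tris[:, i]] for i in range(3))
    return min(angles_at(a, b, c).min(), angles_at(b, c, a).min(),
               angles_at(c, a, b).min())

def psi_e(X):
    """Ratio of the longest to the shortest segment of the polyline X."""
    seg = np.linalg.norm(np.diff(X, axis=0), axis=1)
    return seg.max() / seg.min()

def vol(X):
    """Volume of the solid of revolution with generating curve X = (r, z)."""
    r, z = X[:, 0], X[:, 1]
    frusta = (r[:-1] ** 2 + r[:-1] * r[1:] + r[1:] ** 2) / 3.0  # exact per segment
    return abs(np.pi * np.sum(frusta * np.diff(z)))

def v_delta(X_m, X_0):
    """Relative volume loss of the inner phase at time t_m."""
    return vol(X_m) / vol(X_0) - 1.0
```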
### The rising bubble
We study the dynamics of a rising bubble, which was considered in 2d in [38]; see also the generalization to 3d in [10] and to the rotationally symmetric setting in [29]. The physical parameters are given by
* Case I: \[\rho_{+}=1000,\quad\rho_{-}=100,\quad\mu_{+}=10,\quad\mu_{-}=1,\quad\gamma=24.5,\quad\vec{g}=(0,-0.98)^{T}.\]
* Case II: \[\rho_{+}=1000,\quad\rho_{-}=1,\quad\mu_{+}=10,\quad\mu_{-}=0.1,\quad\gamma=1.96,\quad\vec{g}=(0,-0.98)^{T}.\]
We follow the numerical setting from [29] and use \(\mathcal{R}=[0,\frac{1}{2}]\times[0,2]\) with \(\partial_{1}\mathcal{R}=[0,0.5]\times\{0,2\},\partial_{2}\mathcal{R}=\{0.5 \}\times[0,2]\). The initial interface is given by \(\Gamma(0)=\left\{\vec{x}\in\mathcal{R}:\,|\vec{x}-(0,\frac{1}{2})^{T}|=\frac {1}{4}\right\}\). To monitor the state of the rising bubble, we introduce the discrete benchmark quantities
\[\psi|_{t_{m}}:=\frac{\pi^{\frac{1}{3}}[6\,M(\vec{X}^{m})]^{\frac{2}{3}}}{A(\vec{X}^{m})},\qquad V_{c}|_{t_{m}}:=\frac{2\pi\int_{\mathcal{R}^{m}_{-}}(\vec{U}^{m}\cdot\vec{e}_{2})\,r\,\mathrm{d}r\mathrm{d}z}{\mathrm{vol}(\vec{X}^{m})},\qquad z_{c}|_{t_{m}}:=\frac{2\pi\int_{\mathcal{R}^{m}_{-}}(\vec{\mathrm{id}}\cdot\vec{e}_{2})\,r\,\mathrm{d}r\mathrm{d}z}{\mathrm{vol}(\vec{X}^{m})},\]
where \(\psi\) denotes the degree of sphericity, \(V_{c}\) is the rise velocity, and \(z_{c}\) is the centre of mass in the vertical direction.
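A small sketch of how the sphericity can be evaluated from the generating curve (our own helper code, not the paper's; here we assume the surface area is \(A=2\pi\int_{\Gamma}r\,\mathrm{d}s\) and the enclosed volume is computed as in the previous sketch, both exact on the conical frusta of the polygonal curve):

```python
import numpy as np

def area(X):
    """Surface area 2*pi*int_Gamma r ds of the axisymmetric interface."""
    r = X[:, 0]
    seg = np.linalg.norm(np.diff(X, axis=0), axis=1)
    return np.pi * np.sum((r[:-1] + r[1:]) * seg)   # pi*(r0+r1)*slant per frustum

def enclosed_volume(X):                              # as in the previous sketch
    r, z = X[:, 0], X[:, 1]
    frusta = (r[:-1] ** 2 + r[:-1] * r[1:] + r[1:] ** 2) / 3.0
    return abs(np.pi * np.sum(frusta * np.diff(z)))

def sphericity(X):
    """Degree of sphericity: equals 1 for a sphere, < 1 otherwise."""
    return np.pi ** (1.0 / 3.0) * (6.0 * enclosed_volume(X)) ** (2.0 / 3.0) / area(X)

# sanity check on a discretized sphere of radius 0.25 centred at (0, 0.5)
theta = np.linspace(0.0, np.pi, 200)
X = np.stack([0.25 * np.sin(theta), 0.5 + 0.25 * np.cos(theta)], axis=1)
print(sphericity(X))   # ~1.0
```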
**Example 1**: We first simulate the rising bubble in case I, using the introduced ALE methods from §4.3 and §4.4. We employ three different computational meshes for each considered method, and the discrete benchmark quantities are reported in Table 2. Based on the data, we can conclude that (i) the nonconservative and conservative ALE methods produce very similar numerical results, despite the different treatments of the inertia terms in the two-phase Navier-Stokes equations; and (ii) numerical convergence is observed as the mesh is refined, with the relative volume loss exhibiting second-order convergence. As the numerical results for the nonconservative and conservative methods are almost identical and graphically indistinguishable, in the following presentations we will only show the results from the nonconservative ALE methods.
The benchmark quantities versus time are plotted in Fig. 2, which further verifies the numerical convergence. Visualisations of the fluid interfaces at several times are shown in Fig. 3. In the same figure, we also plot the time history of the energy, the relative volume loss and the mesh quality indicator. Here we observe an excellent agreement of the energy from the methods \(\mathrm{n}\text{-}\mathsf{Stab}^{h}\) and \(\mathrm{n}\text{-}\mathsf{Equi}^{h}\), and the volume of the inner phase is nearly preserved. Moreover, we find that the mesh indicator \(\Psi_{e}\) gradually approaches 1 for the method \(\mathrm{n}\text{-}\mathsf{Equi}^{h}\), which reflects the property of equidistribution. On the other hand, for the method \(\mathrm{n}\text{-}\mathsf{Stab}^{h}\) we observe that \(\Psi_{e}\) gradually grows, without exceeding a value of 8. This implies that the interface mesh quality for the method \(\mathrm{n}\text{-}\mathsf{Stab}^{h}\) is also preserved well.
**Example 2**: We next study the dynamics of the rising bubble in case II by using the methods introduced in §4.3, and their volume-preserving variants from §4.5. Once again we consider three different sets of discretization parameters for each method, and the numerical results are reported in Table 3. Here we observe that these methods produce very similar numerical results. In particular, the volume is exactly preserved for the volume-preserving variants \(\mathrm{n}\text{-}\mathsf{StabV}^{h}\) and \(\mathrm{n}\text{-}\mathsf{EquiV}^{h}\), which numerically verifies (4.32) and (4.34). The benchmark quantities and the evolving fluid interface are shown in Fig. 4 and Fig. 5, respectively.
\begin{table}
\begin{tabular}{c|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{n-Stab\({}^{h}\) in (4.15)} & \multicolumn{3}{c}{n-Equi\({}^{h}\) in (4.23)} \\ \hline \((h,\Delta t)\) & \((h_{0},\Delta t_{0})\) & \((\frac{h_{0}}{2},\frac{\Delta t_{0}}{4})\) & \((\frac{h_{0}}{4},\frac{\Delta t_{0}}{16})\) & \((h_{0},\Delta t_{0})\) & \((\frac{h_{0}}{2},\frac{\Delta t_{0}}{4})\) & \((\frac{h_{0}}{4},\frac{\Delta t_{0}}{16})\) \\ \hline \(\psi_{\min}\) & 0.9490 & 0.9484 & 0.9483 & 0.9473 & 0.9480 & 0.9482 \\ \hline \(t_{\psi=\psi_{\min}}\) & 3.0000 & 3.0000 & 3.0000 & 3.0000 & 3.0000 & 3.0000 \\ \hline \(V_{c,\max}\) & 0.3689 & 0.3664 & 0.3657 & 0.3677 & 0.3661 & 0.3656 \\ \hline \(t_{V_{c}=V_{c,\max}}\) & 0.9100 & 0.9100 & 0.9119 & 0.9200 & 0.9150 & 0.9125 \\ \hline \(z_{c}(t=3)\) & 1.4925 & 1.4890 & 1.4882 & 1.4892 & 1.4883 & 1.4880 \\ \hline \(v_{\Delta}(t=3)\) & -8.80E-4 & -2.28E-4 & -5.74E-5 & -4.10E-4 & -1.07E-4 & -2.71E-5 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Benchmark quantities of the rising bubble in case I, where \(h=1/J_{\Gamma}\) with \(h_{0}=2^{-4}\) and \(\Delta t_{0}=0.01\).
\begin{table}
\begin{tabular}{c|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{n-Stab\({}^{h}\)} & \multicolumn{3}{c}{n-Equi\({}^{h}\)} \\ \hline \((h,\Delta t)\) & \((h_{0},\Delta t_{0})\) & \((\frac{h_{0}}{2},\frac{\Delta t_{0}}{4})\) & \((\frac{h_{0}}{4},\frac{\Delta t_{0}}{16})\) & \((h_{0},\Delta t_{0})\) & \((\frac{h_{0}}{2},\frac{\Delta t_{0}}{4})\) & \((\frac{h_{0}}{4},\frac{\Delta t_{0}}{16})\) \\ \hline \(\psi_{\min}\) & 0.7504 & 0.7538 & 0.7547 & 0.7411 & 0.7511 & 0.7540 \\ \hline \(t_{\psi=\psi_{\min}}\) & 1.5000 & 1.5000 & 1.5000 & 1.5000 & 1.5000 & 1.5000 \\ \hline \(V_{c,\max}\) & 0.3787 & 0.3758 & 0.3750 & 0.3780 & 0.3756 & 0.3750 \\ \hline \(t_{V_{c}=V_{c,\max}}\) & 0.5600 & 0.5600 & 0.5600 & 0.5600 & 0.5600 & 0.5600 \\ \hline \(z_{c}(t=1.5)\) & 0.9723 & 0.9722 & 0.9721 & 0.9710 & 0.9719 & 0.9720 \\ \hline \(v_{\Delta}(t=1.5)\) & -2.95E-3 & -6.89E-4 & -1.77E-4 & -6.96E-4 & -2.04E-4 & -5.14E-5 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Benchmark quantities of the rising bubble in case II, where \(h=1/J_{\Gamma}\) with \(h_{0}=2^{-4}\) and \(\Delta t_{0}=0.01\).
Figure 3: Evolution of the rising bubble in case I. On the left we show the snapshots of the fluid interface at times \(t=0,0.5,\cdots,3.0\) with visualisations of the computational mesh and the bubble at the final time. On the right are plots of the energy, the relative volume loss and the mesh quality indicator. Here \(h=\frac{1}{64}\) and \(\Delta t=1.5625\times 10^{-4}\).
Figure 2: The time history of the benchmark quantities, where (a),(b),(c) are from the method n-Stab\({}^{h}\), and (d) is from the method n-Equi\({}^{h}\).
Figure 4: The time history of the discrete benchmark quantities for the methods n-Stab\({}^{h}\) and n-Equi\({}^{h}\), where \(h=\frac{1}{64}\) and \(\Delta t=1.5625\times 10^{-4}\).
Figure 5: Evolution of the rising bubble in case II using the method n-Equi\({}^{h}\), where we show the generating curves at times \(t=0,0.3,0.6,0.9,1.2,1.5\) and the axisymmetric interfaces \(\mathcal{S}(t)\) at \(t=1.5\) with views from different directions. Here \(h=\frac{1}{64}\), \(\Delta t=1.5625\times 10^{-4}\).
### The oscillating droplet
In this subsection, we numerically study the oscillation of a levitated drop which is surrounded by a fluid of low density. Inspired by the work in [1], we consider an axisymmetric perturbation of a spherical equilibrium. In particular, the generating curve of the initial drop is given by
\[\left\{\begin{array}{ll}r(\theta,0)&=R_{0}\left[1+\varepsilon_{n,0}P_{n}(\cos \theta)-\frac{1}{2n+1}\varepsilon_{n,0}^{2}\right]\cos(\theta-\frac{\pi}{2}), \\ z(\theta,0)&=R_{0}\left[1+\varepsilon_{n,0}P_{n}(\cos\theta)-\frac{1}{2n+1} \varepsilon_{n,0}^{2}\right]\sin(\theta-\frac{\pi}{2})+1.0,\end{array}\right. \theta\in[0,\pi],\quad n\geq 2, \tag{5.6}\]
where \(R_{0}\) is the radius, \(\varepsilon_{n,0}\) is the magnitude of the perturbation, and \(P_{n}(x)\) are Legendre polynomials. For example, \(P_{2}(x)=\frac{1}{2}(3x^{2}-1)\) and \(P_{5}(x)=\frac{1}{8}(63x^{5}-70x^{3}+15x)\). Then on recalling the analytical asymptotic solution [1, (15b) and (38)], we note that the dynamic generating curve can be approximated by
\[\left\{\begin{array}{ll}r(\theta,t)&=R_{0}\left[1+\varepsilon_{n}(t)P_{n}( \cos\theta)-\frac{1}{2n+1}\varepsilon_{n}^{2}(t)\right]\cos(\theta-\frac{\pi}{ 2}),\\ z(\theta,t)&=R_{0}\left[1+\varepsilon_{n}(t)P_{n}(\cos\theta)-\frac{1}{2n+1} \varepsilon_{n}^{2}(t)\right]\sin(\theta-\frac{\pi}{2})+1.0,\end{array}\right. \theta\in[0,\pi],\quad t\geq 0,\quad n\geq 2, \tag{5.7}\]
where \(\varepsilon_{n}(t)\) is given by
\[\varepsilon_{n}(t)\approx\varepsilon_{n,0}\exp(-\lambda_{n}t)\cos(\omega_{n }t)\quad\text{with}\quad\omega_{n}=\sqrt{\omega_{n,0}^{2}-\lambda_{n}^{2}},\]
and
\[\omega_{n,0}=\sqrt{\frac{n(n-1)(n+2)\gamma}{\rho_{-}R_{0}^{3}}},\qquad\lambda _{n}=\frac{(2n+1)(n-1)\mu_{-}}{\rho_{-}R_{0}^{2}}.\]
We further introduce the radius of the droplet as \(R(\theta,t)=R_{0}\left[1+\varepsilon_{n}(t)P_{n}(\cos\theta)-\frac{1}{2n+1} \varepsilon_{n}^{2}(t)\right]\). For this experiment we use the computational domain \(\mathcal{R}=[0,0.6]\times[0,2]\) with \(\partial_{1}\mathcal{R}=[0,0.6]\times\{0,2\}\) and \(\partial_{2}\mathcal{R}=\{0.6\}\times[0,2]\), and choose
\[\rho_{+}=1,\quad\rho_{-}=1000,\quad\mu_{+}=0.01,\quad\mu_{-}=2,\quad\gamma=40,\quad\vec{g}=\vec{0},\quad R_{0}=0.3.\]
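As a quick plausibility check (our own arithmetic, not reported in the paper), these parameters give for the \(n=2\) mode \(\omega_{2,0}=\sqrt{320/27}\approx 3.44\), \(\lambda_{2}=1/9\approx 0.11\), and hence an oscillation period of roughly \(1.83\):

```python
# Evaluate the predicted frequency and damping rate for the n = 2 mode
# with the physical parameters above (rho_- = 1000, mu_- = 2, gamma = 40,
# R_0 = 0.3); values are rounded in the trailing comments.
import math

rho_in, mu_in, gamma_st, R0, n = 1000.0, 2.0, 40.0, 0.3, 2
omega_n0 = math.sqrt(n * (n - 1) * (n + 2) * gamma_st / (rho_in * R0 ** 3))
lam_n = (2 * n + 1) * (n - 1) * mu_in / (rho_in * R0 ** 2)
omega_n = math.sqrt(omega_n0 ** 2 - lam_n ** 2)
print(omega_n0, lam_n, 2 * math.pi / omega_n)   # ~3.443, ~0.111, period ~1.826
```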
**Example 3**: We first focus on the 2-mode perturbation of the droplet with \(\varepsilon_{2,0}=0.08\) in (5.6) for the initial interface. The numerical results obtained with the methods n-StabV\({}^{h}\) and n-EquiV\({}^{h}\) are compared with the approximate solution in (5.7) with \(n=2\). As shown in Fig. 6, we observe an excellent agreement between the numerical solution and the approximate solution for both introduced methods.
Snapshots of the interface and velocity fields are visualized in Fig. 7. At time \(t=0.5\), we observe that a counterclockwise vortex is generated in the upper half region, and a clockwise vortex is generated at the bottom. This implies that the droplet is spreading in the horizontal direction. At \(t=1.0\) and \(t=1.5\), by contrast, the droplet is spreading in the vertical direction. The benchmark quantities are plotted in Fig. 8, which further verifies the good properties of the introduced methods.
**Example 4**: To further test the accuracy of our introduced ALE methods, we next consider the case of a 5-mode perturbation. The same computational parameters are used except that the initial generating curve is given by (5.6) with \(n=5\), and \(\varepsilon_{n,0}=0.02\). The numerical results for \(n=5\) are shown in Fig. 9, where we observe an excellent agreement as well.
## 6 Conclusions
We proposed and analyzed a variety of ALE finite element approximations for the axisymmetric two-phase Navier-Stokes flow in both conservative and nonconservative form. The introduced methods were shown to satisfy either unconditional stability or the equidistribution property, relying on two different approximations of the mean curvature of the axisymmetric interface. With the help of time-weighted approximations of the interface normals, we further adapted the introduced methods to achieve exact volume preservation on the discrete level. Numerical examples for a rising bubble and an oscillating droplet were provided to examine the performance of the introduced methods. We observed that these methods produce very accurate results, and that volume preservation and energy stability are satisfied well on the discrete level. In the future, we will consider the application of the introduced methods to more complex problems in two-phase flow.
Figure 8: The time history of the discrete quantities for the experiment in Fig. 6.
Figure 6: \([n=2,\varepsilon_{n,0}=0.08]\) The displacement of the upper point of the generating curve on the \(z\)-axis, where the numerical results are obtained by using the methods n-StabV\({}^{h}\) (left panel) and n-EquiV\({}^{h}\) (right panel) with \(\Delta t=10^{-3}\), \(K=1348\), \(J_{\mathcal{R}}=2574\) and \(J_{\Gamma}=64\).
Figure 7: Snapshots of the fluid interface and the velocity fields at several times for the experiment in Fig. 6.
## Acknowledgements
The work of Quan Zhao was supported by the Alexander von Humboldt Foundation.
## Appendix A Differential calculus
Let \(\varphi:\mathcal{R}\times[0,T]\rightarrow\mathbb{R}\) be a scalar field. Applying the Reynolds transport theorem on \(\mathcal{R}_{\pm}(t)\) in terms of the ALE moving frame, it is not difficult to show that
\[\frac{\mathrm{d}}{\mathrm{d}t}\int_{\mathcal{R}_{\pm}(t)}\varphi\,r\,\mathrm{d}r\mathrm{d}z =\frac{\mathrm{d}}{\mathrm{d}t}\int_{\mathcal{R}_{\pm}(t)}\varphi\,[\vec{x}\cdot\vec{e}_{1}]\,\mathrm{d}r\mathrm{d}z\] \[=\int_{\mathcal{R}_{\pm}(t)}(\partial_{t}^{\circ}\varphi\,r+\varphi\,[\vec{w}\cdot\vec{e}_{1}])\mathrm{d}r\mathrm{d}z+\int_{\mathcal{R}_{\pm}(t)}\varphi\,r\,\nabla\cdot\vec{w}\,\mathrm{d}r\mathrm{d}z\] \[=\int_{\mathcal{R}_{\pm}(t)}(\partial_{t}^{\circ}\varphi\,r+\varphi\nabla\cdot[r\,\vec{w}])\mathrm{d}r\mathrm{d}z, \tag{A.1}\]
where \(\partial_{t}^{\circ}\) is defined in (3.11) as the derivative with respect to the ALE moving reference, and \(\vec{w}\) is the mesh velocity defined in (3.9). Applying \(\varphi=\rho_{\pm}\vec{u}\cdot\vec{\chi}\) to (A.1) and combining the two equations yields that
\[\frac{\mathrm{d}}{\mathrm{d}t}\Big{(}\rho\,\vec{u},\,\vec{\chi}\,r\Big{)}=\Big{(}\rho\,\partial_{t}^{\circ}\vec{u},\,\vec{\chi}\,r\Big{)}+\Big{(}\rho\,\partial_{t}^{\circ}\vec{\chi},\,\,\vec{u}\,r\Big{)}+\Big{(}\rho\,\vec{u}\cdot\vec{\chi},\,\,\nabla\cdot[r\,\vec{w}]\Big{)}, \tag{A.2}\]
where \((\cdot,\cdot)\) is the \(L^{2}\)-inner product over \(\mathcal{R}\). This immediately implies that
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\Big{(}\rho\,\vec{u},\,\vec{u}\,r\Big{)}=\Big{(}\rho\,\partial_{t}^{\circ}\vec{u},\,\,\vec{u}\,r\Big{)}+\frac{1}{2}\Big{(}\rho\,|\vec{u}|^{2},\,\,\nabla\cdot[r\,\vec{w}]\Big{)}. \tag{A.3}\]
|
2306.00435 | How Many Answers Should I Give? An Empirical Study of Multi-Answer
Reading Comprehension | The multi-answer phenomenon, where a question may have multiple answers
scattered in the document, can be well handled by humans but is challenging
enough for machine reading comprehension (MRC) systems. Despite recent progress
in multi-answer MRC, there lacks a systematic analysis of how this phenomenon
arises and how to better address it. In this work, we design a taxonomy to
categorize commonly-seen multi-answer MRC instances, with which we inspect
three multi-answer datasets and analyze where the multi-answer challenge comes
from. We further analyze how well different paradigms of current multi-answer
MRC models deal with different types of multi-answer instances. We find that
some paradigms capture well the key information in the questions while others
better model the relationship between questions and contexts. We thus explore
strategies to make the best of the strengths of different paradigms.
Experiments show that generation models can be a promising platform to
incorporate different paradigms. Our annotations and code are released for
further research. | Chen Zhang, Jiuheng Lin, Xiao Liu, Yuxuan Lai, Yansong Feng, Dongyan Zhao | 2023-06-01T08:22:21Z | http://arxiv.org/abs/2306.00435v1 | # How Many Answers Should I Give? An Empirical Study of Multi-Answer Reading Comprehension
###### Abstract
The multi-answer phenomenon, where a question may have multiple answers scattered in the document, can be well handled by humans but is challenging enough for machine reading comprehension (MRC) systems. Despite recent progress in multi-answer MRC, there lacks a systematic analysis of how this phenomenon arises and how to better address it. In this work, we design a taxonomy to categorize commonly-seen multi-answer MRC instances, with which we inspect three multi-answer datasets and analyze where the multi-answer challenge comes from. We further analyze how well different paradigms of current multi-answer MRC models deal with different types of multi-answer instances. We find that some paradigms capture well the key information in the questions while others better model the relationship between questions and contexts. We thus explore strategies to make the best of the strengths of different paradigms. Experiments show that generation models can be a promising platform to incorporate different paradigms. Our annotations and code are released for further research1.
Footnote 1: [https://github.com/luciusssss/how-many-answers](https://github.com/luciusssss/how-many-answers)
## 1 Introduction
In the typical setting of machine reading comprehension, such as SQuAD Rajpurkar et al. (2016), the system is expected to extract a single answer from the passage for a given question. However, in many scenarios, questions may have multiple answers scattered in the passages, and all the answers should be found to completely answer the questions, such as the examples illustrated in Figure 1. Recently, a series of MRC benchmarks featuring multi-answer instances have been constructed, including DROP Dua et al. (2019), Quoref Dasigi et al. (2019) and MultiSpanQA Li et al. (2022). Most current research efforts focus primarily on improving the overall QA performance on these benchmarks Hu et al. (2019); Segal et al. (2020); Li et al. (2022). Yet, as far as we know, there still lacks a systematic analysis of how the phenomenon of multi-answer arises and how we can better tackle this challenge.
In this paper, we systematically analyze the categorization of multi-answer MRC instances and investigate how to design a strong multi-answer MRC system. We try to answer the following research questions: (1) Where does the multi-answer challenge come from? (2) How do different MRC models specifically deal with the multi-answer challenge? (3) How can we design better models by combining different multi-answer MRC paradigms?
We first analyze existing multi-answer MRC datasets to track the origin of the multi-answer challenge. Previous works have attempted to categorize multi-answer instances primarily based on the distances or relationships between multiple answers (Li et al., 2022; Ju et al., 2022).
Figure 1: Two examples from existing multi-answer MRC datasets.
Yet, they did not holistically consider the interaction between questions and contexts. We observe that in some cases the number of answers is indicated in the question itself (_two players_ in Example A of Figure 1) while in others we have no idea until we read the documents carefully (Example B of Figure 1).
To better understand this challenge, we develop a taxonomy for the multi-answer phenomenon, based on how the number of answers is determined: the question itself suffices, or both the question and the passage should be taken into consideration. We annotate 6,857 instances from DROP, Quoref, and MultiSpanQA based on our taxonomy and find that the procedure of dataset construction has a large influence on the expressions in the questions. Most questions in crowdsourced datasets contain certain clues indicating the number of answers. By contrast, real-world information-seeking questions are less likely to specify the number of answers, which is usually dependent on the passages.
We further use our annotations to examine the performance of current MRC solutions regarding the multi-answer challenge (Hu et al., 2019; Segal et al., 2020; Li et al., 2022), which can be categorized into 4 paradigms, i.e., Tagging, NumPred, Iterative and Generation. We analyze their strengths and weaknesses and find that some efforts, e.g., NumPred, are good at capturing the key information in the questions, while others, e.g., Iterative, can better model the relation between questions and contexts. This motivates us to investigate better ways to benefit from different paradigms.
Given the complementary nature of these paradigms, we wonder whether a combination of paradigms improves performance on multi-answer MRC. We explore two strategies, early fusion and late ensemble, to benefit from different paradigms. With a generation model as the backbone, we attempt to integrate the paradigms NumPred and Iterative, in a lightweight Chain-of-Thought style (Wei et al., 2022). Experiments show that the integration remarkably improves the performance of generation models, demonstrating that Generation is a promising platform for paradigm fusion.
Our contributions are summarized as follows: (1) We design a taxonomy for multi-answer MRC instances according to how the number of answers can be determined. It considers both questions and contexts simultaneously, enlightening where the multi-answer challenge comes from. (2) We annotate 6,857 instances from 3 datasets with our taxonomy, which enables us to examine 4 paradigms for multi-answer MRC in terms of their strengths and weaknesses. (3) We explore various strategies to benefit from different paradigms. Experiments show that generation models are promising to be backbones for paradigm fusion.
## 2 Task Formulation
In multi-answer MRC, given a question \(Q\) and a passage \(P\), a model should extract several spans, \(A=\{a_{1},a_{2},...,a_{n}\}(n\geq 1)\), from \(P\) to answer \(Q\). Each span, \(a_{i}\in A\), corresponds to a partial answer to \(Q\), and the answer set \(A\) as a whole answers \(Q\) completely. These spans can be contiguous or discontiguous in the passage.
We distinguish between two terms, _multi-answer_ and _multi-span_, which are often confused in previous works. _Multi-answer_ indicates that a question should be answered with the complete set of entities or utterances. _Multi-span_ is a definition from the perspective of answer annotations. In certain cases, the answer annotation of a question can be either single-span or multi-span, as explained in the next paragraph. Ideally, we expect that the answers to a multi-answer question should be annotated as multi-span in the passage, where each answer is grounded to a single span, although some of them can be contiguous in the passage.
**Q0**: What's Canada's official language?
**P**: [...] **English** and **French**, are the official languages of the Government of Canada. [...]
For example, in Q0, there are two answers, _English_ and _French_, to the given question. According to the annotation guidelines of SQuAD, one might annotate this instance with a single continuous span _English and French_. Yet, this form of annotation is not preferred in the multi-answer MRC setting. It blurs the boundary of different answers and fails to denote explicitly the number of expected answers. Thus, it is suboptimal for a comprehensive model evaluation. Instead, we suggest denoting each answer with distinct spans, say, annotating this instance with two spans, _English_ and _French_. With this criterion, we can encourage models to disentangle different answers. With fine-grained answer annotations, we can also assess how well a model answers a question sufficiently and precisely.
This annotation criterion generally conforms to the annotation guidelines of existing multi-answer datasets, e.g., DROP, Quoref and MultiSpanQA.
A few instances violating the criterion are considered as bad annotations, as discussed in Section 4.2. See more remarks on the task formulation in Appendix A.
## 3 Taxonomy of Multi-Answer MRC
To better understand the challenge of multi-answer, we first design a taxonomy to categorize various multi-answer MRC instances. It assesses how the number of answers relates to the question or passage provided. Different from the previous works that classify questions according to the distances or relations between multiple answers Li et al. (2022); Ju et al. (2022), our taxonomy, taking both questions and passages into consideration, focuses on how the number of answers is determined. This enables us to analyze multi-answer questions and single-answer questions in a unified way. We illustrate our taxonomy in Figure 2 and elaborate on each category as follows.
**Question-Dependent** If one can infer the exact number of answers from the question without referring to the passage, this instance belongs to the question-dependent category. According to whether there are clue words that directly indicate the number of answers, this type is further divided into two sub-categories:
(a) In a with-clue-words question, one can find a few words that indicate the number of answers. In Q1, the word _two_ in the question indicates that two answers are expected.
**Q1**: What are the two official languages of Puerto Rico?
**P**: [...] **English** is an official language of the Government of Puerto Rico. [...] As another official language,
**Spanish** is widely used in Puerto Rico. [...]
We group the clue words into five types: cardinal, ordinal, comparative/superlative, alternative, and other lexical semantics, as illustrated in Table 1.
(b) In a without-clue-words question, although we can not locate obvious clue words, we can infer the number of answers with sentence semantics or commonsense knowledge. In Q2, we can determine that there is only one conversion result for the question based on sentence semantics instead of any single words.
**Q2**: 1 light year equal to how many km?
**P**: [...] The light-year is a unit of length used to express astronomical distances. It is about **9.5 trillion kilometres** or 5.9 trillion miles. [...]
In Q3, we can infer that the following question has only one answer, based on the commonsense that there is only one winner of a given Super Bowl.
**Q3**: Who won Super Bowl XXXIX?
**P**: [...] The Eagles advanced to Super Bowl XXXIX, where they dueled the 2004 **New England Patriots** season. [...] The Patriots won 24-21. [...]
**Passage-Dependent** In a passage-dependent instance, the question itself is not adequate to infer the number of answers. One needs to rely on the provided passage to decide how many answers are needed to answer the question. In Q4, we have no idea of the number of answers solely based on the question. If we refer to the passage, we will find ten answers to the question.
**Q4**: Which countries does the Danube River flow through?
**P**: [...] Originating in **Germany**, the Danube flows southeast for 2,850 km, passing through or bordering
**Austria**, **Slovakia**, **Hungary**, **Croatia**, **Serbia**, **Romania**, **Bulgaria**, **Moldova** and **Ukraine** before draining into the Black Sea. [...]
## 4 Analyses of Multi-Answer Datasets
We investigate existing multi-answer datasets based on our designed taxonomy to analyze where the multi-answer challenge comes from.
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Type** & **Question** & **\# Ans.** \\ \hline Cardinal & Which **two** players & \\ & completed 1-yard TD pass? & 2 \\ \hline Ordinal & Who scored the **first** & \\ & touchdown of the game? & 1 \\ \hline Comp./Super. & What’s the **largest** pizza & \\ & chain in America? & 1 \\ \hline Alternative & Is San Juan Bautista & \\ & incorporated **or** & 1 \\ & unincorporated? & \\ \hline Other & What are the first names of & \\ & The **trio** who try to call 911? & 3 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Examples of various types of clue words. Comp./Super. denotes comparatives and superlatives.
Figure 2: Illustration of our taxonomy for multi-answer MRC instances.
### Datasets
We annotate the validation sets of three widely-used multi-answer MRC datasets, i.e., DROP Dua et al. (2019), Quoref Dasigi et al. (2019), and MultiSpanQA Li et al. (2022). The number of annotated questions is listed in Table 2 and more statistics are in Appendix B.
**DROP** is a crowdsourced MRC dataset for evaluating the discrete reasoning ability. The annotators are encouraged to devise questions that require discrete reasoning such as arithmetic. DROP has four answer types: numbers, dates, single spans, and sets of spans. Since the first two types of answers are not always exact spans in the passages, we only consider the instances whose answers are single spans or sets of spans.
**Quoref** focuses on the coreferential phenomena. The questions are designed to require resolving coreference among entities. 10% of its instances require multiple answer spans.
**MultiSpanQA** is a dataset specialized for multi-span reading comprehension. The questions are extracted from NaturalQuestions Kwiatkowski et al. (2019), which are real queries from the Google search engine.
### Annotation
**Annotation Process** Our annotation process is two-staged: we first automatically identify some question-dependent instances and then recruit annotators to classify the remaining ones.
In the first stage, we automatically identify the questions containing certain common clue words such as numerals (full list in Appendix B) to reduce the workload of whole-process annotation. Afterward, the annotators manually check whether each instance is question-dependent. Out of the 4,594 recalled instances, 3,727 are identified as question-dependent.
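A minimal sketch of this first-stage filter is given below; the clue-word list here is only illustrative (the full list used for annotation is in Appendix B).

```python
import re

# Illustrative subset of common clue words; the full lexicon is in Appendix B.
CLUE_PATTERN = re.compile(
    r"\b(one|two|three|four|five|both|first|second|third|last"
    r"|largest|smallest|longest|most|least|or)\b",
    re.IGNORECASE,
)

def recall_candidates(questions):
    """Return questions containing common clue words for manual checking."""
    return [q for q in questions if CLUE_PATTERN.search(q)]

print(recall_candidates([
    "Which two players completed 1-yard TD pass?",
    "Which countries does the Danube River flow through?",
]))  # only the first question is recalled
```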
In the second stage, we recruit annotators to annotate the remaining 3,130 instances. For each instance, given both the question and the answers, the annotators should first check whether the form of answers is correct and mark incorrect cases as bad-annotation2. We show examples of common bad-annotation cases in Table 10. After filtering out the bad-annotation ones, the annotators are presented with the question only and should decide whether they could determine the number of answers solely based on the question. If so, this instance is annotated as question-dependent; otherwise passage-dependent. For a question-dependent instance, the annotators are further asked to extract the clue words, if any, from the question, which determines whether the instance is with-clue-words or without-clue-words.
Footnote 2: In the first stage, the annotators also need to check whether an instance is bad-annotation.
**Quality Control** Six annotators participated in the annotation after qualification. Each instance is annotated by two annotators. In case of any conflict, a third annotator resolves it. An instance is classified as bad-annotation if any annotator labels it as bad-annotation. Cohen's Kappa between the two initial annotators is 0.70, indicating substantial agreement. See more details in Appendix B.
### Analyses of Annotation Results
With our annotated data, we study how the multi-answer instances differ across different datasets under our designed taxonomy. We find that the distributions of instance types are closely related to how the datasets are constructed.
**Instance Types** The distributions of instance types in different datasets are shown in Table 3. Question-dependent prevails in DROP and Quoref, making up over 70% of the two datasets. In contrast, most instances in MultiSpanQA are passage-dependent. This difference stems from how the questions are collected.
DROP and Quoref use crowdsourcing to collect questions with specific challenges. Given a passage, the annotators know the answers in advance and produce questions that can only be answered through certain reasoning skills. These artificial questions are more likely to contain clues to the number of answers, such as the question with ordinal in Table 1. By contrast, the questions in MultiSpanQA are collected from search engine queries. Users generally have no idea of the answers to the queries. The number of answers, as a result, is
\begin{table}
\begin{tabular}{l r r r} \hline \hline
**Dataset** & **All** & **Single-Ans.** & **Multi-Ans.** \\ \hline DROP & 3,133 & 2,609 & 524 \\ Quoref & 2,418 & 2,198 & 220 \\ MultiSpanQA & 1,306 & 653 & 653 \\ \hline Total & 6,857 & 5,460 & 1,397 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The number of instances for human annotation in the validation set of each dataset.
more often dependent on the provided passages, such as Q4 in Section 3.
**Clue Words** Since a large portion (57.8%) of the annotated instances belong to the with-clue-word type, we further investigate the distribution of clue words in different datasets, shown in Table 4. On the one hand, the questions contain a large variety of clue words, demonstrating the complexity of multi-answer MRC. On the other hand, the prevailing type of clue words is different in each dataset, reflecting the preference in dataset construction. Specifically, nearly 60% of the with-clue-word questions in DROP are alternative questions with comparatives/superlatives, because DROP's annotators are encouraged to inject discrete reasoning challenges, e.g., comparison, when writing questions. In Quoref, 91% of the clue words indicate the number of answers through their lexical semantics. This unbalanced distribution results from the emphasis on coreference resolution: most questions begin with _what is the name of the person who..._, where _name of the person_ is identified as clue words. In MultiSpanQA, whose questions are search engine queries, 63% of the with-clue-word questions contain numerals. If users already know the number of desired answers, they tend to restrict it in the question, such as _seven wonders of the world_.
We provide more analyses of how the instance types are distributed with respect to the specific number of answers in Appendix C.
## 5 Existing Multi-Answer MRC Models
Based on our categorization of the multi-answer instances, we continue to investigate how existing multi-answer MRC models perform differently on various types of multi-answer instances. We summarize current solutions into four paradigms according to how they obtain multiple answers, as illustrated in Figure 3.
**Tagging** Segal et al. (2020) cast the multi-answer MRC task as a sequence tagging problem, similar to named entity recognition (NER), so that the model can extract multiple non-contiguous spans from the context.
**NumPred (Number Prediction)** Hu et al. (2019) first predict the number of answers \(k\) as an auxiliary task and then select the top \(k\) non-overlapped ones from the output candidate spans.
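The decoding step of NumPred can be sketched as follows: given scored candidate spans and a predicted answer count \(k\), the \(k\) best mutually non-overlapping spans are kept. This is a simplified sketch; the span scoring and number prediction themselves come from the underlying MRC model.

```python
def select_top_k_spans(candidates, k):
    """candidates: list of ((start, end), score) with half-open token spans.
    Greedily keep the k highest-scoring non-overlapping spans."""
    chosen = []
    for (start, end), _score in sorted(candidates, key=lambda c: -c[1]):
        if all(end <= s or start >= e for s, e in chosen):
            chosen.append((start, end))
        if len(chosen) == k:
            break
    return sorted(chosen)
```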
**Iterative** Searching for evidence iteratively is widely adopted in many QA tasks (Xu et al., 2019; Zhao et al., 2021; Zhang et al., 2021), but it is not explored in multi-answer MRC. We adapt this idea to extract multiple answers iteratively. In each iteration, we append the previously extracted answers to the question, with the word _except_ in between, and then feed the updated question to a single-answer MRC model. The iterative process terminates when the model predicts no more answers.
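A minimal sketch of this iterative procedure, with a stand-in single-answer extractor (`extract_one` is hypothetical):

```python
def iterative_extract(question, passage, extract_one, max_answers=10):
    """Extract answers one at a time until the model predicts none."""
    answers = []
    while len(answers) < max_answers:
        query = question if not answers else (
            question + " except " + ", ".join(answers))
        span = extract_one(query, passage)  # single-answer MRC model (stand-in)
        if span is None or span in answers:
            break
        answers.append(span)
    return answers
```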
**Generation** Generation has been adopted as a uniform paradigm for many QA tasks (Khashabi et al., 2020, 2022), but it is less explored on multi-answer MRC. For Generation, we concatenate all answers, with semicolons as separators, to form an output sequence, and finetune the model to generate it conditioned on the question and passage.
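The corresponding target construction and output parsing for Generation reduce to simple string operations, sketched below.

```python
def to_target(answers):
    """Serialize the answer set into the generation target."""
    return "; ".join(answers)

def parse_output(generated):
    """Split the generated sequence back into an answer list."""
    return [a.strip() for a in generated.split(";") if a.strip()]
```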
### Experimental Setup
Implementation DetailsWe use RoBERTa-base (Liu et al., 2019) for the three extractive
\begin{table}
\begin{tabular}{l|r|r r r r r r} \hline \hline
**Dataset** & **with-clue-word** & **Cardinal** & **Ordinal** & **Comp./Super.** & **Alternative** & **Other Semantics** \\ \hline DROP & 2,204 & 113 (5.1\%) & 592 (26.9\%) & **1,298 (58.9\%)** & 1,214 (55.1\%) & 135 (6.1\%) \\ Quoref & 1,639 & 83 (5.1\%) & 35 (2.1\%) & 25 (1.5\%) & 0 (0.0\%) & **1,501 (91.6\%)** \\ MultiSpanQA & 121 & **51 (41.8\%)** & 26 (21.3\%) & 23 (19.0\%) & 2 (1.6\%) & 19 (15.6\%) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Distribution of clue word types in three datasets. A question may contain multiple types of clue words.
\begin{table}
\begin{tabular}{l|r|r r r|r} \hline \hline
**Dataset** & **passage-dependent** & \multicolumn{3}{c|}{**question-dependent**} & \multicolumn{1}{c}{**bad-annotation**} \\ & & All & with-clue-word & no-clue-word & & \\ \hline DROP & 826 (26.4\%) & **2,242 (71.6\%)** & 2,204 (70.3\%) & 38 (1.2\%) & 65 (2.1\%) \\ Quoref & 711 (29.4\%) & **1,704 (70.5\%)** & 1,639 (67.8\%) & 65 (2.7\%) & 3 (0.2\%) \\ MultiSpanQA & **991 (75.9\%)** & 285 (21.8\%) & 121 (9.3\%) & 164 (12.6\%) & 30 (2.3\%) \\ \hline Total & 2,528 (36.9\%) & 4,231 (61.7\%) & 3,964 (57.8\%) & 267 (3.9\%) & 98 (1.4\%) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Distribution of instance types in three datasets.
paradigms and BART-base Lewis et al. (2020) for Generation. We train models on the training sets of each dataset and evaluate them on the corresponding validation sets with our instance type annotations. See more details in Appendix D.1.
**Metrics** We adopt the official metrics of MultiSpanQA Li et al. (2022), including the precision (P), recall (R), and F1 in terms of exact match (EM) and partial match (PM). See Appendix D.2 for details.
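For exact match, the metric reduces to set-level precision, recall and F1 over answer spans; a simplified sketch follows (the official MultiSpanQA script additionally defines the partial-match variant via span overlap, omitted here).

```python
def em_prf(pred, gold):
    """Set-level precision/recall/F1 under exact match of answer spans."""
    pred, gold = set(pred), set(gold)
    tp = len(pred & gold)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

print(em_prf(["English", "French"], ["English", "French", "Spanish"]))
# (1.0, 0.666..., 0.8)
```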
### Results and Analyses
We report the overall performance in Table 5, and the performance on different instance types in Table 6. We observe that each of these paradigms has its own strengths and weaknesses.
Tagging outperforms other paradigms on DROP and Quoref, whose dominating instance type is question-dependent. Although Tagging has no explicit answer number prediction step, it can still exploit this information implicitly because it takes the question into account during the sequential processing of every token. Besides, Tagging, as a common practice for entity recognition, is good at capturing the boundaries of entities. Thus, it is not surprising that it performs the best on DROP and Quoref, most of whose answers are short entities.
Iterative achieves the best overall performance on MultiSpanQA, whose prevailing instance type is passage-dependent. This paradigm does not directly exploit the information of the number of answers given in the question. Rather, it encourages adequate interactions between questions and passages, performing single-answer extraction at each step. As a result, Iterative does well for the questions whose number of answers heavily depends on the given context.
As for NumPred, although we expect high performance on question-dependent instances, it lags behind Tagging by approximately 2% in PM F1 on DROP and Quoref. This might result
\begin{table}
\begin{tabular}{l|c|c c c} \hline \hline
**Model** & **p-dep.** & \multicolumn{3}{c}{**q-dep.**} \\ & & All & w/-clue & w/o-clue \\ \hline \multicolumn{5}{c}{DROP} \\ \hline Tagging & **74.57** & **79.11** & **80.88** & 68.77 \\ NumPred & 72.37 & 77.54 & 79.32 & 70.08 \\ Iterative & 73.47 & 77.60 & 79.21 & 65.73 \\ Generation & 72.18 & 74.77 & 76.19 & **72.62** \\ \hline \multicolumn{5}{c}{Quoref} \\ \hline Tagging & 70.60 & **84.86** & **85.23** & 75.76 \\ NumPred & 69.45 & 81.88 & 82.44 & 70.12 \\ Iterative & **71.42** & 82.18 & 82.37 & **77.30** \\ Generation & 66.31 & 77.41 & 78.38 & 52.63 \\ \hline \multicolumn{5}{c}{MultiSpanQA} \\ \hline Tagging & 82.28 & 79.66 & 86.60 & 73.36 \\ NumPred & 77.77 & 77.11 & 78.19 & **78.77** \\ Iterative & **82.78** & **82.09** & **87.22** & 77.80 \\ Generation & 80.57 & 78.05 & 81.73 & 75.85 \\ \hline \hline \end{tabular}
\end{table}
Table 6: The performance (PM F1) of four paradigms on different types of instances. p-dep. denotes passage-dependent. q-dep. denotes question-dependent.
Figure 3: An illustration of four paradigms for multi-answer MRC.
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline
**Model** & \multicolumn{3}{c|}{**EM**} & \multicolumn{3}{c}{**PM**} \\ & **P** & **R** & **F1** & **P** & **R** & **F1** \\ \hline \multicolumn{5}{c}{DROP} \\ \hline Tagging & **61.86** & **63.91** & **62.87** & **77.53** & **77.39** & **77.46** \\ NumPred & 61.59 & 56.77 & 59.09 & 76.71 & 74.86 & 75.77 \\ Iterative & 60.66 & 60.07 & 60.36 & 76.19 & 76.04 & 76.11 \\ Generation & 60.07 & 57.15 & 58.58 & 75.39 & 72.39 & 73.86 \\ \hline \multicolumn{5}{c}{Quoref} \\ \hline Tagging & **71.00** & **72.21** & **71.60** & **80.44** & **79.74** & **80.09** \\ NumPred & 65.61 & 63.57 & 64.57 & 77.30 & 78.20 & 77.75 \\ Iterative & 67.28 & 66.35 & 66.81 & 78.57 & 78.58 & 78.57 \\ Generation & 63.57 & 63.39 & 63.48 & 73.38 & 74.02 & 73.70 \\ \hline \multicolumn{5}{c}{MultiSpanQA} \\ \hline Tagging & 61.31 & **68.84** & 64.85 & 80.45 & **83.08** & 81.75 \\ NumPred & 55.03 & 46.06 & 50.15 & 80.16 & 75.26 & 77.63 \\ Iterative & **66.32** & 67.98 & **67.14** & **84.39** & 80.96 & **82.64** \\ Generation & 65.40 & 62.60 & 63.97 & 82.06 & 78.14 & 80.06 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Performance of four paradigms on three datasets.
from the gap between training and inference. The model treats the answer number prediction and answer span extraction as two separate tasks during training, with limited interaction. Yet during inference, the predicted number of answers is used as a hard restriction on multi-span selection. Different from the decent performance on DROP and Quoref, NumPred performs worst among the four paradigms on MultiSpanQA, because it is difficult for models to accurately predict the number of answers for a long input text that requires thorough understanding.
Among all paradigms, Generation generally performs the worst. Under the same parameter scale, extractive models seem to be the better choice for tasks whose outputs are exact entity spans from the input, while generation models do well in slightly longer answers. This also explains the smaller gap between Generation and extractive paradigms on MultiSpanQA compared to that on DROP and Quoref: MultiSpanQA has many descriptive long answers instead of short entities only.
## 6 Fusion of Different Paradigms
From the above analysis, we can see that extractive methods can better locate exact short spans in the passage, and NumPred can provide potential guidance on the number of answers. Meanwhile, the generation models can better handle longer answers and are more adaptable to different forms of inputs and outputs. Now an interesting question is how to combine different paradigms to get the best of both worlds.
We explore two strategies for combining different paradigms: **early fusion** and **late ensemble**. The former mixes multiple paradigms in terms of model architectures while the latter ensembles the predictions of different models. We discuss our exploration of late ensemble in Appendix E.1 since model ensemble is a well-explored technique. Here we primarily elaborate on early fusion. We carry out a series of pilot studies to demonstrate the potential of paradigm fusion.
Previous works attempt to fuse two extractive paradigms, Tagging and NumPred (Segal et al., 2020; Li et al., 2022). However, they only lead to marginal improvements, probably because Tagging can already implicitly determine answer numbers well and the help of NumPred is thus limited.
Although the performance of base-size generation models on multi-answer MRC is inferior to that of extractive ones, generation models of larger sizes show great potential with more parameters and larger pre-training corpora (Khashabi et al., 2020, 2022). More importantly, Generation can easily adapt to various forms of inputs and outputs. We carry out pilot studies using a generation model as the backbone and benefiting from the ideas of other paradigms. We propose several lightweight methods to combine Generation with NumPred and Iterative, as illustrated in Figure 4.
**Generation + NumPred** Inspired by recent works on Chain-of-Thought (Wei et al., 2022), we guide the model with prompts indicating the number of answers. We introduce a **NumPred prompt sentence** (NPS) in the form of _There are \(\{2,3,...\}\) answers/There is only one answer_. We experiment with two variants, multitask and pipeline. In the multitask variant, the model outputs an NPS before enumerating all the answers. In the pipeline variant, we predict the number of answers with a separate classifier and then append the NPS to the question as extra guidance.
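A minimal sketch of the target construction for the multitask variant (our own illustrative code):

```python
def to_nps_target(answers):
    """Prefix the serialized answers with a NumPred prompt sentence (NPS)."""
    n = len(answers)
    nps = "There is only one answer." if n == 1 else f"There are {n} answers."
    return nps + " " + "; ".join(answers)

print(to_nps_target(["English", "French"]))
# There are 2 answers. English; French
```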
**Generation + Iterative** We substitute the original extractor of Iterative with a generator. The iterative process terminates when the model
Figure 4: An illustration of different strategies for early fusion of paradigms.
outputs the string _No answer_. Besides the normal setting, we experiment with another variant that additionally outputs an NPS in the form of _The number of remaining answers is \(\{1,2,3,...\}\)_.
**Results** Our main experiments are conducted with BART-base and BART-large due to our limited computational budget. For the pipeline variant of Generation + NumPred, we use RoBERTa-base as an answer number classifier. The overall experiment results are reported in Table 7 and the results on different question types are reported in Appendix E.2.
When Generation is multitasking with NumPred, it outperforms the vanilla one consistently. The NPS in the output provides a soft but useful hint for the succeeding answer generation, improving the accuracy of answer number prediction by 1.7% on average for BART-base. The pipeline variant is often inferior to the multitasking one due to error propagation. In particular, its performance drops considerably on MultiSpanQA, whose instances are mostly passage-dependent. The accuracy of the answer number classifier on MultiSpanQA lags behind that on the other two datasets by more than 12%. Thus the NPS in the input, with an unreliably predicted answer number, is more likely to mislead the subsequent answer span generation.
The combination of Generation and Iterative does not always lead to improvement. This might be because the answer generation process of Generation is already iterative in style: in the output sequence, each answer is generated conditioned on the previously generated ones. The incorporation of Iterative thus does not lead to further improvement. When we additionally introduce an NPS with the number of remaining answers, this variant generally outperforms the normal setting. This shows that Generation, as a backbone, is easy to integrate with various hints.
**Pilot Study on GPT-3.5** To investigate whether these fusion strategies work on larger models, we conduct a pilot study on GPT-3.5. We use the 653 multi-answer instances in the validation set of MultiSpanQA for experiments. The prompts are listed in Appendix E.2. The experiment results are shown in Table 8.
When given only one example for in-context learning, GPT-3.5 can already achieve 79.27% PM F1 on the multi-answer instances, with only a small gap to BART trained on full data. Its EM F1 score is low because GPT-3.5 cannot handle the boundaries of answer spans well. This is unsurprising, since one example is not sufficient for GPT-3.5 to learn the annotation preference for span boundaries in MultiSpanQA. If we ask GPT-3.5 to predict the number of answers before giving all the answers, we observe an improvement of 10.1% EM F1 and 3.1% PM F1. This demonstrates the effectiveness of fusing NumPred with larger generation models.
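The exact prompts are given in Appendix E.2; purely as a hypothetical illustration of the NumPred-first output format, a one-shot prompt could be structured along the following lines.

```python
# Hypothetical one-shot prompt skeleton; not the paper's verbatim prompt.
demonstration = (
    "Passage: <demonstration passage>\n"
    "Question: <demonstration question>\n"
    "Answer: There are 2 answers. <span 1>; <span 2>\n\n"
)
query = (
    "Passage: <test passage>\n"
    "Question: <test question>\n"
    "Answer:"  # the model states the answer count first, then the spans
)
prompt = demonstration + query
```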
As evidenced by the above trials, it is promising to fuse different paradigms. We hope that our exploration will inspire future works adopting larger generation models for multi-answer MRC.
## 7 Related Works
Compared to the vast number of single-answer MRC datasets, the resources for multi-answer MRC are limited. Aside from the datasets in Section 4.1, MASH-QA (Zhu et al., 2020) focuses on the healthcare domain, with 27% of the questions having multiple long answers, ranging from phrases to sentences. CMQA (Ju et al., 2022) is another multi-answer dataset in Chinese, featuring answers with conditions or different granularities. For our analysis, we select two commonly-used datasets, DROP and Quoref, as well as a newly-released dataset, MultiSpanQA.

| Dataset | Model | Base EM | Base PM | Large EM | Large PM |
| --- | --- | --- | --- | --- | --- |
| DROP | Vanilla Generation | 58.58 | 73.86 | 66.43 | 80.55 |
| DROP | +NumPred (multitask) | **60.02** | **74.34** | **69.61** | **82.85** |
| DROP | +NumPred (pipeline) | 59.19 | 73.94 | 66.45 | 80.63 |
| DROP | +Iterative (normal) | 58.44 | 73.58 | 66.55 | 80.53 |
| DROP | +Iterative (number) | 58.98 | 74.07 | 68.19 | 82.17 |
| Quoref | Vanilla Generation | 63.48 | 73.70 | 76.57 | 84.47 |
| Quoref | +NumPred (multitask) | 66.25 | 75.43 | **77.04** | 84.45 |
| Quoref | +NumPred (pipeline) | 67.94 | 77.42 | 75.42 | 83.66 |
| Quoref | +Iterative (normal) | **68.81** | **78.23** | 74.72 | 82.60 |
| Quoref | +Iterative (number) | 63.33 | 73.34 | 76.67 | **84.57** |
| MultiSpanQA | Vanilla Generation | 63.97 | 80.06 | 69.13 | 84.61 |
| MultiSpanQA | +NumPred (multitask) | **64.85** | **80.58** | **69.31** | **84.82** |
| MultiSpanQA | +NumPred (pipeline) | 39.71 | 60.94 | 45.34 | 68.09 |
| MultiSpanQA | +Iterative (normal) | 63.26 | 79.97 | 65.62 | 82.88 |
| MultiSpanQA | +Iterative (number) | 63.84 | 80.04 | 66.77 | 83.41 |

Table 7: The performance (EM F1 and PM F1) of different strategies for early fusion of paradigms.

| Model | Setting | EM F1 | PM F1 |
| --- | --- | --- | --- |
| Vanilla BART-base | Supervised | 66.77 | 81.24 |
| Vanilla BART-large | Supervised | 71.93 | 85.83 |
| Vanilla GPT-3.5 | One-Shot | 53.34 | 79.27 |
| GPT-3.5 + NumPred | One-Shot | 63.45 | 82.38 |

Table 8: The performance of BART and GPT-3.5 on the multi-answer instances of MultiSpanQA.
Current models addressing multi-answer MRC generally fall into two paradigms: Tagging (Segal et al., 2020) and NumPred (Hu et al., 2019), as explained in Section 5. Iterative (Xu et al., 2019; Zhao et al., 2021; Zhang et al., 2021; Gao et al., 2021) and Generation (Khashabi et al., 2020, 2022) have been adopted for many types of QA tasks, including knowledge base QA, multiple-choice QA, and open-domain QA. Nevertheless, their performance on multi-answer MRC is less explored. In our paper, we also study how to adapt these paradigms for multi-answer MRC. Apart from the exploration of model architectures for multi-answer MRC, Lee et al. (2023) attempt to generate multi-answer questions as data augmentation.
Previous works have made preliminary attempts in fusing two extractive paradigms. Segal et al. (2020) adopt a single-span extraction model for single-answer questions and Tagging for multi-answer questions; Li et al. (2022) add a NumPred head to the Tagging framework. The predicted number of answers is used to adjust the tagging results. Both strategies lead to marginal improvement over the baselines. We instead resort to Generation for paradigm fusion, considering its potential with larger sizes and its flexibility in inputs and outputs.
## 8 Conclusion
In this paper, we conduct a systematic analysis for multi-answer MRC. We design a new taxonomy for multi-answer instances based on how the number of answers is determined. We annotate three datasets with the taxonomy and find that multi-answer is not merely a linguistic phenomenon; rather, many factors contribute to it, especially the process of data collection. With the annotation, we further investigate the performance of four paradigms for multi-answer MRC and find their strengths and weaknesses. This motivates us to explore various strategies of paradigm fusion to boost performance. We believe that our taxonomy can help determine what types of questions are desirable in the annotation process and aid in designing more practical annotation guidelines. We hope that our annotations can be used for more fine-grained diagnoses of MRC systems and encourage more robust MRC models.
## Limitations
First, our taxonomy of multi-answer MRC instances only considers whether we know the _exact_ number of answers from the questions. In some cases, one might have an _imprecise estimate_ of answer numbers from the question. For example, for the question _Who are Barcelona's active players?_, one might estimate that there are dozens of active players for this football club. Yet, these estimations are sometimes subjective and difficult to quantify. Therefore, this instance is classified as passage-dependent according to our current taxonomy. We will consider refining our taxonomy to deal with these cases in the future.
Second, we did not conduct many experiments with pre-trained models larger than the large-size ones due to limited computational budgets. Generation models of larger sizes show great potential with more parameters and larger pre-training corpora. We encourage more efforts to deal with multi-answer MRC with much larger models, such as GPT-3.5.
## Acknowledgments
This work is supported by NSFC (62161160339). We would like to thank the anonymous reviewers for their valuable suggestions, and our great annotators for their careful work, especially Zhenwei An, Nan Hu, and Hejing Cao. Also, we would like to thank Quzhe Huang for his help in this work. For any correspondence, please contact Yansong Feng.
|
2303.05393 | Deep Functional Predictive Control for Strawberry Cluster Manipulation
using Tactile Prediction | This paper introduces a novel approach to address the problem of Physical
Robot Interaction (PRI) during robot pushing tasks. The approach uses a
data-driven forward model based on tactile predictions to inform the controller
about potential future movements of the object being pushed, such as a
strawberry stem, using a robot tactile finger. The model is integrated into a
Deep Functional Predictive Control (d-FPC) system to control the displacement
of the stem on the tactile finger during pushes. Pushing an object with a robot
finger along a desired trajectory in 3D is a highly nonlinear and complex
physical robot interaction, especially when the object is not stably grasped.
The proposed approach controls the stem movements on the tactile finger in a
prediction horizon. The effectiveness of the proposed FPC is demonstrated in a
series of tests involving a real robot pushing a strawberry in a cluster. The
results indicate that the d-FPC controller can successfully control PRI in
robotic manipulation tasks beyond the handling of strawberries. The proposed
approach offers a promising direction for addressing the challenging PRI
problem in robotic manipulation tasks. Future work will explore the
generalisation of the approach to other objects and tasks. | Kiyanoush Nazari, Gabriele Gandolfi, Zeynab Talebpour, Vishnu Rajendran, Paolo Rocco, Amir Ghalamzan E. | 2023-03-09T16:31:35Z | http://arxiv.org/abs/2303.05393v1 | # Deep Functional Predictive Control for Strawberry Cluster Manipulation using Tactile Prediction
###### Abstract
This paper introduces a novel approach to address the problem of Physical Robot Interaction (PRI) during robot pushing tasks. The approach uses a data-driven forward model based on tactile predictions to inform the controller about potential future movements of the object being pushed, such as a strawberry stem, using a robot tactile finger. The model is integrated into a Deep Functional Predictive Control (d-FPC) system to control the displacement of the stem on the tactile finger during pushes. Pushing an object with a robot finger along a desired trajectory in 3D is a highly nonlinear and complex physical robot interaction, especially when the object is not stably grasped. The proposed approach controls the stem movements on the tactile finger in a prediction horizon. The effectiveness of the proposed FPC is demonstrated in a series of tests involving a real robot pushing a strawberry in a cluster. The results indicate that the d-FPC controller can successfully control PRI in robotic manipulation tasks beyond the handling of strawberries. The proposed approach offers a promising direction for addressing the challenging PRI problem in robotic manipulation tasks. Future work will explore the generalisation of the approach to other objects and tasks.
## I Introduction
In the field of Physical Robot Interaction (PRI), successful manipulation tasks rely on accurate interaction models that utilise rich sensory information and intelligent control strategies [1]. Tactile feedback is a particularly effective sensing modality for PRI tasks, especially when vision-based control, such as visual servoing [2], is not feasible due to occlusion [3]. For example, pushing a ripe strawberry that is occluded by plant stems, leaves, or unripe fruits in a cluster [4] can require tactile feedback for effective control.
Pushing is an important manipulation task that has many applications, including effective object manipulation under uncertainty [5], pre-grasp manipulation to position an object in a suitable configuration for grasping [6], and agile soccer ball pushing by a mobile robot [7]. Analytical models for pushing require complete knowledge of the environment, including physical and geometric properties such as object pose, shape, friction parameters, and mass. Developing analytical models for unstructured environments characterized by high degrees of freedom, non-linearity, and stochasticity, such as the case of pushing a flexible stem to reach a strawberry, can be a challenging task [8].
Most existing pushing methods are designed for 2-D scenarios in which an object is moving on a flat surface, but in the case of strawberry picking, a 3-D pushing scenario is more relevant [9]. Pushing a strawberry in a 3-dimensional space is more challenging than pushing an object on a table (i.e. a 2D problem). While interactive movement primitives [10] can be used to plan pushing actions, an accurate interaction model is crucial for effectively controlling the planned motion of the strawberry during pushing in this scenario.
In this paper, we present a novel deep functional predictive control pipeline for the manipulation of strawberries grown on a table. Our pipeline consists of three key modules: a deep action-conditioned Tactile Forward Model (TFM), a deep Contact Localisation Model (CLM), and an online deep Functional Predictive Control (d-FPC) module that generates control actions. We collected a dataset of plastic strawberries being pushed in our lab setting to train TFM, which is the state-of-the-art tactile prediction model. We also trained CLM to calibrate our tactile sensor using a dataset of strawberry pushing. Finally, d-FPC uses real-time predictions from TFM and CLM to generate robot actions based on future error signal estimations to control the stem pose on the sensor surface. We compare our proposed functional predictive controller's performance with a PD control-based system that only uses CLM and demonstrate that the predictive system outperforms this baseline model. This study addresses the challenge of pushing flexible objects; to the best of our knowledge, this is the first study to do so. Our results demonstrate the effectiveness of our proposed approach and pave the way for future research in the manipulation of flexible objects using deep functional predictive control.

Fig. 1: Strawberry pushing setup: a Franka Emika robotic arm is pushing a cluster of strawberries from right to left, where the nearest strawberry stem comes in contact with its tactile finger. (Right) the robot at the beginning of the pushing action, (Left) the robot and cluster at the end of pushing.
## II Related works
Cluster manipulation in fruit harvesting is a challenging task from both motion planning and motion control perspectives [11, 12]. One of the challenges is avoiding slip of a grasped object, which can be addressed through closed-loop robot trajectory adaptation [13]. Deformable object manipulation, such as cloth manipulation, has been modelled using simplified mass-spring models or 3D mesh generation [14], while heuristic feature spaces have been used for flexible cable manipulation with dual robot arms [15]. However, analytical modelling methods are limited to specific object sets and are not scalable to larger object and action sets. In contrast, our proposed approach uses a time-series model for action-conditioned tactile prediction for pushing control, which can be applied in unstructured settings without knowledge of the models of the individual objects.
Tactile feedback is mostly used for grasp control in robotic object clutter manipulation [3], and for detecting a grip on the fruit, detaching it, and dropping it into a basket in harvesting settings [16]. Another line of research uses tactile sensors for ripeness estimation [17] and slip detection during fruit picking [18]. However, the use of tactile sensors has been limited to grip control, and they have not been applied to cluster manipulation. In our work, we exploit tactile feedback for trajectory-level control when pushing a flexible plant stem.
Tactile prediction models are used for controlling manipulation tasks, from the simple task of rolling a marble on a table [19] to the complex task of slip control [13]. The core of such controllers is a forward model that can generate predicted tactile readings (we call them tactile images). For instance, action-conditioned tactile predictive models have been utilised with a taxel-based tactile sensor in pick and place tasks [10], demonstrating that the approach performs well only for flat-surface objects.
Our approach uses a time-series model for tactile prediction based on [10]. We form a deep Functional Predictive Control (d-FPC) scheme [20, 21], which enables the robot to control the strawberry pushing actions. Deep models have been extensively used for learning lower-dimensional state spaces for Model Predictive Control (MPC) [22]. These methods have also been used for learning visual dynamic models for control [23]. In a simplified task of rolling a dice, tactile prediction was used in an MPC controller [19]. In our work, we form a Proportional-Derivative (PD) control law over the error in the prediction horizon to control the contact state of a flexible object on a robot hand. Unlike previous work that used trajectory adaptation to minimise the likelihood of a predicted binary slip signal in a prediction horizon [13], our model learns the complex contact behaviour and generates actions to control the movements of the stem on the tactile finger to keep it stable.
## III Methodology
**Camera-based tactile sensor** We use a customised camera-based tactile sensor for pushing strawberries, similar to TacTip [24]. This sensor has a camera and an LED light looking at a deformable membrane with embedded white markers (Fig. 3). The applied pressure on the sensor yields a deformation that is captured by the camera.
**Contact Localisation Model (CLM)** The motions of the marker array printed on the sensor are indicative of the magnitude and location of the applied force. For the current problem setting, we are more interested in force localisation for stem contact state control. To find the mapping from raw tactile images to the contact location in 1-dimensional space, we use a Convolutional Neural Network with the architecture shown in Fig. 2 (red box). CLM consists of two convolutional and three dense layers. The output of CLM is the distance of the contact force from the sensor camera lens along the sensor conic axis. The dataset for training CLM consists of applying forces to the fixed sensor with a rod (mimicking a strawberry stem) attached to the robot end-effector (EE) in 5 mm distance steps. At each step, the robot applies force on the membrane toward the sensor base in 1 mm penetration steps. Overall, 150 stem pushing samples at 10 locations are collected to train CLM.
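A minimal PyTorch sketch of such a CLM-style network follows; the paper specifies only two convolutional and three dense layers with a scalar distance output, so the filter counts, kernel sizes, dense widths, and the 64x64 input resolution are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CLM(nn.Module):
    """Contact Localisation Model: maps an RGB tactile image to the 1-D
    contact distance along the sensor's conic axis."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, 32), nn.ReLU(),
            nn.Linear(32, 1),  # scalar contact distance
        )

    def forward(self, x):  # x: (batch, 3, 64, 64)
        return self.regressor(self.features(x))

print(CLM()(torch.randn(2, 3, 64, 64)).shape)  # torch.Size([2, 1])
```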
**Tactile Forward Model (TFM)** Here, we present the formulation of the tactile prediction problem for our custom-made camera-based tactile sensor. Tactile prediction aims to estimate future tactile images based on a set of previous tactile images \(\textbf{x}_{0},...,\textbf{x}_{c-1}\) obtained from physical interactions, where \(c\) is the length of the context window. Specifically, the objective is to sample from the conditional distribution \(p(\textbf{x}_{c:T}|\textbf{x}_{0:c-1})\), where \(\textbf{x}_{i}\) denotes the i\({}^{th}\) tactile image in the sequence and \(T\) is the sum of the context window length and the prediction horizon length.
Since the robot's actions alter the environment during physical interaction, we incorporate action conditioning to predict tactile sensation more accurately. The action-conditioned tactile prediction problem is formulated as predicting the future tactile images \(\textbf{x}_{c:T}\) given a sequence of previous robot actions \(\textbf{a}_{0:c-1}\), previous tactile images \(\textbf{x}_{0:c-1}\), and a sequence of future/planned robot actions/trajectory \(\textbf{a}_{c:T}\). Here, a robot action, \(\textbf{a}\in\mathbb{R}^{6}\), refers to the end-effector task space position and orientation (Euler angles) with respect to the robot base, while a tactile image is represented by \(\textbf{x}\in\mathbb{R}^{64\times 64\times 3}\), which captures the surface deformation caused by the applied force. The conditional distribution will be:
\[p(\textbf{x}_{c:T}|\textbf{x}_{0:c-1},\textbf{a}_{0:T}) \tag{1}\]
Factorising this, we can define the model as \(\Pi_{t=c}^{T}p_{\theta}(\textbf{x}_{t}|\textbf{x}_{0:t-1},\textbf{a}_{0:t})\). Learning now involves training the parameters \(\theta\) of the factors.
The model architecture is depicted in Fig. 2 (blue box). We extract scene features from the input tactile image with convolutional filters in the first two layers of the network, which form the encoder. Each convolution operation is followed by the ReLU activation function and a 2D max-pooling operation. Robot action sequences are concatenated with the latent tactile features after the convolutional layers. These latent features, with downsampled width and height and a larger number of channels, are fed to the ConvLSTM chain. These layers process the spatiotemporal dependencies among the latent features. After this point, we need to upscale the features to reach the tactile image size. As such, two convolutional layers, each followed by ReLU activation and 2D upsampling, are applied to the ConvLSTM outputs. To apply the pixel motion changes to the input, we use a skip connection from the input tactile image and apply \(\tanh\) activation to construct the next tactile images in the sequence.
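A condensed PyTorch sketch of this encoder-ConvLSTM-decoder flow is given below; the channel counts, the single ConvLSTM cell standing in for the chain, the tiling of the action vector over the spatial grid, and the teacher-forced one-step rollout are all illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvLSTMCell(nn.Module):
    # A standard ConvLSTM cell: one convolution over [input, hidden] yields the gates.
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, h, c):
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], 1)), 4, 1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        return torch.sigmoid(o) * torch.tanh(c), c

class TFM(nn.Module):
    """Action-conditioned tactile forward model for 64x64 tactile images."""
    def __init__(self, act_dim=6, hid=64):
        super().__init__()
        self.enc1 = nn.Conv2d(3, 32, 3, padding=1)
        self.enc2 = nn.Conv2d(32, 64, 3, padding=1)
        self.cell = ConvLSTMCell(64 + act_dim, hid)
        self.dec1 = nn.Conv2d(hid, 32, 3, padding=1)
        self.dec2 = nn.Conv2d(32, 3, 3, padding=1)
        self.hid = hid

    def forward(self, frames, actions):
        # frames: (B, T, 3, 64, 64); actions: (B, T, act_dim)
        B, T = frames.shape[:2]
        h = frames.new_zeros(B, self.hid, 16, 16)
        c = frames.new_zeros(B, self.hid, 16, 16)
        preds = []
        for t in range(T):
            x = F.max_pool2d(F.relu(self.enc1(frames[:, t])), 2)  # -> 32x32
            x = F.max_pool2d(F.relu(self.enc2(x)), 2)             # -> 16x16
            a = actions[:, t, :, None, None].expand(-1, -1, 16, 16)
            h, c = self.cell(torch.cat([x, a], 1), h, c)          # spatiotemporal step
            y = F.interpolate(F.relu(self.dec1(h)), scale_factor=2)
            y = F.interpolate(F.relu(self.dec2(y)), scale_factor=2)
            preds.append(torch.tanh(y + frames[:, t]))            # skip connection
        return torch.stack(preds, 1)
```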
**deep Functional Predictive Control (d-FPC)** We denote the predicted stem location (from CLM) on the sensor at time \(t\) by \(s_{t}\). The goal of our d-FPC is to control the stem displacement on the tactile finger. This allows the robot to keep the contact with the strawberry stem fixed during pushing actions and to avoid the contact location approaching the tip or the base of the sensor. These are sensor surface boundary zones, and approaching them increases the probability of losing contact with the stem. We use the stem-finger contact point at time \(t\) as the reference for our d-FPC controller. We define an error signal as the distance of the contact point from the reference point:
\[e_{i,t}=\hat{s}_{i}-s_{t},\;i=c,...,T \tag{2}\]
where \(\hat{s}_{i}\) is the predicted stem location for a sequence of planned robot movements. We formulate our d-FPC over the error signal as follows:
\[a_{t,res}=-\sum_{i=c}^{T}(k_{p_{i}}\times e_{i,t}+k_{d_{i}}\times\dot{e}_{i,t}) \tag{3}\]
where \(a_{t,res}\) is the residual action value to be added to the reference trajectory \(a_{t,ref}\) to generate the control action \(A_{t}\). \(A_{t}\) is a rotational velocity around the contact line axis. Fig. 2 (green box) shows the schematic of the d-FPC. The generated control output is a rotational velocity proportional to the distance of the stem from the reference line. The derivative term avoids overshooting and large instantaneous rotations.
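A minimal sketch of Equation (3), assuming the predicted stem locations \(\hat{s}_{i}\) over the horizon come from TFM followed by CLM; the horizon length, gain values, and sampling period are hypothetical placeholders.

```python
import numpy as np

def dfpc_residual(s_pred, s_ref, k_p, k_d, dt):
    """Residual rotational velocity from predicted stem locations, Eq. (3)."""
    e = s_pred - s_ref              # error over the prediction horizon
    e_dot = np.gradient(e, dt)      # finite-difference error derivative
    return -np.sum(k_p * e + k_d * e_dot)

horizon = 10
s_pred = np.linspace(0.02, 0.05, horizon)  # hypothetical predicted locations [m]
a_res = dfpc_residual(s_pred, s_ref=0.02,
                      k_p=np.full(horizon, 2.0),
                      k_d=np.full(horizon, 0.1), dt=1 / 60)
a_cmd = 0.0 + a_res  # added to the reference action a_{t,ref} to form A_t
```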
effective and versatile tool for physical interaction in a range of applications.
We have collected data from a series of strawberry-pushing tasks in 3-D. The pushing dataset includes data for single strawberry pushing and for pushing a cluster of strawberries. To simulate the table-top strawberry growing scenario, we attached each plastic strawberry to a thin wire that produces a nonlinear elastic behaviour similar to that usually observed in tabletop-grown strawberries. To simulate realistic tactile feedback, we added knots on the stalk of each strawberry (Fig. 1) and injected silicone to increase their weight (each strawberry weighs c. 20 g to 30 g).
We generate the pushing trajectories for the training data collection phase by two methods: first, by the Pilz industrial motion planner, specifying initial and target robot poses; and second, by defining a minimum-time reference trajectory using the robot's Cartesian velocity controller. We use the second method to be able to regenerate comparably similar trajectories at test time, as opposed to the first case, where trajectories are generated by the motion planning library. Trajectories include linear and circular motion patterns to perform the pushing tasks. Arc trajectories were used to collect more tactile-conditioned robot movements, where the finger followed the motion of the pushed stem/strawberry. These pushes started at a position \(p_{0}\) and orientation \(q_{0}\), followed an arc trajectory, and ended at a final position \(p_{f}\) with a \(z\) coordinate larger than that of the initial position. The final orientation \(q_{f}\) is selected to maintain contact with the elements pushed. The pushing actions were performed from right to left and vice versa, and they involved single or multiple stems (Fig. 1), generating greater deformations on the membrane.
We collected a total of 430 mixed linear/circular motion tasks containing (i) tactile images from the finger at 60 Hz and (ii) robot state data sampled at 1000 Hz, representing the position and orientation of the end effector in the planned trajectory. These readings were synchronised using the ROS _ApproximateTime_ policy and fed into the tactile forward model at both training and test time.
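A sketch of this synchronisation with rospy's `message_filters`, where the topic names and message types are hypothetical; the `slop` parameter bounds the admissible timestamp mismatch between the 60 Hz camera stream and the 1000 Hz robot state stream.

```python
import rospy
import message_filters
from sensor_msgs.msg import Image
from geometry_msgs.msg import PoseStamped

def synced_callback(tactile_img, ee_pose):
    # Receives approximately time-aligned (tactile image, robot pose) pairs,
    # e.g. for appending to the training buffer.
    pass

rospy.init_node("tactile_logger")
img_sub = message_filters.Subscriber("/tactile_finger/image_raw", Image)
pose_sub = message_filters.Subscriber("/franka/ee_pose", PoseStamped)
sync = message_filters.ApproximateTimeSynchronizer(
    [img_sub, pose_sub], queue_size=30, slop=0.01)
sync.registerCallback(synced_callback)
rospy.spin()
```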
Considering the robot's motion, slip occurred mainly along the width and length of the finger, but it could also happen in other directions depending on the motion of the stems during the pushing actions.
## V Results and discussion
We test the performance of our proposed control pipeline in real-time on pushing tasks of strawberry stems and compare the performance with a baseline controller and an open-loop system. The tactile sensor is mounted on a Franka Emika robot connected to a PC with an Intel Core i7-8700K CPU @ 3.70GHz x 12 and 64GB RAM running Ubuntu 20.04 and ROS Noetic. The Torch library is used for offline training and online testing of the neural network models. Test manipulation tasks consist of performing pushing trajectories with linear and circular motion patterns using the robot's Cartesian velocity controller.
Performance metrics include: (I) the maximum stem displacement and (II) the number of stem slip instances on the sensor surface. If we denote the stem location at time step \(i\) by \(s_{i}\), where \(i\in(0,1,...,T)\) for a pushing trial, metric (I) is defined as the absolute difference of the maximum and minimum stem locations in a trial, \(|\max(s_{i})-\min(s_{i})|,\ i=1,...,T\). Metric (II) is defined as the number of time steps where the differential values \(\dot{s}_{i}\) are larger than a threshold \(\gamma\). While metric (I) captures the full stem displacement, metric (II) captures the stem's sudden large motion instances, or slippage, on the sensor surface. We also present the area under the curve of stem displacement and generated action. We repeat each test case 5 times and present the mean and standard deviation of the metric values. Overall, we conducted 100 test-pushing trials.
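In code, both metrics reduce to a few lines over a recorded stem location series; the slip threshold value and sampling period below are illustrative placeholders.

```python
import numpy as np

def trial_metrics(s, gamma=0.3, dt=1 / 60):
    """Metric (I): max stem displacement; metric (II): slip instance count."""
    s = np.asarray(s, dtype=float)
    max_disp = np.abs(s.max() - s.min())                  # metric (I)
    s_dot = np.diff(s) / dt                               # differential values
    slip_instances = int(np.sum(np.abs(s_dot) > gamma))   # metric (II)
    return max_disp, slip_instances
```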
To evaluate the effectiveness of d-FPC for pushing control, we compare the control performance with a PD control-based tactile servoing system as the baseline model. Both models' results are presented against the open-loop system with a pre-specified reference trajectory.
In this paper, we utilise a minimum-time reference trajectory (such as bang-bang) for the open-loop system, although any desired reference trajectory can be used. To make valid comparisons among trials, we consider three initial contact zones for the stem: **Zone-1**, where the contact point is between the middle and the tip of the sensor; **Zone-2**, where the contact point is between the middle and the base of the sensor; and **Zone-3**, where the contact point is close to the sensor centre line. Since the tactile sensor has varying deformation limits across its conic axis, we compare trials with corresponding initial contact zones together.

Fig. 3: (a) Our tactile finger design features a deformable half-conic membrane with an integrated miniature camera and LED light. The initial contact line with the stem is considered as the reference line. We predict the location of the stem line \(\hat{s}\) within the prediction window \(c,...,T-1\), where \(c\) denotes the context window. The robot rotates around the stem contact line to counteract the predicted stem displacement with an action \(A_{t}\). While (b) depicts the camera readings of the tactile finger at rest, (c) and (d) show when forces are applied to the membrane near the base and between the base and middle point, respectively.
We conduct a comparison test with one-degree-of-freedom (DOF) horizontal pushing along the \(Y\)-axis of the robot's base frame. Both PD and d-FPC controllers generate control actions for the robot hand's rotation around the contact line to prevent stem slip on the sensor surface. The results are presented in Table I, where test cases are conducted separately for each initial contact zone. Both PD and d-FPC controllers decrease the stem's maximum displacement. We observe that d-FPC outperforms the PD controller for Zone-1 and Zone-3, but PD shows better performance for Zone-2, which is very close to the sensor base. This is because the sensor has its largest deformation limit in the base zone, resulting in a relatively large initial deformation after making contact, which makes it difficult for TFM to predict future stem states. The prediction of the error signal gives d-FPC more reaction time than PD.
We find that d-FPC is the most effective controller in reducing the number of stem slip instances, and it yields the smallest area under the displacement curve compared to the PD controller. We also present the computation time to show the relative computational complexity of each system. Since d-FPC has two stacked deep models, its computation time is larger than that of the PD controller.
To compare the performance of the different controllers qualitatively, we present the stem location obtained in two trials (shown in Fig. 4a): Trial-1, where the stem-finger initial contact point is in Zone-1, is shown with solid lines, and Trial-2, with the contact point in Zone-2, is shown with dashed lines. Our results show that d-FPC outperforms the PD controller and the open loop in maintaining the stem contact, resulting in the smallest displacement of the stem. Furthermore, Fig. 4b shows the control actions generated by each controller. We observe that d-FPC generates actions of larger magnitude in Trial-1 because the likelihood of losing the stem in Zone-1 (namely, closer to the tip) is larger than in Zone-2.
| Model | Robot trajectory | Stem max disp. | Stem slip instances | Disp. integral | Action integral |
| --- | --- | --- | --- | --- | --- |
| Open-loop | Linear | 1.21 ± 0.18 | 44.38 ± 10.3 | 0.88 ± 0.4 | – |
| Open-loop | Circular | 1.35 ± 0.46 | 48.18 ± 5.2 | 1.02 ± 0.5 | – |
| PD | Linear | 0.58 ± 0.21 | 25.53 ± 4.2 | 0.63 ± 0.1 | 5.39 ± 6.2 |
| PD | Circular | 1.20 ± 0.01 | 17.6 ± 2.0 | 0.44 ± 0.0 | 9.89 ± 0.8 |
| **d-FPC** | Linear | **0.29 ± 0.04** | **8.11 ± 1.4** | **0.13 ± 0.0** | 4.49 ± 2.5 |
| **d-FPC** | Circular | **0.54 ± 0.05** | **5.0 ± 1.5** | **0.22 ± 0.0** | 6.66 ± 0.8 |

TABLE II: Comparison of the controllers in linear and circular pushing trajectories (each integral column is the integral of the corresponding magnitude).
TABLE III: Controller and open-loop performances for pushing a cluster of strawberries.
| Model | Contact zone | Stem max disp. | Stem slip instances | Disp. integral | Action integral | Comp. time |
| --- | --- | --- | --- | --- | --- | --- |
| Open-loop | 1 | 0.80 ± 0.2 | 31.23 ± 4.3 | 0.83 ± 0.1 | – | – |
| Open-loop | 2 | 1.35 ± 0.2 | 50.19 ± 5.7 | 0.91 ± 0.1 | – | – |
| Open-loop | 3 | 0.91 ± 0.1 | 39.83 ± 3.2 | 0.86 ± 0.2 | – | – |
| PD | 1 | 0.65 ± 0.1 | 27.2 ± 6.5 | 0.75 ± 0.1 | 2.93 ± 0.7 | 18.73 ± 2 |
| PD | 2 | **0.36 ± 0.0** | 10.2 ± 2.4 | 0.48 ± 0.0 | 5.12 ± 3.8 | 20.30 ± 1 |
| PD | 3 | 0.63 ± 0.1 | 24.2 ± 1.6 | 0.47 ± 0.1 | 9.73 ± 5.4 | 19.73 ± 1 |
| **d-FPC** | 1 | **0.20 ± 0.0** | **5.0 ± 1.2** | **0.12 ± 0.0** | 3.74 ± 0.8 | 60.49 ± 6 |
| **d-FPC** | 2 | 0.43 ± 0.0 | **7.2 ± 0.7** | **0.18 ± 0.0** | 4.27 ± 1.2 | 55.02 ± 2 |
| **d-FPC** | 3 | **0.25 ± 0.1** | **6.0 ± 0.6** | **0.09 ± 0.0** | 4.57 ± 2.4 | 58.54 ± 3 |

TABLE I: Control performance for the PD and d-FPC in pushing a single strawberry along a linear trajectory.
In Trial-2, the magnitudes of the d-FPC and PD controller actions are similar, since the contact between the stem and sensor membrane is tighter due to a larger deformation of the sensor closer to the sensor base.
We test the performance of the systems in a three-DOF task with a bang-bang reference for translation along \(Y\) and \(Z\), and rotation \(W_{x}\), in Cartesian velocity space. This is a more challenging task because the robot wrist rotates 45 degrees along the pushing trajectory, which causes larger deformation of the stem and more slip instances. Based on Table II, d-FPC is the most effective controller in decreasing the stem displacement and slip instances. PD shows a smaller improvement in max displacement for the circular motion than for the linear motion, compared to the open-loop system. This indicates that not having enough reaction time in this task can lead to failure in achieving the control objective.
We test the generalisation performance of the pushing controller when pushing a stem in a cluster of strawberries. In this task, in addition to the target stem, other stems, leaves, or strawberries come into contact with the sensor, which makes both tactile prediction and control more challenging. Table III shows the results for pushing a stem in a cluster. Although the control performance of PD and d-FPC degrades compared to pushing an isolated stem, both systems improve the performance metrics relative to the open-loop system.
Fig. 5 shows cluster pushing results for sample trials of linear and circular pushing trajectories. For the linear push, PD shows a slight improvement compared to the open-loop system, but d-FPC reduces stem displacement more effectively. For the circular push, while the open-loop system loses contact with the stem because of large stem slippage in the last part of the trial, both PD and d-FPC reduce the stem displacement to avoid large slips. d-FPC keeps the displacement more bounded than the PD controller does.
## VI Conclusion
We presented a novel deep Functional Predictive Control (d-FPC) framework to control the contact location of a strawberry stem on our tactile finger. Our proposed method leverages a time-series model for generating action-conditioned tactile predictions and a convolutional neural network (CNN) model converting the tactile images to contact locations. We demonstrated the effectiveness of our approach through a series of experiments with a Franka Emika robot and a customised tactile finger, showing that our model can learn complex contact behaviours and generate actions to control the movements of flexible objects to keep them stable, e.g. when pushing a cluster of strawberries.
Overall, our work highlights the potential of deep learning-based approaches in addressing the challenges of tactile sensing-based manipulation tasks and lays the foundation for future research in this field.
|
2305.17419 | On random number generators and practical market efficiency | Modern mainstream financial theory is underpinned by the efficient market
hypothesis, which posits the rapid incorporation of relevant information into
asset pricing. Limited prior studies in the operational research literature
have investigated tests designed for random number generators to check for
these informational efficiencies. Treating binary daily returns as a hardware
random number generator analogue, tests of overlapping permutations have
indicated that these time series feature idiosyncratic recurrent patterns.
Contrary to prior studies, we split our analysis into two streams at the annual
and company level, and investigate longer-term efficiency over a larger time
frame for Nasdaq-listed public companies to diminish the effects of trading
noise and allow the market to realistically digest new information. Our results
demonstrate that information efficiency varies across years and reflects
large-scale market impacts such as financial crises. We also show the proximity
to results of a well-tested pseudo-random number generator, discuss the
distinction between theoretical and practical market efficiency, and find that
the statistical qualification of stock-separated returns in support of the
efficient market hypothesis is dependent on the driving factor of small
inefficient subsets that skew market assessments. | Ben Moews | 2023-05-27T08:55:25Z | http://arxiv.org/abs/2305.17419v2 | # On random number generators and practical market efficiency
###### Abstract
Modern mainstream financial theory is underpinned by the efficient market hypothesis, which posits the rapid incorporation of relevant information into asset pricing. Limited prior studies in the operational research literature have investigated tests designed for random number generators to check for these informational efficiencies. Treating binary daily returns as a hardware random number generator analogue, tests of overlapping permutations have indicated that these time series feature idiosyncratic recurrent patterns. Contrary to prior studies, we split our analysis into two streams at the annual and company level, and investigate longer-term efficiency over a larger time frame for Nasdaq-listed public companies to diminish the effects of trading noise and allow the market to realistically digest new information. Our results demonstrate that information efficiency varies across years and reflects large-scale market impacts such as financial crises. We also show the proximity to results of a well-tested pseudo-random number generator, discuss the distinction between theoretical and practical market efficiency, and find that the statistical qualification of stock-separated returns in support of the efficient market hypothesis is dependent on the driving factor of small inefficient subsets that skew market assessments.
keywords: Econometrics; finance; statistics; time series. MSC [2020]: 62P20, 90B90, 91B84
Footnote †: Journal of the Operational Research Society, 2023-07-20, [https://doi.org/10.1080/01605682.2023.2219292](https://doi.org/10.1080/01605682.2023.2219292).
## 1 Introduction
One of the primary constituents of financial research is the efficient market hypothesis, which, depending on its variation, prohibits the possibility of significant forecasting based on different kinds of data due to the sufficiently fast incorporation of available information into asset prices (Fama, 1965, 1970). It is, in some settings, linked to the hypothesis that markets are inherently unpredictable due to following random walks to varying degrees, effectively viewing financial time series as martingales or submartingales (Kendall and Hill, 1953; Cootner, 1964; Malkiel, 1973).
Randomised algorithms employing a random number generator (RNG) are ubiquitous in research applications, including fields as diverse as politics, biology, and cosmology (see, for example, Carson and Lubensky, 2009; Chaudhary et al., 2014; Moews et al., 2019). The most common application area is, of course, cryptography, as encryptions that underlie electronic communication protocols and, by extension, the Internet, rely on hard-to-predict pseudo-RNGs (Cavusoglu et al., 2016). Given these security challenges, there was an early desire to develop statistical tests for randomness, most famously the Diehard Battery of Tests, which contains the overlapping permutations test applicable to binary sequences (Marsaglia and Tsang, 2002).
In the literature on financial machine learning, a common way to approach forecasting challenges is to transform datasets into a binary representation, turning them into a two-class forecasting problem (Fischer and Krauss, 2018; Lee et al., 2019; Moews et al., 2019). When doing so in a way that removes known market features such as heteroskedasticity, meaning a lack of variance homogeneity along the evolution of a given time series, we can pose the question of whether tests assessing the quality of RNGs can then be applied to investigations of market efficiency.
The collection of prior research features two works covering this point of view, both in the operational research literature. First, Doyle and Chen (2013) introduce the application of the overlapping permutations test to the efficient market hypothesis in an exploratory study, analysing daily closing prices for 76 broad exchange indices and finding non-uniformity of changes in returns for a subset of them.
Explicitly building on the latter study, Noakes and Rajaratnam (2016) then focus on the Johannesburg Stock Exchange to investigate the efficiency of small, mid, and large market capitalisation indices over the 2005-2009 period. They extend the mentioned prior research by including adjustments for thin trading, meaning periods of no or low trading volumes, due to the same use of daily price series, and find more evidence for inefficiency among indices for companies with small market capitalisations.
In this paper, we confirm the viability of cross-disciplinary methodology transfers, from the field of random number generation to econometrics, bridging the gap through the application of operational research to the study of financial markets. We combine the strengths of the two existing studies in the literature by focussing on a single exchange generally considered to be efficient in the financial literature, and spanning a both larger and more recent time frame. We also make use of both monthly and daily returns, deviating from previous works by studying market efficiency over longer time horizons made available for information incorporation.
The further contributions of this paper are fourfold and go beyond the scope of the above-mentioned prior research. First, we investigate two sets of experiments, one separated by years and one by companies, to quantify variations in efficiency for both variables, and verify considerable annual variations. We challenge the latter finding with an analysis of cross-correlation systematics through monthly distributional sums, in which the impact of the recent global financial crisis can be observed.
Next, we compare both types of experiments to a state-of-the-art pseudo-RNG as a baseline for the overlapping permutations test, and find that company-separated tests show statistically significant inefficiencies by a slim margin, while year-separated tests paint a clearer picture of a lack of market efficiency. We then consider the role of a small subset of inefficiently traded outliers, demonstrating that company-separated return series fully qualify for randomness under the given test with only a small percentage of companies omitted, and put this finding in the context of prior results.
Lastly, we discuss the notions of theoretical and practical market efficiency as well as consequences of the former, and describe the sufficiency of our results for the latter. Our results have implications for the application of cryptographic tests in financial research, the evolution of weak-form inefficiency as an anomaly on volatile exchanges in developed markets, and the study of exchange inefficiency on the firm level.
## 2 Theory, data, and methodology
### Information efficiency in financial markets
As one of the cornerstones of modern financial theory, the efficient market hypothesis (EMH) makes statements about the incorporation of relevant information into stock prices. Initially proposed by Fama (1965), it branches into three major variations:
* The strong form states that asset prices reflect all information, both public and private, due to a timely incorporation by market participants.
* The semi-strong form relaxes this position and states the above only for publicly available information, allowing for profitable insider trading.
* The weak form, in a further constriction, posits that asset prices reflect past stock market information such as prices and trading volumes.
The weak-form EMH is of special interest for us, as it concerns the incorporation of past information regarding stock behaviour into the market, as opposed to newly emerging information such as earnings announcements. The latter can, due to the randomness of unpredictable new information, be viewed as noise injections into the market in the context of time series of returns, whereas past stock information should not have a significant impact on future market performance under the umbrella of all forms. While the prior literature on the topic of this paper does not cover market efficiency beyond the above, it is useful to provide a short overview. Fama (1970) frames the hypothesis in terms of expected returns,
\[\mathbb{E}(\tilde{p}_{i,t+1}|\Phi_{t})=[1+\mathbb{E}(\tilde{r}_{i,t+1}|\Phi_{ t})]p_{i,t}, \tag{1}\]
with \(p_{i,t}\) as the price of a given security \(i\) at time \(t\), and accordingly for \(p_{i,t+1}\), whereas \(r_{i,t}\) denotes the return percentage, meaning \(r_{i,t}={(p_{i,t+1}-p_{i,t})}/{p_{i,t}}\). \(\Phi_{t}\) represents information assumed to be incorporated into \(p_{i,t}\), and the tilde operator signifies the role as a random variable.
This formulation, despite its widespread adoption in financial economics, has not met universal approval. An early criticism is made shortly after by LeRoy (1976), who describes the definitions used in Fama (1970) as tautologies, an assessment repeated later as "[...] applying a conditional expectations operator to the identity defining the rate of return as equal to the price relative \(p_{t+1}/p_{t}\) (less one)." (LeRoy, 1989). Following Fama (1970), the position that \(p_{i,t}\) fully reflects \(\Phi_{t}\) then implies that
\[\begin{split}\mathbb{E}(\tilde{\alpha}_{i,t+1}|\Phi_{t})& =0,\text{with}\\ \alpha_{i,t+1}&=p_{i,t+1}-\mathbb{E}(p_{i,t+1}|\Phi_ {t}).\end{split} \tag{2}\]
The same holds for returns, meaning
\[\begin{split}\mathbb{E}(\tilde{\beta}_{i,t+1}|\Phi_{t})& =0,\text{with}\\ \beta_{i,t+1}&=r_{i,t+1}-\mathbb{E}(r_{i,t+1}|\Phi_ {t}).\end{split} \tag{3}\]
This is generally referred to as a "fair game" with respect to the available information by Fama (1970). As for Equation 1, LeRoy (1989) criticises that these equations follow from the definitions of \(\alpha_{i,t+1}\) and \(\beta_{i,t+1}\) with expectations conditional on \(\Phi_{t}\) on both sides, and argues that the former two definitions as fair game variables
do not restrict the stochastic process of the price. The implication is that any capital market would be efficient, while no empirical data could decide on market efficiency.
Later alternatives to these definitions include the reference to a true price model for assessing equilibrium values available to market agents, although this is acknowledged to introduce a joint hypothesis problem by Fama (1991), and these definitions continue to face criticisms as tautological (Pilkington, 2016). While the purpose of this section is a short-form overview of the background and equations commonly encountered, these objections should be kept in mind when assessing the literature, and reviews from different perspectives are available to the interested reader (Malkiel, 2005; LeRoy, 2010; Ausloss et al., 2016). Under the assumption that
\[\begin{split}\forall t\forall\Phi_{t}&:\mathbb{E} (\tilde{p}_{i,t+1}|\Phi_{t})\geq p_{i,t}\\ \Rightarrow\forall t\forall\Phi_{t}&:\mathbb{E}( \tilde{r}_{i,t+1}|\Phi_{t})\geq 0,\end{split} \tag{4}\]
the time series of prices \(\{p_{i,t}\}\) follows a submartingale. Interpreting market efficiency as the independence of successive returns, an additional assumption can be made, which is their identical distribution. This leads, as conditional and marginal probability distributions of independent random variables are identical, to
\[f(r_{i,t+1}|\Phi_{t})=f(r_{i,t+1}), \tag{5}\]
for a density function \(f\) that is invariant to \(t\). While widely accepted in mainstream financial theory, the EMH has attracted criticisms from the field of behavioural economics early on, for example by Nicholson (1968), and the general counterargument can be summarised as the doubtful statement that, maybe, people are not quite as rational as the mathematical maximisation of utility functions seems to imply (DellaVigna, 2009). These criticisms from a behavioural perspective persist until today, with a recent review available in Kapoor and Prosad (2017).
In more recent times, the field has further expanded into findings from neuroscience, with corresponding attacks on orthodox market efficiency (Ardalan, 2018). Despite this, the hypothesis has proven to possess explanatory power, thus cementing its place in the literature, and the results in this paper paint a picture of explicable variations rather than its rejection from a practical perspective.
While this section targets a limited overview, one alternative is of special interest in the discussion of Section 4 and warrants a short introduction due to its place between the EMH and behavioural criticisms mentioned above. Introduced by Lo (2004) the adaptive market hypothesis aims to reconcile the dominant notion of market efficiency with the findings of behavioural finance from an evolutionary perspective.
Contrary to the assumption that market forces are strong enough to overcome behavioural biases in aggregate, this alternative argues based on bounded rationality as pioneered by Simon (1955), as opposed to the axiom of rational expectations. Using this framework's assumption of "satisficing", the adoption of satisfactory choices due to the costs and limitations of human decision-making, the latter is explained through heuristics that are developed and adapted in an evolutionary learning process.
Should market circumstances change, maladaptive heuristics grow to be unfit, and market actors' behaviour needs to change to remain competitive. Changes in market efficiency in this context can be described, in simple terms, as markets being more efficient if many market agent "species" compete for limited financial opportunity resources, as opposed to few species competing for abundant resources.
The adaptive market hypothesis has found empirical success in the analysis of United States stock markets (Urquhart and McGroarty, 2014). Other studies from the last few years cover European and Asian markets, as well as cryptocurrency exchanges (Urquhart and McGroarty, 2016; Chu et al., 2019; Xiong et al., 2019). A recent overview for the interested reader can be found in Lo (2017). With this short primer covered, we can now think about the implications for binarised series of stock market returns and their relationship to random number generation.
### Exchange and empirical data description
We retrieve monthly close prices of Nasdaq-listed stocks, spanning the years 2001-2019, from the Wharton Research Data Services (WRDS) Compustat database. The Nasdaq features, despite a smaller total market capitalisation, more companies than the New York Stock Exchange, and is subject to considerably higher year-to-year volatility. The latter is especially interesting for analyses comparing annual differences in informational efficiency, which is why we opt for this exchange as a data source.
This provides us with a dataset featuring 809,195 entries for 4,905 companies, with associated company identifiers and dates. Missing values are a challenge commonly encountered in financial data, and have to be dealt with either through omission of affected entries or imputation methods. While the latter, despite their widespread use, are sometimes cautioned against, for example by Kofman and Sharpe (2003), the problem that we would encounter in our analysis is more fundamental:
We are, as Section 2.4 will detail in a bit, interested in the distribution of binary patterns, and the content of missing sections can be entirely unrelated to the pattern of missing entries, for example due to data collection issues stemming from technical difficulties limited to certain periods. As the question how these subtly changing binary patterns should be imputed is difficult to answer satisfactorily, we drop companies that feature missing monthly close prices within the time frame covered by the dataset. This leads to the omission of 417 companies, or
approximately 8.50%, and is followed by a further cleaning step that drops companies that feature less than a year's worth of entries, resulting in another 5.86% being sorted out, which is acceptable given that we investigate efficiency across the exchange and on the company level.
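As a small illustration of these two cleaning steps, a pandas sketch follows, assuming a long-format dataframe with hypothetical column names `gvkey` (company identifier), `date`, and `prccm` (monthly close price).

```python
import pandas as pd

df = pd.read_csv("compustat_monthly.csv")  # hypothetical export

# Step 1: drop companies with any missing monthly close price.
complete = df.groupby("gvkey")["prccm"].transform(lambda p: p.notna().all())
df = df[complete]

# Step 2: drop companies with less than a year's worth of entries.
enough = df.groupby("gvkey")["date"].transform("count") >= 12
df = df[enough]
```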
Figure 1 shows the number of entries per year, tracing the evolution of companies featured on the exchange over time, with dashed and solid lines indicating the dataset before and after the preprocessing, respectively. Aside from the increasing number of Nasdaq-listed public companies, two features stick out. The first is the slowing of growth around 2008, which can be explained by the impact of the Global Financial Crisis of 2007-2008 on IPOs (Aktas et al., 2019).
The second is the bump around 2005, which shows a slight decrease, and is mostly visible in the dataset before preprocessing. Natural explanations for this deviation include aftershocks of the Dotcom Bubble's burst a few years prior as well as the privatisation effect of the Sarbanes-Oxley Act in the United States following a series of corporate and accounting scandals, which regulates financial reporting and report keeping for public companies. This wave of privatisations for formerly public companies is demonstrated in the literature, for example by Engel and Wang (2007).
The effect on efficiency is still debated, as described in an overview by Bai et al. (2016), although this is combined with a reported lack of empirical evidence for disclosure legislation leading to breaks in market informativeness. As our approach studies market efficiency regardless of contributing factors, this is not of direct concern, but our analysis shows an improvement in month-to-month efficiency for the year said act was passed, lasting until the Global Financial Crisis of 2007-2008.
### Data preprocessing and considerations
WRDS Compustat, as described in Section 2.2, provides both cumulative adjustment factors and total return factors for the purpose of price adjustment for any given time period, with the former being a ratio that enables per-share prices to be adjusted for all stock splits and dividends occurring subsequent to the end of a given period. Similarly, the latter represents a multiplication factor that includes cash-equivalent distributions along with reinvestment of dividends, as well as the compounding effect of dividends paid on reinvested dividends. Following the database's guidelines, we compute adjusted close prices from unadjusted prices \(\hat{p}_{i,t}\), and for \(\delta_{i,t}\) and \(\gamma_{i,t}\) as the cumulative adjustment factor and the total return factor, respectively, as
\[p_{i,t}=\frac{\hat{p}_{i,t}\cdot\delta_{i,t}}{\gamma_{i,t}}. \tag{6}\]
In the next step, we calculate the return by computing the natural logarithm of the price ratio between the current and prior period for given price series of length \(N\),
\[r_{i,t}=\log_{e}\left(\frac{p_{i,t}}{p_{i,t-1}}\right),\text{ with }t\in\{1,2, \ldots,N\}. \tag{7}\]
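Equations (6) and (7) translate directly into vectorised operations; continuing the sketch above, with `ajexm` and `trfm` as assumed column names for the cumulative adjustment and total return factors, and following the equations rather than any particular WRDS field convention:

```python
import numpy as np

# Equation (6): adjusted close prices.
df["p_adj"] = df["prccm"] * df["ajexm"] / df["trfm"]

# Equation (7): logarithmic returns within each company's price series.
df = df.sort_values(["gvkey", "date"])
df["log_ret"] = df.groupby("gvkey")["p_adj"].transform(
    lambda p: np.log(p / p.shift(1)))
df = df.dropna(subset=["log_ret"])  # the first month of each series has no return
```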
In Equation (7), the logarithm takes into account the fact that individual stocks' price changes are partially dependent on price magnitudes (Karpoff, 1987). In order to visualise the relevance of working with returns instead of prices, Figure 2 shows recurrence plots for a random sample of companies from the dataset, with a recurrence plot \(R_{n,m}\) for horizontal and vertical axes \(n\) and \(m\) generally being calculated as
\[R_{n,m}=\Theta(\epsilon-||\overrightarrow{v}_{n}-\overrightarrow{v}_{m}||), \tag{8}\]
where \(\overrightarrow{v}\) is a phase space trajectory, \(\epsilon\) is a binarisation threshold, and \(\Theta\) is the Heaviside step function. Recurrence plots are frequently used in both statistics and chaos theory to visualise the recurrent nature of phase space trajectories, that is, similar areas being revisited in such a space. More informally and in our case, the plot shows the distance between points along a time series, omitting the binarisation and visualising recurrences as times a trajectory returns to a previous value.
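A hedged sketch of Equation 8 for a one-dimensional series follows; with `eps=None` it returns the unthresholded distance version used for Figure 2, and all names are ours.

```python
import numpy as np

def recurrence_plot(x, eps=None):
    d = np.abs(x[:, None] - x[None, :])   # pairwise distances |v_n - v_m|
    if eps is None:
        return d                          # distances instead of binarisation
    return (d <= eps).astype(int)         # Heaviside step: 1 inside threshold
```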
The first row of the figure, corresponding to a trajectory \(\overrightarrow{p}_{i}\), shows slightly darkened upper-left and lower-right corners, indicating slow adiabatic changes in line with the framing of market efficiency as a random walk with drift by Fama (1970). As we can see, raw prices are less than ideal in terms of their homogeneity, and the calculation of logarithmic returns in the second row generally alleviates these problems. In the final step, we binarise the return series using the median \(\tilde{r}_{i}\),
\[b_{i,t}=\left\{\begin{array}{ll}1&\quad\text{if }r_{i,t}>\tilde{r}_{i}\,\\ 0&\quad\text{else}.\end{array}\right. \tag{9}\]
Figure 1: Data points per calendar year. The figure shows entries for monthly stock prices available for Nasdaq-listed companies covering the years 2001–2019. The dashed line indicates the full dataset, whereas the solid line denotes the dataset with the omission of companies that feature missing values for price entries.
The choice of the median over the arithmetic mean follows Doyle and Chen (2013), as this option yields equal numbers of ones and zeroes in the resulting binary array, with an offset of one for uneven lengths. This binarisation also takes care of heteroskedasticity, that is, the lack of variance homogeneity along the evolution of a given time series, which corresponds to return volatility in our case. The presence of heteroskedasticity in markets is well-known in both the financial and operational research literature (Mandelbrot and Taylor, 1967; Lamoureux and Lastrapes, 1990; Fabozzi et al., 2017; Meligkotsidou et al., 2019).
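The full preprocessing chain of Equations 6, 7, and 9 then fits in a few lines; the sketch below assumes NumPy arrays of per-period unadjusted prices and WRDS adjustment factors, with names of our choosing.

```python
import numpy as np

def binarise_series(p_unadj, adj_factor, total_return_factor):
    p = p_unadj * adj_factor / total_return_factor  # Equation 6: adjusted prices
    r = np.log(p[1:] / p[:-1])                      # Equation 7: log returns
    return (r > np.median(r)).astype(int)           # Equation 9: median binarisation
```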
Lastly, another well-known effect in financial time series is momentum, meaning positive autocorrelation, which is generally described as a premier anomaly of the EMH (Fama, 1970). Financial research sometimes uses "runs tests" to check for unbroken sequences of upward or downward movements, which struggle to detect oscillations between positive and negative autocorrelations, as these cancel each other out. The overlapping permutations test described in the following Section 2.4 is not subject to this shortcoming, and is also able to check for patterns beyond such unbroken sequences.
### Overlapping permutations to test randomness
Originally developed as the generalised serial test by Good and Gover (1967), this approach is based on a small number of earlier works considering the use of permutations to assess randomness (Kendall and Smith, 1938; Bartlett, 1951; Good, 1953). It tests for the equiprobability of \(k^{\nu}\) separate \(\nu\)-nomes, or permutations of length \(\nu\), for \(k\) possible letters, and later found entry into the Diehard Battery of Tests for RNGs, where it is still one of the recommended core constituents of today's testing suites for pseudo-RNGs (Marsaglia and Tsang, 2002; Luengo and Villalba, 2021). In its binary variation, we set \(k=2\) and calculate, in an analogy to \(\chi^{2}\) for multinomial distributions,
\[\begin{split}\psi_{\nu}^{2}&=\sum_{i=1}^{2^{\nu}}\frac{(n_{i}-\lambda)^{2}}{\lambda},\text{ with}\\ \lambda&=\frac{N-\nu+1}{k^{\nu}}\end{split} \tag{10}\]
as the expectation for the frequency of each unique pattern under the assumption of uniformity. As \(\psi_{\nu}^{2}\) does not have an asymptotic tabular \(\chi^{2}\) distribution due to a violation of the assumption of independence caused by the overlap of windows, Good and Gover (1967) propose first differences to alleviate this problem as
\[\nabla\psi_{\nu}^{2}=\psi_{\nu}^{2}-\psi_{\nu-1}^{2}. \tag{11}\]
This statistic follows an asymptotic tabular \(\chi^{2}\) distribution, and taking the second difference additionally yields asymptotic independence, meaning that we can calculate
\[\begin{split}\nabla^{2}\psi_{\nu}^{2}&=\nabla \psi_{\nu}^{2}-\nabla\psi_{\nu-1}^{2}\\ &=\psi_{\nu}^{2}-2\psi_{\nu-1}^{2}+\psi_{\nu-2}^{2},\end{split} \tag{12}\]
with \(\xi_{\nu}=\nabla^{2}2^{\nu}=2^{\nu}-2\cdot 2^{\nu-1}+2^{\nu-2}=2^{\nu-2}\) as the associated degrees of freedom for \(\nu\geq 2\). While the difference of one between separate \(\psi_{\nu}^{2}\) values in the case of uneven array lengths from Section 2.3 raises the question of its impact, most terms cancel out in sums over \(\nabla^{2}\psi_{\nu}^{2}\), as shown by Doyle and Chen (2013). Following the prior literature, we make use of \(\nu\in\{1,2,\dots,8\}\) in our tests, and rely on the second differences due to their improved suitability for testing for uniform randomness (Marsaglia, 2005).
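As a reference point for the experiments that follow, a hedged implementation of the test is sketched below; the function names are ours, the expected frequency follows Equation 10, the second difference and degrees of freedom follow Equation 12, we use the convention \(\psi_{0}^{2}=0\), and `bits` is assumed to be a 0/1 integer array.

```python
import numpy as np
from scipy.stats import chi2

def psi_sq(bits, nu):
    if nu == 0:
        return 0.0
    windows = np.lib.stride_tricks.sliding_window_view(bits, nu)
    codes = windows @ (2 ** np.arange(nu)[::-1])  # encode each nu-gram as an integer
    counts = np.bincount(codes, minlength=2 ** nu)
    lam = (len(bits) - nu + 1) / 2 ** nu          # expected frequency per pattern
    return float(((counts - lam) ** 2 / lam).sum())

def second_difference(bits, nu):
    d2 = psi_sq(bits, nu) - 2 * psi_sq(bits, nu - 1) + psi_sq(bits, nu - 2)
    return d2, chi2.sf(d2, df=2 ** (nu - 2))      # p-value at 2**(nu-2) degrees of freedom
```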
Figure 2: Recurrence plots for stock prices and relative returns. Each column of the plot corresponds to one randomly sampled Nasdaq-listed company from a dataset covering the years 2001–2019. The first row shows recurrence plots for unprocessed stock prices, while the second row shows the same type of plot for logarithmic returns relative to the respective prior period’s price for the same company.
## 3 Empirical experiments and results
### Tests of monthly information incorporation
In the first step, we calculate \(\psi_{\nu}^{2}\) values for our dataset, split into two experimental streams for company-separated and year-separated arrays, respectively. As we are dealing with a set of 4,225 company-associated stocks, summary statistics are computed for these measurements. The upper part of Table 1 shows these results on a company-separated level, listing the arithmetic mean and standard deviation for window sizes \(\nu\in\{1,2,\ldots,8\}\). In addition, as large-valued subsets will become relevant further down, the table also shows the respective maximum per window size.
Results for \(\nu=1\) correspond to the measurement of single binary values, and are expectedly slightly larger than for subsequent year-level experiments due to the smaller average array length for monthly entries over the investigated time frame.
Next, we repeat the same experiment with entries separated by their year, leading to the results listed in the lower part of Table 1. As opposed to the company-separated case, 19 entries easily lend themselves to being listed individually in a table, in addition to the summary statistics already used before. The \(\psi_{\nu}^{2}\) values paint a very diverse picture in terms of the year-to-year volatility of measured pattern retrieval, with both higher means and maxima. In part, this can be explained by the possibility of patterns occurring within the constraints of a given annual time frame, whereas company-separated measurements spanning the entire time period of the dataset offer an avenue to even these pattern distributions out.
While the means in the upper part of Table 1 closely trace the values reported by Doyle and Chen (2013) for the Nasdaq Composite index, validating both the implementation and the stability of these analyses over different time frames, year-to-year analyses paint a different picture. Given these findings, the question arises whether different window sizes correlate in terms of these results.
We plot \(\psi_{\nu}^{2}\) values for window sizes \(\nu\in\{1,2,\ldots,8\}\) in Figure 3 to see how indicators for the presence of recurring patterns relate to different pattern lengths. The plot demonstrates mostly strong co-movements across window sizes, steering us towards the notion that when patterns of one size are recurring above uniformity, so are those corresponding to other window sizes.
In the next step, we calculate second differences as per Equation 12, again for both company-separated and year-separated datasets. Given the mathematical guarantees of \(\nabla^{2}\psi_{\nu}^{2}\) values outlined in Section 2.4 and further described by Good and Gover (1967), we can now make use of critical \(\chi^{2}\) values for given degrees of freedom at the commonly employed 5% level for statistical significance. Second differences that fail to meet this threshold, thus not supporting the rejection of the null hypothesis of uniform randomness, are indicated in bold.
The upper part of Table 2 shows results for company-separated data. Interestingly, while the arithmetic means indicate uniform randomness, the combined \(\chi^{2}\) values for multiplied degrees of freedom of \(|A|\times\xi_{\nu}\), for lengths of given arrays \(A\), do not share this result. This leads us to the suspicion that there are small subsets of highly inefficient stocks that skew the sums upwards, which is further supported by the calculation of proportions of statistically significant measures in the table.
Figure 3: Psi-square statistic per calendar year. The figure shows the evolution of \(\psi^{2}\) values for Nasdaq-listed companies in the 2001–2019 time frame, with shifting window sizes \(\nu\in\{1,2,\ldots,8\}\). The statistic follows Equation 10, with lighter shades of grey corresponding to higher values for the window size.
Figure 4: Kernel density estimates of monthly variability. The figure shows the distribution of binarised monthly returns per year, reshaped into month-wise columns and summed over each column.
We confirm this effect by dropping sufficiently small percentages of the highest contributors and reevaluating the combined \(\chi^{2}\) values. When doing so, it is important to readjust critical \(\chi^{2}\) values to account for the slightly reduced array sizes. The fourth to penultimate rows in the upper part of Table 2 demonstrate how small percentages (1%, 2%, 3%) result in statistical insignificance for the combined \(\chi^{2}\) values for different window sizes, starting with \(\nu=3,5,6\) for 1%. We extend the coverage by \(\nu=4,8\) for 2%, and finally add \(\nu=7\) for 3%, bringing the sums in line with the arithmetic means in terms of their support for uniform randomness.
Next, in the lower part of Table 2, we repeat the same measures for the year-separated dataset as for \(\psi_{\nu}^{2}\) values before in the lower part of Table 1. We see, just like for the company-separated results, statistically significant deviations from uniform randomness in the arithmetic means, which we have now established to be due to small subsets of companies with pattern-heavy stock behaviour. This results, when viewing measures at the annual level, in a high proportion of significant results, although these statistics vary starkly between years and demonstrate annual variations in market efficiency as measured through recurring patterns in this paper.
Finally, as a complementary visualisation, we reshape each year's array into its constituent months across stocks, and plot kernel density estimates of the resulting column sums in Figure 4. This translates to each column-wise sum being a count of ones per month and year, where uniformly-random distributions would approximate a narrow distribution around the mean. The horizontal axis is scaled based on array lengths to maintain comparability, and distributions are centred around the mean indicated by a solid vertical line. While not a perfect approximation by any means, the evolution of the count spreads roughly follows the time series in Figure 3, including a broad distribution corresponding to increased intra-market cross-correlations during the Global Financial Crisis of 2007-2008 (Zheng et al., 2012).
### Comparison to measurements of daily data
In contrast to Doyle and Chen (2013) and Noakes and Rajaratnam (2016), who perform analyses on 76 and 111 time series, respectively, our analysis covers the entire exchange and operates on 4,905 instruments as described in Section 2.2. While this makes the use of monthly instead of daily close prices a natural choice, repeating our analysis for daily time steps, despite the computational expense of bringing the
\begin{table}
\begin{tabular}{l r r r r r r r r} \hline \(\nu\) & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline \(\overline{\psi_{\nu}^{2}}\) & 0.16 & 1.63 & 5.24 & 13.15 & 29.28 & 61.84 & 127.64 & 258.64 \\ \(\sigma\left(\psi_{\nu}^{2}\right)\) & 1.72 & 4.28 & 8.25 & 14.01 & 22.71 & 35.94 & 57.74 & 93.19 \\ \(\max(\psi_{\nu}^{2})\) & 47.51 & 107.48 & 222.87 & 393.09 & 694.66 & 1218.16 & 2112.62 & 3606.53 \\ \hline
2001 & 0 & 3.19 & 62.97 & 179.27 & 337.94 & 533.66 & 982.93 & 1562.51 \\
[MISSING_PAGE_POST]
& 140.65 & 316.36 & 556.10 & 846.33 & 1291.62 & 1930.10 & 2693.00 \\ \hline \(\overline{\psi_{\nu}^{2}}\) & 4.20 \(\cdot 10^{-5}\) & 33.55 & 125.40 & 268.10 & 516.89 & 862.71 & 1380.91 & 2070.87 \\ \(\sigma\left(\psi_{\nu}^{2}\right)\) & 9.70 \(\cdot 10^{-5}\) & 59.70 & 149.53 & 259.14 & 439.46 & 696.19 & 1039.12 & 1468.50 \\ \(\max(\psi_{\nu}^{2})\) & 3.44 \(\cdot 10^{-4}\) & 248.53 & 594.81 & 973.97 & 1413.72 & 2449.59 & 3853.99 & 5694.62 \\ \hline \end{tabular}
\end{table}
Table 1: Psi-square statistic per window size. The table shows, for window sizes \(\nu\in\{1,2,\ldots,8\}\), the mean, standard deviation, and maximum for \(\psi^{2}\) values for monthly data of Nasdaq-listed companies in the 2001–2019 time frame. The upper and lower parts show results for data separated by company and year, respectively, as well as \(\psi^{2}\) values for each year.a
number of entries from 809,195 to 18,832,546, is beneficial as a comparison. Consequently, we obtain daily close prices for the same stocks and years, and repeat our experiments to test for statistically significant deviations from uniform randomness.
Table 3 shows the results of this additional analysis. Measurements for the arithmetic means and standard deviations for company-separated data stay almost identical to both our monthly results and Doyle and Chen (2013)'s values for the Nasdaq Composite, demonstrating the broad consistency of our results for different time frames. One interesting difference to our previous results is that the highest 5% of inefficient contributors, rather than 3%, have to be dropped to reach statistically insignificant combined measures, and thus support for efficiency, across all degrees of freedom.
Given the smaller time steps, more stocks have a chance to contribute highly inefficient periods to the overall result, which is confirmed by the listed higher shares of inefficient contributors for degrees of freedom requiring additional percentages to be dropped. While not unexpected, this provides a useful insight into the slight differences that data granularity can have on analyses.
One difference of particular interest is the clear concentration of inefficiency for higher degrees versus lower degrees of freedom, which indicates shorter non-random patterns being more spread out across instruments for daily data. Similarly, the results for the year-separated dataset repeat the previously observed deviations from uniform randomness in the arithmetic means, and the stronger presence of an inefficient subset leads to a relative elevation in year-to-year measurements.
Taking into account the known differences in market
\begin{table}
\begin{tabular}{l r r r r r r} \hline & \(\nabla^{2}\psi_{3}^{2}\) & \(\nabla^{2}\psi_{4}^{2}\) & \(\nabla^{2}\psi_{5}^{2}\) & \(\nabla^{2}\psi_{6}^{2}\) & \(\nabla^{2}\psi_{7}^{2}\) & \(\nabla^{2}\psi_{8}^{2}\) \\ & \(\xi=2\) & \(\xi=4\) & \(\xi=8\) & \(\xi=16\) & \(\xi=32\) & \(\xi=64\) \\ \hline \(\overline{\nabla^{2}\psi_{\nu}^{2}}_{\text{firms}}\) & **2.15** & **4.30** & **8.21** & **16.45** & **33.11** & **65.44** \\ \(\sigma\left(\nabla^{2}\psi_{\nu}^{2}\right)_{\text{firms}}\) & 2.49 & 3.39 & 5.03 & 7.62 & 11.89 & 17.93 \\ \(\sum\chi^{2}\) & 9067.27 & 18188.26 & 34676.73 & 69480.95 & 139904.27 & 276477.57 \\ \(\sum\chi^{2}_{-1\%}\) & **8403.91** & 17301.45 & **33297.16** & **67200.00** & 136140.16 & 270135.37 \\ \(\sum\chi^{2}_{-2\%}\) & **8015.03** & **16729.07** & **32454.35** & **65791.98** & 133849.35 & **265978.62** \\ \(\sum\chi^{2}_{-3\%}\) & **7678.14** & **16210.66** & **31670.69** & **64460.44** & **131629.57** & **261950.75** \\ \(\left|\chi^{2}_{p<0.05}\right|/\left|\chi^{2}\right|\) & \(5.68\cdot 10^{-2}\) & \(6.13\cdot 10^{-2}\) & \(5.21\cdot 10^{-2}\) & \(6.04\cdot 10^{-2}\) & \(6.20\cdot 10^{-2}\) & \(6.93\cdot 10^{-2}\) \\ \hline
[MISSING_PAGE_POST]
\hline \(\overline{\nabla^{2}\psi_{\nu}^{2}}_{\text{years}}\) & 58.29 & 50.85 & 106.09 & 97.03 & 172.37 & 171.77 \\ \(\sigma\left(\nabla^{2}\psi_{\nu}^{2}\right)_{\text{years}}\) & 80.01 & 83.08 & 122.54 & 75.93 & 127.52 & 99.14 \\ \(\sum\chi^{2}\) & 1107.59 & 966.17 & 2015.79 & 1843.62 & 3275.09 & 3263.58 \\ \(\left|\chi^{2}_{p<0.05}\right|/\left|\chi^{2}\right|\) & 0.79 & 0.89 & 0.89 & 0.95 & 0.95 & 0.89 \\ \hline \end{tabular}
* The calculation of \(\nabla^{2}\psi_{\nu}^{2}\) follows Equation 12.
\end{table}
Table 2: Second difference for increasing degrees of freedom. The table shows, for degrees of freedom \(\xi\in\{2,4,8,16,32,64\}\), the mean and standard deviation for \(\nabla^{2}\psi_{\nu}^{2}\) values for monthly data of Nasdaq-listed companies in the 2001–2019 time frame, as well as the combined \(\chi^{2}\) statistic. The upper part shows results for data separated by company, as well as combined \(\chi^{2}\) statistics for percentage omissions of the highest contributors. The lower part shows results for data separated by year, as well as \(\nabla^{2}\psi_{\nu}^{2}\) values for each year. Results failing the threshold for significance at the 5% level are indicated in bold.a
efficiency in favour of longer time scales (see, for example, Kim and Shamsuddin, 2008; Rodriguez et al., 2014), monthly and daily close prices reasonably mirror each other in terms of the overall findings and implications. This first application of the approach used in our experiments to varying data frequencies encourages the analysis of different time frames in related works, which we touch upon in Section 4.
### Comparison to pseudo-random numbers
When assessing the findings of the previous sections, a natural question is that of measurements one would expect from a uniformly-random distribution. This allows for a direct comparison to numerical results for our methodology that represent the case of the null hypothesis, and well-established pseudo-RNGs can be used as a baseline in our study's context of markets as an RNG analogue.
In terms of broader applications in programming languages, the MT19937 implementation of the Mersenne Twister algorithm has long been the standard pseudo-RNG since its original inception as an answer to then-current flaws in older generators (Matsumoto and Nishimura, 1998). In more recent years, however, other general-purpose algorithms have been developed and have begun to supplant it.
One example is the family of permuted congruential generators (PCG) introduced by O'Neill (2014). The
\begin{table}
\begin{tabular}{l r r r r r r} \hline & \(\nabla^{2}\psi_{3}^{2}\) & \(\nabla^{2}\psi_{4}^{2}\) & \(\nabla^{2}\psi_{5}^{2}\) & \(\nabla^{2}\psi_{6}^{2}\) & \(\nabla^{2}\psi_{7}^{2}\) & \(\nabla^{2}\psi_{8}^{2}\) \\ & \(\xi=2\) & \(\xi=4\) & \(\xi=8\) & \(\xi=16\) & \(\xi=32\) & \(\xi=64\) \\ \hline \(\overline{\nabla^{2}\psi_{\nu}^{2}}_{\text{firms}}\) & **2.44** & **4.48** & **8.53** & **16.56** & **32.75** & **64.88** \\ \(\sigma\left(\nabla^{2}\psi_{\nu}^{2}\right)_{\text{firms}}\) & 2.56 & 3.44 & 4.61 & 6.52 & 9.71 & 14.53 \\ \(\sum\chi^{2}\) & 10396.00 & 19075.23 & 36296.73 & 70430.19 & 139315.84 & 276003.70 \\ \(\sum\chi^{2}_{-1\%}\) & 9756.11 & 18199.13 & 35141.65 & 68662.60 & 136419.66 & 271053.15 \\ \(\sum\chi^{2}_{-2\%}\) & 9306.81 & 17581.32 & 34304.87 & 67314.86 & **134148.97** & **267121.43** \\ \(\sum\chi^{2}_{-3\%}\) & 8901.99 & 17016.34 & 33501.13 & **66023.67** & **131939.71** & **263231.51** \\ \(\sum\chi^{2}_{-4\%}\) & 8547.31 & **16512.25** & **32758.54** & **64816.08** & **129856.88** & **259525.77** \\ \(\sum\chi^{2}_{-5\%}\) & **8210.39** & **16032.74** & **32033.78** & **63628.77** & **127782.30** & **255810.71** \\ \(\left|\chi^{2}_{p<0.05}\right|/\left|\chi^{2}\right|\) & \(8.74\cdot 10^{-2}\) & \(7.97\cdot 10^{-2}\) & \(7.05\cdot 10^{-2}\) & \(6.46\cdot 10^{-2}\) & \(6.42\cdot 10^{-2}\) & \(6.54\cdot 10^{-2}\) \\ \hline
[MISSING_PAGE_POST]
\hline \(\overline{\nabla^{2}\psi_{\nu}^{2}}_{\text{years}}\) & 70.51 & 137.53 & 168.96 & 281.82 & 303.11 & 477.29 \\ \(\sigma\left(\nabla^{2}\psi_{\nu}^{2}\right)_{\text{years}}\) & 66.45 & 167.10 & 106.89 & 332.19 & 248.76 & 370.28 \\ \(\sum\chi^{2}\) & 1339.71 & 2613.12 & 3210.33 & 5354.63 & 5759.14 & 9068.57 \\ \(\left|\chi^{2}_{p<0.05}\right|/\left|\chi^{2}\right|\) & 0.89 & 0.95 & 1.00 & 1.00 & 1.00 & 1.00 \\ \hline \end{tabular}
\end{table}
Table 3: Second difference for increasing degrees of freedom. The table shows, for degrees of freedom \(\xi\in\{2,4,8,16,32,64\}\), the mean and standard deviation for \(\nabla^{2}\psi_{\nu}^{2}\) values for daily data of Nasdaq-listed companies in the 2001–2019 time frame, as well as the combined \(\chi^{2}\) statistic. The upper part shows results for data separated by company, as well as combined \(\chi^{2}\) statistics for percentage omissions of the highest contributors. The lower part shows results for data separated by year, as well as \(\nabla^{2}\psi_{\nu}^{2}\) values for each year. Results failing the threshold for significance at the 5% level are indicated in bold.a
PCG64 implementation found widespread adoption, and was made the default generator used by the NumPy mathematical library as of version 1.17 in 2019. Among the reasons for this adoption are the passing of the TestU01 suite with zero failures, which distinguishes it from the Mersenne Twister algorithm as the prior default (L'Ecuyer and Simard, 2007).
Using the PCG64 implementation to repeat our experiments from Section 3.1, Table 4 shows that the means for \(\psi_{\nu}^{2}\) are very close to those for firm-separated values in the upper part of Table 1, with slightly higher standard deviations, while both means and standard deviations are notably lower for the year-separated dataset. In both cases, the respective maxima are considerably lower than those for the empirical Nasdaq data, underlining the previously noted impact of inefficient subsets.
For second differences \(\nabla^{2}\psi_{\nu}^{2}\), Table 5 shows the same statistical metrics as Table 2 before, for both year-separated and firm-separated values. We can see that both arithmetic means and combined \(\chi^{2}\) measures retain the null hypothesis of uniform randomness across all degrees of freedom, setting the pseudo-RNG apart from our market analogue. We also tested a simplistic pseudo-RNG based on logistic maps with iterative seed draws to confirm the overlapping permutations test's ability to pick up on weak pseudo-RNGs. The sums in particular often barely qualify or fail the test for uniform randomness, highlighting the need for well-tested generators for comparative purposes.
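To illustrate the comparison, the sketch below runs the `second_difference` function from the earlier sketch on output of NumPy's default PCG64 generator and on a simplistic logistic-map generator; the logistic-map construction here is our own illustrative stand-in, not an exact specification of the generator described in the text.

```python
import numpy as np

rng = np.random.default_rng(42)            # PCG64 is NumPy's default bit generator
pcg_bits = rng.integers(0, 2, size=228)    # one monthly-length binary array

def logistic_bits(n, x0=0.3, r=4.0):
    x, out = x0, np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)              # chaotic logistic map iteration
        out[i] = x
    return (out > np.median(out)).astype(int)

for bits in (pcg_bits, logistic_bits(228)):
    print([round(second_difference(bits, nu)[1], 3) for nu in range(3, 9)])
```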
Results for company-separated data closely trace each other for empirical data and pseudo-RNG simulations, with marginally larger means and standard deviations for the former, and with means across both experiments maintaining the null hypothesis of uniformly-random data for all window sizes. The same holds true for combined \(\chi^{2}\) measures once the subset of high-impact contributors are removed, as described in Section 3.1. The proportion of statistically significant measures is also approximately the same for company-level Nasdaq data and pseudo-RNG simulations.
Contrary to that, year-separated experiments differ prominently between empirical and simulated data; means and standard deviations taken over annual measures are considerably larger than for the pseudo-RNG output. The latter also features a proportion of statistically significant results similar to the company-separated simulation, whereas the empirical dataset consists mostly of instances satisfying the criterion for inefficiency. As shown in
\begin{table}
\begin{tabular}{l r r r r r r r r} \hline \(\nu\) & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline \hline \(\overline{\psi_{\nu}^{2}}_{\text{years}}\) & \(1.23\cdot 10^{-5}\) & 0.60 & 2.95 & 9.84 & 23.38 & 51.45 & 110.37 & 231.18 \\ \(\sigma\left(\psi_{\nu}^{2}\right)_{\text{years}}\) & \(1.51\cdot 10^{-5}\) & 0.54 & 2.63 & 6.35 & 11.56 & 19.77 & 30.75 & 45.76 \\ \(\max(\psi_{\nu}^{2})_{\text{years}}\) & \(210.51\) & \(225.86\) & \(218.95\) & \(244.50\) & 201.04 & 210.43 & 184.10 & 201.31 \\ \hline \(\overline{\psi_{\nu}^{2}}_{\text{firms}}\) & \(6.93\cdot 10^{-3}\) & 1.07 & 4.17 & 11.32 & 26.37 & 57.47 & 120.64 & 248.09 \\ \(\sigma\left(\psi_{\nu}^{2}\right)_{\text{firms}}\) & \(1.13\cdot 10^{-2}\) & 1.47 & 3.51 & 6.50 & 10.81 & 16.67 & 25.08 & 37.02 \\ \(\max(\psi_{\nu}^{2})_{\text{firms}}\) & \(0.08\) & \(13.89\) & \(31.00\) & 65.27 & 139.00 & 274.40 & 513.21 & 949.11 \\ \hline \end{tabular}
\end{table}
Table 4: Psi-square statistic per window size. The table shows, for window sizes \(\nu\in\{1,2,\ldots,8\}\), the mean, standard deviation, and maximum of \(\psi_{\nu}^{2}\) for generated pseudo-random numbers. Rows 1–3 and rows 4–6 cover a set modelled on the year-separated and firm-separated dataset, respectively.a
\begin{table}
\begin{tabular}{l r r r r r r} \hline & \(\nabla^{2}\psi_{3}^{2}\) & \(\nabla^{2}\psi_{4}^{2}\) & \(\nabla^{2}\psi_{5}^{2}\) & \(\nabla^{2}\psi_{6}^{2}\) & \(\nabla^{2}\psi_{7}^{2}\) & \(\nabla^{2}\psi_{8}^{2}\) \\ & \(\xi=2\) & \(\xi=4\) & \(\xi=8\) & \(\xi=16\) & \(\xi=32\) & \(\xi=64\) \\ \hline \(\overline{\nabla^{2}\psi_{\nu}^{2}}_{\text{years}}\) & **1.76** & **4.52** & **6.66** & **14.52** & **30.85** & **61.90** \\ \(\sigma\left(\nabla^{2}\psi_{\nu}^{2}\right)_{\text{years}}\) & 2.42 & 3.30 & 3.01 & 6.15 & 7.02 & 9.83 \\ \(\sum\chi^{2}_{\text{years}}\) & **33.38** & **85.96** & **126.62** & **275.87** & **586.19** & **1176.06** \\ \(\left|\chi^{2}_{p<0.05}\right|/\left|\chi^{2}\right|\) & \(5.26\cdot 10^{-2}\) & \(10.53\cdot 10^{-2}\) & 0 & \(5.26\cdot 10^{-2}\) & 0 & 0 \\ \hline \(\overline{\nabla^{2}\psi_{\nu}^{2}}_{\text{firms}}\) & **2.04** & **4.04** & **7.91** & **16.05** & **32.07** & **64.28** \\ \(\sigma\left(\nabla^{2}\psi_{\nu}^{2}\right)_{\text{firms}}\) & 2.02 & 2.79 & 4.09 & 5.68 & 7.96 & 11.2 \\ \(\sum\chi^{2}_{\text{firms}}\) & **8621.46** & **17066.77** & **33402.40** & **67826.51** & **135477.96** & **271599.01** \\ \(\left|\chi^{2}_{p<0.05}\right|/\left|\chi^{2}\right|\) & \(5.04\cdot 10^{-2}\) & \(4.83\cdot 10^{-2}\) & \(4.88\cdot 10^{-2}\) & \(4.85\cdot 10^{-2}\) & \(4.31\cdot 10^{-2}\) & \(4.78\cdot 10^{-2}\) \\ \hline \end{tabular}
\end{table}
Table 5: Results for second differences for increasing degrees of freedom. The table shows, for degrees of freedom \(\xi\in\{2,4,8,16,32,64\}\), summary statistics for \(\nabla^{2}\psi_{\nu}^{2}\) values across generated pseudo-random numbers. Rows 1–4 and rows 5–8 cover a set modelled on the year-separated and firm-separated dataset, respectively. Results failing the threshold for significance at the 5% level are marked in bold.a
Section 3, a small number of stocks not filtered out in a year-by-year analysis drives much of these large values, although that specific impact does not explain the stark variability between years, with mostly years before the Global Financial Crisis of 2007-2008 qualifying for market efficiency for some of the window sizes.
## 4 Discussion
We have shown that there are significant year-to-year changes in exchange-wide efficiency, as well as an overall inefficiency in aggregated annual data. We also find that individual stocks of Nasdaq-listed companies are efficient in aggregate when taking small subsets of inefficient outliers into account, which offers a partial explanation of annual variability, and that stocks follow approximately the same level of uniformly-random assessment as well-tested pseudo-RNGs. The last point is especially relevant to annual anomalies, as they can both be driven by inefficient subsets and be resolved in terms of efficiency over longer time frames, placing the market in a state of overall efficiency at a larger scale. While we have adjusted monthly and daily close prices for splits and dividends as described in Section 2.3, experiments with unadjusted raw prices yield almost identical results. One area that warrants a closer look is the compatibility of our results with the concept of market efficiency in general.
Samuelson (1973) formalises a random walk model of market efficiency, demonstrating both the martingale property of such a model and its allowance for subsets of market participants, too small to affect prices appreciably, to systematically realise excess returns. This is, of course, especially relevant in terms of stronger forms of market efficiency, for both fundamental analysis as permitted under the semi-strong EMH and strong-form insider trading. It also means that market data that is transformed into binarised returns can still contain hidden inefficiencies exploited by small pools of capital, drawing a line between the model of theoretical efficiency and the leeway that practical implementations allow. The result in terms of uniform randomness in empirical data, in both cases and as far as statistical analyses go, is the same.
This bears similarity to two different proposals in the literature; self-destruction of predictability as described by Timmermann and Granger (2004), which posits that anomaly exploitability decays due to a time-limited presence or public dissemination of the anomaly, and the adaptive market hypothesis by Lo (2004), which attempts to reconcile market efficiency with behavioural economics through adaption processes in changing market environments.
Research on the latter mentioned in Section 2.1 generally focusses on foreign investment, market microstructure factors, and calendar effects. In contrast, we propose an additional technological perspective on the market environment and market actor adaptability. New developments, be it in terms of computing resources or methodology, do not push adopted approaches to market participation out due to the transitory nature of exploitable anomalies or widespread adoption following publication.
Instead, the process is a result of a technological arms race that renders prior solutions unfit for the changed market environment. An already established and prominent example of this process is the competition in terms of information transmission speeds among high-frequency trading firms. In recent years, the adoption of modern machine learning among financial practitioners, as well as the fast development of new methods in the field, has provided further fuel (Gogas and Papadimitriou, 2021). However, as long as adopted technologies, or satisficing heuristics under the terminology of the adaptive market hypothesis, do not outperform to a degree that renders predecessors ineffective, small pools of capital, akin to a small number of species in an abundant environment, can exploit anomalies in a shared manner.
The above paragraphs show that for the effects of theoretical market efficiency to occur, at least on a meaningful level and for varying notions of market efficiency, the underlying process can contain complications related to inefficiency. This should not come as a surprise, as the EMH, just like models in other disciplines rooted in the scientific method, is a model with explanatory power that does, necessarily, allow for a certain degree of leeway to remain succinct. Anomalies detected in our experiments are, thus, reconcilable with practical market efficiency.
With regard to financial economics and econometrics, this paper provides valuable insights on the time-dependent variability of weak-form market efficiency, as well as on the role of outliers in the assessment of overarching exchanges and broad indices. Natural follow-ups to this type of investigation are the more fine-grained analyses of flash crashes and financial crises, which Noakes and Rajaratnam (2016) already start for the impact of the Global Financial Crisis of 2007-2008 in South Africa; the potential for industry sector influences and similar effects that shape the presence and importance of inefficient subsets; and the measurement of differences between exchanges, both in terms of subset-driven inefficiency and annual variance, as well as regarding the impact of exchange volatility on overlapping permutations tests.
Similarly, in the field of market microstructure, the question arises whether the latter differences can be linked to exchange peculiarities such as trading rules, systems, and accessibility of advances in financial technology such as high-frequency trading. While our paper, due to limitations in its scope, follows Noakes and Rajaratnam (2016) in focussing on a particular exchange, a comparison to other developed markets, for example in Europe, is an interesting follow-up avenue (Borges, 2010).
In the same vein, and regarding the mentioned differences in trading environment and technology,
emerging markets also warrant further study to extend this area of application (Gregoriou, 2009). Lastly, the same approach can be transferred to different types of markets outside of stock exchanges. Here, the efficiency of foreign exchange markets is a prime target for follow-up research as a long-standing topic of interest in the financial literature (Burt et al., 1977; Chaboud et al., 2014). As mentioned in Section 3.2, different time scales are an interesting extension to these kinds of studies, and varying efficiencies of foreign exchange markets for varying data granularity is an additional direction for future research.
## 5 Conclusion
This paper builds on and extends a topic that has recently developed in the operational research literature, centred on the application of overlapping permutations tests from the field of random number generation to financial exchanges, to test for informational efficiency in markets. To this end, we go beyond existing research by covering a larger and more recent time frame with longer step sizes, and by splitting our experiments into the company level and year-separated analyses for Nasdaq data.
Our results for company-separated data demonstrate that stocks of individual Nasdaq-listed public companies feature average market efficiency as measured in this study, although this efficiency is only confirmed when omitting a small subset of outliers, which skew the overall assessment towards statistically significant inefficiency for the overall exchange. This has implications for prior research on whole markets and overarching indices, and for hypothesis tests of market efficiency more generally. For daily instead of monthly close prices, the number of outliers is slightly larger, as shorter-term inefficiencies in price behaviour can contribute to the results, and this increase is driven by short patterns spanning only a few days.
When performing the same analysis on year-separated data instead, we find that the same effect applies, but also that assessments vary starkly in their pattern recurrence, which is further confirmed through the distribution of summed counts, and reflects cross-correlations and decreased efficiency during financial crisis scenarios.
For both streams, we perform comparisons to a well-tested pseudo-random number generator and find comparable measures for company-separated data once outliers are removed, while annual analyses differ in their year-to-year variation. We also discuss the implications of theoretical versus practical efficiency for market participants, arguing for the latter kind of efficiency to allow for adaptive leeway as well as unrealised inefficiencies while maintaining the results implied by financial theory.
Our work contributes to the literature on cross-disciplinary methodology transfers in operational research, applications of cryptographic tools in econometric analyses, the evolution of weak-form inefficiency as an anomaly on volatile exchanges in developed markets, and the broader study of exchange efficiency on the individual company level as well as differences between exchanges and links to market microstructure.
## Acknowledgements
Special thanks go to Antonia Gieschen, whose comments on the potential role of outliers have made the analyses in this paper more complete, as well as Gbenga Ibikunle for previous conversations on the intricacies of testing for market efficiency. We also wish to express our gratitude to the two anonymous reviewers whose comments helped to improve this paper.
|
2308.11094 | WKB approximation to boson dark matter | Galactic dark matter halos may be composed of ultralight axions (ULAs) ($m_a
\lesssim 1$ eV) with wave functions that satisfy nonlinear
Schr\"{o}dinger-Poisson equations (SPA). We find eigenstates of SPA in WKB
approximation. The expansion parameter of the WKB approximation is
$\delta=1/\sqrt{S}$, where $S=2 M R G m_a^{2}$, with $M$ being the total mass,
$R$ the radius of the halo, and $G$ the gravitational constant. $S\gg 1$ for
almost all galaxies, even if the ULA mass is as small as $m_a=10^{-22} $ eV,
making the leading order WKB approximation almost exact. As the level spacing
of bound states is roughly proportional to $\delta$, the number of states in
the gravitational well is huge. We do not see a reason why not all or most of
them contribute to the halo. Using an appropriate distribution function allows
the summation of states to construct the profile of the halo as a function of
the gravitational potential, which can be found solving the Poisson equation.
Using various energy distribution functions, we obtain results similar to those
in simulations. Future plans include investigations of collapse through time
dependent generalizations, and inclusion of self-interactions, which also
induce decay processes of the halo. | Lauren Street, Peter Suranyi, L. C. R. Wijewardhana | 2023-08-22T00:24:47Z | http://arxiv.org/abs/2308.11094v1 | # WKB approximation to bosonic dark matter
###### Abstract
Galactic dark matter halos may be composed of ultralight axions (ULAs) (\(m_{a}\lesssim 1\) eV) with wave functions that satisfy nonlinear Schrodinger-Poisson equations (SPA). We find eigenstates of SPA in WKB approximation. The expansion parameter of the WKB approximation is \(\delta=1/\sqrt{S}\), where \(S=2MRGm_{a}^{2}\), with \(M\) being the total mass, \(R\) the radius of the halo, and \(G\) the gravitational constant. \(S\gg 1\) for almost all galaxies, even if the ULA mass is as small as \(m_{a}=10^{-22}\) eV, making the leading order WKB approximation almost exact. As the level spacing of bound states is roughly proportional to \(\delta\), the number of states in the gravitational well is huge. We do not see a reason why not all or most of them contribute to the halo. Using an appropriate distribution function allows the summation of states to construct the profile of the halo as a function of the gravitational potential, which can be found solving the Poisson equation. Using various energy distribution functions, we obtain results similar to those in simulations. Future plans include investigations of collapse through time dependent generalizations, and inclusion of self-interactions, which also induce decay processes of the halo.
The structure of galaxies and the rotation curves of stars in galaxies can potentially be explained with the assumption that most of galactic matter is composed of presently unknown particles, termed dark matter (DM), which interact very weakly with particles of the Standard Model. One of the most popular variants of DM is the weakly interacting massive particle (WIMP), consisting of massive, non-relativistic particles, heavier than neutrinos [1; 2; 3]. Since no such particles, in the appropriate mass range, have been discovered yet, other alternatives for DM have also been considered. Among others, a prominent candidate is ultralight axions (ULAs) with Compton wavelengths ranging from cosmic size [4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14] to that of masses \(m_{a}\sim 1\) eV [15].
There have been many simulations of the collapse of ULAs on galactic scales without [16; 17; 18; 19; 20; 21; 22] and with baryonic feedback [23]. In both cases, ULA systems were shown to collapse to a condensed core, surrounded by a virialized halo of non-relativistic ULA. Recently, there have also been simulations performed for ULAs with self-interactions, [24; 25; 26], for systems composed of multiple flavors without self-interactions [27], and for systems composed of multiple flavors with self-interactions [28]. In order to study dynamic perturbations to the central soliton, multiple eigenstates of a Schrodinger Poisson system where the gravitational potential for higher modes is generated by the solitonic ground state was performed in reference [29]. These states can be used to construct halos of low mass galaxies where due the the low value of S only a small number of states contribute to the halo.
In another work [30], self-consistent simulations of the halo constructed from excited states of the Schrodinger-Poisson (SP) equations were performed. The authors found that the collapse of the system consisted of a condensed soliton core surrounded by a halo composed of excited eigenstates. In a subsequent work [31], systems satisfying SP equations were analyzed assuming composition of a small number of energy eigenstates, including the stability and virialization of the system.
The purpose of this work is to construct DM from self-adjoint or complex ULAs with self-interactions using WKB approximation. We ignore self-interactions in solving the equations of motion, but in a subsequent work we will consider the effect of \(2\to 2\) interactions on the stability of excited eigenstates. We emphasize that, because only this particular interaction is relevant, our model can be used for real, or just as easily for complex, scalar fields. For the sake of simplicity, we focus on a real scalar field giving rise to a ULA subject to a \(\Phi^{4}\) self-interaction. Such an interaction is the leading order expansion term of an axion potential, \(V=m_{a}^{2}f^{2}[1-\cos(\Phi/f)]\). For the range of galactic sizes and ULA masses considered here, the contribution of self-interaction terms to the equations of motion is negligible compared to that of the gravitational interaction. The ratio of self-interactions to gravitational interactions scales as
\[\frac{\mathrm{S}I}{GI}\sim\frac{M_{P}^{2}}{f_{a}^{2}}\frac{1}{m_{a}^{2}R^{2}},\]
where \(M_{P}=G^{-1/2}\) is the Planck mass and \(G\) is Newton's constant, \(f_{a}\) is the axion decay constant, \(R\) is the radial scale of the system and \(m_{a}\) is the mass of the axion. Using \(m_{a}=10^{-22}eV\), \(f/M_{p}=10^{-3}\) for the Milky Way with \(R=10^{5}\) light year, we obtain \(SI/GI\simeq 10^{-4}\).
Self-interactions of ULAs may be important for extremely small galaxies. In fact, based on studies of axion stars [32; 33; 34] they can possibly generate a cutoff in the mass of small, stable galaxies with very large densities. That possibility will be investigated in future publications.
## I Equations of motion
Our basic assumption is that radial eigenfunctions of the halo satisfy a Schrodinger-Gross-Pitaevskii equation
\[E_{nl}\psi_{nl}=-\frac{1}{2m_{a}}\left[\psi_{nl}^{\prime\prime}+\frac{2}{r} \psi_{nl}^{\prime}\right]+\left[\frac{1}{2m_{a}}\frac{l(l+1)}{r^{2}}+V_{g} \right]\psi_{nl},\] (I.1)
where quantum numbers \(n\) and \(l\) characterize eigenstates, and \(V_{g}\) is the gravitational potential. The most general stationary state wave function of the halo is
\[\Psi(\mathbf{r},t)=\sum_{nlm}\psi_{nl}(r)Y_{lm}(\theta,\phi)e^{i(E_{nl}t+ \alpha_{nlm})},\] (I.2)
where \(\alpha_{nlm}\) are random phases. Then the gravitational potential is
\[V_{g}=-G\,m_{a}\int d^{3}r^{\prime}\frac{\Psi(\mathbf{r}^{\prime})^{2}}{| \mathbf{r}-\mathbf{r}^{\prime}|}\simeq-G\,m_{a}\sum_{nl}(2l+1)\int d^{3}r^{ \prime}\frac{\psi_{nl}(r^{\prime})^{2}}{|\mathbf{r}-\mathbf{r}^{\prime}|},\] (I.3)
where we average over time and random phases, and assume spherical symmetry.
The main result of this paper is the analytic derivation of the density distribution of a ULA halo. Suppose the gravitational potential of the system, \(V_{g}(r)\), is known. Then the density distribution satisfies the Poisson equation
\[\nabla^{2}V_{g}(r)=4\pi\,G\,m_{a}\rho(r).\] (I.4)
If we can construct the density distribution as a function of \(V_{g}\), then Eq. (I.4) constitutes a second order differential equation for \(V_{g}\) which can be solved, providing the density distribution. This density distribution can then be compared to simulations or observations.
Expanding the wave function in radial coordinates, assuming again that interference terms are negligible when taking averages and that the density distribution is spherically symmetric, the mass density can be written as
\[\rho(r)=m_{a}\sum_{nl}(2l+1)|\psi_{nl}|^{2},\] (I.5)
where we normalize wave functions as
\[\int d^{3}r|\psi_{nl}|^{2}=N_{nl},\] (I.6)
where \(N_{nl}\equiv M_{nl}/m_{a}\) and \(M_{nl}\) is the total mass of states having quantum numbers \(n\) and \(l\).
In phenomenological models ([35; 36]) the central density and radial scale are undetermined free parameters. To compare various radial scales we introduce the universal radial scale parameter \(R\), the rescaled dimensionless coordinate, \(z=r/R\), and the gravitational scaling function, \(w(z)\), through the equation
\[V_{g}(z)=-G\frac{m_{a}^{2}}{R}\int d^{3}z^{\prime}\frac{\tilde{ \rho}(z^{\prime})}{|z-z^{\prime}|}=-\frac{GM\,m_{a}}{R}w(z),\] (I.7)
where we rescale the density as \(\tilde{\rho}(z)=\left(R^{3}/m_{a}\right)\rho(r)\). Choosing \(R\) as the harmonic average of \(r\) weighted over the density,
\[\frac{1}{R}=\left\langle\frac{1}{r}\right\rangle,\] (I.8)
the gravitational potential at the center is \(V_{g}(0)=-GM\,m_{a}/R\). Then Eq. (I.7) implies that \(1\geq w(z)>0\), with \(w(0)=1\).
Phenomenological models of halos are of the form \(\rho(r)=\rho(0)f(r/R_{s})\), where \(R_{s}\) is a radial scale, differing from the one defined in Eq. (I.8). The concentration is defined as \(c=r_{\rm vir}/R_{s}\), the virial radius, \(r_{\rm vir}\), is usually defined as the radius of a sphere containing the total mass of the halo and densities are cut off at \(r_{\rm vir}\).
Using our method, all densities vanish at some finite value of \(z=z_{\rm vir}\), which we identify with the scaled virial radius, \(z_{\rm vir}=r_{\rm vir}/R\). In phenomenological models, scale parameters such as the concentration are not universal between models. As Eq. (I.8) implies \(\langle 1/z\rangle=1\), we can give a universal definition to the concentration, \(c=z_{\rm vir}\).
Note that \(R\) is of the same order of magnitude but larger than the scaling parameter of phenomenological models ([35; 36]) and unlike those, _model independent_. That can be shown by calculating the harmonic average, Eq. (I.8), in those models. The harmonic radius of a NFW halo is
\[R=R_{\rm NFW}\left[\left(1+\frac{1}{c_{\rm NFW}}\right)\log(1+c_{\rm NFW})-1 \right]>R_{\rm NFW},\] (I.9)
where \(R_{\rm NFW}\) is the scaling radius of the NFW halo, and \(c_{\rm NFW}\) is the NFW concentration. Similarly, the harmonic radius of the Burkert halo is
\[R=R_{\rm B}\frac{\log(1+c_{\rm B}^{2})+2\log(1+c_{\rm B})-2\tan^{-1}(c_{\rm B })}{\log(1+c_{\rm B}^{2})-2\log(1+c_{\rm B})+2\tan^{-1}(c_{\rm B})}>R_{\rm B},\] (I.10)
where \(R_{\rm B}\) is the scaling radius of the Burkert halo, and \(c_{\rm B}\) is the Burkert concentration.
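Both ratios are easily evaluated numerically; the sketch below implements Equations (I.9) and (I.10), with the concentration \(c=10\) as an illustrative choice.

```python
import numpy as np

def harmonic_over_nfw(c):
    # Equation (I.9): harmonic radius in units of the NFW scale radius.
    return (1.0 + 1.0 / c) * np.log(1.0 + c) - 1.0

def harmonic_over_burkert(c):
    # Equation (I.10): harmonic radius in units of the Burkert scale radius.
    num = np.log(1 + c**2) + 2 * np.log(1 + c) - 2 * np.arctan(c)
    den = np.log(1 + c**2) - 2 * np.log(1 + c) + 2 * np.arctan(c)
    return num / den

print(harmonic_over_nfw(10.0), harmonic_over_burkert(10.0))  # both exceed 1
```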
Another interesting property of the gravitational scaling function, \(w(z)\) is related to its behavior at the virial radius. Virial radius is defined by \(w(z_{\rm vir})=0\). Taking Eq. (I.7) at \(z=z_{\rm vir}\) and using Gauss's theorem we obtain
\[V_{g}(z_{\rm vir})=-\frac{GM\,m_{a}}{r_{\rm vir}}=-\frac{GM\,m_{a}}{R\,z_{\rm vir }}=-\frac{GM\,m_{a}}{R}w(z_{\rm vir}),\] (I.11)
with the implication \(w(z_{\rm vir})=1/z_{\rm vir}\).
We also define rescaled dimensionless wave functions, with unit normalization, as
\[\phi_{nl}(z)=R^{3/2}N_{nl}^{-1/2}\psi_{nl}(r).\] (I.12)
Writing Eq. (I.1) in terms of rescaled wave functions we obtain
\[\epsilon_{nl}\phi_{nl}=\frac{1}{S}\left(-\phi_{nl}^{\prime\prime}-\frac{2}{z}\phi_{nl}^{\prime}+\frac{l(l+1)}{z^{2}}\phi_{nl}\right)-w(z)\phi_{nl},\] (I.13)
where
\[\epsilon_{nl}=\frac{1}{S}2m_{a}R^{2}E_{nl}\] (I.14)
and where the dimensionless halo size parameter is
\[S=2MGR\,m_{a}^{2}.\] (I.15)
As the expectation value of the kinetic term is positive, Eq. (I.13) implies that the range of the scaled energy parameter is \(-1\leq\epsilon\leq 0\).
Rough estimates show that \(S\gg 1\) even for moderate size galaxies, even if the ULA mass is as small as \(m_{a}=10^{-22}\) eV. For the Milky Way \(S_{MW}\gtrsim 10^{5}\). As a contrast to the galactic halo, \(S=O(1)\) for a soliton, or axion star.
Using Eq. (I.13) we estimate the number of bound states. The depth of the rescaled potential well is \(w(0)=1\) and, as we see later, the spectrum of \(\epsilon_{nl}\) fills most of the interval \(-1\leq\epsilon\leq 0\). As we will see in the next section, WKB quantization implies that \(\epsilon\) is quantized as \(\epsilon\sim-n/\sqrt{S}\), where \(n\) is the principal quantum number. Then the principal quantum number takes up to \(O(\sqrt{S})\) different values. Including states with nonzero angular momentum, for which \(l\) also ranges over \(O(\sqrt{S})\) values, we estimate that the number of energy levels in the potential well is \(O(S)\), a very large number. We find no reason why most of those states would not be occupied by the astronomical number of ULAs. In a previous study [31; 37], only a small number of excited states were considered.
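An order-of-magnitude sketch of the size parameter in natural units supports these estimates; the conversion constants and Milky Way inputs below are rough illustrative values of our choosing.

```python
# S = 2 M G R m_a^2 in natural units, with G = 1/M_P^2.
M_SUN_GEV = 1.116e57        # solar mass in GeV
M_PLANCK_GEV = 1.221e19     # Planck mass in GeV
LY_IN_INV_GEV = 4.794e31    # one light year in GeV^-1

def size_parameter(mass_solar, radius_ly, m_a_eV):
    M = mass_solar * M_SUN_GEV
    R = radius_ly * LY_IN_INV_GEV
    m_a = m_a_eV * 1e-9     # eV -> GeV
    return 2.0 * M * R * m_a**2 / M_PLANCK_GEV**2

S = size_parameter(1e12, 1e5, 1e-22)   # rough Milky Way scale inputs
print(f"S = {S:.1e}, levels ~ sqrt(S) = {S**0.5:.1e}")
```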
## II WKB approximation
Eq. (I.13) lends itself to a perturbative WKB expansion in \(\delta=1/\sqrt{S}\)[38] which gives a solution to the differential equation,
\[\phi_{nl}(z)=\text{Exp}\left[\frac{1}{\delta}\sum_{k=0}^{\infty}\delta^{k}P_ {k}\right],\] (II.1)
where the two independent solutions in the classically allowed, oscillating region are given by
\[P_{0} =\pm\,i\,\int_{z_{\text{min}}}^{z}dz^{\prime}\sqrt{F_{nl}(z^{ \prime})},\] (II.2) \[P_{1} =\log\left[\frac{\mathcal{N}_{nl}}{zF_{nl}(z)^{1/4}}\right]\] (II.3)
while terms \(P_{2},\ldots\) are of \(O(S^{-1/2})\) and negligible. Eq. (I.13) is linear, so any linear combination of solutions is admissible. The unique solution satisfying the boundary conditions, finiteness at the turning points, which are zeros of \(F_{nl}(z)\), is
\[\phi_{nl}=\frac{\mathcal{N}_{nl}}{zF_{nl}(z)^{1/4}}\sin\left(\sqrt{S}\int_{z_ {\text{min}}}^{z}dz^{\prime}\sqrt{F_{nl}(z^{\prime})}\right),\] (II.4)
where \(F_{nl}(z)\) is
\[F_{nl}(z)=w(z)+\epsilon_{nl}-\frac{l(l+1)}{z^{2}\,S}.\] (II.5)
Note that wave functions \(\phi_{nl}(z)\) are normalized to 1. Multiplier \(\mathcal{N}_{nl}\) is introduced to ensure the correct normalization of wave functions,
\[\mathcal{N}_{nl}^{-2}=2\pi\int_{z_{\text{min}}}^{z_{\text{max}}}\frac{dz}{F_{ nl}(z)^{1/2}}.\] (II.6)
In the classically forbidden region the wave function decreases exponentially as \(\phi\propto\exp(-\sqrt{S}c)\), where \(c\) is finite as \(S\rightarrow\infty\). In the limit \(S\rightarrow\infty\) the approximate solution Eq. (II.4) becomes exact. As usual, \(z_{\text{min}}\) and \(z_{\text{max}}\) are the turning points where \(F_{nl}=0\). Eq. (II.4) is also known as the Wentzel ansatz to the WKB solution of Eq. (I.13).
Noting that in Eq. (II.4), factors other then the exponential function vary slowly as function of \(n\) and \(l\), the quantization condition for energy eigenvalues \(\epsilon_{nl}\) can be read off from Eq. (II.4) [39; 40]:
\[\int_{z_{\text{min}}}^{z_{\text{max}}}\sqrt{F_{nl}}=\frac{n}{\sqrt{S}}\pi.\] (II.7)
The principal quantum number, \(n\), and the orbital quantum number, \(l\), only appear in the combinations \(\nu=n/\sqrt{S}\) and \(\lambda=(l+1/2)/\sqrt{S}\).1 As the separation of subsequent values of quantum numbers \(\nu\) and \(\lambda\) vanishes as \(S\) increases, they will be replaced by continuous variables. That replacement will facilitate the construction of the density as a function of \(w(z)\) in the next section. Using the continuous quantum numbers, we can write the wave function as
Footnote 1: We use Langer's method to define \(\lambda\) [39; 40].
\[\phi_{\epsilon\lambda}=\frac{\mathcal{N}_{\epsilon\lambda}}{zF_{\epsilon\lambda}(z)^{1/4}}\sin\left(\sqrt{S}\int_{z_{\rm min}}^{z}dz^{\prime}\sqrt{F_{\epsilon\lambda}(z^{\prime})}\right).\] (II.8)
In Eq. (II.8) we use a unique connection between quantum numbers \(\nu\) and \(\epsilon\) through the quantization condition (II.7), as will be explained in the next section.
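A hedged numerical sketch of this connection follows: for given \(\nu\) and \(\lambda\), the scaled energy is found by bisecting Eq. (II.7) in \(\epsilon\). The toy potential \(w(z)=1/(1+z)\) is an illustrative stand-in for the self-consistent scaling function, and all names are ours.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def w(z):
    return 1.0 / (1.0 + z)                      # illustrative scaling function

def action(eps, lam):
    F = lambda z: w(z) + eps - lam**2 / z**2
    zs = np.linspace(1e-6, 200.0, 20001)        # grid to bracket the turning points
    allowed = zs[F(zs) > 0]
    if allowed.size == 0:
        return 0.0
    integrand = lambda z: np.sqrt(max(F(z), 0.0))
    return quad(integrand, allowed[0], allowed[-1])[0]

def eigenvalue(nu, lam=0.0):
    # The action grows monotonically with eps, so the root in (-1, 0) is unique.
    return brentq(lambda e: action(e, lam) - nu * np.pi, -0.999, -1e-6)

print(eigenvalue(nu=1.0, lam=0.1))              # a scaled energy in (-1, 0)
```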
## III Construction of the halo
The rescaled wave function of the halo is obtained from Eq. (I.2) as
\[\Phi(z,t)=\sum_{nlm}\phi_{nl}(z)Y_{lm}(\theta,\phi)e^{i(\alpha_{nlm}+ \epsilon_{nl}t)}.\] (III.1)
Then the total density, averaged over rotations and random phases, and normalized to 1, is
\[\rho(z)=|\Phi(z,t)|^{2}\simeq\sum_{nl}(2l+1)\frac{M_{nl}}{M}|\phi_{nl}|^{2}.\] (III.2)
Consider now that wave functions depend on quantum numbers \(n\) and \(l\) only through combinations \(\nu=n/\sqrt{S}\), in Eq. (II.7) and \(\lambda=\sqrt{l(l+1)}/\sqrt{S}\), in Eq. (II.5). The rescaled quantum numbers, \(\nu\) and \(\lambda\), have finite ranges and become dense in those ranges as \(S\rightarrow\infty\). Therefore, the error of turning summations over \(n\) and \(l\) into integrations over \(\nu\) and \(\lambda\) is only of \(O(1/\sqrt{S})\) and negligible for almost all galaxies. Then we obtain
\[\rho(z)=S^{3/2}\int d\nu\int d\lambda^{2}\,\frac{\mathcal{N}_{\epsilon\lambda}^{2}\,b(\epsilon,\lambda)}{z^{2}F_{\epsilon\lambda}^{1/2}},\] (III.3)
where the dimensionless, continuous distribution function \(b(\epsilon,\lambda)\)_interpolates_ the distribution function \(M_{nl}/M\). It is normalized as
\[S^{3/2}\int d\nu\int d\lambda^{2}b(\epsilon,\lambda)=1.\] (III.4)
The total variation of \(\epsilon\) is 1, or \(1+\epsilon_{c}\), in case there is a gap of size \(|\epsilon_{c}|\). There are \(O(\sqrt{S})\) energy levels, implying that the discrete values of \(\epsilon\) are dense on interval \((-1,0)\). Wave functions depend explicitly on \(\epsilon\) only, therefore we change integration variable \(\nu\) to \(\epsilon\). We use the quantization condition (Eq. (II.7)), written in terms of continuous quantum numbers,
\[\nu\,\pi=\int_{z_{\rm min}}^{z_{\rm max}}dz\sqrt{w[z]+\epsilon-\frac{\lambda^ {2}}{z^{2}}},\] (III.5)
to find the appropriate Jacobian. Taking the derivative of Eq. (III.5) with respect to \(\epsilon\) at constant \(\lambda\), we note that terms coming from differentiating the integral with respect to boundary values \(z_{\rm min}\) and \(z_{\rm max}\) vanish. Then using Eq. (II.6) we obtain
\[\frac{d\nu}{d\epsilon}\pi=\frac{1}{2}\int_{z_{\rm min}}^{z_{\rm max}}\frac{dz} {\sqrt{w(z)+\epsilon-\frac{\lambda^{2}}{z^{2}}}}=\frac{1}{4\pi}\mathcal{N}_{ nl}^{-2},\] (III.6)
Substituting Eq. (III.6) into Eq. (III.3) we arrive at our final result for the density:
\[\rho(z)=\frac{S^{3/2}}{4\pi^{2}}\int_{-w(z)}^{\epsilon_{\rm max}}d\epsilon \int_{0}^{z^{2}(w(z)+\epsilon)}\frac{d\lambda^{2}b(\epsilon,\lambda)}{z^{2} \sqrt{w[z]+\epsilon-\frac{\lambda^{2}}{z^{2}}}},\] (III.7)
where \(\epsilon_{\rm max}\leq 0\) is an admissible integration constant allowing for the existence of an energy gap. If the distribution function \(b(\epsilon,\lambda)\) is independent of the angular momentum quantum number, \(\lambda\), integration over \(\lambda\) yields
\[\rho(z)=\frac{S^{3/2}}{2\pi^{2}}\int_{-w(z)}^{\epsilon_{\rm max}}d\epsilon\sqrt{ w[z]+\epsilon}\,b(\epsilon)=\rho(0)\frac{\int_{-w(z)}^{\epsilon_{\rm max}}d \epsilon\sqrt{w(z)+\epsilon}\,b(\epsilon)}{\int_{-1}^{\epsilon_{\rm max}}d \epsilon\sqrt{1+\epsilon}\,b(\epsilon)}.\] (III.8)
Even if \(b(\epsilon,\lambda)\) depends on \(\lambda\), it is expected that an expansion with respect to \(\lambda^{2}\) converges rapidly. Then, using the expansion
\[b(\epsilon,\lambda)=\sum_{k=0}b^{(k)}(\epsilon)\lambda^{2k}\] (III.9)
we can integrate the series with respect to \(\lambda^{2}\), term by term to obtain
\[\rho(z)=\frac{\rho(0)}{2}\frac{\sum_{k}z^{2k}\frac{\sqrt{\pi}\Gamma(1+k)}{ \Gamma(3/2+k)}\int_{-w(z)}^{\epsilon_{\rm max}}d\epsilon\,b^{(k)}(\epsilon)( w(z)+\epsilon)^{k+1/2}}{\int_{-1}^{\epsilon_{\rm max}}d\epsilon\sqrt{1+ \epsilon}\,b(\epsilon)}\] (III.10)
Using Eqs. (III.8) or (III.10), the Poisson equation,
\[w^{\prime\prime}(z)+\frac{2}{z}w^{\prime}(z)=-4\pi\rho(z)\] (III.11)
becomes a differential equation for \(w(z)\). Since \(w(0)=1\) and \(w^{\prime}(0)=0\) to ensure that the density is regular at the center (the exception being the case when the density is replaced by the NFW profile), once a distribution function is provided there are no undetermined integration constants other than \(\epsilon_{\rm max}\) and \(\rho(0)\). As we will see in Sec. V, for some distribution functions \(b(\epsilon,\lambda)\) the density (Eq. (III.7) or (III.8), as appropriate) can be integrated analytically.
The strategy for solving Eq. (III.11) is simpler for Eq. (III.8) than for Eq. (III.10), because after rescaling \(z\) as \(x=z\,\sqrt{\rho(0)}\), which eliminates the central density from the equation, we find a unique solution at fixed \(\epsilon_{\rm max}\). Among other quantities, we find \(\langle 1/x\rangle\). Then we can restore the original scaling variable as \(z=x\langle 1/x\rangle\), which follows because \(\langle 1/z\rangle=1\). We also find \(\rho(0)=\langle 1/x\rangle^{2}\).
It is more complicated to find the solution of Eq. (III.11) in the case when the energy spectrum is not degenerate. Then, due to the explicit dependence of \(\rho(z)\) on \(z\), coordinate \(z\) cannot be eliminated from Eq. (III.10) by a simple rescaling. In that case, rather than rescaling \(z\) we need to search for an appropriate value of \(\rho(0)\) at fixed \(\epsilon_{\rm max}\), such that \(\rho(z)\) is normalized to 1.
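To make the first of these strategies concrete, here is a minimal numerical sketch (our own illustration, not the authors' code): for a flat, \(\lambda\)-independent distribution with \(\epsilon_{\rm max}=0\), Eq. (III.8) reduces to \(\rho(z)=\rho(0)\,w(z)^{3/2}\), and the rescaling \(x=z\sqrt{4\pi\rho(0)}\) turns Eq. (III.11) into the \(n=3/2\) Lane-Emden equation.

```python
# Solve w'' + (2/x) w' = -w^(3/2) with w(0) = 1, w'(0) = 0 and locate the
# first zero of w, i.e. the edge of the halo in rescaled units.
import numpy as np
from scipy.integrate import solve_ivp

def lane_emden(x, y):
    w, dw = y
    source = np.clip(w, 0.0, None) ** 1.5   # density vanishes where w <= 0
    return [dw, -source - 2.0 * dw / x]

sol = solve_ivp(lane_emden, (1e-6, 10.0), [1.0, 0.0],
                dense_output=True, rtol=1e-10, atol=1e-12)
x = np.linspace(1e-6, 10.0, 4000)
w = sol.sol(x)[0]
x_edge = x[np.argmax(w <= 0.0)]             # first zero of w
print(f"halo edge at x = {x_edge:.2f}")     # ~3.65 for the n = 3/2 polytrope
```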
Dynamical simulations can also generate numerical distribution functions [30]. A time-dependent version of this work, to be published, can potentially do so as well. One advantage of our analytic approximation method is scalability: dynamical simulations are limited by computational power to relatively small galactic halos.
Finally, we note that rotation curves have simple scale invariant representations in terms of our scaling variable and scaling function. Up to an overall factor of dimension of velocity, \(v_{0}\),
\[v(r)=v_{0}\sqrt{-z\,w^{\prime}(z)}.\] (III.12)
## IV Virial theorem in WKB
Simulations show that, in agreement with expectations, the halo is virialized after collapse [17][18]. Using _stationary_ WKB wave functions the halo does not automatically satisfy the virial theorem. However, a time dependent generalization of the WKB approximation is expected to converge to a virialized state. Such a generalization is deferred to a future publication.
The virial theorem for a self-gravitating system, without contact interactions, is \(2K+E_{g}=0\), where \(K\) is the total kinetic energy and \(E_{g}\) is the total gravitational energy. Using the definitions of the previous sections, they are
\[K = \frac{1}{2m_{a}^{2}}\sum_{nl}(2l+1)\int d^{3}r\left[|\psi^{\prime }_{nl}(r)|^{2}+\frac{l(l+1)}{r^{2}}|\psi_{nl}(r)|^{2}\right],\] (IV.1) \[E_{g} = \frac{1}{2m_{a}}\sum_{nl}(2l+1)\int d^{3}r|\psi_{nl}(r)|^{2}V_{g }(r),\] (IV.2)
where \(V_{g}(r)\) has been defined in Eq. (I.3). Using the dimensionless wave functions and rescaled radial parameter, \(z=r/R\), we can rewrite \(K\) and \(E_{g}\) as
\[K =\frac{1}{2m_{a}^{2}R^{2}}\sum_{nl}(2l+1)M_{nl}\times\int d^{3}z \left[\phi_{nl}^{\prime}[z]^{2}+\frac{l(l+1)}{z^{2}}|\phi_{nl}(z)|^{2}\right],\] (IV.3) \[E_{g} =-\frac{MG}{2R}\sum_{nl}(2l+1)M_{nl}\int d^{3}z|\phi_{nl}(z)|^{2} w(z)\] (IV.4)
Finally, using Eq. (II.8), introducing continuous energy and angular momentum variables as in Eq. (III.3), and expressing \(G\) by size parameter \(S\), we obtain
\[K =2C\int dz\int_{-w(z)}^{\epsilon_{c}}d\epsilon\int_{0}^{z^{2}[w(z )+\epsilon]}d\lambda^{2}b(\epsilon,\lambda)\left[\sqrt{w(z)+\epsilon-\frac{ \lambda^{2}}{z^{2}}}+\frac{\lambda^{2}}{z^{2}\sqrt{w(z)+\epsilon-\frac{ \lambda^{2}}{z^{2}}}}\right],\] (IV.5) \[E_{g} =-C\int dz\int_{-w(z)}^{\epsilon_{c}}d\epsilon\int_{0}^{z^{2}[w( z)+\epsilon]}d\lambda^{2}b(\epsilon,\lambda)\frac{w(z)}{\sqrt{w(z)+\epsilon-\frac{ \lambda^{2}}{z^{2}}}},\] (IV.6)
where \(C=SM/(4m_{a}^{2}R^{2})\) is a constant and \(\epsilon_{c}\leq 0\) is an admissible energy cutoff parameter. Then the virial theorem applied to our system becomes independent of all dimensional parameters and of the dimensionless size parameter \(S\). It reads
\[\int_{0}^{z_{\rm vir}}dz\,z^{2}\int_{-w(z)}^{\epsilon_{c}}d\epsilon\int_{0}^{z^{2}[w(z)+\epsilon]}d\lambda^{2}\frac{b(\epsilon,\lambda)(3w(z)+4\epsilon)}{\sqrt{w(z)+\epsilon-\frac{\lambda^{2}}{z^{2}}}}=0.\] (IV.7)
In the particular case when the distribution function is independent of \(\lambda\), \(b(\epsilon,\lambda)\to b(\epsilon)\), we can integrate over \(\lambda\) and the virial theorem becomes
\[\int_{0}^{z_{\rm vir}}dz\,z^{2}\int_{-w(z)}^{\epsilon_{c}}d\epsilon\,b(\epsilon)\sqrt{w(z)+\epsilon}\,(3w(z)+4\epsilon)=0.\] (IV.8)
Suppose now that an ansatz for the distribution function \(b(\epsilon,\lambda)\) depends on a free parameter. A simple example is \(\epsilon_{\rm max}\), defining a gap in the energy spectrum, \(b(\epsilon,\lambda)=0\) for \(\epsilon_{\rm max}<\epsilon<0\). Another possibility is that a background density, \(\rho_{0}\), of undetermined size is subtracted from the density. Then, using the procedure described in Sec. III, we find the numerical gravitational scaling function, \(w(z)\), as a function of \(\epsilon_{\rm max}\) or \(\rho_{0}\). Substituting into Eq. (IV.7) allows us to fix \(\epsilon_{\rm max}\) or, alternatively, \(\rho_{0}\), which we will demonstrate in the next section. If there is no choice of the free parameter that satisfies the condition given by Eq. (IV.7), then the distribution function \(b(\epsilon,\lambda)\) does not allow the system to reach dynamical equilibrium and is not physically acceptable.
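As a simple worked illustration of this procedure (our own, for the special case of a flat, \(\lambda\)-independent \(b(\epsilon)\) with cutoff \(\epsilon_{c}\)), the inner \(\epsilon\) integral of Eq. (IV.8) can be done in closed form:
\[\int_{-w}^{\epsilon_{c}}d\epsilon\,\sqrt{w+\epsilon}\,(3w+4\epsilon)=\frac{8}{5}(w+\epsilon_{c})^{5/2}-\frac{2}{3}\,w\,(w+\epsilon_{c})^{3/2}.\]
At \(\epsilon_{c}=0\) this equals \(\frac{14}{15}w^{5/2}>0\) for all \(z\), so a flat distribution without a gap can never satisfy the virial condition; the integrand becomes negative only where \(\epsilon_{c}<-\frac{7}{12}w(z)\), which shows explicitly how the free parameter \(\epsilon_{c}\) makes virialization possible.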
## V Examples for halos
To complete the calculation of the density (Eq. (III.7)) we have to know the energy distribution function \(b(\epsilon,\lambda)\). In this section we will explore a variety of physical choices for \(b(\epsilon,\lambda)\) to check whether they provide acceptable density distributions. In particular, we will pay attention to whether the obtained density distributions are in agreement with the following general features of observational data and simulations.
* It is generally accepted [16][13][18] that after its collapse, at least asymptotically in time, the halo is virialized, i.e. satisfies the virial condition \(2K+E_{g}=0\). In simulations the system is driven towards dynamical equilibrium. As we pointed out earlier, unlike in simulations, \(b(\epsilon,\lambda)\) must have a parameter which can be chosen such that the condition is satisfied.
* Another important feature is that galaxies and galaxy clusters come in a very wide range of sizes. As the concentration \(c\) controls the size of the halo, a viable model must admit solutions for a large range of \(c\). However, not all density distributions satisfy this criterion; an example is a model which we examined in a previous work [42].
* Popular models for galactic halos predict that the rotation curves are independent of the concentration. Though all observed rotation curves are close to being flat, significant variations have been observed [41]. This is especially important if one tries to describe a wide variety of halos, including galaxy clusters.
### Thermal equilibrium
Recently, we applied the WKB expansion method to investigate a system in thermal equilibrium [42]. We give a simplified account of those investigations below. Neglecting gravitational interactions, the system is treated as an ideal Bose-Einstein gas. The number of particles on energy level \(E_{n}\) is \(N_{n}\). Then the partition function for the system is
\[e^{-\beta\,g}=\prod_{n}\frac{1}{1-e^{-\beta(E_{n}-\mu)}}.\] (V.1)
The average value of \(N_{n}\) is
\[\langle N_{n}\rangle=\frac{1}{e^{\beta(E_{n}-\mu)}-1}\] (V.2)
The occupation number, \(\langle N_{n}\rangle\), is enormous in every state, \(n\). As a result, every exponent must satisfy the inequality \(1\gg\beta\left(E_{n}-\mu\right)>0\). Consequently, we can expand them, keeping terms to linear order, to arrive at the Rayleigh-Jeans limit of Bose-Einstein statistics.
\[\langle N_{n}\rangle=\frac{1}{\beta(E_{n}-\mu)}\] (V.3)
Using Eq. (I.13) and rescaling \(\beta\) and \(\mu\) as
\[\beta=\frac{2m_{a}R^{2}}{S}\tilde{\beta},\] (V.4) \[\mu=\frac{S}{2m_{a}R^{2}}\tilde{\mu}\]
we substitute into Eq. (III.8), to get
\[\rho(z)=\rho_{0}\left(\int_{-w(z)}^{0}\frac{\sqrt{w(z)+\epsilon}}{\epsilon- \tilde{\mu}}d\epsilon-\rho_{B}\right),\] (V.5)
where
\[\rho_{0}=\frac{S^{3/2}}{4\pi^{2}\tilde{\beta}}\]
and \(\rho_{0}\rho_{B}\) is a background density term. \(\rho_{B}\) is considered to be an adjustable parameter.
Now recall that the ground state corresponds to \(\epsilon=-1\). Then the physical range of the chemical potential is \(\tilde{\mu}\leq-1\). During the collapse of the halo the system cools and the chemical potential increases towards its critical value, \(\tilde{\mu}=-1\), where Bose-Einstein condensation starts. Integrating the density \(\rho(z)\) and setting \(\tilde{\mu}=-1\) we obtain the Poisson equation
\[w^{\prime\prime}(z)+\frac{2}{z}w^{\prime}(z)=-8\rho_{0}\pi\] (V.6) \[\left[\tan^{-1}\left(\sqrt{\frac{w(z)}{1-w(z)}}\right)\sqrt{1-w(z)}-\sqrt{w(z)}-\frac{\rho_{B}}{2}\right].\]
To facilitate the numerical solution of Eq. (V.6) we rescale the coordinate \(z\to x/\sqrt{8\rho_{0}\pi}\). Then, integrating the Poisson equation for a series of choices of the background density parameter \(\rho_{B}\), we find that at \(\rho_{B}=0.017\) the virial condition (Eq. (IV.8)) is satisfied. At that choice of \(\rho_{B}\), \(w(x_{\rm vir})=0\) at \(x_{\rm vir}\simeq 4.74\). We also calculate the concentration,
\[c=\left\langle\frac{1}{x}\right\rangle x_{\rm vir}\simeq 2.78\]
A similar value is obtained for \(c\) if we modify the distribution function to take into account gravitational interactions. That can be done by replacing energy level \(E_{n}\) by \(H_{n}\), the contribution of that energy level to the conserved Hamiltonian. Since observations imply \(c\gtrsim 10\), the final state of the collapse cannot be in dynamical equilibrium and thermal equilibrium at the same time.
### The King model
The King model [43] is based on the classical kinetic theory of collision-free self-gravitating particles, assuming a Maxwell velocity distribution cut off at the escape velocity. Then the equilibrium energy distribution takes the form
\[f_{\rm King}=\begin{cases}A\left(e^{-\beta\left(E-E_{c}\right)}-1\right)&\text{ if }E\leq E_{c}\\ 0&\text{otherwise.}\end{cases}\] (V.7)
In a recent paper [30], galactic halos were constructed using the King distribution [43] (and other classical distribution functions, like the fermionic King model and the Ossipov-Merritt model [44; 45]), combined with wave functions of excited states of Eq. (I.1) obtained by simulations. The halos were compared to profiles obtained by simulations of collapsing systems of ultralight bosons, with excellent agreement. Unlike in [30], we do not add a contribution for the condensate at the core of the density distribution, because our aim is to compare the densities predicted by the King model to phenomenological models, rather than to the results of simulations. However, our results are in excellent agreement with simulations outside of the \(z\lesssim 0.1\) region where the soliton core dominates the density distribution.
We will use our analytic WKB wave functions (Eq. (II.8)) combined with the King distribution function to construct halos, which reduces the number of parameters used in the construction. The fermionic King model, also used in [30], gives slightly better fits, as it has an extra adjustable parameter. As we will see below, using WKB wave functions, rather than simulations, has the advantage of scalability, which allows us to find new features when investigating larger systems.
Adapting Eq. (V.7) to WKB wave functions we write Eq. (III.8) as
\[\rho(z)=\frac{A\,S^{3/2}}{4\pi^{2}}\int_{-w(z)}^{\epsilon_{c}}d\epsilon\left(e^{-\beta\left(\epsilon-\epsilon_{c}\right)}-1\right)\sqrt{w(z)+\epsilon}.\] (V.8)
Scaling out coefficient \(A\,(S/\beta)^{3/2}/4\pi^{2}\) from the density and rescaling \(z\to x\), as explained in Sec. III, we integrate over \(\epsilon\) to get the Poisson equation
\[q^{\prime\prime}(x) +\frac{2}{x}q^{\prime}(x)\] (V.9) \[=-3\left[e^{q(x)}{\rm erf}\left(\sqrt{q(x)}\right)-6\sqrt{q(x)}- 4q(x)^{3/2}\right],\]
where
\[q(x)=\beta(w(z)+\epsilon_{c}),\] (V.10)
and where
\[x=z\frac{A^{1/2}S^{3/2}}{2\sqrt{6}\pi\beta^{1/4}}.\] (V.11)
The only adjustable parameter in the Poisson equation (Eq. (V.9)) is \(q(0)=\beta(1+\epsilon_{c})\), which we took to be a number of values. We found that the concentration, \(c=z_{\rm vir}=x_{\rm vir}\langle 1/x\rangle\), plotted in Fig. 1, is a rapidly rising function of \(q(0)\). The implication is that the King model can describe halos of markedly different sizes.
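A minimal numerical sketch of this scan is given below (our own illustration; it takes the right-hand side of Eq. (V.9) verbatim and assumes only the initial conditions \(q(0)\) and \(q^{\prime}(0)=0\)):

```python
# Integrate the King-model Poisson equation (V.9) outward from the center
# for one value of q(0); clipping q at zero mimics the vanishing density
# source outside the halo. A sketch, not the authors' production code.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import erf

def rhs(x, y):
    q, dq = y
    qp = max(q, 0.0)
    src = -3.0 * (np.exp(qp) * erf(np.sqrt(qp)) - 6.0 * np.sqrt(qp) - 4.0 * qp ** 1.5)
    return [dq, src - 2.0 * dq / x]

q0 = 8.0   # the initial condition used for Fig. 2; scan q0 to map out Fig. 1
sol = solve_ivp(rhs, (1e-6, 60.0), [q0, 0.0], dense_output=True,
                rtol=1e-9, atol=1e-11)
```

Repeating the integration over a range of \(q(0)\) and recording \(x_{\rm vir}\) and \(\langle 1/x\rangle\) yields the kind of \(c\) versus \(q(0)\) scan shown in Fig. 1.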
We found that in the range \(7\lesssim q(0)\lesssim 10\), the King model density distributions are quite close to NFW or Burkert distributions in the region where the bulk of the density is distributed, \(0.1\lesssim z\lesssim 10\). As an example, we plot the WKB density distribution, \(\rho(z)\), corresponding to the initial condition \(q(0)=8\) in Fig. 2. We switch to the physical, rescaled radial coordinate \(z\) and choose the central density \(\rho(0)=1\) in the plot. NFW and Burkert profiles, the core density and core radius parameters of which were fitted to the numerically calculated halo, are also plotted. We found fitted parameters \(R_{\rm NFW}=0.437R\) and \(R_{\rm Burkert}=0.269R\), where \(R\) is the harmonic radius of the WKB solution. Then, we were able to calculate the virial radii and concentrations of those profiles using Eqs. (I.9) and (I.10). We found \(c_{\rm NFW}=22.3\) and \(c_{\rm Burkert}=35.7\) and physical virial radii,
\[z_{\rm NFW} =c_{\rm NFW}R_{\rm NFW}/R=9.78,\] \[z_{\rm Burkert} =c_{\rm Burkert}R_{\rm Burkert}/R=9.74,\] (V.12)
which are quite close to each other. The virial radius of the WKB halo is \(z_{\rm vir}=20.1\), as it does not have a sharp cutoff. However, in the region \(10<z<20\), \(\rho(z)=\mathcal{O}\left(10^{-5}\right)\).
Next, we examine the predictions of the King model at a larger virial radius, in the range where \(c\) is a linearly rising function of \(q(0)\), as can be seen in Fig. 1. Very likely, that range corresponds to galaxies larger than the Milky Way,
or galaxy clusters, which are currently not accessible to simulations. At \(q(0)\gtrsim 12\) the density profiles, shown in Fig. 3, are very different from those at \(q(0)=8\). In fact, they are in excellent agreement with a slightly cored isothermal profile, with behavior \(\rho(z)\propto z^{-2}\). We plot \(\rho(z)\) at \(q(0)=25\) as a function of \(z\) in Fig. 3, along with a weakly cored pseudo-isothermal profile, an NFW profile, and a Burkert profile. All profiles are scaled to match at \(z=1\), which corresponds to the harmonic average radius of the system. Note that the bulk of the contributions to the total mass comes from the range \(0.01\lesssim z\lesssim 10\).
Clearly, the King model profile is in excellent agreement with the isothermal profile [46][47][48][49], while the NFW and Burkert profiles are not.
Figure 1: Virial radius as a function of \(q(0)\).
Figure 2: King model density distribution compared to NFW and Burkert profiles at \(q(0)=8\).
Figure 3: King model density distribution compared to isothermal, NFW and Burkert profiles at \(q(0)=25\).
Lin et al. [30] generated two halos, labeled A and B, for which they fit the energy cutoff parameter \(E_{c}\) and the scale parameter \(\beta_{\rm sim}\) defining the King model. We wish to compare those with the parameters we use to construct WKB halos. However, the corresponding values of \(\beta_{\rm WKB}\) and \(\epsilon_{c}\) cannot be directly compared with the simulations of [30], because we rescaled the energy \(E\to\epsilon\) and, consequently, our \(\beta\) value is also rescaled, though the rescaling leaves the dimensionless quantity \(E_{c}\beta_{\rm sim}=\epsilon_{c}\beta_{\rm WKB}\) unchanged. To check that equality, we need to use the relationship (Eq. (V.10)), which implies \(q(0)=\beta_{\rm WKB}(1+\epsilon_{c})\). Note now that the cutoff parameter \(\epsilon_{c}\) has not been fixed in the previous calculations. In other words, \(\epsilon_{c}\) is a free parameter.
Consider now that the halos generated in [30] are virialized, while there is no reason why ours would in general be. In fact, calculating Eq. (IV.8) as
\[2\,K+E \propto \int_{0}^{z_{\rm vir}}dz\,z^{2}\int_{-w(z)}^{\epsilon_{c}}d\epsilon \sqrt{w(z)+\epsilon}\] \[\times \left(e^{-\beta(\epsilon-\epsilon_{c})}-1\right)\left(\epsilon+ \frac{3}{4}w(z)\right)\]
we find that at every \(q(0)\) we have \(\langle 2\,K+E\rangle=a+b\,\beta\,\epsilon_{c}\), with values of \(a\) and \(b\) that are fixed at a given \(q(0)\) but vary with it. Then the vanishing of \(\langle 2\,K+E\rangle\) fixes the combination \(\beta\epsilon_{c}\) at every choice of \(q(0)\).
We plot the combination \(\epsilon_{c}\beta\) as a function of \(q(0)\), along with the values of \(E_{c}\beta_{\rm sim}\) of halos \(A\) and \(B\), \(-0.539\) and \(-0.51\), of [30], in Fig. 4. Those combinations are equal after rescaling \(E_{c}\) and \(\beta\). The deviation between the values of \(\beta\,\epsilon_{c}\) found in our WKB calculation and those found in simulations is only a few percent at the relevant values of \(q(0)\).
## VI Summary
We have investigated the possibility that galactic dark matter haloes could be made up of ultra-light bosons with wave functions that satisfy nonlinear Schrodinger-Poisson equations. We have found eigenstates and eigenvalues of the Schrodinger equation using the WKB approximation. The approximation method becomes more accurate as the galaxy's mass rises; however, it has been demonstrated that even for smaller galaxies the leading WKB approximation may yield accurate answers. The WKB approximation expresses the wave functions in terms of the gravitational potential. The energy levels were determined by the Bohr-Sommerfeld quantization condition, implied by the WKB method. To determine how we may better explain the data we have compared two ansatze for the level occupation number distributions, the Bose-Einstein distribution and the King model, at appropriate values of temperature and chemical potential. The modulus square of each wave function multiplied by the occupation number, summed over all the bound states, yielded the total density of particles. The mass density is therefore a function of the gravitational potential \(V_{g}\), since the Poisson equation connects \(V_{g}\) to the particle density. This procedure allowed us to obtain a differential equation for the gravitational potential. Solving this equation enabled us to obtain the mass distribution of the ULDM model of a galaxy. This technique can easily be scaled up to model the DM halos of galaxies or galaxy clusters, well beyond the mass range covered by simulations.
When we used the Bose Einstein density distribution in our computation of the density profile we observed that the concentration parameter, \(c\), of the resulting galaxy was smaller in magnitude than what is observed for most galaxies.
Figure 4: Plot of King model values of \(\beta\,\epsilon_{c}\) along with those of halos A and B obtained in simulations of [30].
When the King model particle distribution was utilized, the concentration parameters, on the other hand, were within the permissible range for Milky Way-like galaxies.
In future work we plan to study the dynamical collapse of boson clouds using a variational technique and to study the time scales for collapse and decay in more detail. We will also investigate how the inclusion of self-interactions affects the dynamics of the halo.
###### Acknowledgements.
The authors are indebted to Joshua Eby for fruitful discussions. L.S. also thanks the Department of Physics at the University of Cincinnati for financial support in the form of the Violet M. Diller Fellowship. During part of this research L.S. was supported by the U.S. Department of Energy (DOE), Office of Science, Office of Workforce Development for Teachers and Scientists, Office of Science Graduate Student Research (SCGSR) program. The SCGSR program is administered by the Oak Ridge Institute for Science and Education (ORISE) for the DOE. ORISE is managed by ORAU under contract number DE-SC0014664. All opinions expressed in this paper are the authors' and do not necessarily reflect the policies and views of DOE, ORAU, or ORISE. Research of L.C.R.W. is partially supported by the U.S. Department of Energy grant DE-SC1019775.
|
2301.12096 | QD3SET-1: A Database with Quantum Dissipative Dynamics Data Sets | Simulations of the dynamics of dissipative quantum systems utilize many
methods such as physics-based quantum, semiclassical, and quantum-classical as
well as machine learning-based approximations, development and testing of which
requires diverse data sets. Here we present a new database QD3SET-1 containing
eight data sets of quantum dynamical data for two systems of broad interest,
spin-boson (SB) model and the Fenna--Matthews--Olson (FMO) complex, generated
with two different methods solving the dynamics, approximate local thermalizing
Lindblad master equation (LTLME) and highly accurate hierarchy equations of
motion (HEOM). One data set was generated with the SB model which is a
two-level quantum system coupled to a harmonic environment using HEOM for 1,000
model parameters. Seven data sets were collected for the FMO complex of
different sizes (7- and 8-site monomer and 24-site trimer with LTLME and 8-site
monomer with HEOM) for 500--879 model parameters. Our QD3SET-1 database
contains both population and coherence dynamics data and part of it has been
already used for machine learning-based quantum dynamics studies. | Arif Ullah, Luis E. Herrera Rodriguez, Pavlo O. Dral, Alexei A. Kananenka | 2023-01-28T05:39:31Z | http://arxiv.org/abs/2301.12096v1 | # QD3SET-1: A Database with Quantum Dissipative Dynamics Data Sets
###### Abstract
Simulations of the dynamics of dissipative quantum systems utilize many methods such as physics-based quantum, semiclassical, and quantum-classical as well as machine learning-based approximations, development and testing of which requires diverse data sets. Here we present a new database QD3SET-1 containing eight data sets of quantum dynamical data for two systems of broad interest, spin-boson (SB) model and the Fenna-Matthews-Olson (FMO) complex, generated with two different methods solving the dynamics, approximate local thermalizing Lindblad master equation (LTLME) and highly accurate hierarchy equations of motion (HEOM). One data set was generated with the SB model which is a two-level quantum system coupled to a harmonic environment using HEOM for 1,000 model parameters. Seven data sets were collected for the FMO complex of different sizes (7- and 8-site monomer and 24-site trimer with LTLME and 8-site monomer with HEOM) for 500-879 model parameters. Our QD3SET-1 database contains both population and coherence dynamics data and part of it has been already used for machine learning-based quantum dynamics studies.
## Background & Summary
The simulation of the inherently quantum-mechanical dynamics underlying charge, energy, and coherence transfer in the condensed-phase is one of the most difficult challenges for computational physics and chemistry. The exponential scaling of the computational cost with system size makes the quantum-mechanically exact simulations of such processes in complex systems infeasible. With the exception of a few model Hamiltonians whose form makes the numerically exact quantum-dynamics simulations possible, any simulation of general condensed-phase systems must rely on approximations [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28]. Data-driven machine learning (ML) methods for quantum dynamics emerged as attractive alternative to the physics-based approximations due to their low computational cost and high accuracy [29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46]. Development and testing of new simulation methodologies, both physics- and ML-based, would be greatly facilitated if high-quality reference quantum-dynamics data for a diverse set of quantum systems of interest were available.
Here we present a QD3SET-1 database, a collection of eight data sets of time-evolved population dynamics of the two systems: spin-boson (SB) model and the Fenna-Matthews-Olson (FMO) light-harvesting complex. The data sets are summarized in Table 1. The SB model describes a (truncated or intrinsic) two-level quantum system linearly coupled to a harmonic bath [47, 48]. The physics of both the ground state and the dynamics of the SB model is very rich. It has been a continuous object of study during the past decades. SB has become a paradigmatic model system in the development of approximate quantum-dynamics methods and, nowadays, it is becoming a popular choice for the development of ML models [30, 33, 40].
The FMO system has become one of the most extensively studied natural light-harvesting complexes [49, 50, 51, 52, 53, 54, 55]. Under physiological conditions, the FMO complex forms a homotrimer consisting of eight bacteriochlorophyll-a (BChla) molecules per monomer. The biological function of the FMO trimer is to transfer excitation energy from the chlorosome to the reaction center (RC) [56]. An interest in this light-harvesting system sparked when two-dimensional electronic spectroscopy experiments detected the presence of quantum coherence effects in the FMO complex [57, 58, 59]. These observations triggered intense debates about the role this coherence might play in the highly efficient excitation energy transfer (EET).
Early studies of the FMO complex considered only seven-site FMO models comprising of BChla 1-7. Until BChla 8 was discovered, BChla 1 and 6 were both assumed to be possible locations for accepting the excitation from the chlorosome because they are believed to be the nearest pigments to the antenna which captures the sunlight [60, 61, 62]. From there, the energy is subsequently funneled through two nearly independent routes: from site 1 to 2 (pathway 1) or from site 6 to sites 7, 5, and 4 (pathway 2). The terminal point of either route is site 3, where the exciton is then transferred to the RC [63].
Ever since the discovery of the eighth BChla, the role of this pigment in the EET has been extensively investigated [63, 64, 65, 66, 67, 68, 69, 70].
In particular, it was shown that while the population dynamics of the eight-site FMO model is markedly different from that of a seven-site configuration, the EET efficiencies in both models were predicted to remain comparable and very high [63]. BChla 8 has also been suggested as a possible recipient of the initial excitation.
The dynamics of the FMO model has been a subject of numerous computational studies primarily focusing on understanding the role of the protein environment on the efficiency of EET (see e.g., Refs. [71, 72, 73, 74]). Numerical simulations typically employ one of the several parameterized or fitted into the experimental data FMO model Hamiltonians that differ in the BChla excitation energies and the couplings between different BChla sites [49, 56, 75, 76, 77, 78, 79]. Simulations of the full FMO trimer containing 24 BChl have also been performed [80, 81].
The QD3SET-1 database reported in this Data Descriptor contains seven data sets of time-evolved population dynamics of FMO models with different system Hamiltonians and initial excitations, for several hundred sets of bath and system-bath parameters. The hierarchy of equations of motion (HEOM) approach [5, 7] was used to simulate the population dynamics of the SB model and, in one of the seven FMO data sets, of the FMO model. HEOM is a numerically exact method that can describe the dynamics of a system with a non-perturbative and non-Markovian system-bath interaction. The high computational cost of HEOM, however, limits the number of FMO simulations that can be performed with this method. To generate the other six FMO data sets, an approximate method, the local thermalizing Lindblad master equation (LTLME) [82, 83], was used.
Some of our data were already used in previous studies developing and benchmarking ML models for quantum-dynamics simulations [30, 31, 32, 33]. Here we regenerate one of the data sets to augment it with more data and provide many new data sets generated from scratch (Table 1). To facilitate their use, we organize the data sets in a coherently formatted database and provide metadata and extraction scripts. We expect the database that accompanies this Data Descriptor to serve as a valuable resource in the development of new quantum-dynamics methods.
## Methods
### SB data set
This data set is re-generated with the same settings and the same parameters as in our previous SB data set [33] in order to include all the elements (populations and coherences) of the system's reduced density matrix (RDM). The populations and population differences were published and used before [30, 33]. Below we provide a brief summary for a self-contained presentation of the data set.
#### Spin-boson model
The spin-boson model comprises a two-level quantum subsystem (TLS) coupled to a bath of harmonic oscillators. The Hamiltonian has the following standard system-bath form: \(\hat{H}=\hat{H}_{s}+\hat{H}_{b}+\hat{H}_{sb}\). The Hamiltonian of the TLS in the local (or site) basis \(\{\ket{+},\ket{-}\}\) is given by (\(\hbar\)=1)
\[\hat{H}_{s}=\epsilon\left(\ket{+}\bra{+}-\ket{-}\bra{-}\right)+\Delta\left( \ket{+}\bra{-}+\ket{-}\bra{+}\right), \tag{1}\]
where \(\epsilon\) is the so-called energetic bias and \(\Delta\) is the tunneling splitting. The harmonic bath is an ensemble of independent harmonic oscillators
\[\hat{H}_{b}=\sum_{j=1}^{N_{b}}\left(\frac{\hat{p}_{j}^{2}}{2m_{j}}+\frac{1}{2} m_{j}\omega_{j}^{2}\hat{x}_{j}^{2}\right), \tag{2}\]
where \(\{\hat{x}_{j}\}\) and \(\{\hat{p}_{j}\}\) are the coordinates and momenta, respectively, of \(N_{b}\) independent harmonic bath modes with masses \(\{m_{j}\}\) and frequencies \(\{\omega_{j}\}\). The TLS and bath are coupled through the additional term
\[\hat{H}_{sb}=-\sum_{j=1}^{N_{b}}c_{j}\hat{x}_{j}\left(\ket{+}\bra{+}-\ket{-} \bra{-}\right), \tag{3}\]
where \(\{c_{j}\}\) are the coupling coefficients.
The effects of the bath on the dynamics of TLS are collectively determined by the spectral density function [84]
\[J(\omega)=\frac{\pi}{2}\sum_{j=1}^{N_{b}}\frac{c_{j}^{2}}{m_{j}\omega_{j}} \delta(\omega-\omega_{j}). \tag{4}\]
In this work we choose to employ the Debye form of the spectral density (Ohmic spectral density with the Drude-Lorentz cut-off) [85]
\[J(\omega)=2\lambda\frac{\omega\gamma}{\omega^{2}+\gamma^{2}}, \tag{5}\]
where \(\lambda\) is the bath reorganization energy, which controls the strength of system-bath coupling, and the cutoff frequency \(\gamma=1/\tau_{c}\) (\(\tau_{c}\) is the bath relaxation time).
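As a quick consistency check (our own, using the standard relation between the spectral density and the reorganization energy), Eq. (5) indeed returns \(\lambda\) as the reorganization energy:
\[\frac{1}{\pi}\int_{0}^{\infty}\frac{J(\omega)}{\omega}\,d\omega=\frac{2\lambda\gamma}{\pi}\int_{0}^{\infty}\frac{d\omega}{\omega^{2}+\gamma^{2}}=\frac{2\lambda\gamma}{\pi}\cdot\frac{\pi}{2\gamma}=\lambda.\]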
All dynamical properties of the TLS can be obtained from the RDM
\[\tilde{\rho}_{\alpha\beta}(t)=\mathrm{Tr}_{b}\langle\alpha|e^{-i\hat{H}t/\hbar} \hat{\rho}(0)e^{i\hat{H}t/\hbar}|\beta\rangle, \tag{6}\]
where \(\alpha,\beta\in\{|+\rangle,|-\rangle\}\), \(\hat{\rho}\) is the total density operator, and the trace is taken over bath degrees of freedom. For example, the commonly used in benchmark studies population difference is obtained from the RDM as follows: \(p_{+}(t)-p_{-}(t)=\tilde{\rho}_{++}(t)-\tilde{\rho}_{--}(t)\).
The initial state of the total system is assumed to be a product state of the system and bath in the following form
\[\hat{\rho}(0)=\hat{\rho}_{\mathrm{s}}(0)\hat{\rho}_{\mathrm{b}}(0). \tag{7}\]
In Eq. (7) the bath density operator is an equilibrium canonical density operator \(\hat{\rho}_{b}(0)=e^{-\beta\hat{H}_{\mathrm{b}}}/\mathrm{Tr}_{\mathrm{b}}\left[e^{-\beta\hat{H}_{\mathrm{b}}}\right]\), where \(\beta=(k_{\mathrm{B}}T)^{-1}\) is the inverse temperature and \(k_{\mathrm{B}}\) is the Boltzmann constant. The initial density operator of the system is chosen to be \(\hat{\rho}_{\mathrm{s}}(0)=|+\rangle\langle+|\). These conditions correspond to instantaneous photoexcitation of the subsystem.
### Data generation with spin-boson model and the hierarchy equations of motion approach
The data set for the spin-boson model was generated as described previously [33]. We also summarize it below. The following system and bath parameters were chosen: \(\tilde{\epsilon}=\epsilon/\Delta=\{0,1\}\), \(\tilde{\lambda}=\lambda/\Delta=\{0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0\}\), \(\tilde{\gamma}=\gamma/\Delta=\{1,2,3,4,5,6,7,8,9,10\}\), and \(\tilde{\beta}=\beta\Delta=\{0.1,0.25,0.5,0.75,1\}\), where the tunneling matrix element \(\Delta\) is set as the energy unit. For all combinations of these parameters the system's RDM was propagated using the HEOM approach implemented in the QuTiP software package [86]. The total propagation time was \(t_{\mathrm{max}}\Delta=20\) and the HEOM integration time step was set to \(dt\Delta=0.05\). In total, 1,000 HEOM calculations, 500 for the symmetric (\(\epsilon/\Delta=0\)) and 500 for the asymmetric (\(\epsilon/\Delta=1\)) spin-boson Hamiltonian, were performed. The data set contains a set of RDMs from \(t\Delta=0\) to \(t_{\mathrm{max}}\Delta=20\), saved every \(dt\Delta=0.05\), for every combination of the parameters \((\tilde{\epsilon},\tilde{\lambda},\tilde{\gamma},\tilde{\beta})\) described above.
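For readers who want to reproduce a single trajectory, the sketch below recomputes one grid point with QuTiP's Drude-Lorentz HEOM solver (a hedged illustration: it relies on the QuTiP 4.x HSolverDL class, the truncation settings are taken from the Technical Validation section, and whether N_exp counts the Drude term in addition to the \(K\) Matsubara terms is our assumption):

```python
# One HEOM trajectory of the asymmetric spin-boson model in units of Delta.
import numpy as np
from qutip import sigmax, sigmaz, basis
from qutip.nonmarkov.heom import HSolverDL

eps, Delta = 1.0, 1.0                      # energetic bias and tunneling splitting
lam, gamma, beta = 0.5, 5.0, 1.0           # one (lambda~, gamma~, beta~) grid point

H_sys = eps * sigmaz() + Delta * sigmax()  # Eq. (1)
solver = HSolverDL(H_sys, sigmaz(), lam, 1.0 / beta,
                   N_cut=30, N_exp=6, cut_freq=gamma)  # L = 30, K = 5 for beta = 1
rho0 = basis(2, 0) * basis(2, 0).dag()     # |+><+|, Eq. (7)
tlist = np.arange(0.0, 20.0 + 1e-9, 0.05)  # t*Delta from 0 to 20, step 0.05
result = solver.run(rho0, tlist)
pop_diff = [np.real((s * sigmaz()).tr()) for s in result.states]  # p_+ - p_-
```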
### Fenna-Matthews-Olson complex data sets
In this section we first describe the general theory behind the FMO model Hamiltonian and later for each data set we provide specific technical details. See also Table 1 for an overview of each data set.
### FMO model Hamiltonian
The FMO complex in this work is described by the system-bath Hamiltonian with a renormalization term, \(\hat{H}=\hat{H}_{s}+\hat{H}_{b}+\hat{H}_{sb}+\hat{H}_{ren}\). The electronic system is described by the Frenkel exciton Hamiltonian
\[\hat{H}_{s}=\sum_{n=1}^{N_{c}}E_{n}|n\rangle\langle n|+\sum_{n,m=1,n\neq m}^{N _{c}}V_{nm}|n\rangle\langle m|, \tag{8}\]
where \(|n\rangle\) denotes that only the \(n\)th site is in its electronically excited state and all other sites are in their electronic ground states, \(E_{n}\) are the transition energies, and \(V_{nm}\) is the Coulomb coupling between the \(n\)th and \(m\)th sites. The couplings are assumed to be constant (the Condon approximation). Note that the overall electronic ground state of the pigment protein complex \(|0\rangle\) is assumed to be only radiatively coupled to the single-excitation manifold and as such it is not included in the dynamics calculations. In analogy with the SB model, the bath is modeled by a set of independent harmonic oscillators. The thermal bath is coupled to the subsystem's states \(|n\rangle\) through the system-bath interaction term
\[\hat{H}_{sb}=\sum_{n=1}^{N_{c}}\sum_{j=1}^{N_{b}}c_{nj}\hat{x}_{j}|n\rangle \langle n|, \tag{9}\]
where each subsystem's state is independently coupled to its own harmonic environment and \(c_{nj}\) are the pigment-phonon coupling constants of environmental phonons local to the \(n\)th BChla.
The FMO model Hamiltonian contains a reorganization term which counters the shift in the minimum energy positions of harmonic oscillators introduced by the system-bath coupling. In the case that each state \(|n\rangle\) is independently coupled to the environment the renormalization term takes the following form
\[\hat{H}_{ren}=\sum_{n=1}^{N_{c}}\lambda_{n}|n\rangle\langle n|, \tag{10}\]
where \(\lambda_{n}=\sum_{j}c_{nj}^{2}/(2m_{j}\omega_{j}^{2})\) is the bath reorganization energy. The bath spectral density associated with each electronic state is assumed to be given by the Lorentz-Drude spectral density (Eq. 5).
Analogously to the SB data set the initial state of the total system is assumed to be a product state of the system and bath. The initial electronic density operator given by \(\hat{\rho}_{s}(0)\) was varied as described below. The bath density operator is taken to be the equilibrium canonical density operator.
### FMO-Ia, FMO-Ib, and FMO-II data sets: 7-site FMO models with the local thermalizing Lindblad master equation approach
We generated data sets for two 7-site system (\(N_{c}=7\)) Hamiltonians. The FMO-I data sets were generated for the system Hamiltonian parameterized by Adolphs and Renger [49] and given by (in cm\({}^{-1}\))
\[H_{s}=\begin{pmatrix}200&-87.7&5.5&-5.9&6.7&-13.7&-9.9\\ -87.7&320&30.8&8.2&0.7&11.8&4.3\\ 5.5&30.8&0&-53.5&-2.2&-9.6&6.0\\ -5.9&8.2&-53.5&110&-70.7&-17.0&-63.6\\ 6.7&0.7&-2.2&-70.7&270&81.1&-1.3\\ -13.7&11.8&-9.6&-17.0&81.1&420&39.7\\ -9.9&4.3&6.0&-63.3&-1.3&39.7&230\end{pmatrix}, \tag{11}\]
The FMO-Ia data set comes directly from our previous studies [31, 32], and the FMO-Ib data set was generated here for a broader parameter space, as described below.
FMO-II data set was generated for the Hamiltonian parameterized by Cho _et al._[76] which takes the following form (in cm\({}^{-1}\))
\[H_{s}=\begin{pmatrix}280&-106&8&-5&6&-8&-4\\ -106&420&28&6&2&13&1\\ 8&28&0&-62&-1&-9&17\\ -5&6&-62&175&-70&-19&-57\\ 6&2&-1&-70&320&40&-2\\ -8&13&-9&-19&40&360&32\\ -4&1&17&-57&-2&32&260\end{pmatrix}. \tag{12}\]
The diagonal offset of 12210 cm\({}^{-1}\) is added to both Hamiltonians. Each site is coupled to its own bath characterized by the Drude-Lorentz spectral density, Eq. 5, but the bath of each site is described by the same spectral density.
For the FMO-Ia data set, the following spectral density parameters and temperatures were employed: \(\lambda\) = {10, 40, 70,..., 310} cm\({}^{-1}\), \(\gamma\) = {25, 50, 75,..., 300} fs rad\({}^{-1}\), and T = {30, 50, 70,..., 310} K. For the FMO-Ib and FMO-II data sets, the spectral density parameters and temperatures were: \(\lambda\) = {10, 40, 70,..., 520} cm\({}^{-1}\), \(\gamma\) = {25, 50, 75,..., 500} cm\({}^{-1}\), and T = {30, 50, 70,..., 510} K.
For the FMO-Ia, FMO-Ib, and FMO-II data sets, farthest-point sampling [87] was employed to select the most distant points in the Euclidean space [32] of parameters, which typically covers the relevant space more efficiently than random sampling [87]. We chose the top 500 (most distant) combinations of (\(\lambda\), \(\gamma\), \(T\)) based on farthest-point sampling. For each selected set of parameters the system RDM was calculated using the local thermalizing Lindblad master equation (LTLME) approach [82, 83] implemented in the quantum_HEOM package [83, 88]. Two subsets of the data set were generated, one for the initial electronic density operator \(\hat{\rho}_{s}(0)=|1\rangle\langle 1|\), corresponding to the initial excitation of site 1, and the other for the initial density operator \(\hat{\rho}_{s}(0)=|6\rangle\langle 6|\), which corresponds to the initial excitation of site 6. In each case, 500 RDM trajectories were generated. The data set contains both diagonal (populations) and off-diagonal (coherences) elements of the RDM on a time grid from 0 to 1 ns (in the case of FMO-Ia) and 0 to 50 ps (in the case of FMO-Ib and FMO-II) with a 5 fs time step.
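A minimal sketch of the farthest-point selection over the \((\lambda,\gamma,T)\) grid is given below (our own illustration using the FMO-Ib/FMO-II ranges; the seed point and the per-axis rescaling are our assumptions, since they are not specified above):

```python
# Greedy farthest-point sampling: repeatedly pick the grid point with the
# largest distance to the already-selected set.
import numpy as np
from itertools import product

lam = np.arange(10, 521, 30)    # cm^-1
gam = np.arange(25, 501, 25)    # cm^-1
temp = np.arange(30, 511, 20)   # K
grid = np.array(list(product(lam, gam, temp)), dtype=float)
X = (grid - grid.min(0)) / (grid.max(0) - grid.min(0))  # assumed rescaling

selected = [0]                  # assumed seed: first grid point
d = np.linalg.norm(X - X[0], axis=1)
while len(selected) < 500:
    idx = int(np.argmax(d))     # farthest point from the current selection
    selected.append(idx)
    d = np.minimum(d, np.linalg.norm(X - X[idx], axis=1))
params = grid[selected]         # the 500 most distant (lambda, gamma, T) sets
```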
### FMO-III and FMO-IV data sets: 8-site FMO models with the local thermalizing Lindblad master equation approach
Using the same LTLME-based approach, we generated data sets for two different Hamiltonians of the 8-site FMO model. The first Hamiltonian (FMO-III data set) was parameterized by Jia _et al._[89] The electronic system Hamiltonian is given by (in cm\({}^{-1}\))
\[H_{s}=\begin{pmatrix}218&-91.0&4.1&-6.3&6.3&-8.8&-7.8&32.4\\ -91.0&81&28.7&8.2&1.0&8.8&3.4&6.3\\ 4.1&28.7&0&-46.6&-4.4&-9.3&1.3&1.3\\ -6.3&8.2&-46.6&105&-73.9&-17.7&-59.1&-1.9\\ 6.3&1.0&-4.4&-73.9&105&76.0&-3.1&4.2\\ -8.8&8.8&-9.3&-17.7&76.0&186&25.9&-11.6\\ -7.8&3.4&1.3&-59.1&-3.1&25.9&169&-11.9\\ 32.4&6.3&1.3&-1.9&4.2&-11.6&-11.9&154\end{pmatrix}, \tag{13}\]
with the diagonal offset of 11332 cm\({}^{-1}\).
The FMO-IV data set was generated for the Hamiltonian parameterized by Busch _et al._[64] (site energies) and Olbrich _et al._[67] (excitonic couplings) and takes the following form (in cm\({}^{-1}\))
\[H_{s}=\begin{pmatrix}310&-80.3&3.5&-4.0&4.5&-10.2&-4.9&21.0\\ -80.3&230&23.5&6.7&0.5&7.5&1.5&3.3\\ 3.5&23.5&0&-49.8&-1.5&-6.5&1.2&0.7\\ -4.0&6.7&-49.8&180&63.4&-13.3&-42.2&-1.2\\ 4.5&0.5&-1.5&63.4&450&55.8&4.7&2.8\\ -10.2&7.5&-6.5&-13.3&55.8&320&33.0&-7.3\\ -4.9&1.5&1.2&-42.2&4.7&33.0&270&-8.7\\ 21.0&3.3&0.7&-1.2&2.8&-7.3&-8.7&505\end{pmatrix}, \tag{14}\]
with the diagonal offset of 12195 cm\({}^{-1}\).
The same set of spectral density parameters and temperatures that was used in the generation of the FMO-Ib and FMO-II data sets was used here. The LTLME method was used to propagate the system's RDM from 0 to 50 ps with a 5 fs time step, and three initial states of the electronic system were considered: sites 1, 6, and 8. The data set contains both diagonal (populations) and off-diagonal (coherences) elements of the RDM. The calculations were performed with the quantum_HEOM package [88] with some local modifications to make it compatible with Hamiltonians of larger dimension. We will refer to this as the modified-quantum_HEOM implementation.
### FMO-V data set: FMO trimer with local thermalizing Lindblad master equation approach
We also generated a data set for the FMO trimer. The overall excitonic Hamiltonian of all three subunits is given by
\[H_{s}=\begin{pmatrix}H_{A}&H_{B}&H_{B}^{T}\\ H_{B}^{T}&H_{A}&H_{B}\\ H_{B}&H_{B}^{T}&H_{A}\end{pmatrix} \tag{15}\]
where \(H_{A}\) is the subunit Hamiltonian, for which we used the same Hamiltonian as in the FMO-IV data set (Eq. 14), while \(H_{B}\) is the inter-subunit Hamiltonian, which is taken from the work of Olbrich _et al._[67] and is given by (in cm\({}^{-1}\))
\[H_{B}=\begin{pmatrix}1.0&0.3&-0.6&0.7&2.3&1.5&0.9&0.1\\ 1.5&-0.4&-2.5&-1.5&7.4&5.2&1.5&0.7\\ 1.4&0.1&-2.7&5.7&4.6&2.3&4.0&0.8\\ 0.3&0.5&0.7&1.9&-0.6&-0.4&1.9&-0.8\\ 0.7&0.9&1.1&-0.1&1.8&0.1&-0.7&1.3\\ 0.1&0.7&0.8&1.4&-1.4&-1.5&1.6&-1.0\\ 0.3&0.2&-0.7&4.8&-1.6&0.1&5.7&-2.3\\ 0.1&0.6&1.5&-1.1&4.0&-3.1&-5.2&3.6\end{pmatrix}. \tag{16}\]
We propagate the dynamics with LTLME from 0 to 50 ps with a 5 fs time step for the same parameters as were adopted in the calculations for the FMO-Ib--FMO-IV data sets. The calculations were performed with the modified-quantum_HEOM implementation for initial excitations of sites 1, 6, and 8.
### FMO-VI data set: 8-site FMO model with the hierarchy of equations of motion approach
The LTLME approach provides only an approximate description of the quantum dynamics of the FMO complex. Therefore, the FMO-I--FMO-V data sets are useful merely for developing machine learning models for quantum dynamics studies. For example, they can be used to train a neural network model which can then be further improved on more accurate but smaller data sets (e.g., via transfer learning). However, LTLME dynamics cannot be used to benchmark other quantum dynamics methods. In the latter case, high-quality reference data are needed.
To generate a data set with accurate FMO dynamics we performed HEOM calculations for the 8-site FMO model with the Hamiltonian given by Eq. 14. HEOM calculations were performed using the parallel hierarchy integrator (PHI) code [90]. The initial data set was chosen on the basis of farthest-point sampling, similar to how it was done for the FMO-Ib--FMO-V data sets, with the only difference being that, instead of the 500 most distant sets of parameters chosen in the preparation of the FMO-Ib--FMO-V data sets, the 1100 most distant sets of parameters were used to prepare the initial FMO-VI data set. For certain parameters, the RAM requirements exceeded the RAM of the computing nodes available to us (1 TB). Therefore, such parameter sets were excluded from the data set. The excluded parameters correspond to low temperatures, high reorganization energies, and low cut-off frequencies. Such strong non-Markovian regimes pose significant challenges in computational studies of open quantum systems. Approximately 20% of the initial data set was removed because of prohibitive memory requirements. We note that even though graphics processing unit (GPU) implementations of HEOM (e.g., Ref. [91]) are much faster than their CPU-based counterparts, they are still limited by the small amount of memory in presently available GPUs.
For the remaining 80% of the data set HEOM calculations were performed for 2.0 ps. To speed up the calculations, an adaptive Runge-Kutta-Fehlberg 4/5 [92] (RKF45) integration method was used as implemented in the PHI code. Using adaptive integration reduces both the total computation time and the memory requirements but can lead to artifacts if the accuracy threshold is set too large [90]. In this work the PHI default accuracy threshold of \(1\cdot 10^{-6}\) was used. The initial integration time step was set to 0.1 fs. In RKF45 the integration time step is varied and, therefore, the output comprises time-evolved RDMs on an unevenly spaced time grid. To obtain the RDMs on an evenly spaced time grid of 0.1 fs, cubic-spline interpolation was used. The interpolation errors were examined in a few cases where fixed 0.1 fs time-step integration was feasible. The errors in the populations were found to be less than \(10^{-5}\), which is much smaller than the convergence thresholds discussed below in Technical Validation. The final FMO-VI data set contains 879 entries, each comprising all the populations and coherences of the RDM from 0 to 2 ps with a time step of 0.1 fs.
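A sketch of the regridding step is shown below (our own; the file names are hypothetical placeholders, and real and imaginary parts of the RDM elements are splined separately):

```python
# Resample RKF45 output from an uneven time grid onto a uniform 0.1 fs grid
# with cubic splines; the array layout (n_times, n_sites, n_sites) is assumed.
import numpy as np
from scipy.interpolate import CubicSpline

t_rkf = np.load("t_uneven.npy")      # hypothetical: uneven RKF45 time points (fs)
rho_rkf = np.load("rho_uneven.npy")  # hypothetical: complex RDMs at those times
t_uni = np.arange(0.0, 2000.0 + 0.1, 0.1)   # 0 to 2 ps, 0.1 fs step
re = CubicSpline(t_rkf, rho_rkf.real, axis=0)(t_uni)
im = CubicSpline(t_rkf, rho_rkf.imag, axis=0)(t_uni)
rho_uni = re + 1j * im
```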
## Data Records
All data sets can be accessed at [https://figshare.com/s/ed24594205ab87404238](https://figshare.com/s/ed24594205ab87404238). The data sets are stored in the standard NumPy [93] binary file format (.npy). The following format of file names was adopted in the SB data set: 2_epsilon-X_lambda-Y_gamma-Z_beta-XX.npy, where X denotes the value of the energetic bias (\(\tilde{\epsilon}\)), Y is the reorganization energy \(\tilde{\lambda}\), Z is the cut-off frequency \(\tilde{\gamma}\), and XX is the inverse temperature \(\tilde{\beta}\). The following format of file names was adopted in all FMO data sets: X_initial-Y_gamma-Z_lambda-XX_temp-YY.npy, where X denotes the number of sites in the FMO model, Y is the initial state, Z is the value of the bath frequency, XX is the value of the reorganization energy, and YY is the temperature.
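As a hedged usage sketch (the file name below is a hypothetical example, and the assumption that the stored array is a time-ordered stack of RDMs is ours; the extraction scripts distributed with the database are authoritative):

```python
# Load one FMO trajectory saved under the naming convention above and
# recover the site populations from the diagonal of each RDM.
import numpy as np

rho_t = np.load("8_initial-1_gamma-100_lambda-40_temp-90.npy")  # hypothetical file
rho_t = np.asarray(rho_t)                       # assumed shape: (n_times, 8, 8)
populations = np.real(np.diagonal(rho_t, axis1=-2, axis2=-1))
```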
## Technical Validation
Central to the HEOM approach is the assumption that the bath correlation function \(C_{a}(t)\) for site \(a\) can be represented by an infinite sum of exponentially decaying terms, \(C_{a}(t)=\sum_{k=0}^{\infty}c_{ak}\exp(-\nu_{ak}t)\), where \(\nu_{ak}=2\pi k/\beta\hbar\) are Matsubara frequencies. Further, each exponential term leads to a set of auxiliary density matrices which take into account the non-Markovian evolution of the system's RDM under the influence of the bath. In practice, the summation must be truncated at a finite level \(K\), which is called the Matsubara cut-off, and the set of auxiliary density matrices needs to be truncated at a finite number \(M\). In the truncated set, the auxiliary matrices are indexed by \(\mathbf{n}=(n_{10},\ldots,n_{1K},\ldots,n_{M0},\ldots,n_{MK})\). The hierarchy truncation level is given by \(L=\sum_{a=1}^{M}\sum_{k=0}^{K}n_{ak}\), where \(n_{ak}\) is the index of an auxiliary density matrix. The computational cost of the HEOM method rises steeply with the hierarchy level \(L\)[90].
The hierarchy truncation level \(L\) depends on how non-Markovian the system is. Although there is some guidance on how to choose the Matsubara cut-off and the hierarchy truncation level based on the bath and spectral density parameters [5, 50], in practice the values of \(M\) and \(K\) have to be chosen by requiring the convergence of the RDM to an acceptable accuracy level. In this work HEOM calculations for the SB model were performed by setting \(L=30\) for all temperatures. The Matsubara cut-off was chosen depending on the temperature as follows: for \(\beta=0.1\), \(K=2\); for \(\beta=0.25\), \(K=3\); for \(\beta=0.5\), \(K=3\); for \(\beta=0.75\), \(K=4\); and for \(\beta=1.0\), \(K=5\). These values are chosen sufficiently high to ensure the convergence of the populations with respect to \(K\) and \(L\). Choosing high truncation levels in the HEOM calculations of a TLS does not present a problem given the presently available computational resources.
A similar approach of taking excessively large values of \(K\) and \(L\) is, however, infeasible in the FMO calculations because the computational cost of HEOM grows steeply with the size of the quantum system. Therefore, the following approach was adopted for the HEOM calculations of the 8-site FMO model (FMO-VI data set). Starting from \(K=0\) and \(L=1\), \(K\) was increased until the maximum difference in the populations between calculations with \(K\) and \(K+1\) falls below a threshold \(\Delta\), i.e.,
\[\delta=\max_{\begin{subarray}{c}n=1,\ldots,N_{c}\\ t=0,\ldots,t_{\rm max}\end{subarray}}\left|\rho_{n,n}^{K,L}(t)-\rho_{n,n}^{K+1,L}(t)\right|<\Delta. \tag{17}\]
When Eq. 17 is satisfied for a given \(\Delta\), the convergence with respect to the Matsubara cut-off is deemed to have been achieved. Then, for a fixed \(K\), a series of calculations were performed with increasing values of \(L\) until the maximum difference in the populations between two consecutive calculations becomes less than the same threshold value \(\Delta\). When this condition is satisfied, the convergence with respect to the hierarchy truncation level, as well as the overall convergence, is declared. These steps were performed in the HEOM calculations for each parameter set of the 8-site FMO model until either the overall convergence was achieved or \(K\) and/or \(L\) became large enough that the calculation becomes intractable, exceeding the RAM available on our machines (1 TB).
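A compact sketch of this convergence test, mirroring Eq. (17) (our own helper, not part of the PHI code):

```python
# Compare site populations from two HEOM runs that differ by one unit in K
# (or in L) and report whether the maximum deviation is below Delta.
import numpy as np

def converged(rho_a, rho_b, delta=0.01):
    # rho_a, rho_b: arrays of shape (n_times, n_sites, n_sites)
    pop_a = np.real(np.diagonal(rho_a, axis1=1, axis2=2))
    pop_b = np.real(np.diagonal(rho_b, axis1=1, axis2=2))
    return np.max(np.abs(pop_a - pop_b)) < delta
```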
In this work we set the threshold \(\Delta=0.01\). This threshold was chosen such that the population errors would be almost imperceptible, as illustrated in Figure 1. This data set is converged enough to be helpful in benchmarks of approximate methods describing quantum dynamics, because the errors of these methods often exceed the threshold used in this work. Additionally, Figures 2 and 3 show the number of Matsubara terms and the hierarchy truncation level required for achieving the overall convergence, depending on the spectral density parameters and temperature.
## Usage Notes
A Python package for extracting data is provided together with the data set and can be accessed at [https://github.com/Arif-PhyChem/QD3SET](https://github.com/Arif-PhyChem/QD3SET).
## Code availability
The PHI code (version 1.0) used in the HEOM calculations was downloaded from [http://www.ks.uiuc.edu/Research/phi/](http://www.ks.uiuc.edu/Research/phi/). The QuTiP software package (version 4.6) used in the HEOM calculations of the spin-boson model was downloaded from [https://qutip.org/](https://qutip.org/). LTLME calculations of FMO models were performed with the basic quantum_HEOM package [https://github.com/jwa7/quantum_HEOM](https://github.com/jwa7/quantum_HEOM), which was modified to enable compatibility with Hamiltonians of larger than the default dimension.
|
2307.11611 | Rotating Kiselev Black Holes in $f(R,T)$ Gravity | Exact solutions describing rotating black holes can provide significant
opportunities for testing modified theories of gravity, which are motivated by
the challenges posed by dark energy and dark matter. Starting with a spherical
Kiselev black hole as a seed metric, we construct rotating Kiselev black holes
within the $f(R,T)$ gravity framework using the revised Newman-Janis algorithm
- the $f(R,T)$ gravity-motivated rotating Kiselev black holes (FRKBH), which
encompasses, as exceptional cases, Kerr ($K=0$) and Kerr-Newman ($K=Q^2$) black
holes. These solutions give rise to distinct classes of black holes surrounded
by fluids while considering specific values of the equation-of-state parameter,
$w$, for viable choices for the $f(R,T)$ function. From the parameter space or
domain of existence of black holes defined by $a$ and $\gamma$ for FKRBH, we
discover that when $a_1<a<a_2$, there is a critical value $\gamma=\gamma_E$
which corresponds to extreme value black holes portrayed by degenerate
horizons. When $a<a_1$ ($a>a_2$), we encounter two distinct critical values
$\gamma=\gamma_{E1}, \; \gamma_{E2}$ with $\gamma_{E1}>\gamma_{E2}$ (or
$\gamma=\gamma_{E3},\; \gamma_{E4}$ with $\gamma_{E3}>\gamma_{E4}$). We delve
into the horizon and global structure of FKRBH spacetimes and examine their
dependence on parameters $w$ and $\gamma$. This exploration is motivated by the
remarkable effects of $f(R,T)$ gravity, which gives rise to diverse and
intricate spacetime structures within the domain where black holes exist. | Sushant G. Ghosh, Shafqat Ul Islam, Sunil D. Maharaj | 2023-07-21T14:31:33Z | http://arxiv.org/abs/2307.11611v1 | # Rotating Kiselev Black Holes in \(f(R,T)\) Gravity
###### Abstract
Exact solutions describing rotating black holes can provide significant opportunities for testing modified theories of gravity, which are motivated by the challenges posed by dark energy and dark matter. Starting with a spherical Kiselev black hole as a seed metric, we construct rotating Kiselev black holes within the \(f(R,T)\) gravity framework using the revised Newman-Janis algorithm -- the \(f(R,T)\) gravity-motivated rotating Kiselev black holes (FRKBH), which encompasses, as exceptional cases, Kerr (\(K=0\)) and Kerr-Newman (\(K=Q^{2}\)) black holes. These solutions give rise to distinct classes of black holes surrounded by fluids while considering specific values of the equation-of-state parameter, \(w\), for viable choices for the \(f(R,T)\) function. From the parameter space or domain of existence of black holes defined by \(a\) and \(\gamma\) for FKRBH, we discover that when \(a_{1}<a<a_{2}\), there is a critical value \(\gamma=\gamma_{E}\) which corresponds to extreme value black holes portrayed by degenerate horizons. When \(a<a_{1}\) (\(a>a_{2}\)), we encounter two distinct critical values \(\gamma=\gamma_{E1},\ \gamma_{E2}\) with \(\gamma_{E1}>\gamma_{E2}\) (or \(\gamma=\gamma_{E3},\ \gamma_{E4}\) with \(\gamma_{E3}>\gamma_{E4}\)). We delve into the horizon and global structure of FKRBH spacetimes and examine their dependence on parameters \(w\) and \(\gamma\). This exploration is motivated by the remarkable effects of \(f(R,T)\) gravity, which gives rise to diverse and intricate spacetime structures within the domain where black holes exist.
## I Introduction
The \(f(R)\) gravity theory undoubtedly explains the existence of a late-time cosmic acceleration of the Universe [1]. Despite \(f(R)\) gravity's potential advantages, this theory cannot account for various observational tests conducted in the solar system, such as the motion of planets [2; 3]. It also faces challenges in explaining the cosmic microwave background (CMB) tests, the effects of strong gravitational lensing, and phenomena on galactic scales [4; 5; 6; 7; 8; 9]. Further, \(f(R)\) gravity struggles to account for stable stellar configurations [10; 11; 12]. These shortcomings have prompted scientists to further develop and generalize the theory by exploring the coupling between scalar curvature and matter [13; 14; 15]. By incorporating this coupling, researchers hope to overcome the limitations and improve the overall consistency of the theory. Harko et al. [16] have put forward a generalized form of the \(f(R)\) gravity theory, namely \(f(R,T)\) gravity. The presence and distribution of matter affect the curvature of spacetime, and conversely, the curvature of spacetime influences the motion of matter. This coupling leads to exciting phenomena and potentially explains specific astrophysical observations. The \(f(R,T)\) gravity is an intriguing approach that extends the Einstein field equations by introducing additional terms involving the curvature scalar \(R\) and the trace of the energy-momentum tensor \(T\). The theory has received significant attention in recent years because of its potential to address various cosmological phenomena, and it serves as a modified theory of gravity that can account for dark matter and dark energy (see, e.g., [17; 18; 19; 20; 21; 22]). These articles provide an extensive overview of \(f(R,T)\) gravity and discuss its theoretical foundations, mathematical formalism, and cosmological implications. They cover various topics, including models, observational constraints, and the impact of \(f(R,T)\) gravity on the early and late universe. This gravitational theory's dependence on \(T\) can be attributed to quantum effects, imperfect fluids, extra fluids, or an effective cosmological constant. The application of \(f(R,T)\) gravity has produced exciting results in various regimes, as evidenced in references [23; 24; 25; 26; 27].
For high densities and pressures, the effects of modifying gravity through \(f(R,T)\) theory are expected to become more pronounced. Therefore, it is natural to investigate the impact of these modifications on compact objects or black holes. In theories where the Lagrangian density depends on \(T\), it is expected that there will be differences in the solutions compared to general relativity, particularly when considering non-zero energy momentum tensors. This implies additional effects arising from the coupling between matter and geometry. An intriguing system in which to explore these effects is a fluid surrounding a spherical matter source. Motivated by this, Santos _et al._[28] discussed spherical black hole solutions within the framework of \(f(R,T)\) gravity, with the black hole being surrounded by the quintessence fluid discussed by Kiselev [29]. They investigated specific physical scenarios corresponding to solutions obtained by carefully selecting appropriate values for the fluid equation of state parameters. They solely focussed on viable choices for the \(f(R,T)\) function to ensure that they could connect the obtained results to an extension of the Kiselev
solution [29]. However, testing astrophysical observations poses a challenge for spherical black hole models since black hole spin, i.e., rotating black holes commonly found in nature, significantly influences astrophysical processes [30]. Furthermore, the lack of rotating black hole models in modified gravity, including \(f(R,T)\) gravity, hampers the ability to test the theory through observations [31; 32; 33; 34; 35]. The Kerr metric [36], which represents rotating black hole solutions in general relativity resulting from the collapse of massive stars, holds great significance. Remarkably, the revised Newman-Janis method has been successful in generating rotating metrics from non-rotating seed metrics in other modified gravity models [31; 34; 35; 36; 37; 38; 39; 40; 41; 42]. This success has motivated us to pursue a rotating or axisymmetric extension of the spherical metric obtained in [28] or to find a Kerr-like metric, referred to as an \(f(R,T)\) motivated rotating Kiselev black hole (FRKBH) metric, which can be tested using astrophysical observations. We employ the revised Newman-Janis algorithm, starting from the spherical black hole in [28] as a seed metric, to construct a rotating spacetime. We examine the black hole's various properties, including its horizon structure, and create Penrose and embedding diagrams for further analysis. To methodically analyze the FRKBH, we use specific values of the fluid parameter \(w\) corresponding to black holes surrounded by different fields. These fields include dust (\(w=0\)), radiation (\(w=1/3\)), quintessence (\(w=-2/3\)), and phantom (\(w=-4/3\)) [29; 40].
The paper is organized as follows: In Section II, we review spherically symmetric black holes in \(f(R,T)\) gravity. Then, in Section III, we construct the rotating counterpart of the spherical seed metric (13), known as the FRKBH metric; within the same section, we discuss the general features of the FRKBH metric, including horizon structures and energy conditions. In Section IV, we utilize the spacetime isometries to determine the conserved mass and angular momentum of the FRKBH spacetime. In Section V, we analyze the global structure of the FRKBH spacetimes through Penrose diagrams, highlighting their unique properties. Finally, in Section VI, we summarise our main findings.
## II Black hole solution
The \(f(R,T)\) modified theory of gravity is considered to be a generalisation of general relativity [43; 44; 45; 26; 24; 16; 25]. The Einstein-Hilbert action in the context of \(f(R,T)\) gravity takes the form
\[S=\frac{1}{16\pi}\int f(R,T)\sqrt{-g}d^{4}x+\int L_{m}\sqrt{-g}d^{4}x, \tag{1}\]
where the function \(f(R,T)\) is an arbitrary function of the Ricci scalar \(R\) and the trace \(T\) of the energy-momentum tensor of matter \(T_{\mu\nu}\). \(L_{m}\) in Eq. (1) denotes the matter Lagrangian density, which is linked to a specific energy-momentum tensor. Varying the action with respect to the metric tensor \(g_{\mu\nu}\) yields [16]
\[\delta S= \frac{1}{16\pi}\int\left[f_{R}(R,T)R_{\mu\nu}\delta g^{\mu\nu}+f _{R}(R,T)g_{\mu\nu}\Box\delta g^{\mu\nu}\right.\] \[-f_{R}(R,T)\nabla_{\mu}\nabla_{\nu}\delta g^{\mu\nu}+f_{T}(R,T) \frac{\delta(g^{\eta\xi}T_{\eta\xi})}{\delta g^{\mu\nu}}\delta g^{\mu\nu}\] \[\left.-\frac{1}{2}g_{\mu\nu}f(R,T)\delta g^{\mu\nu}+\frac{16\pi} {\sqrt{-g}}\frac{\delta(\sqrt{-g}L_{m})}{\delta g^{\mu\nu}}\right]\sqrt{-g}d^ {4}x, \tag{2}\]
where \(f_{R}(R,T)=\partial f(R,T)/\partial R\) and \(f_{T}(R,T)=\partial f(R,T)/\partial T\). The variation of \(T\) with respect to the metric tensor yields
\[\frac{\delta(g^{\eta\xi}T_{\eta\xi})}{\delta g^{\mu\nu}}=T_{\mu\nu}+\Theta_{ \mu\nu}, \tag{3}\]
where
\[\Theta_{\mu\nu}\equiv g^{\eta\xi}\frac{\delta T_{\eta\xi}}{\delta g^{\mu\nu}} =-2T_{\mu\nu}+g_{\mu\nu}L_{m}-2g^{\eta\xi}\frac{\partial^{2}L_{m}}{ \partial g^{\mu\nu}\partial g^{\eta\xi}}. \tag{4}\]
After integrating the second and third terms in Eq. (2), we obtain the field equations of \(f(R,T)\) gravity as
\[f_{R}(R,T)R_{\mu\nu}-\frac{g_{\mu\nu}}{2}f(R,T)+(g_{\mu\nu}\Box- \nabla_{\mu}\nabla_{\nu})f_{R}(R,T)\] \[\quad=8\pi T_{\mu\nu}-f_{T}(R,T)T_{\mu\nu}-f_{T}(R,T)\Theta_{\mu \nu}\,. \tag{5}\]
When \(f(R,T)\equiv f(R)\), Eq. (5) reduces to the field equations in the context of \(f(R)\) gravity [40]. The novel feature that \(f(R,T)\) gravity introduces is the possibility of arbitrary coupling between matter and geometry. This paper considers a special case of \(f(R,T)\) gravity such that the \(f(R,T)\) function is given by
\[f(R,T)=R+2f(T), \tag{6}\]
where \(f(T)\) is an arbitrary function of the trace of the energy-momentum tensor. Using Eq. (6) in Eq. (5), the field equations simplify to
\[R_{\mu\nu}-\frac{g_{\mu\nu}}{2}R= 8\pi T_{\mu\nu}-2f^{\prime}(T)T_{\mu\nu}\] \[-2f^{\prime}(T)\Theta_{\mu\nu}+f(T)g_{\mu\nu}, \tag{7}\]
where \(f^{\prime}(T)=df(T)/dT\).
We solve Eq. (7) for the quintessence field introduced by Kiselev [29], which is characterized by the equation of state \(p=\omega\rho\), with \(-1<\omega<-1/3\). The quintessence field is one possible candidate to explain dark energy. The \(T_{\mu\nu}\) for the quintessence matter reads [29]
\[T^{t}_{\phantom{t}t}=T^{r}_{\phantom{r}r}=\rho(r), \tag{8}\] \[T^{\theta}_{\phantom{\theta}\theta}=T^{\phi}_{\phantom{\phi} \phi}=-\frac{1}{2}\rho(3\omega+1), \tag{9}\]
where \(\omega\) is the equation-of-state parameter. Kiselev black holes [29] have the components of the energy-momentum tensor effectively connected to an anisotropic fluid, represented by
\[T^{\mu}_{\phantom{\mu}\nu}=\text{diag}(\rho,-p_{r},-p_{t},-p_{t}), \tag{10}\]
where \(p_{r}=-\rho\) and \(p_{t}=\frac{1}{2}\rho(3\omega+1)\), which can be extracted from the general form of the anisotropic fluid [46]:
\[T_{\mu\nu}=-p_{t}g_{\mu\nu}+(p_{t}+\rho)U_{\mu}U_{\nu}+(p_{r}-p_{t})N_{\mu}N_{ \nu}, \tag{11}\]
where \(p_{t}(r)\), \(\rho(r)\) and \(p_{r}(r)\) are the tangential or transverse pressure, the energy density and the radial pressure of the fluid, respectively. The quantities \(U_{\mu}\) and \(N_{\mu}\) represent the four-velocity and radial unit vector, respectively, such that they obey the conditions \(U_{\nu}U^{\nu}=1\), \(N_{\nu}N^{\nu}=-1\) and \(U_{\nu}N^{\nu}=0\). The matter Lagrangian density associated with
the anisotropic fluid is given by \(L_{m}=(-1/3)(p_{r}+2p_{t})\) [47]. Ghosh [40] was the first to obtain the rotating counterpart of Kiselev black holes. For this fluid, Eq. (4) can be written as
\[\Theta_{\mu\nu}=-2T_{\mu\nu}-\frac{1}{3}(p_{r}+2p_{t})g_{\mu\nu}. \tag{12}\]
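As a routine intermediate step (spelled out here for convenience; it does not appear explicitly in the original derivation), inserting Eq. (12) into Eq. (7) collects the matter-geometry coupling into an effective source:

\[R_{\mu\nu}-\frac{g_{\mu\nu}}{2}R=8\pi T_{\mu\nu}+2f^{\prime}(T)T_{\mu\nu}+\left[\frac{2}{3}f^{\prime}(T)\left(p_{r}+2p_{t}\right)+f(T)\right]g_{\mu\nu},\]

so the anisotropic fluid of Eq. (11) enters with shifted density and pressures, which is the origin of the modified exponent \(d\) in Eq. (14) below.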
Using Eqs. (7), (11) and (12), a static spherically symmetric solution of the field equations reads
\[ds^{2} = \left(1-\frac{2M}{r}+\frac{K}{r^{d}}\right)dt^{2}-\frac{dr^{2}}{ \left(1-\frac{2M}{r}+\frac{K}{r^{d}}\right)}-r^{2}d\Omega^{2}, \tag{13}\]
where
\[d=\frac{8(\gamma\omega+\pi(3\omega+1))}{\gamma(3-\omega)+8\pi}. \tag{14}\]
Thus we have a general form of exact spherically symmetric black holes in \(f(R,T)\) gravity describing a Kiselev black hole, i.e., a black hole solution surrounded by the quintessence matter. For \(-1<\omega<-1/3\) the solution possesses a de Sitter-like horizon and drives acceleration, while \(-1/3<\omega<0\) yields an asymptotically flat solution. Here, \(\omega\) is the parameter of the equation of state, \(\gamma\) is the model-dependent parameter from the \(f(R,T)\) gravity, and \(K\) and \(M\) are integration constants. The Kiselev black hole reduces to the Schwarzschild black hole for \(K=0\), and to the Reissner-Nordstrom black hole for \(d=2\) and \(K=Q^{2}\).
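Two limits of Eq. (14) are worth recording as consistency checks (elementary algebra, not part of the original derivation). In the general-relativistic limit \(\gamma\to 0\),

\[d\big|_{\gamma\to 0}=\frac{8\pi(3\omega+1)}{8\pi}=3\omega+1,\]

which is precisely Kiselev's exponent [29]; in particular, radiation (\(\omega=1/3\), \(\gamma\to 0\)) gives \(d=2\) and hence a \(K/r^{2}\) term of the Reissner-Nordstrom form, while dust (\(\omega=0\), \(\gamma\to 0\)) gives \(d=1\).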
## III Rotating Black Hole
The Newman\(-\)Janis algorithm, initially designed within the framework of general relativity, has found widespread application in building rotating black hole solutions from their non-rotating counterparts [48]. Recently, this algorithm has also been utilized in modified gravity theories to generate rotating solutions based on non-rotating configurations
Figure 2: The parameter space of \(a\) and \(\gamma\) at \(\omega=-2/3\) (top left). The behaviour of horizons at \(\omega=-2/3\) with varying black hole parameter \(\gamma\) at (i) \(a=0.60\) (top right), (ii) \(a=0.80\) (bottom left) and (iii) \(a=0.90\) (bottom right). The solid blue and black lines correspond to extreme values of the parameters.
[49; 50; 51; 52; 53]. However, it is essential to note that when applying the Newman\(-\)Janis algorithm to arbitrary non-general relativity spherically symmetric solutions, specific issues may arise in the resulting axially-symmetric metric [54]. In rotating black holes derived from modified gravity using the Newman\(-\)Janis algorithm, additional sources will probably arise alongside the original ones. The metric for a rotating black hole in Einstein-Gauss-Bonnet gravity is also generated by utilizing the modified Newman\(-\)Janis algorithm, which incorporates Azreg-Ainou's non-complexification procedure [55; 56]. This procedure has successfully generated other rotating solutions with imperfect fluid content in Boyer\(-\)Lindquist coordinates, starting from spherically symmetric static solutions [31; 32; 33; 34; 35; 57; 58; 59], and it can also generate rotating Kiselev black hole solutions [40]. The resulting rotating black hole metric, the counterpart of the spherically symmetric solution (13), governed by the parameters \(M\), \(a\), \(\omega\), and \(\gamma\), encompasses the Kerr solution [36] and
Figure 4: The parameter space of \(a\) and \(\gamma\) at \(\omega=1/3\) (top right). The behaviour of horizons at \(\omega=1/3\) and \(a=0.80\) with varying black hole parameter \(\gamma\). The solid blue line corresponds to the extreme value of the parameter \(\gamma\).
Figure 3: The parameter space of \(a\) and \(\gamma\) at \(\omega=-4/3\) (top left). The behaviour of horizons at \(\omega=-4/3\) with varying black hole parameter \(\gamma\) at (i) \(a=0.60\) (top right), (ii) \(a=0.86\) (bottom left) and (iii) \(a=0.90\) (bottom right). The solid blue and black lines correspond to extreme values of the parameters.
also generalizes the Kerr-Newman solution [60], which in Boyer-Lindquist coordinates reads
\[ds^{2}= \left(\frac{\Delta-a^{2}\sin^{2}\theta}{\Sigma}\right)dt^{2}-\frac{ \Sigma}{\Delta}\,dr^{2}\] \[+2a\sin^{2}\theta\left(1-\frac{\Delta-a^{2}\sin^{2}\theta}{ \Sigma}\right)dt\,d\phi-\Sigma\,d\theta^{2}\] \[-\,\sin^{2}\theta\left[\Sigma+a^{2}\sin^{2}\theta\left(2-\frac{ \Delta-a^{2}\sin^{2}\theta}{\Sigma}\right)\right]d\phi^{2}, \tag{15}\]
with
\[\Delta = r^{2}+a^{2}-2rM(r),\quad\Sigma=r^{2}+a^{2}\cos^{2}\theta,\] \[M(r) = M-\frac{K}{2r^{d-1}}, \tag{16}\]
with \(a\) being the spin parameter. The metric Eq. (15) reverts to Kerr black holes for the special case \(K\to 0\), and to Kerr-Newman black holes when \(K=Q^{2}\) and \(d=2\), or equivalently when \(K=Q^{2}\) and \(\gamma\) and \(\omega\) are related via
\[\gamma=\frac{-4\pi(3\omega-1)}{5\omega-3} \tag{17}\]
and to spherically symmetric black holes (13) when \(a=0\). For definiteness, we call the five-parameter metric (15) the \(f(R,T)\) gravity-motivated rotating Kiselev black hole (FRKBH) metric. To comprehensively analyse the FRKBH, we employ specific values of the parameter \(w\) corresponding to black holes surrounded by various fields. These encompass dust (\(w=0\)), radiation (\(w=1/3\)), the quintessence field (\(w=-2/3\)), and the phantom field (\(w=-4/3\)) [29; 40]. Interestingly, similar to the Kerr spacetime, the FRKBH spacetime metric (15) nevertheless maintains the time-translational and rotational invariance isometries, which, respectively, entail the existence of two Killing vector fields \(\eta^{\mu}_{(t)}=\left(\frac{\partial}{\partial t}\right)^{\mu}\) and \(\eta^{\mu}_{(\phi)}=\left(\frac{\partial}{\partial\phi}\right)^{\mu}\).
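Relation (17) can be verified directly (a short check, included for convenience): substituting it into Eq. (14), both numerator and denominator acquire a common factor,

\[\gamma(3-\omega)+8\pi=\frac{12\pi(\omega^{2}-1)}{5\omega-3},\qquad 8\left(\gamma\omega+\pi(3\omega+1)\right)=\frac{24\pi(\omega^{2}-1)}{5\omega-3},\]

so that \(d=2\) for all \(\omega\neq\pm 1\), and Eq. (17) indeed places the metric in the Kerr-Newman class with \(K\) playing the role of \(Q^{2}\).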
In order to further analyze the source associated with the metric (15), we use an orthonormal basis in which the energy-momentum tensor is diagonal [39; 51; 61]
\[e^{(a)}_{\mu}=\left(\begin{array}{cccc}\sqrt{\mp(g_{tt}-\Omega g_{t\phi})}&0 &0&0\\ 0&\sqrt{\pm g_{rr}}&0&0\\ 0&0&\sqrt{g_{\theta\theta}}&0\\ g_{t\phi}/\sqrt{g_{\phi\phi}}&0&0&\sqrt{g_{\phi\phi}}\end{array}\right), \tag{18}\]
with \(\Omega=g_{t\phi}/g_{\phi\phi}\). The components of the energy momentum tensor in the orthonormal frame read
\[T^{(a)(b)}=e^{(a)}_{\mu}e^{(b)}_{\nu}G^{\mu\nu}.\]
Considering the line element (II.2), we can write the components of the respective energy momentum tensor as
\[\rho = \frac{(d-1)K}{(a^{2}+r^{2})^{2}r^{d-2}}=-P_{1},\] \[P_{2} = \frac{(d-1)K\Big{[}a^{2}(d+2)+dr^{2}\Big{]}}{2(a^{2}+r^{2})^{2}r^ {d}}=P_{3}. \tag{19}\]
### Energy conditions and horizons
To check the weak energy condition, we can choose an appropriate orthonormal basis [39; 51; 61] in which the energy momentum tensor reads
\[T^{(a)(b)}=\text{diag}(\rho,P_{1},P_{2},P_{3}). \tag{20}\]
The weak energy condition requires \(\rho\geq 0\) and \(\rho+P_{i}\geq 0\) (\(i=1,\ 2,\ 3\)) [62]. Therefore, \(K\geq 0\) and \(d\geq 1\), which are assumptions used throughout this work, are necessary for the weak energy condition to be satisfied. Furthermore, the strong energy condition is given by
\[\rho\geq 0,\quad\rho+P_{i}\geq 0(i=1,\ 2,\ 3),\quad\rho+P_{r}+2P_{\theta}\geq 0, \tag{21}\]
which are also fulfilled only when \(K\geq 0\) and \(d\geq 1\).
The event horizon is a stationary null surface which serves as the origin of null geodesic rays that are projected into the future but can never travel arbitrarily far from the black hole [63; 64]. Horizons are defined by the surfaces \(g^{\mu\nu}\partial_{\mu}r\partial_{\nu}r=g^{rr}=\Delta=0\), and thus their radii are zeros of
\[r^{2}+a^{2}-2Mr+\frac{K}{r^{d-2}}=0. \tag{22}\]
For the special case \(d=2\), and \(K=Q^{2}\), Eq. (22) reduces to
\[r^{2}+a^{2}-2Mr+Q^{2}=0, \tag{23}\]
where \(K\) is identified as the charge \(Q^{2}\), and solutions of the above equation give radii of horizons for the Kerr-Newman black hole given by
\[r_{\pm}=M\pm\sqrt{M^{2}-a^{2}-Q^{2}}. \tag{24}\]
An analysis of Eq. (22) reveals that it has a maximum of two real positive roots, corresponding to the inner Cauchy horizon (\(r_{2}\)) and the outer event horizon (\(r_{1}\)), such that \(r_{2}\leq r_{1}\). Two distinct real positive roots of \(\Delta=0\) correspond to a nonextremal black hole, while in the absence of real positive roots of Eq. (22) no horizon, and hence no black hole, exists. There exists a particular value of the parameter \(\gamma\), \(\gamma=\gamma_{e}\), for which an extremal black hole occurs, such that Eq. (22) admits a double root; i.e., the two horizons coincide, \(r_{2}=r_{1}=r_{e}\). We have explicitly shown that, for fixed values of \(a\), \(K\) and \(\omega\), \(r_{1}\) decreases and \(r_{2}\) increases with increasing \(\gamma\), and they eventually coincide for the extremal value of \(\gamma\), i.e., \(r_{2}=r_{1}=r_{e}\) for \(\gamma=\gamma_{e}\). Moreover, we infer that it is possible to find extremal values of the parameters \(a=a_{e}\) for fixed \(\gamma\) and \(\omega\), and \(\omega=\omega_{e}\) for fixed \(a\) and \(\gamma\), for which the algebraic equation \(\Delta=0\) has double roots (cf. Figures 1-4).
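The horizon structure underlying Figs. 1-4 is straightforward to reproduce numerically from Eq. (22). The following is a minimal sketch (not the authors' code; the parameter values, grid size and the use of SciPy's `brentq` are illustrative assumptions) that locates horizon radii by scanning \(\Delta(r)\) for sign changes and refining by bisection. Note that exactly degenerate (extremal) roots produce no sign change and must be approached as a limit of nearly coincident roots:

```python
import numpy as np
from scipy.optimize import brentq

def exponent_d(w, gamma):
    # Exponent d of Eq. (14) for equation-of-state parameter w and f(R,T) parameter gamma.
    return 8.0 * (gamma * w + np.pi * (3.0 * w + 1.0)) / (gamma * (3.0 - w) + 8.0 * np.pi)

def Delta(r, M, a, K, d):
    # Horizon function of Eq. (22); its positive zeros are the horizon radii.
    return r**2 + a**2 - 2.0 * M * r + K / r**(d - 2.0)

def horizons(M=1.0, a=0.8, K=0.1, w=-2.0/3.0, gamma=1.0, rmax=10.0, n=200000):
    # Scan Delta on (0, rmax] for sign changes, then refine each bracketed root.
    d = exponent_d(w, gamma)
    rs = np.linspace(1e-6, rmax, n)
    vals = Delta(rs, M, a, K, d)
    sign_change = np.where(vals[:-1] * vals[1:] < 0.0)[0]
    return [brentq(Delta, rs[i], rs[i + 1], args=(M, a, K, d)) for i in sign_change]

if __name__ == "__main__":
    # Two roots r_2 < r_1 signal a nonextremal black hole; an empty list, no horizon.
    print(horizons())
```

With these (assumed) parameters, increasing \(\gamma\) at fixed \(a\) makes the two radii approach each other, merging at the critical value \(\gamma_{E}\) described above.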
_III.1.0.1. Case \(\omega=0,\ -2/3,\ \textit{and}-4/3:\)_ Interestingly, when \(a<a_{1}\) (e.g., \(a_{1}\approx 0.5\) for \(\omega=0\)) and \(a>a_{2}\) (e.g., \(a_{2}=0.809814\) for \(\omega=0\)), there exist two extreme values of the \(f(R,T)\) gravity parameter, viz. \(\gamma=\gamma_{E1},\gamma_{E2}\) (or \(\gamma_{E3},\gamma_{E4}\)), such that \(\Delta=0\) admits roots corresponding to extremal black holes with degenerate horizons, whereas for \(\gamma_{E2}<\gamma<\gamma_{E1}\), one gets a naked singularity (NS). However, when \(a_{1}<a<a_{2}\), there is only one extreme parameter value, i.e., \(\gamma_{E}\), which corresponds to an extremal black hole with degenerate horizons. In this case, again, \(\gamma<\gamma_{E}\) corresponds to a black hole with two horizons and \(\gamma>\gamma_{E}\) leads to naked singularities.
_III.1.0.2. Case \(\omega=1/3:\)_ In this case, for a given \(a\), a critical value of \(\gamma=\gamma_{E}\) exists, such that the \(\Delta=0\) has a double root corresponding to an extremal black hole with degenerate horizons. When \(\gamma<\gamma_{E}\), \(\Delta=0\) has two simple roots and has no zero for \(\gamma>\gamma_{E}\). These two cases correspond to a non-extremal black hole with two horizons and an NS (or no-horizon spacetime).
Further, the static observers in the stationary spacetime follow the worldline of the timelike Killing vector \(\eta^{\mu}_{(t)}\), such that their four-velocity is \(u^{\mu}\propto\eta^{\mu}_{(t)}\) with the proper normalization factor. These observers exist as long as \(\eta^{\mu}_{(t)}\) is timelike; the surface on which \(\eta^{\mu}_{(t)}\eta_{\mu(t)}=g_{tt}=0\), i.e.,
\[r^{2}+a^{2}\cos^{2}\theta-2Mr+\frac{K}{r^{d-2}}=0, \tag{25}\]
defines the boundary of the static limit surface (SLS), which, apart from black hole parameters, also depends on \(\theta\) and coincides with the event horizon only at the poles. For the particular case \(K=Q^{2}\) and \(d=2\), Eq. (25) corresponds to the Kerr-Newman black hole as
\[r^{2}+a^{2}\cos^{2}\theta-2Mr+Q^{2}=0, \tag{26}\]
and admits the solutions
\[r^{\pm}_{SLS}=M\pm\sqrt{M^{2}-a^{2}\cos^{2}\theta-Q^{2}},\]
which can be identified as the SLS radii for the Kerr-Newman black hole. Equation (25) is solved numerically, and the behaviour of SLS is shown in Fig. 6. It is clear from Fig. 6 that the radii of the SLS decrease with increasing \(K\) and \(a\). The two SLS, corresponding to the real positive roots of Eq. (25), coincide for suitably chosen parameters. However, these extremal values are different from those for the degenerate horizons. For fixed values of \(M\) and \(a\), the
SLS radii for the FRKBHs are smaller than the Kerr black hole values. Like the Kerr black hole, apart from \(\Delta=0\), which is merely a coordinate singularity, the rotating metric (15) is also singular at \(\Sigma=0\); this is attributed to a ring-shaped physical singularity of radius \(a\) in the equatorial plane at the centre of the black hole.
## IV Komar mass and angular momentum
Moreover, zero angular momentum observers are stationary observers with zero angular momentum with respect to spatial infinity; due to frame dragging, they acquire the position-dependent angular velocity \(\omega\) given by
\[\omega=\frac{d\phi}{dt}=-\frac{g_{t\phi}}{g_{\phi\phi}}=\frac{a\left(a^{2}- \Delta+r^{2}\right)}{(a^{2}+r^{2})^{2}-a^{2}\Delta\sin^{2}\theta}, \tag{27}\]
which increases as the observer approaches the black hole and eventually takes the maximum value at the event horizon:
\[\Omega=\left.\omega\right|_{r=r_{1}}=\frac{a}{(r_{1}^{2}+a^{2})}, \tag{28}\]
such that observers are in a state of co-rotation with the black hole. Here, \(\Omega\) is the black hole angular velocity, which has the same form as the Kerr black hole [64; 65].
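Since \(\Delta\) vanishes on the horizon, Eq. (28) follows from Eq. (27) in one line (recorded here for completeness):

\[\Omega=\omega\big|_{\Delta=0}=\frac{a\left(a^{2}+r_{1}^{2}\right)}{\left(a^{2}+r_{1}^{2}\right)^{2}}=\frac{a}{r_{1}^{2}+a^{2}}.\]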
The conserved quantities associated with the asymptotically timelike and spacelike Killing vector fields, respectively, \(\eta^{\mu}_{(t)}\) and \(\eta^{\mu}_{(\phi)}\), correspond to the mass and angular momentum assigned to the stationary, asymptotically flat black hole spacetime. A general argument for equality of the conserved Arnowitt-Deser-Misner mass [66] and of the Komar mass [67] for stationary spacetimes having a timelike Killing vector is established in Refs. [68; 69]. Following the Komar [67] definitions of conserved quantities, we consider a spacelike hypersurface \(\Sigma_{t}\), extending from the event horizon to spatial infinity, which is a surface of constant \(t\) with unit normal vector \(n_{\mu}\)[65]. The two-boundary \(S_{t}\) of the hypersurface \(\Sigma_{t}\) is a constant \(t\) and constant \(r\) surface with unit outward normal vector \(\sigma_{\mu}\). The effective mass reads [67]
\[M_{\rm eff}=-\frac{1}{8\pi}\int_{S_{t}}\nabla^{\mu}\eta^{\nu}_{(t)}dS_{\mu\nu}, \tag{29}\]
where \(dS_{\mu\nu}=-2n_{[\mu}\sigma_{\nu]}\sqrt{h}d^{2}\theta\) is the surface element of \(S_{t}\), \(h\) is the determinant of the \((2\times 2)\) metric on \(S_{t}\), and
\[n_{\mu}=-\frac{\delta_{\mu}^{t}}{|g^{tt}|^{1/2}},\qquad\sigma_{\mu}=\frac{ \delta_{\mu}^{r}}{|g^{rr}|^{1/2}}, \tag{30}\]
are, respectively, timelike and spacelike unit outward normal vectors. Thus, the mass integral Eq. (29) turns into an integral over a closed 2-surface at infinity
\[M_{\rm eff}= \frac{1}{4\pi}\int_{0}^{2\pi}\int_{0}^{\pi}\frac{\sqrt{g_{\theta \theta}g_{\phi\phi}}}{|g^{tt}g^{rr}|^{1/2}}\nabla^{t}\eta^{r}_{(t)}d\theta d\phi\] \[= \frac{1}{4\pi}\int_{0}^{2\pi}\int_{0}^{\pi}\frac{\sqrt{g_{\theta \theta}g_{\phi\phi}}}{|g^{tt}g^{rr}|^{1/2}}\left(g^{tt}\Gamma^{r}_{tt}+g^{t\phi }\Gamma^{r}_{t\phi}\right)d\theta d\phi. \tag{31}\]
Using the metric elements Eq. (15), we obtain the effective mass of the rotating Kiselev black hole
\[M_{\rm eff}=M-\frac{K}{2r^{d-1}}\left[\left(\frac{r}{a}+\frac{a}{r}\right)(d- 1)\tan^{-1}\frac{a}{r}+1\right], \tag{32}\]
which is corrected due to the surrounding field and goes over to the Kerr black hole value \(M_{\rm eff}=M\) when \(K=0\). For the special case \(d=2\) and \(K=Q^{2}\), Eq. (32) reduces to the effective mass for the Kerr-Newman black hole, with \(K\) as the electric charge \(Q^{2}\), and reads [70]
\[M_{\rm eff}^{KN}=M-\frac{Q^{2}}{2r^{2}a}\left[(r^{2}+a^{2})\tan^{-1}\left( \frac{a}{r}\right)+ar\right]. \tag{33}\]
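The step from Eq. (32) to Eq. (33) is the elementary identity (recorded here for convenience; the factor \(d-1\) equals unity for \(d=2\)):

\[\frac{Q^{2}}{2r}\left[\left(\frac{r}{a}+\frac{a}{r}\right)\tan^{-1}\frac{a}{r}+1\right]=\frac{Q^{2}}{2r}\cdot\frac{(r^{2}+a^{2})\tan^{-1}(a/r)+ar}{ar}=\frac{Q^{2}}{2r^{2}a}\left[(r^{2}+a^{2})\tan^{-1}\left(\frac{a}{r}\right)+ar\right].\]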
The effective mass for the spherically symmetric Kiselev black hole (\(a=0\)) is obtained from Eq. (32) and reads
\[M_{\rm eff}^{NR}=M-\frac{d\ K}{2r^{d-1}}, \tag{34}\]
which reduces to the Reissner\(-\)Nordstrom black hole for \(K=Q^{2}\) and \(d=2\):
\[M_{\rm eff}^{RN}=M-\frac{Q^{2}}{r},\]
and to the Schwarzschild black hole \(M_{\rm eff}^{S}=M\), when \(K=0\).
Now, we use the spacelike Killing vector \(\eta_{(\phi)}^{\mu}\) to calculate the effective angular momentum [67]
\[J_{\rm eff}=\frac{1}{16\pi}\int_{S_{t}}\nabla^{\mu}\eta_{(\phi)}^{\nu}dS_{\mu\nu}, \tag{35}\]
Using the definition of the surface element, Eq. (35) is recast as
\[J_{\rm eff} = -\frac{1}{8\pi}\int_{0}^{2\pi}\int_{0}^{\pi}\nabla^{\mu}\eta_{ (\phi)}^{\nu}n_{\mu}\sigma_{\nu}\sqrt{h}d\theta d\phi \tag{36}\] \[= \frac{1}{8\pi}\int_{0}^{2\pi}\int_{0}^{\pi}\frac{\sqrt{g_{\theta \theta}g_{\phi\phi}}}{|g^{tt}g^{rr}|^{1/2}}\left(g^{tt}\Gamma_{t\phi}^{r}+g^{t \phi}\Gamma_{\phi\phi}^{r}\right)d\theta d\phi.\]
After performing the integration for the rotating Kiselev black hole Eq. (15), this reads
\[J_{\rm eff} = Ma+\frac{K}{4r^{d}a^{2}}\Big{[}\left((d-3)a^{2}+r^{2}(d-1) \right)ar \tag{37}\] \[-(d-1)(a^{2}+r^{2})^{2}\tan^{-1}\frac{a}{r}\Big{]}\]
which vanishes identically in the limiting case of \(a=0\). For the particular case of \(d=2\) and \(K=Q^{2}\) it reduces to
\[J_{\rm eff}^{KN}=Ma+\frac{Q^{2}(r^{2}-a^{2})}{4ar}-\frac{Q^{2}}{4a^{2}r^{2}}(r^ {2}+a^{2})^{2}\tan^{-1}\left(\frac{a}{r}\right), \tag{38}\]
which can be identified as the Kerr-Newman black hole value [70]. In the asymptotic limit \(r\rightarrow\infty\), the effective angular momentum Eq. (37) restores the value \(J_{\rm eff}^{K}=Ma\), which corresponds to the value for the Kerr black hole. Thus, the effects of the field subside at a very large distance from the black hole. Equations (32) and (37) imply that at a finite radial distance the values of the effective mass and angular momentum get modified from their asymptotic values and depend on \(\gamma\). Figures 7 and 8 graphically present the normalized values of the effective mass and angular momentum as functions of \(r\) for various \(\gamma\) values. It is evident from the figures that these effective quantities gradually decrease with decreasing \(r\). Introducing the parameters \(\gamma\) and \(\omega\) reduces the effective mass and angular momentum, specifically \(M_{eff}/M\leq 1\) and \(J_{eff}/Ma\leq 1\). Moreover, at a fixed radial coordinate, the normalized values of the effective quantities for the FRKBHs (\(\gamma\neq 0\)) are smaller than those for Kerr black holes (\(K=0\)) (see Ref. [34]). The impact of a non-zero \(\gamma\) is significant only in the vicinity of the event horizon \(r_{+}\). However, it diminishes at larger distances from \(r_{+}\), resulting in \(M_{eff}/M=1\) and \(J_{eff}/Ma=1\) for large \(r\) (see Figs. 7 and 8). Consequently, \(M_{eff}\) and \(J_{eff}\) are consistently smaller than their asymptotic values.
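This asymptotic approach can be quantified (a short expansion, included as a check): using \(\tan^{-1}(a/r)=a/r-a^{3}/(3r^{3})+O(r^{-5})\) in Eq. (37) gives

\[J_{\rm eff}=Ma-\frac{(d+2)Ka}{6\,r^{d-1}}+O\!\left(r^{-(d+1)}\right),\]

so the correction to the Kerr value decays as \(r^{1-d}\), consistent with the behaviour seen in Figs. 7 and 8.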
It is well known that the Killing vectors \(\eta_{(t)}^{\mu}\) or \(\eta_{(\phi)}^{\mu}\) are not the generators of the stationary black hole horizon; rather, it is their specific linear combination [65]
\[\chi^{\mu}=\eta_{(t)}^{\mu}+\Omega\eta_{(\phi)}^{\mu}, \tag{39}\]
such that \(\chi^{\mu}\) is globally timelike outside the event horizon, though it is a Killing vector only at the horizon [65]. The Komar conserved quantity at the event horizon associated with \(\chi^{\mu}\) reads as [67]
\[J_{\chi} = -\frac{1}{8\pi}\int_{S_{t}}\nabla^{\mu}\chi^{\nu}dS_{\mu\nu}, \tag{40}\] \[= -\frac{1}{8\pi}\int_{S_{t}}\nabla^{\mu}\left(\eta_{(t)}^{\mu}+ \Omega\eta_{(\phi)}^{\mu}\right)dS_{\mu\nu}.\]
Using Eqs. (32) and (37), we obtain
\[J_{\chi} = M_{\rm eff}-2\Omega J_{\rm eff}, \tag{41}\] \[= \frac{M(r_{+}^{2}-a^{2})}{(r_{+}^{2}+a^{2})}-\frac{\left(r_{+}^{ 2}-(s-1)a^{2}\right)}{(r_{+}^{2}+a^{2})s}\frac{K}{r_{+}^{-(s-2)/s}}.\]
To understand the implication of the above conserved quantity, one must know the black hole horizon temperature [65]
\[T_{+} = \frac{\kappa}{2\pi}=\frac{\Delta^{\prime}}{4\pi(r_{+}^{2}+a^{2})}, \tag{42}\] \[= \frac{r_{+}^{2}-a^{2}}{4\pi r_{+}(a^{2}+r_{+}^{2})}-\frac{K(d-1) }{4\pi r_{+}(a^{2}+r_{+}^{2})r_{+}^{d-2}}\]
Figure 8: The behaviour of effective angular momentum with \(r\) at \(a=0.9\) and with varying \(\omega\) and \(\gamma\).
Figure 7: The behaviour of effective mass with \(r\) at \(a=0.9\) and with varying \(\omega\) and \(\gamma\).
whereas entropy is defined as follows
\[S_{+}=\frac{A}{4}=\pi(r_{+}^{2}+a^{2}). \tag{43}\]
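As a consistency check (straightforward substitution), for \(d=2\) and \(K=Q^{2}\), Eq. (42) collapses to the familiar Kerr-Newman temperature

\[T_{+}=\frac{r_{+}^{2}-a^{2}-Q^{2}}{4\pi r_{+}(r_{+}^{2}+a^{2})},\]

which vanishes in the extremal limit \(r_{+}^{2}=a^{2}+Q^{2}\).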
Thus, we have derived the Komar mass and angular momentum for FRKBH spacetimes and calculated the other thermodynamical quantities. We can regain the exceptional cases of the Kerr and Kerr-Newman black holes in the proper limits.
## V Structure of rotating spacetime: Penrose diagrams
The Penrose diagrams are designed to show the entire causal structure of any given geometry on a finite 2-D sheet. They provide a crucial road map for moving around a black hole. Time is vertical, space is horizontal, and null rays are angled at \(45^{\circ}\) to the axes, much like in the Minkowski diagram. A Penrose diagram, in contrast to a Minkowski diagram, has "points at infinity" added by compactification, which helps to visualize the intricate structure of infinity that results from the union of space and time. Along the symmetry axis (\(\theta=0\)), we have generated diagrams for the FRKBH spacetime that correspond to each of the many points in the parameter spaces in Figs. 1-4.
For the case when the values of \(\omega\) and \(\gamma\) are such that \(d=0,1\) or \(2\), the Penrose diagrams, as well as the horizons, are precisely like those of the Kerr black hole, as given in Fig. 9. The surface \(r=0\) is regular and timelike and lies well beyond the inner horizon \(r_{2}\). The Universe is represented by region I between \(r=\infty\) and \(r_{1}\), and the black hole is symbolized by region II between \(r_{1}\) and \(r_{2}\). Beyond the surface \(r=0\), down to \(r=-\infty\), lies the antiverse. The region I' is a mathematical extension of a second copy of the region I geometry glued along the anti-horizon in the opposite direction of time. In addition to a universe, antiverse, or black hole, the full analytic extension of the spacetime geometry also includes a parallel universe, a white hole, and a parallel antiverse, as given by I', II', and part of region III' between \(r=0\) and \(r=\infty\). For the extremal black hole, where the inner and outer horizons merge (\(r_{1}=r_{2}\)), the Penrose diagram is given in Fig. 10 with \(d=0,1\) or \(2\). Its Penrose diagram illustrates that an extremal rotating Kiselev black hole has a different structure than a typical FRKBH. The spacetime, in this case, has only two regions: region I between \(r=\infty\) and \(r_{1}\) is the universe, and region II between \(r_{1}\) and \(r=-\infty\) includes the timelike surface \(r=0\). The region beyond \(r=0\) is the antiverse. Finally, the Penrose diagram and the horizon structure for the case when \(0<d<2\) but \(d\neq 1\) are shown in Fig. 11. In this case, the spacetime has a singularity at the \(r=0\) surface, which lies beyond the inner horizon \(r_{2}\).
Figure 9: _Left:_ Plot showing \(\Delta(r)\) vs \(r\) such that \(\Delta(r)=0\) admits two positive roots \(r_{1}\) (event horizon) and \(r_{2}\) (Cauchy horizon) at the values of \(\omega\) and \(\gamma\) for which \(d=0,1,2\). _Right:_ Penrose diagrams of the corresponding spacetime in the parameter space (\(M,a,\omega,\gamma\)).
## VI Conclusion
Recently, Santos _et al._ [28] made a significant discovery in \(f(R,T)\) gravity. They investigated the equation-of-state parameter, denoted as \(w\), and identified the first spherical black holes within this modified theory. This modification of the Kiselev black hole in \(f(R,T)\) gravity can reproduce well-known solutions of the Einstein field equations as exceptional cases. They examined specific values of \(w\) associated with black holes surrounded by various fields, such as dust (\(w=0\)), radiation (\(w=1/3\)), quintessence (\(w=-2/3\)), cosmological constant (\(w=-1\)), and phantom (\(w=-4/3\)). In their study, the authors considered the model \(f(T)=\varkappa T^{n}\) and explored the conditions necessary to generate Kiselev black holes in the context of \(f(R,T)\) gravity. It was found that when \(n\) takes on the accepted value of 1, several particular values of the parameter \(w\) in \(f(R,T)\) gravity yield solutions that deviate from
Figure 11: _Left:_ Plot showing \(\Delta(r)\) vs \(r\) such that \(\Delta(r)=0\) admits two positive roots \(r_{1}\) (event horizon) and \(r_{2}\) (Cauchy horizon) at the values of \(\omega\) and \(\gamma\) for which \(0<d<2\) and \(d\neq 1\). _Right:_ Penrose diagrams of the corresponding spacetime in the parameter space \((M,a,\omega,\gamma)\).
Figure 10: _Left:_ Plot showing \(\Delta(r)\) vs \(r\) such that \(\Delta(r)=0\) admits two positive equal roots \(r_{1}=r_{2}\), corresponding to degenerate horizons at the values of \(\omega\) and \(\gamma\) for which \(d=2\). _Right:_ Penrose diagrams of the corresponding spacetime in the parameter space \((M,a,\omega,\gamma)\).
the Kiselev black hole in general relativity. However, it is challenging to test spherical black hole models through astrophysical observations alone, as black hole spin, i.e., rotating black holes commonly observed in nature, plays a crucial role in astrophysical processes. Testing modified gravity theories like \(f(R,T)\) without incorporating rotating black hole models typically hinders observational verification. The Kerr metric describes astrophysical black holes and remains the only stationary, vacuum, axisymmetric metric that satisfies the Einstein field equations while avoiding pathologies beyond the event horizon. The groundbreaking observations made by the Event Horizon Telescope (EHT) provided images of supermassive black holes, namely Sgr A* and M87*. These observations, documented in studies such as [71; 72; 73; 74], revealed that the size of the black hole shadow agrees with predictions based on the Kerr metric, with an accuracy of around 10%. This discovery offers an additional tool to investigate the nature of strong-field gravity and potentially constrain deviations from the Kerr metric, such as Kerr-like black holes that arise in theories like \(f(R,T)\) gravity or other alternative gravity theories. Thus, the observations from the EHT can test fundamental theories of gravity, including \(f(R,T)\), in the strong-field regime, where a rotating black hole plays a vital role. However, the scarcity of rotating black hole models in modified gravity significantly impedes progress in testing such theories through observational means. To address this limitation, we have tackled the problem by utilizing the revised Newman-Janis algorithm, a viable method for generating rotating solutions from a spherical Kiselev seed metric (13). The \(f(R,T)\) gravity changes the structure of the Kerr black hole, resulting in the presence of an additional hair term in the metric (15). This metric describes the FRKBH, which is asymptotically flat and encompasses various well-known black hole solutions, including the Kerr (\(K=0\)), Kerr-Newman (\(K=Q^{2},\ d=2\)), Reissner-Nordstrom (\(K=Q^{2},\ d=2,\ a=0\)), and Schwarzschild (\(K=0,\ a=0\)) black holes. Like the Kerr black hole, the FRKBH retains the Cauchy and event horizons and the stationary limit surface (SLS). However, the radii of these horizons and the SLS are affected by the parameter \(\gamma\), resulting in their decrease. This has exciting implications for the ergosphere and can lead to significant consequences in the astrophysical Penrose process. From the parameter space defined by \(a\) and \(\gamma\) for the FRKBH, we can observe that when \(a_{1}<a<a_{2}\), there is a critical value \(\gamma=\gamma_{E}\) which corresponds to extremal black holes characterized by degenerate horizons. When \(a<a_{1}\) (\(a>a_{2}\)), we encounter two distinct critical values \(\gamma=\gamma_{E1},\ \gamma_{E2}\) with \(\gamma_{E1}>\gamma_{E2}\) (or \(\gamma=\gamma_{E3},\ \gamma_{E4}\) with \(\gamma_{E3}>\gamma_{E4}\)) (cf. Figs. 1-4).
Despite the complexity of the FRKBH metric (15), we have analytically derived exact expressions for the conserved mass \(M_{\rm eff}\) and angular momentum \(J_{\rm eff}\) using the Komar prescription. These expressions hold at any radial distance. The \(f(R,T)\) gravity introduces notable modifications to these conserved quantities compared to those of the Kerr black hole. However, in the limit \(K=0\), the conserved mass \(M_{\rm eff}\) and angular momentum \(J_{\rm eff}\) converge to their values for the Kerr black hole. It is worth mentioning that the influence of the \(f(R,T)\) gravity diminishes at large radial distances from the horizon, as \(M_{\rm eff}\) and \(J_{\rm eff}\) approach the values of the Kerr black hole in the asymptotic region (\(r\rightarrow\infty\)).
In conclusion, the resulting FKRBH spacetime (15) possesses mathematical properties reminiscent of the Kerr metric and several other intriguing features, demonstrating a rich spacetime structure.
Numerous avenues exist for future research, particularly in analyzing the accretion process onto black holes. Notably, the structure of Kerr black holes is significantly influenced by \(f(R,T)\) gravity, which can have various astrophysical implications, such as the impact on wormholes, gravitational lensing properties, and black holes in AdS/CFT. Exploring the possibility of extending these findings to other forms of \(f(R,T)\) gravity presents an exciting avenue for future investigations. In line with the no-hair conjectures, the role of the FRKBH deserves consideration and can potentially provide valuable insights. Moreover, studying the shadow of FRKBHs in light of the results obtained from the observations made by the Event Horizon Telescope collaboration is essential.
## VII Acknowledgments
S.U.I and S.G.G. would like to thank SERB-DST for project No. CRG/2021/005771. S.D.M acknowledges that this work is based upon research supported by the South African Research Chair Initiative of the Department of Science and Technology and the National Research Foundation.
|
2304.07523 | Rings of functions whose closure of discontinuity set is in an ideal of
closed sets | Let $\mathcal{P}$ be an ideal of closed subsets of a topological space $X$.
Consider the ring, $C(X)_\mathcal{P}$ of real valued functions on $X$ whose
closure of discontinuity set is a member of $\mathcal{P}$. We investigate the
ring properties of $C(X)_\mathcal{P}$ for different choices of $\mathcal{P}$,
such as the $\aleph_0$-self injectivity and regularity of the ring, if and when
the ring is Artinian and/or Noetherian. The concept of $\mathcal{F}P$-space was
introduced by Z. Gharabaghi, M. Ghirati and A. Taherifar in 2018 in a paper
published in Houston Journal of Mathematics. In this paper, they established a
result stating that every $P$-space is a $\mathcal{F}P$-space. We furnish that
this theorem might fail if $X$ is not Tychonoff and we provide a suitable
counter example to prove our assertion. | Amrita Dey, Sudip Kumar Acharyya, Sagarmoy Bag, Dhananjoy Mandal | 2023-04-15T10:12:21Z | http://arxiv.org/abs/2304.07523v1 | # Rings of functions whose closure of discontinuity set is in an ideal of closed sets
###### Abstract.
Let \(\mathcal{P}\) be an ideal of closed subsets of a topological space \(X\). Consider the ring, \(C(X)_{\mathcal{P}}\) of real valued functions on \(X\) whose closure of discontinuity set is a member of \(\mathcal{P}\). We investigate the ring properties of \(C(X)_{\mathcal{P}}\) for different choices of \(\mathcal{P}\), such as the \(\aleph_{0}\)-self injectivity and regularity of the ring, if and when the ring is Artinian and/or Noetherian. The concept of \(\mathcal{FP}\)-space was introduced by Z. Gharabaghi, M. Ghirati and A. Taherifar in 2018 in a paper published in Houston Journal of Mathematics. In this paper, they established a result stating that every \(P\)-space is a \(\mathcal{FP}\)-space. We furnish that this theorem might fail if \(X\) is not Tychonoff and we provide a suitable counter example to prove our assertion.
Key words and phrases: \(\tau\mathcal{P}\)-space, \(\tau\mathcal{PU}\)-space, \(\mathcal{P}\)-completely separated, \(\tau\mathcal{P}\)-compact, \(\mathcal{P}P\)-space
## 1. Introduction
In Section 2, we define zero sets of the form \(Z_{\mathcal{P}}(f)=\{x\in X\colon f(x)=0\}\), for \(f\in C(X)_{\mathcal{P}}\). We define a cozero set of a function in \(C(X)_{\mathcal{P}}\) to be the complement of \(Z_{\mathcal{P}}(f)\) and denote it by \(coz(f)\). We denote the collection of all zero sets of functions in \(C(X)_{\mathcal{P}}\) by \(Z_{\mathcal{P}}[X]\) and the set of all cozero sets of functions in \(C(X)_{\mathcal{P}}\) by \(coz[X]\). Also, for a subset \(S\subseteq C(X)_{\mathcal{P}}\), we write \(Z_{\mathcal{P}}[S]=\{Z_{\mathcal{P}}(f)\colon f\in S\}\) and \(coz[S]=\{coz(f)\colon f\in S\}\). Here, we also introduce the notion of \(\mathcal{P}\)-completely separated subsets of \(X\) and achieve its characterisation via zero sets of functions in \(C(X)_{\mathcal{P}}\). Next, we define \(z_{\mathcal{P}}\)-filters on \(X\) and \(z_{\mathcal{P}}\)-ideals in \(C(X)_{\mathcal{P}}\), and examine the duality existing between them. As expected, it is realised that there exists a one-to-one correspondence between the set of all maximal ideals \(Max(C(X)_{\mathcal{P}})\) of \(C(X)_{\mathcal{P}}\) and the set \(\mathcal{U}(X)_{\mathcal{P}}\) of all maximal \(z_{\mathcal{P}}\)-filters on \(X\). We exploit this duality to show that \(Max(C(X)_{\mathcal{P}})\) equipped with the hull-kernel topology is homeomorphic to \(\mathcal{U}(X)_{\mathcal{P}}\) equipped with the Stone topology [Theorem 2.18].
In Section 3, we examine when \(C(X)_{\mathcal{P}}\) becomes closed under uniform limit. When \(C(X)_{\mathcal{P}}\) is closed under uniform limit, we say \(X\) is a \(\tau\mathcal{PU}\)-space. It is clear that when \(\mathcal{P}=\{\emptyset\}\), \(C(X)_{\mathcal{P}}=C(X)\), which is closed under uniform limit for any topological space \(X\). We also check that for any choice of \(\mathcal{P}\), if the set of all non-isolated points of \(X\) is a member of \(\mathcal{P}\), then \((X,\tau,\mathcal{P})\) is a \(\tau\mathcal{PU}\)-space. For some special choices of \(\mathcal{P}\), the converse of this result is seen to be valid [Theorems 3.3 and 3.6]. It is further noted that when \(C(X)_{\mathcal{P}}\) is isomorphic to \(C(Y)\), for some topological space \(Y\), then \(X\) is a \(\tau\mathcal{PU}\)-space. Using the above results, we give an alternative proof of Theorem 3.4 in [8]. At the end of this section, we establish a result analogous to Urysohn's extension theorem for \(C(X)\), stated in [9], for a \(\tau\mathcal{PU}\)-space.
In Section 4, we define a \(\tau\mathcal{P}\)-space \(X\) to be \(\tau\mathcal{P}\)-compact if every family of zero sets in \(X\) having the finite intersection property has a non-empty intersection. We obtain a characterisation of \(\tau\mathcal{P}\)-compact spaces using fixed \(z_{\mathcal{P}}\)-filters of \(X\) and fixed ideals of \(C(X)_{\mathcal{P}}\). Incidentally, we also define \(\tau\mathcal{P}\)-pseudocompact spaces as follows: a \(\tau\mathcal{P}\)-space \(X\) is \(\tau\mathcal{P}\)-pseudocompact if and only if \(C(X)_{\mathcal{P}}=C^{*}(X)_{\mathcal{P}}\). We go on to show that every \(\tau\mathcal{P}\)-compact space is \(\tau\mathcal{P}\)-pseudocompact. In the same section, we call a maximal ideal \(M\) of \(C(X)_{\mathcal{P}}\) real if \(C(X)_{\mathcal{P}}/M\) is isomorphic to \(\mathbb{R}\). We define a \(\tau\mathcal{P}\)-space to be \(\tau\mathcal{P}\)-real compact if every real maximal ideal of \(C(X)_{\mathcal{P}}\) is fixed. We establish that a \(\tau\mathcal{P}\)-space is \(\tau\mathcal{P}\)-compact if and only if it is both \(\tau\mathcal{P}\)-pseudocompact and \(\tau\mathcal{P}\)-real compact. For \(\mathcal{P}=\{\emptyset\}\), this reads as follows: a topological space \(X\) is compact if and only if it is both pseudocompact and real compact, as proved in [9]. We construct examples to ensure that a \(\tau\mathcal{P}\)-pseudocompact space may not be \(\tau\mathcal{P}\)-compact and a \(\tau\mathcal{P}\)-real compact space need not be \(\tau\mathcal{P}\)-compact.
In Section 5, we continue our study of \(C(X)_{\mathcal{P}}\) with the additional hypothesis that each singleton subset of \(X\) is a member of \(\mathcal{P}\). It follows that \(\chi_{\{x\}}\in C(X)_{\mathcal{P}}\), for all \(x\in X\). Under this restriction, we see that \(C(X)_{\mathcal{P}}\) is an almost regular ring and any \(f\in C(X)_{\mathcal{P}}\) is either a zero divisor or a unit. We further show that \(C(X)_{\mathcal{P}}=C(X)\) if and only if \(X\) is discrete if and only if \(C(X)_{\mathcal{P}}\) is a ring of quotients of \(C(X)\). Further, we are able to establish that a necessary and sufficient condition for a \(\tau\mathcal{P}\)-space to be a \(\tau\mathcal{P}\)-compact
space is that \(X\) is finite. We find necessary and sufficient conditions under which an ideal of \(C(X)_{\mathcal{P}}\) is a minimal ideal and establish that the socle of \(C(X)_{\mathcal{P}}\) consists of all functions that vanish everywhere except on a finite set, and is itself an essential ideal that is also free. We further note that \(\mathit{Soc}(C(X)_{\mathcal{P}})=C(X)_{\mathcal{P}}\iff X\) is finite. Exploiting these results, we establish that \(C(X)_{\mathcal{P}}\) is an Artinian (and Noetherian) ring if and only if \(X\) is finite. We complete this section by providing a set of conditions equivalent to \(C(X)_{\mathcal{P}}\) being an \(IN\)-ring, an \(SA\)-ring and a Baer ring. We have also provided counterexamples to show that these results may fail when the restriction "\(\mathcal{P}\) contains all singleton subsets of \(X\)" is lifted.
In Section 6, we examine the regularity of \(C(X)_{\mathcal{P}}\). Here, we define a \(\tau\mathcal{P}\)-space \((X,\tau,\mathcal{P})\) to be a \(\mathcal{P}P\)-space if \(C(X)_{\mathcal{P}}\) is regular. We show that a \(P\)-space is a \(\mathcal{P}P\)-space when \(X\) is Tychonoff. We further provide a counterexample to show that this might fail when \(X\) is not Tychonoff. This counterexample also shows that Theorem 6.1 in [8] fails when \(X\) is not Tychonoff. We conclude this section by giving a characterisation of a \(\mathcal{P}P\)-space using the members of \(\mathcal{P}\).
Finally, in the seventh section, we use the concept of \(\phi\)-algebra and an algebra of measurable functions to establish a condition, involving a \(\tau\mathcal{PU}\)-space, under which \(C(X)_{\mathcal{P}}\) is \(\aleph_{0}\)-self injective. We also provide an example showing that the condition that \(X\) be a \(\tau\mathcal{PU}\)-space is not superfluous.
## 2. Definitions and Preliminaries
**Notation 2.1**.: _Let \(\mathcal{P}^{\prime}\) be the ideal of all closed subsets of the set of isolated points of \(X\)._
**Theorem 2.2**.: \(C(X)_{\mathcal{P}}=C(X)\) _if and only if \(\mathcal{P}\subseteq\mathcal{P}^{\prime}\)._
Proof.: Let \(\mathcal{P}\nsubseteq\mathcal{P}^{\prime}\). Then there exist \(A\in\mathcal{P}\) and a point \(x_{0}\in A\) which is non-isolated in \(X\). Let \(f=\chi_{\{x_{0}\}}\). Then \(f\in C(X)_{\mathcal{P}}\). However, since \(x_{0}\) is a non-isolated point of \(X\), \(f\notin C(X)\). The converse is obvious.
We have stated in the introduction that for \(\mathcal{P}=\mathcal{P}_{nd}\), \(C(X)_{\mathcal{P}}=T^{\prime}(X)\), where \(T^{\prime}(X)\) is the ring of those real valued functions on \(X\) for which there exists a dense open subset \(D\) of \(X\) such that \(f|_{D}\in C(D)\)[4]. We give a proof supporting this statement.
**Theorem 2.3**.: \(C(X)_{\mathcal{P}_{nd}}=T^{\prime}(X)\)
Proof.: Let \(f\in C(X)_{\mathcal{P}_{nd}}\); then \(\overline{D_{f}}\) is nowhere dense. Therefore \(X\setminus\overline{D_{f}}\) is dense in \(X\). Also, \(\overline{D_{f}}\) is closed in \(X\implies X\setminus\overline{D_{f}}\) is open in \(X\). Finally, \(f\) is continuous on \(X\setminus D_{f}\supseteq X\setminus\overline{D_{f}}\). Thus \(f\in T^{\prime}(X)\). Conversely, let \(f\in T^{\prime}(X)\). Then there exists an open dense subset \(D\) of \(X\) such that \(f|_{D}\in C(D)\implies D_{f}\subseteq X\setminus D.\) So \(\overline{D_{f}}\subseteq\overline{X\setminus D}=X\setminus D\). Since the complement of an open dense set is a closed nowhere dense set, \(X\setminus D\in\mathcal{P}_{nd}\) and so \(\overline{D_{f}}\in\mathcal{P}_{nd}\). Therefore \(f\in C(X)_{\mathcal{P}_{nd}}\).
We define \(\mathcal{I}\) to be the family of all ideals of closed subsets of \(X\) and \(\mathcal{J}(X)\) to be the family of all subrings of \(\mathbb{R}^{X}\) containing \(C(X)\). Both of these families form a lattice with respect to the subset inclusion.
It can be easily observed that for two rings \(S_{1},S_{2}\in\mathcal{J}(X)\),
\(S_{1}\lor S_{2}=\{\sum_{i=1}^{m}f_{i}g_{i}\colon f_{i}\in S_{1},g_{i}\in S_{2},i= 1,2,...,m\}\) is the smallest subring of \(\mathbb{R}^{X}\) containing \(S_{1}\cup S_{2}\) and for two ideals of closed sets \(\mathcal{P}\) and \(\mathcal{Q}\) in \(\mathcal{I}\), \(\mathcal{P}\vee\mathcal{Q}=\{A\cup B\colon A\in\mathcal{P}\text{ and }B\in\mathcal{Q}\}\) is the smallest ideal of closed subsets of \(X\) containing \(\mathcal{P}\) and \(\mathcal{Q}\). It is obvious that for two rings \(S_{1},S_{2}\in\mathcal{J}(X)\), \(S_{1}\wedge S_{2}=S_{1}\cap S_{2}\) and for two ideals of closed sets \(\mathcal{P}\) and \(\mathcal{Q}\) in \(\mathcal{I}\), \(\mathcal{P}\wedge\mathcal{Q}=\mathcal{P}\cap\mathcal{Q}\). So \(C(X)_{\mathcal{P}\wedge\mathcal{Q}}=C(X)_{\mathcal{P}}\wedge C(X)_{\mathcal{Q}}\). However, \(C(X)_{\mathcal{P}\vee\mathcal{Q}}=C(X)_{\mathcal{P}}\lor C(X)_{\mathcal{Q}}\) may not hold for all \(T_{1}\)-spaces but is true for a topological space \(X\) if every open subspace of \(X\) is \(C\)-embedded.
**Theorem 2.4**.: _Let \(X\) be a topological space where every open subspace of \(X\) is \(C\)-embedded. Then for any \(\mathcal{P}\) and \(\mathcal{Q}\) in \(\mathcal{I}\), \(C(X)_{\mathcal{P}\vee\mathcal{Q}}=C(X)_{\mathcal{P}}\lor C(X)_{\mathcal{Q}}\)._
Proof.: Let \(\alpha=\sum_{i=1}^{m}f_{i}g_{i}\in C(X)_{\mathcal{P}}\lor C(X)_{\mathcal{Q}}\). Then
\(D_{\alpha}\subseteq\bigcup_{i=1}^{m}D_{f_{i}g_{i}}\subseteq \bigcup_{i=1}^{m}(D_{f_{i}}\cup D_{g_{i}})\) where \(\overline{D_{f_{i}}}\in\mathcal{P}\) and \(\overline{D_{g_{i}}}\in\mathcal{Q}\) for all \(i=1,2,...,m\). So \(\overline{D_{f_{i}}\cup D_{g_{i}}}\in\mathcal{P}\vee\mathcal{Q}\) for all \(i=1,2,...,m\). Since \(\mathcal{P}\vee\mathcal{Q}\) is an ideal of closed sets, \(\overline{D_{\alpha}}\in\mathcal{P}\vee\mathcal{Q}\) and hence \(\alpha\in C(X)_{\mathcal{P}\vee\mathcal{Q}}\). Conversely, let \(f\in C(X)_{\mathcal{P}\vee\mathcal{Q}}\). Then \(f|_{X\setminus\overline{D_{f}}}\) is continuous on the open subspace \(X\setminus\overline{D_{f}}\) of \(X\). By our hypothesis there exists \(\widehat{f}\in C(X)\) such that \(\widehat{f}|_{X\setminus\overline{D_{f}}}=f|_{X\setminus\overline{D_{f}}}\). Also \(\overline{D_{f}}\in\mathcal{P}\vee\mathcal{Q}\). So \(\overline{D_{f}}=A\cup B\) where \(A\in\mathcal{P}\) and \(B\in\mathcal{Q}\). We define \(g\colon X\longrightarrow\mathbb{R}\) by \(g(x)=\begin{cases}\widehat{f}(x)\text{ when }x\in X\setminus A\\ f(x)\text{ when }x\in A\setminus B\\ \frac{1}{2}f(x)\text{ when }x\in A\cap B\end{cases}\) and \(h\colon X\longrightarrow\mathbb{R}\) by \(h(x)=\begin{cases}0\text{ when }x\in X\setminus B\\ f(x)-\widehat{f}(x)\text{ when }x\in B\setminus A\\ \frac{1}{2}f(x)\text{ when }x\in A\cap B\end{cases}\). Then \(\overline{D_{g}}\subseteq A\) and \(\overline{D_{h}}\subseteq B\). Therefore \(g\in C(X)_{\mathcal{P}}\) and \(h\in C(X)_{\mathcal{Q}}\) and \(f=g+h\in C(X)_{\mathcal{P}}\lor C(X)_{\mathcal{Q}}\).
It is important to note that if \(X\) is an irreducible space, then every real valued continuous function on any open subspace of \(X\) is constant, so every open subspace of \(X\) is \(C\)-embedded; thus for an irreducible space \(X\), \(C(X)_{\mathcal{P}\vee\mathcal{Q}}=C(X)_{\mathcal{P}}\lor C(X)_{\mathcal{Q}}\).
We summarise all this in the following theorem.
**Theorem 2.5**.: _Let \(X\) be a topological space where every open subspace of \(X\) is \(C\)-embedded. Then \(\phi\colon\mathcal{I}\longrightarrow\mathcal{J}(X)\) defined by \(\phi(\mathcal{P})=C(X)_{\mathcal{P}}\) is a lattice homomorphism._
We recall that for \(f\in C(X)_{\mathcal{P}}\), \(Z_{\mathcal{P}}(f)=\{x\in X\colon f(x)=0\}\) is called the zero set of \(f\). Also, a subset \(A\) of \(X\) is called a zero set if \(A=Z_{\mathcal{P}}(f)\) for some \(f\in C(X)_{\mathcal{P}}\). Let \(Z_{\mathcal{P}}[X]\) be the set of all zero sets in \(X\).
It is easy to verify that:
1. For \(f\in C(X)_{\mathcal{P}}\), \(Z_{\mathcal{P}}(f)=Z_{\mathcal{P}}(|f|)=Z_{\mathcal{P}}(f\wedge\mathbf{1})=Z_{ \mathcal{P}}(f^{n})\), for all \(n\in\mathbb{N}\).
2. \(Z_{\mathcal{P}}(\mathbf{0})=X\) and \(Z_{\mathcal{P}}(\mathbf{1})=\emptyset\).
3. For \(f,g\in C(X)_{\mathcal{P}}\), \(Z_{\mathcal{P}}(fg)=Z_{\mathcal{P}}(f)\cup Z_{\mathcal{P}}(g)\) and \(Z_{\mathcal{P}}(|f|+|g|)=Z_{\mathcal{P}}(f^{2}+g^{2})=Z_{\mathcal{P}}(f)\cap Z _{\mathcal{P}}(g)\).
4. For \(f\in C(X)_{\mathcal{P}}\), \(r,s\in\mathbb{R}\), sets of the form \(\{x\in X\colon f(x)\geq r\}\) and \(\{x\in X\colon f(x)\leq s\}\) are zero sets, as we have: 1. \(\{x\in X\colon f(x)\geq r\}=Z_{\mathcal{P}}((f-\boldsymbol{r})\wedge 0)\), and 2. \(\{x\in X\colon f(x)\leq s\}=Z_{\mathcal{P}}((f-\boldsymbol{s})\lor 0)\).
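Both identities in item (4) are one-line checks (spelled out for convenience): for any \(x\in X\),

\[\left((f-\boldsymbol{r})\wedge 0\right)(x)=0\iff f(x)\geq r,\qquad\left((f-\boldsymbol{s})\lor 0\right)(x)=0\iff f(x)\leq s,\]

and both functions belong to \(C(X)_{\mathcal{P}}\), since their discontinuity sets are contained in \(D_{f}\).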
From the above observations, it follows that \(Z_{\mathcal{P}}[X]\) is closed under finite union and intersection. We know that the zero sets in \(Z[X]\) (zero sets of real valued continuous functions on \(X\)) are \(G_{\delta}\)-sets. The following result generalises this fact for functions in \(C(X)_{\mathcal{P}}\).
**Theorem 2.6**.: _Every zero set in \(Z_{\mathcal{P}}[X]\) can be expressed as a disjoint union of a \(G_{\delta}\)-set and a set \(A\) such that \(\overline{A}\in\mathcal{P}\)._
Proof.: Let \(f\in C(X)_{\mathcal{P}}\). Then \(Z_{\mathcal{P}}(f)=G\cup A\), where \(G=Z_{\mathcal{P}}(f)\cap(X\setminus D_{f})\) and \(A=Z_{\mathcal{P}}(f)\cap D_{f}\). Then \(G\) is a zero set of the continuous map \(f|_{X\setminus D_{f}}\) and is therefore a \(G_{\delta}\)-set in \(X\setminus D_{f}\). Since \(D_{f}\) is an \(F_{\sigma}\)-set in \(X\), \(X\setminus D_{f}\) is a \(G_{\delta}\)-set in \(X\). Thus \(G\) is a \(G_{\delta}\)-set in \(X\). On the other hand, \(\overline{A}\subseteq\overline{D_{f}}\in\mathcal{P}\implies\overline{A}\in \mathcal{P}\).
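For instance, take \(X=\mathbb{R}\) with the usual topology, \(\mathcal{P}=\{\emptyset,\{0\}\}\) and \(f=\chi_{(0,\infty)}\), so that \(\overline{D_{f}}=\{0\}\in\mathcal{P}\) and \(f\in C(X)_{\mathcal{P}}\). Then
\[Z_{\mathcal{P}}(f)=(-\infty,0]=(-\infty,0)\cup\{0\},\]
where \((-\infty,0)\) is open (hence a \(G_{\delta}\)-set) and \(\overline{\{0\}}=\{0\}\in\mathcal{P}\), exactly as in the theorem.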
We know that all zero sets in \(Z(X)\) are closed sets in \(X\). However, all zero sets in \(Z_{\mathcal{P}}[X]\) may not be closed. In fact, we establish a necessary and sufficient condition for this to happen.
**Theorem 2.7**.: _For a \(\tau\mathcal{P}\)-space \(X\), all zero sets in \(Z_{\mathcal{P}}[X]\) are closed if and only if \(C(X)_{\mathcal{P}}=C(X)\)._
Proof.: Let \(C(X)_{\mathcal{P}}\neq C(X)\). Then, tracing the steps of Theorem 2.2, there exists a non-isolated point \(x_{0}\in X\) such that \(\chi_{\{x_{0}\}}\in C(X)_{\mathcal{P}}\setminus C(X)\) and \(Z_{\mathcal{P}}(\chi_{\{x_{0}\}})=X\setminus\{x_{0}\}\), which is not closed in \(X\). The converse is clear.
The following observations are immediate.
**Proposition 2.8**.: _Let \(X\) be a \(\tau\mathcal{P}\)-space. Then the following statements are true._
1. \(C(X)_{\mathcal{P}}\) _is reduced. (A commutative ring_ \(R\) _is called reduced if it does not contain any non-zero nilpotent elements.)_
2. \(f\) _is a unit in_ \(C(X)_{\mathcal{P}}\) _if and only if_ \(Z_{\mathcal{P}}(f)=\emptyset\)_._
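For instance, the non-trivial direction of (2) is immediate: if \(Z_{\mathcal{P}}(f)=\emptyset\), then \(g(x)=\frac{1}{f(x)}\) is defined for all \(x\in X\), and
\[\overline{D_{g}}\subseteq\overline{D_{f}}\in\mathcal{P}\quad\text{and}\quad fg=\mathbf{1},\]
so \(g\in C(X)_{\mathcal{P}}\) and \(f\) is a unit.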
**Definition 2.9**.: Two subsets \(A\) and \(B\) of \(X\) are said to be \(\mathcal{P}\)-completely separated if there exists \(f\in C(X)_{\mathcal{P}}\) such that \(f(X)\subseteq[0,1]\), \(f(A)=\{0\}\) and \(f(B)=\{1\}\).
Thus \(\mathcal{P}\)-completely separated sets reduce to completely separated sets in [9], when \(\mathcal{P}=\{\emptyset\}\).
Let \(\tau_{u}\) be the usual topology on the set \(\mathbb{R}\) of all real numbers and \(\mathcal{P}=\{\emptyset,\{0\}\}\). Define \(f\colon\mathbb{R}\to\mathbb{R}\) by \(f(x)=\begin{cases}0&x\leq 0\\ 1&x>0\end{cases}\). Then \(f\in C(\mathbb{R})_{\mathcal{P}}\). Thus \((-\infty,0]\) and \((0,\infty)\) are \(\mathcal{P}\)-completely separated. However, \((-\infty,0]\) and \((0,\infty)\) are not completely separated, since \(0\) lies in the closure of \((0,\infty)\) and so no continuous function can take the value \(0\) on \((-\infty,0]\) and the value \(1\) on \((0,\infty)\).
**Proposition 2.10**.: _Two disjoint subsets \(A\) and \(B\) of a \(\tau\mathcal{P}\)-space \(X\) are \(\mathcal{P}\)-completely separated in \(X\) if and only if they are contained in disjoint zero sets in \(X\)._
Proof.: The forward implication can be proved by closely following the proof of Theorem 1.2 in [9]. To prove the converse, let \(A\) and \(B\) be two disjoint subsets of a \(\tau\mathcal{P}\)-space \(X\) that are contained in disjoint zero sets \(Z_{\mathcal{P}}(f)\) and \(Z_{\mathcal{P}}(g)\) respectively. Define
\[h(x)=\frac{|f|(x)}{(|f|+|g|)(x)},\quad\forall x\in X.\]
Since \(Z_{\mathcal{P}}(f)\cap Z_{\mathcal{P}}(g)=\emptyset\), \(|f|+|g|\) vanishes nowhere and \(h\) is well defined. Then \(\overline{D_{h}}\subseteq\overline{D_{f}}\cup\overline{D_{g}}\implies\overline{D_{h}}\in\mathcal{P}\) and so \(h\in C(X)_{\mathcal{P}}\). Also, \(h(A)=\{0\}\) and \(h(B)=\{1\}\). Hence \(A\) and \(B\) are \(\mathcal{P}\)-completely separated in \(X\).
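For a concrete instance of this construction, take \(X=\mathbb{R}\) with the usual topology, \(\mathcal{P}=\{\emptyset\}\), \(f(x)=x\) and \(g(x)=x-1\). The disjoint zero sets \(Z_{\mathcal{P}}(f)=\{0\}\) and \(Z_{\mathcal{P}}(g)=\{1\}\) are then separated by
\[h(x)=\frac{|x|}{|x|+|x-1|},\qquad h(0)=0,\quad h(1)=1,\]
which is continuous since \(|x|+|x-1|\) vanishes nowhere.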
**Theorem 2.11**.: _Two disjoint subsets \(A\) and \(B\) of a \(\tau\mathcal{P}\)-space \(X\) are \(\mathcal{P}\)-completely separated in \(X\) if and only if there exists a \(P\in\mathcal{P}\) such that \(A\setminus P\) and \(B\setminus P\) are completely separated in \(X\setminus P\)._
Proof.: The proof of this theorem is analogous to that of Proposition 2.3 of [8].
We now introduce the concept of \(z_{\mathcal{P}}\)-filters on \(X\) and \(z_{\mathcal{P}}\)-ideals of \(C(X)_{\mathcal{P}}\).
**Definition 2.12**.:
1. A non-empty subcollection \(\mathcal{F}\) of \(Z_{\mathcal{P}}[X]\) is called a \(z_{\mathcal{P}}\)-filter on \(X\) if \(\mathcal{F}\) satisfies the following conditions: 1. \(\emptyset\notin\mathcal{F}\), 2. \(\mathcal{F}\) is closed under finite intersections, and 3. If \(Z_{1}\in\mathcal{F}\) and \(Z_{2}\in Z_{\mathcal{P}}[X]\) with \(Z_{1}\subseteq Z_{2}\), then \(Z_{2}\in\mathcal{F}\).
2. A \(z_{\mathcal{P}}\)-filter on \(X\) is said to be a \(z_{\mathcal{P}}\)-ultrafilter on \(X\) if it is not properly contained in any other \(z_{\mathcal{P}}\)-filter on \(X\).
3. An ideal \(I\) of \(C(X)_{\mathcal{P}}\) is called a \(z_{\mathcal{P}}\)-ideal if \(Z_{\mathcal{P}}^{-1}Z_{\mathcal{P}}[I]=I\).
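For example, for each \(p\in X\), the ideal
\[M_{p}=\{f\in C(X)_{\mathcal{P}}\colon f(p)=0\},\]
being the kernel of evaluation at \(p\), is a maximal ideal, and it is a \(z_{\mathcal{P}}\)-ideal: if \(Z_{\mathcal{P}}(g)=Z_{\mathcal{P}}(f)\) for some \(f\in M_{p}\), then \(p\in Z_{\mathcal{P}}(g)\), and so \(g\in M_{p}\).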
Using the same arguments as used in Theorems 2.3, 2.5, 2.9 and 2.11 of [9], we can prove the following theorems:
**Theorem 2.13**.:
1. _For any ideal_ \(I\) _of_ \(C(X)_{\mathcal{P}}\)_,_ \(Z_{\mathcal{P}}[I]\) _is a_ \(z_{\mathcal{P}}\)_-filter._
2. _For a_ \(z_{\mathcal{P}}\)_-filter_ \(\mathcal{F}\)_,_ \(Z_{\mathcal{P}}^{-1}(\mathcal{F})=\{f\in C(X)_{\mathcal{P}}\colon Z_{\mathcal{P }}(f)\in\mathcal{F}\}\) _is an ideal of_ \(C(X)_{\mathcal{P}}\)_._
3. _If_ \(M\) _is a maximal ideal in_ \(C(X)_{\mathcal{P}}\)_, then_ \(Z_{\mathcal{P}}[M]\) _is a_ \(z_{\mathcal{P}}\)_-ultrafilter on_ \(X\)_._
4. _For a_ \(z_{\mathcal{P}}\)_-ultrafilter_ \(\mathcal{U}\) _on_ \(X\)_,_ \(Z_{\mathcal{P}}^{-1}(\mathcal{U})\) _is a maximal ideal of_ \(C(X)_{\mathcal{P}}\)_._
**Theorem 2.14**.: _For any \(z_{\mathcal{P}}\)-ideal \(I\) of \(C(X)_{\mathcal{P}}\), the following are equivalent:_
1. \(I\) _is prime._
2. \(I\) _contains a prime ideal._
3. _For all_ \(g\)_,_ \(h\in C(X)_{\mathcal{P}}\)_, if_ \(gh=\mathbf{0}\)_, then_ \(g\in I\) _or_ \(h\in I\)_._
4. _For every_ \(f\in C(X)_{\mathcal{P}}\)_, there is a zero-set in_ \(Z_{\mathcal{P}}[I]\) _on which_ \(f\) _does not change its sign._
**Theorem 2.15**.: _Every prime ideal in \(C(X)_{\mathcal{P}}\) is contained in a unique maximal ideal._
**Corollary 2.16**.: \(C(X)_{\mathcal{P}}\) _is a \(pm\)-ring (A commutative ring \(R\) with unity is called a \(pm\)-ring if every prime ideal of \(R\) is contained in a unique maximal ideal of \(R\))._
Now we are interested in studying the maximal ideal space of \(C(X)_{\mathcal{P}}\). Let \(Max(C(X)_{\mathcal{P}})\) be the collection of all maximal ideals of \(C(X)_{\mathcal{P}}\). For \(f\in C(X)_{\mathcal{P}}\), set \(\mathcal{M}_{f}=\{M\in Max(C(X)_{\mathcal{P}})\colon f\in M\}\). It is easy to see that \(\mathcal{B}=\{\mathcal{M}_{f}\colon f\in C(X)_{\mathcal{P}}\}\) is a base for closed sets for some topology on \(Max(C(X)_{\mathcal{P}})\), which is known as the hull-kernel topology.
Let \(\beta_{\mathcal{P}}X\) be the index set for all \(z_{\mathcal{P}}\)-ultrafilters on \(X\) with the condition that for \(p\in X\), \(A^{p}=A_{p}=\{Z\in Z_{\mathcal{P}}[X]\colon p\in Z\}\). For \(Z\in Z_{\mathcal{P}}[X]\), we set \(\overline{Z}=\{p\in\beta_{\mathcal{P}}X\colon Z\in A^{p}\}\). Then \(\mathcal{B}^{\prime}=\{\overline{Z}\colon Z\in Z_{\mathcal{P}}[X]\}\) forms a base for closed sets for some topology on \(\beta_{\mathcal{P}}X\).
The following observations for \(Z\in Z_{\mathcal{P}}[X]\) are immediate:
**Theorem 2.17**.:
1. \(\overline{Z}\cap X=Z\)_._
2. \(\overline{Z}=cl_{\beta_{\mathcal{P}}X}Z\).
**Theorem 2.18**.: \(Max(C(X)_{\mathcal{P}})\) _is homeomorphic to \(\beta_{\mathcal{P}}X\)._
Proof.: Define \(\phi\colon\beta_{\mathcal{P}}X\longrightarrow Max(C(X)_{\mathcal{P}})\) by \(\phi(p)=Z_{\mathcal{P}}^{-1}[A^{p}]=M^{p}(\text{say})\). Then \(M^{p}\) is a maximal ideal of \(C(X)_{\mathcal{P}}\). Also, \(\phi\) is a bijective map, by Theorem 2.13. Let \(Z=Z_{\mathcal{P}}(f)\in Z_{\mathcal{P}}[X]\). Then \(\phi(\overline{Z})=\mathcal{M}_{f}\in\mathcal{B}\) and \(\phi^{-1}(\mathcal{M}_{f})=\overline{Z_{\mathcal{P}}(f)}\in\mathcal{B}^{\prime}\). Therefore \(\phi\) exchanges basic closed sets of \(Max(C(X)_{\mathcal{P}})\) and \(\beta_{\mathcal{P}}X\). Hence \(\phi\) is a homeomorphism.
Since \(C(X)_{\mathcal{P}}\) contains unity, \(Max(C(X)_{\mathcal{P}})\) is compact and hence \(\beta_{\mathcal{P}}X\) is compact. Since \(C(X)_{\mathcal{P}}\) is a \(pm\)-ring (by Corollary 2.16), by Theorem 1.2 of [14], \(\beta_{\mathcal{P}}X\) is Hausdorff.
It can be easily seen that when \(\mathcal{P}=\{\emptyset\}\), we have \(\beta_{\mathcal{P}}X=\beta X\).
We further note that if \(X=(0,1)\cup\{2\}\) is equipped with the subspace topology inherited from the usual topology of \(\mathbb{R}\) and \(\mathcal{P}\) is the ideal of all closed subsets of \((0,1)\), then \(C(X)_{\mathcal{P}}=\mathbb{R}^{X}=C(X_{d})\), where \(X_{d}\) denotes \(X\) equipped with the discrete topology. It is clear that \(\beta_{\mathcal{P}}X\) has uncountably many isolated points, but \(\beta X\) has only one isolated point. Thus \(\beta_{\mathcal{P}}X\) is not homeomorphic to \(\beta X\) in general.
## 3. When is \(C(X)_{\mathcal{P}}\) closed under uniform limit?
**Definition 3.1**.: A sequence of functions \(\{f_{n}\}\) in a subring \(S\) of \(\mathbb{R}^{X}\) is said to converge uniformly to a function \(f\) on \(X\) if for a given \(\epsilon>0\), there exists \(N\in\mathbb{N}\) such that \(|f_{n}(x)-f(x)|<\epsilon\) for all \(n\geq N\) and for all \(x\in X\).
A subring \(S\) of \(\mathbb{R}^{X}\) is said to be closed under uniform limit if whenever \(\{f_{n}\}\subseteq S\) converges uniformly to a function \(f\in\mathbb{R}^{X}\), \(f\in S\).
A \(\tau\mathcal{P}\)-space \((X,\tau,\mathcal{P})\) is said to be a \(\tau\mathcal{P}\mathcal{U}\)-space if \(C(X)_{\mathcal{P}}\) is closed under uniform limit.
It can be easily observed here that if \(\mathcal{P}=\{\emptyset\}\), then \(X\) is a \(\tau\mathcal{P}\mathcal{U}\)-space. Another trivial example of a ring that is closed under uniform limit is \(\mathbb{R}^{X}\). So if \(C(X)_{\mathcal{P}}=\mathbb{R}^{X}\), then \(X\) is a \(\tau\mathcal{P}\mathcal{U}\)-space. Also, every function in \(\mathbb{R}^{X}\) is continuous at all isolated points of \(X\). In light of these observations, we have the following theorem.
**Theorem 3.2**.: _Let \(\mathcal{P}\) be an ideal of closed subsets of \(X\) such that the set of all non-isolated points in \(X\) is a member of \(\mathcal{P}\). Then \(C(X)_{\mathcal{P}}=\mathbb{R}^{X}\), and hence \(C(X)_{\mathcal{P}}\) is closed under uniform limit, that is \(X\) is a \(\tau\mathcal{PU}\)-space._
The converse of the above theorem holds when \(\mathcal{P}=\mathcal{P}_{f}\), as seen in Theorem 2.9 of [13].
The converse of the above theorem also holds for a metrizable space \(X\) and for \(\mathcal{P}=\mathcal{K}\).
**Theorem 3.3**.: _Let \(X\) be a metrizable space and \(\mathcal{P}=\mathcal{K}\). If \(X\) is a \(\tau\mathcal{PU}\)-space, then the set of all non-isolated points in \(X\) is a member of \(\mathcal{K}\)._
Proof.: Let \(T\) be the set of all non-isolated points of \(X\). Assume that \(T\) is non-compact. Since \(X\) is metrizable, \(T\) is then not sequentially compact. So there exists a sequence \(\{a_{n}\}\) in \(T\) which has no convergent subsequence. Set \(A=\{a_{n}\colon n\in\mathbb{N}\}\). Then \(A\) is a closed non-compact subset of \(X\).
For each \(m\in\mathbb{N}\), define \(f_{m}\) on \(X\) as follows :
\[f_{m}(x)=\begin{cases}\frac{1}{n}&x=a_{n},\ n<m\\ 0&\text{otherwise.}\end{cases}\]
Then, \(f_{m}\in C(X)_{K}\), for each \(m\in\mathbb{N}\) and \(\{f_{m}\}\) is uniformly convergent to a function \(f:X\longrightarrow\mathbb{R}\) where
\[f(x)=\begin{cases}\frac{1}{n}&x=a_{n}\\ 0&\text{otherwise.}\end{cases}\]
Clearly, \(\overline{D_{f}}=\overline{A}=A\notin\mathcal{K}\). Thus \(f\notin C(X)_{K}\). Hence \(C(X)_{K}\) is not closed under uniform limit. This completes the proof.
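For instance, with \(X=\mathbb{R}\) (usual topology) and \(\mathcal{P}=\mathcal{K}\), one may take \(a_{n}=n\); the uniform limit obtained in the proof is then
\[f(x)=\begin{cases}\frac{1}{n}&x=n\in\mathbb{N}\\ 0&\text{otherwise,}\end{cases}\qquad\overline{D_{f}}=\mathbb{N}\notin\mathcal{K},\]
so \(C(\mathbb{R})_{K}\) is not closed under uniform limit.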
It is well known that \(C(Y)\) is closed under uniform limit for any topological space \(Y\). The next natural question is that if \(C(X)_{\mathcal{P}}\) is isomorphic to \(C(Y)\), then can we conclude that \(C(X)_{\mathcal{P}}\) is also closed under uniform limit?
**Theorem 3.4**.: _Let \(\mathcal{P}\) be an ideal of closed subsets of a space \(X\). If \(C(X)_{\mathcal{P}}\) is isomorphic to \(C(Y)\) for some topological space \(Y\), then \(X\) is a \(\tau\mathcal{PU}\)-space._
Proof.: Let \(\phi\colon C(X)_{\mathcal{P}}\longrightarrow C(Y)\) be an isomorphism. First, we show that \(\phi\) is an order preserving mapping. For that, let \(g\in C(X)_{\mathcal{P}}\) be such that \(g\geq 0\). Then \(g=l^{2}\) for some \(l\in C(X)_{\mathcal{P}}\). Thus \(\phi(g)=\phi(l^{2})=(\phi(l))^{2}\geq 0\). So, \(\phi\) is order preserving. For \(f\in C(X)_{\mathcal{P}}\), \(\phi(|f|)\geq 0\). Also, \((\phi(|f|))^{2}=\phi(|f|^{2})=\phi(f^{2})=(\phi(f))^{2}\), which implies that \(\phi(|f|)=|\phi(f)|\) for all \(f\in C(X)_{\mathcal{P}}\) (1). Since \(\phi\) is an isomorphism, for any rational number \(r\), \(\phi(\boldsymbol{r})=\boldsymbol{r}\) (2). Using the same arguments, we can show that (1) and (2) hold for \(\phi^{-1}\) as well.
Let \(\{f_{n}\}\) be a sequence in \(C(X)_{\mathcal{P}}\) converging uniformly to a function \(f\in\mathbb{R}^{X}\). We now show that \(f\in C(X)_{\mathcal{P}}\).
Let \(\epsilon>0\) be an arbitrary rational. Then there exists \(k\in\mathbb{N}\) such that \(|f_{n}-f_{m}|<\boldsymbol{\epsilon}\) for all \(n,m\geq k\). Since \(\phi\) is order preserving, and using (1) and (2), we have \(|\phi(f_{n})-\phi(f_{m})|<\phi(\boldsymbol{\epsilon})=\boldsymbol{\epsilon}\) for all \(n,m\geq k\). So \(\{\phi(f_{n})\}\) is uniformly Cauchy, and since \(C(Y)\) is closed under uniform limit, there exists \(h\in C(Y)\) such that \(\{\phi(f_{n})\}\) converges uniformly to \(h\). By hypothesis, \(\phi\) is onto. Therefore there exists \(g\in C(X)_{\mathcal{P}}\) such that \(\phi(g)=h\).
Now, for a given rational \(\epsilon>0\), as \(\{\phi(f_{n})\}\) converges uniformly to \(h\in C(Y)\), there exists \(k\in\mathbb{N}\) such that
\[|\phi(f_{n})-h|<\boldsymbol{\epsilon}\quad\forall n\geq k\] \[\implies |\phi(f_{n})-\phi(g)|<\boldsymbol{\epsilon}\quad\forall n\geq k\] \[\implies |f_{n}-g|=|\phi^{-1}(\phi(f_{n}))-\phi^{-1}\phi(g)|<\phi^{-1}( \boldsymbol{\epsilon})=\boldsymbol{\epsilon}\quad\forall n\geq k.\]
Thus \(\{f_{n}\}\) converges uniformly to the function \(g\). But \(\{f_{n}\}\) converges uniformly to \(f\). Thus \(f=g\in C(X)_{\mathcal{P}}\). Hence \(C(X)_{\mathcal{P}}\) is closed under uniform limit.
**Corollary 3.5**.: _In a metric space \(X\), the following statements are equivalent:_
1. \(C(X)_{K}\) _is closed under uniform limit._
2. _The set of all non-isolated points in_ \(X\) _is compact._
3. \(C(X)_{K}\) _is isomorphic to_ \(C(Y)\) _for some topological space_ \(Y\)_._
Proof.: \((1)\Longrightarrow(2)\) follows from Theorem 3.3.
From Theorem 3.2, we get that \((2)\) implies \(C(X)_{K}=\mathbb{R}^{X}=C(X_{d})\), where \(X_{d}\) denotes the space \(X\) with the discrete topology. This proves \((2)\Longrightarrow(3)\).
\((3)\Longrightarrow(1)\) follows from Theorem 3.4 for \(\mathcal{P}=\mathcal{K}\).
If we consider the ideal \(\mathcal{P}=\mathcal{P}_{f}\) on a topological space \(X\), then we have the following theorem where we provide an alternative proof for Theorem 3.4 of [8].
**Theorem 3.6**.: _For a topological space \(X\), the following statements are equivalent:_
1. \(C(X)_{F}\) _is closed under uniform limit._
2. \(X\) _has finitely many non-isolated points._
3. \(C(X)_{F}\) _is isomorphic to_ \(C(Y)\)_, for some topological space_ \(Y\)_._
Proof.: \((1)\Longleftrightarrow(2)\) follows from Theorem 2.9 in [13].
From Theorem 3.2, we get that \((2)\) implies \(C(X)_{F}=\mathbb{R}^{X}=C(X_{d})\), where \(X_{d}\) denotes the space \(X\) with the discrete topology. This proves \((2)\Longrightarrow(3)\).
\((3)\Longrightarrow(1)\) follows from Theorem 3.4 for \(\mathcal{P}=\mathcal{P}_{f}\).
The fact that \(C(X)\) is closed under uniform limit is used extensively to prove Urysohn's Extension Theorem (Theorem 1.17 in [9]). Our aim is to achieve an analogue of that result. We need the following definitions to do this.
**Definition 3.7**.: A subspace \(S\) of \(X\) is said to be \(C_{\mathcal{P}}\)-embedded if every \(f\in C(S)_{\mathcal{P}_{S}}\) can be extended to a function in \(C(X)_{\mathcal{P}}\).
A subspace \(S\) of \(X\) is said to be \(C^{*}_{\mathcal{P}}\)-embedded if every \(f\in C^{*}(S)_{\mathcal{P}_{S}}\) can be extended to a function in \(C^{*}(X)_{\mathcal{P}}\).
The following theorem is a generalisation of Urysohn's Extension Theorem and can be proved by closely following the proof of Theorem 1.17 in [9].
**Theorem 3.8**.: _Let \(X\) be a \(\tau\mathcal{PU}\)-space. Then a subspace \(S\) of \(X\) is \(C_{\mathcal{P}}^{*}\)-embedded in \(X\) if and only if any two \(\mathcal{P}_{S}\)-completely separated sets in \(S\) are \(\mathcal{P}\)-completely separated sets in \(X\)._
Further, as is seen in the case of \(C(X)\), a \(C_{\mathcal{P}}^{*}\)-embedded subspace of \(X\) may not be \(C_{\mathcal{P}}\)-embedded.
**Theorem 3.9**.: _A \(C_{\mathcal{P}}^{*}\)-embedded subspace of \(X\) is \(C_{\mathcal{P}}\)-embedded if and only if it is \(\mathcal{P}\)-completely separated from every zero set disjoint from it._
This can be proved by closely following the proof of Theorem 1.18 in [9].
## 4. \(\tau\mathcal{P}\)-compact spaces
In this section, we define \(\tau\mathcal{P}\)-compact, \(\tau\mathcal{P}\)-pseudocompact and \(\tau\mathcal{P}\)-real compact spaces and discuss their characterisations and properties.
**Definitions 4.1**.:
1. \(A\) \(\tau\mathcal{P}\)_-space_ \((X,\tau,\mathcal{P})\) _is said to be_ \(\tau\mathcal{P}\)_-compact if for every family of zero sets_ \(\mathcal{F}\subseteq Z_{\mathcal{P}}[X]\) _having the finite intersection property,_ \(\bigcap\mathcal{F}\neq\emptyset\)_._
2. \(A\) \(z_{\mathcal{P}}\)_-filter_ \(\mathcal{F}\) _is said to be fixed if_ \(\bigcap\mathcal{F}\neq\emptyset\)_, otherwise it is said to be free. An ideal_ \(I\) _of_ \(C(X)_{\mathcal{P}}\) _is said to be fixed if_ \(\bigcap Z_{\mathcal{P}}[I]\neq\emptyset\)_, otherwise it is said to be free._
3. \(A\) \(\tau\mathcal{P}\)_-space_ \(X\) _is said to be_ \(\tau\mathcal{P}\)_-pseudocompact if_ \(C(X)_{\mathcal{P}}=C^{*}(X)_{\mathcal{P}}\)_._
The following observation is immediate.
**Observation 4.2**.: _Let \((X,\tau,\mathcal{P})\) be a \(\tau\mathcal{P}\)-space such that \((X,\tau)\) is a Tychonoff space. Then \((X,\tau)\) is compact if \((X,\tau,\mathcal{P})\) is \(\tau\mathcal{P}\)-compact._
However, if \(X=\{0\}\cup\{\frac{1}{n}\colon n\in\mathbb{N}\}\) is endowed with the subspace topology induced from the usual topology on \(\mathbb{R}\) and \(\mathcal{P}=\{\emptyset,\{0\}\}\), then \(X\) is compact but not \(\tau\mathcal{P}\)-compact. This shows that the converse of the above observation may fail.
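Indeed, for each \(n\in\mathbb{N}\), set \(Z_{n}=\{\frac{1}{k}\colon k\geq n\}\); then \(\chi_{X\setminus Z_{n}}\in C(X)_{\mathcal{P}}\), since its only point of discontinuity is \(0\) and \(\{0\}\in\mathcal{P}\), and
\[Z_{\mathcal{P}}(\chi_{X\setminus Z_{n}})=Z_{n},\qquad\bigcap_{n\in\mathbb{N}}Z_{n}=\emptyset,\]
so \(\{Z_{n}\colon n\in\mathbb{N}\}\) is a family of zero sets in \(Z_{\mathcal{P}}[X]\) with the finite intersection property and empty intersection.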
Pertaining to the above definitions, we have the following result that has a proof similar to that of Theorem 4.11 in [9].
**Theorem 4.3**.: _The following statements are equivalent:_
1. \(X\) _is_ \(\tau\mathcal{P}\)_-compact._
2. _Every_ \(z_{\mathcal{P}}\)_-filter is fixed._
3. _Every_ \(z_{\mathcal{P}}\)_-ultrafilter is fixed._
4. _Every ideal in_ \(C(X)_{\mathcal{P}}\) _is fixed._
5. _Every maximal ideal in_ \(C(X)_{\mathcal{P}}\) _is fixed._
**Theorem 4.4**.: _If \((X,\tau,\mathcal{P})\) is \(\tau\mathcal{P}\)-compact, then it is \(\tau\mathcal{P}\)-pseudocompact._
Proof.: Let \((X,\tau,\mathcal{P})\) be \(\tau\mathcal{P}\)-compact. Suppose, if possible, that there exists an unbounded function \(f\in C(X)_{\mathcal{P}}\), and consider \(Z_{n}=f^{-1}((-\infty,-n]\cup[n,\infty))\) for each \(n\in\mathbb{N}\). Then \(\mathcal{B}=\{Z_{n}\colon n\in\mathbb{N}\}\) is a subcollection of \(Z_{\mathcal{P}}[X]\), and each \(Z_{n}\) is non-empty as \(f\) is unbounded. Since the \(Z_{n}\) form a decreasing chain of non-empty zero sets, there exists a \(z_{\mathcal{P}}\)-filter \(\mathcal{F}\) on \(X\) such that \(\mathcal{B}\subseteq\mathcal{F}\). By Theorem 4.3, \(\emptyset\neq\bigcap\mathcal{F}\subseteq\bigcap\mathcal{B}\). Let \(x\in\bigcap\mathcal{B}\). Then \(|f(x)|\geq n\) for all \(n\in\mathbb{N}\), which is not possible. Thus \(C^{*}(X)_{\mathcal{P}}=C(X)_{\mathcal{P}}\). Hence \(X\) is \(\tau\mathcal{P}\)-pseudocompact.
It is important to note that the converse of the above result is false in general, as is shown in the following example.
**Counter Example 4.5**.: _Let us consider \(X=\mathbb{R}\) equipped with the co-countable topology \(\tau\) and take \(\mathcal{P}=\mathcal{P}_{f}\), the collection of all finite subsets of \(X\). For any \(f\in C(X)_{\mathcal{P}}\), \(f\) is continuous on \(X\setminus D_{f}\), which is open in \(X\), as \(D_{f}\) is finite. Since any two non-empty open subsets of \(X\setminus D_{f}\) intersect, \(f\) is constant on \(X\setminus D_{f}\). Further, \(D_{f}\) is finite. Thus \(f\in C^{*}(X)_{\mathcal{P}}\). Therefore \((X,\tau,\mathcal{P})\) is \(\tau\mathcal{P}\)-pseudocompact. For each \(x\in X\), let us consider \(f_{x}=\chi_{X\setminus\{x\}}\in C(X)_{\mathcal{P}}\). Then \(Z_{\mathcal{P}}(f_{x})=\mathbb{R}\setminus\{x\}\). Let \(\mathcal{B}=\{Z_{\mathcal{P}}(f_{x})\colon x\in X\}\subseteq Z_{\mathcal{P}}[X]\). Then \(\mathcal{B}\) has the finite intersection property. However \(\bigcap\mathcal{B}=\emptyset\). Thus \(X\) is not \(\tau\mathcal{P}\)-compact._
An interesting characterisation of a \(\tau\mathcal{P}\mathcal{U}\)-space can be achieved for a \(\tau\mathcal{P}\)-pseudocompact space. It has already been noted in [9] that \(C^{*}(X)\), equipped with the norm \(||f||=\sup\{|f(x)|\colon x\in X\}\) is a Banach space. However, \(C^{*}(X)_{\mathcal{P}}\) equipped with the same norm may not be a Banach space.
Since the uniform convergence of a sequence of functions in \(C^{*}(X)_{\mathcal{P}}\) is the same as the convergence of the sequence under the norm defined above, we have the following result.
**Theorem 4.6**.: _A \(\tau\mathcal{P}\)-pseudocompact space \(X\) is a \(\tau\mathcal{P}\mathcal{U}\)-space if and only if \((C^{*}(X)_{\mathcal{P}},||.||)\) is a Banach space._
In order to establish a concept similar to real compactness of a topological space \(X\), we need the following definitions.
**Definitions 4.7**.:
1. _An ideal_ \(I\) _of a partially ordered ring_ \(R\) _is said to be convex if_ \(0\leq x\leq y\) _with_ \(y\in I\) _implies_ \(x\in I\)_._
2. _An ideal_ \(I\) _of a lattice ordered ring_ \(R\) _is said to be absolutely convex if whenever_ \(|x|\leq|y|\) _with_ \(y\in I\)_, then_ \(x\in I\)_._
3. _A totally ordered field_ \(R\) _is said to be archimedean if for every element_ \(a\in R\)_, there exists_ \(n\in\mathbb{N}\) _such that_ \(\boldsymbol{n}\geq a\)_, where_ \(\boldsymbol{n}\) _denotes_ \(1_{R}\)_(identity of_ \(R\)_) added_ \(n\) _times._
**Proposition 4.8**.: _Every \(z_{\mathcal{P}}\)-ideal of the ring \(C(X)_{\mathcal{P}}\) is absolutely convex._
Proof.: Straightforward.
From Theorems 5.2, 5.3 in [9], we have the following theorem.
**Theorem 4.9**.: _Let \(I\) be an absolutely convex ideal in a lattice ordered ring \(A\). Then \(A/I\) is a lattice ordered ring, according to the following definition: \(I(a)\geq 0\) if there exists \(x\in A\) such that \(x\geq 0\) and \(I(a)=I(x)\)._
The following proposition is immediate and can be proved by closely following the discussions in 5.4(a) and 5.4(b) of [9].
**Proposition 4.10**.: _For a \(z_{\mathcal{P}}\)-ideal \(I\),_
1. \(I(f)\geq 0\) _if and only if there exists_ \(Z\in Z_{\mathcal{P}}[I]\) _such that_ \(f(x)\geq 0\) _for all_ \(x\in Z\)_._
2. _If_ \(f>\boldsymbol{0}\) _on some zero set of a function in_ \(I\) _then_ \(I(f)>0\)_._
**Definition 4.11**.: A maximal ideal \(M\) of the ring \(C(X)_{\mathcal{P}}\) is said to be real if \(C(X)_{\mathcal{P}}/M\) is isomorphic to the field of real numbers.
**Proposition 4.12**.: _A maximal ideal \(M\) of \(C(X)_{\mathcal{P}}\) is real if and only if \(C(X)_{\mathcal{P}}/M\) is archimedean._
Proof.: Let \(C(X)_{\mathcal{P}}/M\) be archimedean. Then by Theorem 0.21 in [9], there exists an isomorphism \(\phi\) from \(C(X)_{\mathcal{P}}/M\) to a subfield \(S\) of \(\mathbb{R}\). Also, \(\psi\colon\mathbb{R}\longrightarrow C(X)_{\mathcal{P}}/M\) defined by \(\psi(r)=M(\boldsymbol{r})\) for all \(r\in\mathbb{R}\) is an injective homomorphism. Thus \(\phi\circ\psi\) is an injective homomorphism from \(\mathbb{R}\) onto a subfield of \(\mathbb{R}\). By Theorem 0.22 in [9], \(\phi\circ\psi\) is the identity homomorphism from \(\mathbb{R}\) onto \(\mathbb{R}\). Thus \(\phi(C(X)_{\mathcal{P}}/M)=\mathbb{R}\implies M\) is real. The converse is obvious.
**Definition 4.13**.: A \(\tau\mathcal{P}\)-space \((X,\tau,\mathcal{P})\) is called \(\tau\mathcal{P}\)-real compact if every real maximal ideal of \(C(X)_{\mathcal{P}}\) is fixed.
Real maximal ideals of \(C(X)_{\mathcal{P}}\) have the following characterization for \(\tau\mathcal{P}\mathcal{U}\)-spaces:
**Theorem 4.14**.: _Let \(X\) be a \(\tau\mathcal{P}\mathcal{U}\)-space. Then the following statements are equivalent._
1. \(M\) _is a real maximal ideal of_ \(C(X)_{\mathcal{P}}\)_._
2. \(Z_{\mathcal{P}}[M]\) _is closed under countable intersection._
3. \(Z_{\mathcal{P}}[M]\) _has the countable intersection property._
The above result can be proved by following the proof of Theorem 5.14 of [9].
Next we state a characterization of \(\tau\mathcal{P}\)-compact spaces in terms of \(\tau\mathcal{P}\)-real compact and \(\tau\mathcal{P}\)-pseudocompact spaces.
**Theorem 4.15**.: _A \(\tau\mathcal{P}\)-space \(X\) is \(\tau\mathcal{P}\)-compact if and only if it is both \(\tau\mathcal{P}\)-pseudocompact and \(\tau\mathcal{P}\)-real compact._
Proof.: Let \(X\) be both \(\tau\mathcal{P}\)-pseudocompact and \(\tau\mathcal{P}\)-real compact. Let \(\mathcal{U}\) be a \(z_{\mathcal{P}}\)-ultrafilter on \(X\) and \(M=Z_{\mathcal{P}}^{-1}(\mathcal{U})\). Then \(M\) is a maximal ideal of \(C(X)_{\mathcal{P}}\). If \(M\) is not a real maximal ideal, then by Proposition 4.12, there exists \(f\in C(X)_{\mathcal{P}}\) such that \(M(f)\geq M(\boldsymbol{n})\) for all \(n\in\mathbb{N}\). So, for each \(n\in\mathbb{N}\), there exists \(x_{n}\in X\) such that \(f(x_{n})\geq n\). This contradicts that \(X\) is \(\tau\mathcal{P}\)-pseudocompact. Thus \(M\) is a real maximal ideal. Since \(X\) is \(\tau\mathcal{P}\)-real compact, \(\bigcap Z_{\mathcal{P}}[M]\neq\emptyset\) and so \(\bigcap\mathcal{U}\neq\emptyset\). Thus \(X\) is \(\tau\mathcal{P}\)-compact.
The converse follows partially from Theorem 4.4 and the rest is straightforward.
It is to be noted that every \(\tau\mathcal{P}\)-compact space is always \(\tau\mathcal{P}\)-real compact; the converse of this statement may not be true, as is seen in the following example.
**Counter Example 4.16**.: _Let us consider \(X=\mathbb{R}\) equipped with the usual topology and let \(\mathcal{P}\) be any ideal of closed subsets of \(X\). Then \((X,\tau,\mathcal{P})\) is a \(\tau\mathcal{P}\)-space. Let \(M\) be a real maximal ideal of \(C(X)_{\mathcal{P}}\). Then \(\phi\colon\mathbb{R}\longrightarrow C(X)_{\mathcal{P}}/M\) defined as \(\phi(r)=M(\boldsymbol{r})\) is an isomorphism. Now \(i\) (the identity map on \(X\)) \(\in C(X)\subseteq C(X)_{\mathcal{P}}\), so there exists \(r\in\mathbb{R}\) such that \(M(\boldsymbol{r})=M(i)\). This implies that \(\{r\}=Z_{\mathcal{P}}(i-\boldsymbol{r})\in Z_{\mathcal{P}}[M]\), and since any two members of the \(z_{\mathcal{P}}\)-filter \(Z_{\mathcal{P}}[M]\) intersect, \(r\in\bigcap Z_{\mathcal{P}}[M]\); that is, the ideal \(M\) is fixed. Thus \(X\) is \(\tau\mathcal{P}\)-real compact. However, since \(X\) is not compact, it is not \(\tau\mathcal{P}\)-compact (by Observation 4.2)._
Two interesting observations can be made here:
1. A \(\tau\mathcal{P}\)-pseudocompact space may not be \(\tau\mathcal{P}\)-real compact. This can be seen using Counter Example 4.5 and Theorem 4.15.
2. A \(\tau\mathcal{P}\)-real compact space may not be \(\tau\mathcal{P}\)-pseudocompact. This can be seen using Counter Example 4.16 and Theorem 4.15.
## 5. When \(\mathcal{P}\) contains all singleton subsets of \(X\)
In this section, we study properties of \(C(X)_{\mathcal{P}}\) under the restriction that \(\mathcal{P}\) contains all singleton subsets of \(X\). Since all singleton subsets of \(X\) are in \(\mathcal{P}\) (and are therefore closed), \(X\) is \(T_{1}\) and we have \(\chi_{\{x\}}\in C(X)_{\mathcal{P}}\) for every \(x\in X\). Two rings of this type, viz. \(T^{\prime}(X)\) and \(C(X)_{F}\), have been studied in [8], [2], [13] and [1].
Throughout this section, we assume that any ideal \(\mathcal{P}\) of closed subsets of a \(\tau\mathcal{P}\)-space \(X\) contains all singleton subsets of \(X\) (unless otherwise specified).
For any non-unit element \(f\in C(X)_{\mathcal{P}}\), there exists \(x_{0}\in X\) such that \(f(x_{0})=0\). This gives \(f\chi_{X\setminus\{x_{0}\}}=f\). This shows that \(C(X)_{\mathcal{P}}\) is almost regular, which is summarised in the following theorem.
**Theorem 5.1**.: _For a \(\tau\mathcal{P}\)-space \(X\), \(C(X)_{\mathcal{P}}\) is an almost regular ring._
Also, for a non-unit element \(f\in C(X)_{\mathcal{P}}\), there exists \(y\in Z_{\mathcal{P}}(f)\) such that \(f\chi_{\{y\}}=\mathbf{0}\). So \(f\) is a zero divisor. Thus we have the following result.
**Proposition 5.2**.: _For a \(\tau\mathcal{P}\)-space \(X\), every \(f\in C(X)_{\mathcal{P}}\) is either a zero divisor or a unit._
The above result might fail if we remove the condition that \(\mathcal{P}\) contains all singleton subsets of \(X\).
**Counter Example 5.3**.: _Let us consider \(X=\mathbb{R}\) with usual topology and \(\mathcal{P}=\) the collection of all closed subsets of \((0,\infty)\). Then \(f(x)=|x|\) is such that \(f\in C(X)_{\mathcal{P}}\). Since \(Z_{\mathcal{P}}(f)\neq\emptyset\), \(f\) is a non-unit element. Now, let \(g\in C(X)_{\mathcal{P}}\) be such that \(fg=\mathbf{0}\). Then as \(f(x)\neq 0\) for all \(x\neq 0\), we must have, \(g(x)=0\) for all \(x\neq 0\). If \(g(0)\neq 0\), then \(\overline{D_{g}}=\{0\}\notin\mathcal{P}\), which contradicts that \(g\in C(X)_{\mathcal{P}}\). So, \(g(0)=0\) and thus \(g=\mathbf{0}\). Therefore \(f\) is not a zero divisor._
Our next aim is to generalise Proposition 3.1 of [8] in the following way.
**Proposition 5.4**.: _The following statements are equivalent for a \(\tau\mathcal{P}\)-space, \((X,\tau,\mathcal{P})\)._
1. \(C(X)=C(X)_{\mathcal{P}}\)_._
2. \(X\) _is discrete._
3. \(C(X)_{\mathcal{P}}\) _is a ring of quotients of_ \(C(X)\)_._
Proof.: If \(X\) is discrete, then all functions in \(\mathbb{R}^{X}\) are continuous. So \(\mathbb{R}^{X}=C(X)=C(X)_{\mathcal{P}}\). Further, when \(C(X)=C(X)_{\mathcal{P}}\), then for every \(x\in X\), \(\chi_{\{x\}}\in C(X)_{\mathcal{P}}=C(X)\), which implies that \(\{x\}\) is open. Thus \(X\) is discrete. This shows that \(1\) and \(2\) are equivalent. Next, if \(X\) is discrete, then \(C(X)=C(X)_{\mathcal{P}}\), so for every \(f\in C(X)_{\mathcal{P}}\) we have \(f\cdot\mathbf{1}=f\in C(X)_{\mathcal{P}}=C(X)\). This proves that \(2\) implies \(3\). Finally, let \(3\) hold. Then for each \(x\in X\), there exists \(f\in C(X)\) such that \(f\chi_{\{x\}}\in C(X)\setminus\{\mathbf{0}\}\). This implies that \(f(x)\neq 0\). But then \(\chi_{\{x\}}=\frac{1}{f(x)}f\chi_{\{x\}}\in C(X)\), which shows that \(X\) is discrete.
Similarly, following the proof of Theorem 3.2 of [8] we have:
**Theorem 5.5**.: _The following are equivalent for a \(\tau\mathcal{P}\)-space \(X\):_
1. \(X\) _is finite._
2. _Each proper ideal of_ \(C(X)_{\mathcal{P}}\) _is fixed._
3. _Each maximal ideal of_ \(C(X)_{\mathcal{P}}\) _is fixed._
4. _Each proper ideal of_ \(C^{*}(X)_{\mathcal{P}}\) _is fixed._
5. _Each maximal ideal of_ \(C^{*}(X)_{\mathcal{P}}\) _is fixed._
**Corollary 5.6**.: _A \(\tau\mathcal{P}\)-space \(X\) is \(\tau\mathcal{P}\)-compact if and only if \(X\) is finite._
Proof.: This follows from the above theorem and Theorem 4.3.
However, this result may fail when \(\mathcal{P}\) does not contain all singleton subsets of \(X\).
**Counter Example 5.7**.: _Let \(X=\{\frac{1}{n}\colon n\in\mathbb{N}\}\cup\{0\}\) be the subspace of the real line and \(\mathcal{P}=\{\emptyset,\{1\}\}\). Then, as \(1\) is an isolated point, \(C(X)_{\mathcal{P}}=C(X)\). Also \(X\) is compact, which implies that every maximal ideal of \(C(X)=C(X)_{\mathcal{P}}\) is fixed, even though \(X\) is an infinite set._
We next move on to discuss certain ring properties of \(C(X)_{\mathcal{P}}\). We start by discussing the structure of the minimal ideals and the socle of the ring.
**Theorem 5.8**.: _The following assertions are true for a \(\tau\mathcal{P}\)-space \(X\)._
1. _A non zero ideal_ \(I\) _of_ \(C(X)_{\mathcal{P}}\) _is minimal if and only if there exists an_ \(\alpha\in X\) _such that_ \(I=<\chi_{\{\alpha\}}>\) _if and only if_ \(|Z_{\mathcal{P}}[I]|=2\)_._
2. _The socle of_ \(C(X)_{\mathcal{P}}\) _consists of all functions that vanish everywhere except on a finite set._
3. _The socle of_ \(C(X)_{\mathcal{P}}\) _is essential and free._
Proof.:
1. Let \(I\) be a non zero minimal ideal of \(C(X)_{\mathcal{P}}\). For \(f\in I\setminus\{\mathbf{0}\}\), there exists \(\alpha\in X\) such that \(f(\alpha)\neq 0\). Therefore \(\chi_{\{\alpha\}}=\frac{1}{f(\alpha)}\chi_{\{\alpha\}}f\in I\). Since \(I\) is a minimal ideal of \(C(X)_{\mathcal{P}}\), it follows that \(I=<\chi_{\{\alpha\}}>\). This shows that \(Z_{\mathcal{P}}[I]=\{Z_{\mathcal{P}}(\mathbf{0}),Z_{\mathcal{P}}(\chi_{\{ \alpha\}})\}=\{X,X\setminus\{\alpha\}\}\). Thus, \(|Z_{\mathcal{P}}[I]|=2\). Next we show that \(<\chi_{\{\alpha\}}>\) is a minimal ideal of \(C(X)_{\mathcal{P}}\). Let \(I\) be an ideal of \(C(X)_{\mathcal{P}}\), \(\{\mathbf{0}\}\subsetneq I\subseteq<\chi_{\{\alpha\}}>\). Then there exists \(f\in I\setminus\{\mathbf{0}\}\subseteq<\chi_{\{\alpha\}}>\). So \(f=g\chi_{\{\alpha\}}\), for some \(g\in C(X)_{\mathcal{P}}\). But \(f=g\chi_{\{\alpha\}}=g(\alpha)\chi_{\{\alpha\}}\implies\chi_{\{\alpha\}}= \frac{1}{g(\alpha)}f\in I\). Therefore \(I=<\chi_{\{\alpha\}}>\). Finally we assume that \(|Z_{\mathcal{P}}[I]|=2\) and show that \(I\) is a minimal ideal of \(C(X)_{\mathcal{P}}\). There exists \(f\in I\) such that \(f(\alpha)\neq 0\) for some \(\alpha\in X\). So \(\chi_{\{\alpha\}}=\frac{1}{f(\alpha)}\chi_{\{\alpha\}}f\in I\). By our assumption, for any non zero function \(g\in I\), \(Z_{\mathcal{P}}(g)=Z_{\mathcal{P}}(\chi_{\{\alpha\}})=X\setminus\{\alpha\}\). So every non zero \(g\in I\) is of the form \(g=g(\alpha)\chi_{\{\alpha\}}\). Therefore \(I=\{c\chi_{\{\alpha\}}\colon c\in\mathbb{R}\}=<\chi_{\{\alpha\}}>\), which is a minimal ideal, as seen above.
2. By 1, the socle of \(C(X)_{\mathcal{P}}\), \[Soc(C(X)_{\mathcal{P}})=\sum_{\alpha\in X}<\chi_{\{\alpha\}}>=<\left\{\chi_{ \{\alpha\}}\colon\alpha\in X\right\}>.\]
Thus every function in \(Soc(C(X)_{\mathcal{P}})\) vanishes everywhere except at finitely many points of \(X\). Conversely, let \(f\in C(X)_{\mathcal{P}}\) be such that it vanishes everywhere except at finitely many points, that is, \(Z_{\mathcal{P}}(f)=X\setminus\{\alpha_{i}\colon\alpha_{i}\in X,i=1,...,n\}\) where \(n\in\mathbb{N}\). Then \[f=\sum_{i=1}^{n}f(\alpha_{i})\chi_{\{\alpha_{i}\}}\in Soc(C(X)_{\mathcal{P}}).\]
3. Let \(I\) be a non zero ideal of \(C(X)_{\mathcal{P}}\). Then there exists \(f\in I\) such that \(f(\alpha)\neq 0\) for some \(\alpha\in X\). From 1, we have \(\chi_{\{\alpha\}}\in Soc(C(X)_{\mathcal{P}})\) and \(\chi_{\{\alpha\}}=\frac{1}{f(\alpha)}\chi_{\{\alpha\}}f\in I\). This ensures that \(Soc(C(X)_{\mathcal{P}})\cap I\neq\{\mathbf{0}\}\). Thus \(Soc(C(X)_{\mathcal{P}})\) is an essential ideal. Also, for an arbitrary \(\alpha\in X\), \(\chi_{\{\alpha\}}\in Soc(C(X)_{\mathcal{P}})\) and \(\chi_{\{\alpha\}}(\alpha)=1\). So \(\alpha\notin\bigcap Z_{\mathcal{P}}[Soc(C(X)_{\mathcal{P}})]\). This ensures that \(Soc(C(X)_{\mathcal{P}})\) is a free ideal.
We shall note here that the condition that \(\mathcal{P}\) contains all singleton subsets of \(X\) is not a necessary condition for 2 in the above theorem. This can be seen by taking a Tychonoff space \(X\) and \(\mathcal{P}=\{\emptyset\}\). Here \(C(X)_{\mathcal{P}}=C(X)\). The rest follows from Proposition 2.2 in [4].
Using the above results, we establish a condition under which \(C(X)_{\mathcal{P}}\) is an Artinian ring. We need the following result to do this.
**Proposition 5.9**.: \(Soc(C(X)_{\mathcal{P}})=C(X)_{\mathcal{P}}\) _if and only if \(X\) is finite._
Proof.: Let \(X=\{x_{1},x_{2},...,x_{n}\}\). Then \(\mathbf{1}=\sum_{i=1}^{n}\chi_{\{x_{i}\}}\in Soc(C(X)_{\mathcal{P}})\). Thus \(C(X)_{\mathcal{P}}=Soc(C(X)_{\mathcal{P}})\). Conversely, let \(Soc(C(X)_{\mathcal{P}})=C(X)_{\mathcal{P}}\). Then there exist \(x_{1},x_{2},...,x_{n}\in X\) and \(f_{i}\in C(X)_{\mathcal{P}}\) for \(i=1,2,...,n\) such that \(\mathbf{1}=\sum_{i=1}^{n}f_{i}\chi_{\{x_{i}\}}=\sum_{i=1}^{n}f_{i}(x_{i})\chi_{\{x_{i}\}}\). So, for each \(x\in X\),
\[1=\sum_{i=1}^{n}f_{i}(x_{i})\chi_{\{x_{i}\}}(x)=f_{i}(x_{i})\chi_{\{x_{i}\}}(x) \text{ for some }i\in\{1,2,...,n\}\]
which implies \(\chi_{\{x_{i}\}}(x)=1\implies x=x_{i}\). Thus \(X\) is finite.
[6] tells us that a commutative ring \(R\) with unity is semisimple if and only if \(rad(R)=\{0\}\). Further \(R\) is Artinian semisimple if and only if \(R\) equals the sum of its minimal ideals.
In the ring \(C(X)_{\mathcal{P}}\), it is easy to see that \(\bigcap_{p\in X}M_{p}=\{\mathbf{0}\}\), where \(M_{p}=\{f\in C(X)_{\mathcal{P}}\colon f(p)=0\}\). So \(rad(C(X)_{\mathcal{P}})=\{\mathbf{0}\}\). Thus \(C(X)_{\mathcal{P}}\) is semisimple.
Under the assumption that \(\mathcal{P}\) contains all singleton subsets of \(X\) and using the above discussions, we have the following corollary.
**Corollary 5.10**.: \(C(X)_{\mathcal{P}}\) _is an Artinian ring if and only if \(X\) is finite._
An obvious question arises here: when is \(C(X)_{\mathcal{P}}\) Noetherian? We are able to answer this question when \(\mathcal{P}\) contains all singleton subsets of \(X\).
**Theorem 5.11**.: \(C(X)_{\mathcal{P}}\) _is a Noetherian ring if and only if \(X\) is finite._
Proof.: By Corollary 5.10 and the fact that an Artinian commutative ring is Noetherian, if \(X\) is finite then \(C(X)_{\mathcal{P}}\) is Noetherian. Conversely, let \(X\) be an infinite set. Then \(X\) contains a countably infinite set \(\{x_{n}\colon n\in\mathbb{N}\}\), and \(<\chi_{\{x_{1}\}}>\subsetneq<\chi_{\{x_{1},x_{2}\}}>\subsetneq<\chi_{\{x_{1},x_{2},x_{3}\}}>\subsetneq\cdots\) gives a strictly ascending chain of ideals of \(C(X)_{\mathcal{P}}\) that does not stabilise. Thus \(C(X)_{\mathcal{P}}\) is not Noetherian.
It is important to note that the condition that \(\mathcal{P}\) contains all singleton subsets of \(X\) is not superfluous in 5.8(1), 5.10 and 5.11. We see this in the following example.
**Counter Example 5.12**.: _Let \(X=\mathbb{R}\) with cofinite topology and \(\mathcal{P}=\{\emptyset\}\). Then \(C(X)_{\mathcal{P}}=C(X)\) which consists of only the constant functions on \(\mathbb{R}\). Thus \(C(X)_{\mathcal{P}}\) is isomorphic to \(\mathbb{R}\) and the only ideals of \(C(X)_{\mathcal{P}}\) are \(\{\mathbf{0}\}\) and itself. So \(\{\mathbf{0}\}\) is the only minimal ideal of \(C(X)_{\mathcal{P}}\) and is not generated by \(\chi_{\{x\}}\) for any \(x\in X\). Also \(|Z_{\mathcal{P}}[\{\mathbf{0}\}]|=1\). Further \(C(X)_{\mathcal{P}}\) is both Artinian and Noetherian, even though \(X\) is an infinite set._
We continue the study of ring properties of \(C(X)_{\mathcal{P}}\) and establish a set of equivalent conditions to determine when \(C(X)_{\mathcal{P}}\) is an \(IN\)-ring, an \(SA\)-ring and/or a Baer ring.
**Theorem 5.13**.: _The following statements are equivalent for a \(\tau\mathcal{P}\)-space \((X,\tau,\mathcal{P})\)._
1. _Any two disjoint subsets of_ \(X\) _are_ \(\mathcal{P}\)_-completely separated._
2. \(C(X)_{\mathcal{P}}\) _is an_ \(IN\)_-ring._
3. \(C(X)_{\mathcal{P}}\) _is an_ \(SA\)_-ring._
4. \(C(X)_{\mathcal{P}}\) _is a Baer ring._
5. _The space of all prime ideals of_ \(C(X)_{\mathcal{P}}\) _is extremally disconnected._
6. _Any subset of_ \(X\) _is of the form_ \(coz(e)\) _for some idempotent_ \(e\in C(X)_{\mathcal{P}}\)_._
7. _For any subset_ \(A\) _of_ \(X\)_, there exists_ \(P\in\mathcal{P}\) _such that_ \(A\setminus P\) _is a clopen subset of_ \(X\setminus P\)_._
To prove this result, we need the following two lemmas.
**Lemma 5.14**.: _For any subset \(A\) of \(X\), there exists a subset \(S\) of \(C(X)_{\mathcal{P}}\) such that_
\[A=\bigcup\text{coz}[S]=\bigcup\{\text{coz}(f)\colon f\in S\}.\]
This follows directly, since \(A=\bigcup\text{coz}[S]\) where \(S=\{\chi_{\{x\}}\colon x\in A\}\) and \(\chi_{\{x\}}\in C(X)_{\mathcal{P}}\) for all \(x\in X\).
**Lemma 5.15**.:
1. _Let \(I\) and \(J\) be ideals of \(C(X)_{\mathcal{P}}\). Then \(Ann(I)\subseteq Ann(J)\) if and only if \(\bigcap Z_{\mathcal{P}}[I]\subseteq\bigcap Z_{\mathcal{P}}[J]\) if and only if \(\bigcup\text{coz}[J]\subseteq\bigcup\text{coz}[I]\)._
2. _For any subset_ \(S\) _of_ \(C(X)_{\mathcal{P}}\)_,_ \(Ann(S)=\{f\in C(X)_{\mathcal{P}}\colon\bigcup\text{coz}[S]\subseteq Z_{ \mathcal{P}}(f)\}\)_._
Proof.:
1. Let \(Ann(I)\subseteq Ann(J)\) and \(x\in\bigcap Z_{\mathcal{P}}[I]\). Then \(f(x)=0\) for all \(f\in I\implies\chi_{\{x\}}f=\mathbf{0}\) for all \(f\in I\). Therefore \(\chi_{\{x\}}\in Ann(I)\subseteq Ann(J)\implies\chi_{\{x\}}g=\mathbf{0}\) for all \(g\in J\). So \(g(x)=0\) for all \(g\in J\implies x\in\bigcap Z_{\mathcal{P}}[J]\). Conversely, let \(\bigcap Z_{\mathcal{P}}[I]\subseteq\bigcap Z_{\mathcal{P}}[J]\) and \(f\in Ann(I)\). Then \(fh=\mathbf{0}\) for all \(h\in I\). So \(coz(f)\subseteq\bigcap Z_{\mathcal{P}}[I]\subseteq\bigcap Z_{\mathcal{P}}[J]\). Thus \(fh_{1}=\mathbf{0}\) for all \(h_{1}\in J\) and hence \(f\in Ann(J)\). The last equivalence in the statement holds since \(\bigcap Z_{\mathcal{P}}[I]=X\setminus\bigcup\text{coz}[I]\) and \(\bigcap Z_{\mathcal{P}}[J]=X\setminus\bigcup\text{coz}[J]\).
2. Let \(f\in Ann(S)\). Then \(fg=\mathbf{0}\) for all \(g\in S\). Therefore for \(x\in\bigcup coz[S]\), \(f(x)=0\). Conversely, let \(f\in C(X)_{\mathcal{P}}\) be such that \(\bigcup coz[S]\subseteq Z_{\mathcal{P}}(f)\) and \(g\in S\). Then \(coz(g)\subseteq\bigcup coz[S]\subseteq Z_{\mathcal{P}}(f)\). Therefore \(fg=\mathbf{0}\). Hence \(f\in Ann(S)\).
We now prove Theorem 5.13.
Proof.: Since \(C(X)_{\mathcal{P}}\) is a reduced commutative ring, it follows from Lemma 1.2 that the statements from (2) to (5) are equivalent. We use Lemma 1.1 to prove that (1) is equivalent to (2). Let (1) hold and let \(I\) and \(J\) be orthogonal ideals of \(C(X)_{\mathcal{P}}\). Then \(\bigcup coz[I]\) and \(\bigcup coz[J]\) are disjoint subsets of \(X\). By (1) there exist disjoint zero sets \(Z_{\mathcal{P}}(f_{1})\) and \(Z_{\mathcal{P}}(f_{2})\) such that \(\bigcup coz[I]\subseteq Z_{\mathcal{P}}(f_{1})\) and \(\bigcup coz[J]\subseteq Z_{\mathcal{P}}(f_{2})\). This implies that \(f_{1}\in Ann(I)\) and \(f_{2}\in Ann(J)\). So \({f_{1}}^{2}+{f_{2}}^{2}\) is a unit in \(Ann(I)+Ann(J)\). This proves (2). Next let (2) be true and let \(A\) and \(B\) be disjoint subsets of \(X\). By Lemma 5.14, there exist subsets \(S_{A},S_{B}\subseteq C(X)_{\mathcal{P}}\) such that \(A=\bigcup coz[S_{A}]\) and \(B=\bigcup coz[S_{B}]\). Let \(I=<S_{A}>\) and \(J=<S_{B}>\). Then \(\bigcup coz[I]\) and \(\bigcup coz[J]\) are disjoint sets (as \(A\) and \(B\) are disjoint). Therefore \(I\) and \(J\) are orthogonal ideals of \(C(X)_{\mathcal{P}}\). By (2) and Lemma 1.1, \(Ann(I)+Ann(J)=C(X)_{\mathcal{P}}\). So there exist \(h_{1}\in Ann(I)\) and \(h_{2}\in Ann(J)\) such that \(h_{1}+h_{2}=\mathbf{1}\), which is a unit. Therefore \(Z_{\mathcal{P}}(h_{1})\) and \(Z_{\mathcal{P}}(h_{2})\) are disjoint. Further \(A=\bigcup coz[S_{A}]\subseteq\bigcup coz[I]\subseteq Z_{\mathcal{P}}(h_{1})\) (since \(h_{1}\in Ann(I)\)). Similarly \(B\subseteq Z_{\mathcal{P}}(h_{2})\). This proves (1).
We next show that (4) is equivalent to (6). Let \(A\subseteq X\). Then there exists \(S\subseteq C(X)_{\mathcal{P}}\) (by Lemma 5.14) such that \(A=\bigcup coz[S]\). Define \(I\) to be the ideal generated by \(S\). By (4) there exists an idempotent \(e^{\prime}\in C(X)_{\mathcal{P}}\) such that \(Ann(I)=<e^{\prime}>=Ann(<e>)\), where \(e=\mathbf{1}-e^{\prime}\) is also an idempotent. By Lemma 5.15, we have \(\bigcup coz[I]=\bigcup coz[<e>]\). It can be easily seen that \(\bigcup coz[S]=\bigcup coz[I]\). Thus \(A=\bigcup coz[S]=\bigcup coz[<e>]=X\setminus Z_{\mathcal{P}}(e)\). This proves (6). Let (6) be true and \(I\) be an ideal of \(C(X)_{\mathcal{P}}\). By (6) there exists an idempotent \(e\in C(X)_{\mathcal{P}}\) such that \(\bigcup coz[I]=coz(e)\). By Lemma 5.15, \(Ann(I)=\{f\in C(X)_{\mathcal{P}}\colon\bigcup coz[I]\subseteq Z_{\mathcal{P}} (f)\}=\{f\in C(X)_{\mathcal{P}}\colon coz(e)\subseteq Z_{\mathcal{P}}(f)\}=Ann (e)=<(\mathbf{1}-e)>.\) This shows that \(C(X)_{\mathcal{P}}\) is a Baer ring.
Finally we show that (6) and (7) are equivalent. Let \(A\subseteq X\). By (6), \(A=coz(e)\) for some idempotent \(e\in C(X)_{\mathcal{P}}\). Let \(P=\overline{D_{e}}\in\mathcal{P}\). It is easy to see that \(coz(e)=Z_{\mathcal{P}}(\mathbf{1}-e)\). Thus \(A\setminus P=(X\setminus P)\setminus Z(e|_{X\setminus P})=Z((\mathbf{1}-e)|_{X\setminus P})\) is clopen in \(X\setminus P\). Let (7) hold and \(A\subseteq X\). Then by (7), there exists \(P\in\mathcal{P}\) such that \(A\setminus P\) is clopen in \(X\setminus P\). Define \(e=\chi_{A}\). Then \(e|_{X\setminus P}\) is continuous on \(X\setminus P\). Therefore \(\overline{D_{e}}\subseteq P\in\mathcal{P}\). So \(e\in C(X)_{\mathcal{P}}\) and \(A=coz(e)\).
## 6. When is \(C(X)_{\mathcal{P}}\) regular?
We study the regularity of the ring \(C(X)_{\mathcal{P}}\) in this section.
**Definition 6.1**.: A commutative ring \(R\) with unity is said to be a regular ring (in the von Neumann sense) if for every element \(a\in R\), there exists an \(x\in R\) such that \(a=a^{2}x\).
A space \(X\) is called a \(P\)-space if every \(G_{\delta}\)-set in \(X\) is open.
**Definition 6.2**.: A \(\tau\mathcal{P}\)-space \((X,\tau,\mathcal{P})\) is called \(\mathcal{P}P\)-space if \(C(X)_{\mathcal{P}}\) is regular.
It has been observed in [9] that for a Tychonoff space \(X\), \(X\) is a \(P\)-space if and only if \(C(X)\) is regular. This fails when \(X\) is not Tychonoff, as can be seen in the following counter example.
**Counter Example 6.3**.: _Let \(X=\mathbb{R}\) equipped with the co-finite topology. Then \(C(X)\) consists of all real valued constant functions on \(\mathbb{R}\). Therefore, \(C(X)\) is isomorphic to the field \(\mathbb{R}\), which is regular. Thus \(C(X)\) is regular. We define \(G_{r}=X\setminus\{r\}\) for each \(r\in\mathbb{Q}\). Then \(G_{r}\) is open in \(X\) for all \(r\in\mathbb{Q}\), and \(G=\bigcap_{r\in\mathbb{Q}}G_{r}=\mathbb{R}\setminus\mathbb{Q}\) is a \(G_{\delta}\)-set which is not open in \(X\). Thus \(X\) is not a \(P\)-space._
We note that Theorem 6.1 in [8] also fails when \(X\) is not a Tychonoff space. To see this, we consider the next counter example.
**Counter Example 6.4**.: _Let \(X=\mathbb{Q}^{*}=\mathbb{Q}\cup\{\infty\}\), the one-point compactification of \(\mathbb{Q}\). Then every function in \(C(\mathbb{Q}^{*})\) is constant. Thus \(C(X)\) is isomorphic to \(\mathbb{R}\), which is regular. However, \(C(\mathbb{Q})\) is not regular, even though \(\mathbb{Q}\) is a subspace of \(\mathbb{Q}^{*}\). Next, we show that \(C(X)_{F}\) is not regular._
_We define \(f(x)=\begin{cases}\sin(\pi x)&\text{ if }x\in\mathbb{Q}\\ 0&\text{ if }x=\infty\end{cases}\). Then \(f\in C(X)_{F}\). If possible, let there exist \(g\in C(X)_{F}\) such that \(f=f^{2}g\); then \(g(x)=\frac{1}{\sin(\pi x)}\) for all \(x\in\mathbb{Q}\setminus\mathbb{Z}\). For any \(n\in\mathbb{Z}\), \(g\) is unbounded in any neighbourhood of \(n\) in \(\mathbb{Q}^{*}\). Therefore, \(D_{g}\supseteq\mathbb{Z}\), which contradicts \(g\in C(X)_{F}\). This shows that the regularity of \(C(X)\) might not imply the regularity of \(C(X)_{F}\), that is, a \(P\)-space may not be an \(\mathcal{F}P\)-space._
However, if we assume \(X\) to be Tychonoff, then the following is true.
**Example 6.5**.: _If \(X\) is a \(P\)-space, then it is a \(\mathcal{P}P\)-space._
Proof.: Let \(X\) be a \(P\)-space and \(f\in C(X)_{\mathcal{P}}\). Then \(f\in C(X\setminus\overline{D_{f}})\), where \(X\setminus\overline{D_{f}}\) is a \(P\)-space, as it is a subspace of a \(P\)-space (by 4K in [9]). So, \(C(X\setminus\overline{D_{f}})\) is regular. Therefore, \(\exists\ g\in C(X\setminus\overline{D_{f}})\) such that \(f|_{X\setminus\overline{D_{f}}}=(f|_{X\setminus\overline{D_{f}}})^{2}g\). Define \(g^{*}\) on \(X\) by \(g^{*}(x)=\begin{cases}g(x)&\text{ when }x\in X\setminus\overline{D_{f}}\\ \frac{1}{f(x)}&\text{ when }x\in\overline{D_{f}}\setminus Z_{\mathcal{P}}(f)\\ 0&\text{ when }x\in\overline{D_{f}}\cap Z_{\mathcal{P}}(f)\end{cases}\). Then \(D_{g^{*}}\subseteq\overline{D_{f}}\). Therefore, \(g^{*}\in C(X)_{\mathcal{P}}\) and \(f=f^{2}g^{*}\). Thus \(C(X)_{\mathcal{P}}\) is regular, and so \(X\) is a \(\mathcal{P}P\)-space.
The following result gives a generalisation of Theorem 6.2 (1)\(\iff\)(2) [8].
**Theorem 6.6**.: \(X\) _is a \(\mathcal{P}P\)-space if and only if for any zero set \(Z\) in \(X\), there exists a set \(P\in\mathcal{P}\) such that \(Z\setminus P\) is a clopen set in \(X\setminus P\)._
Proof.: Let \(X\) be a \(\mathcal{P}P\)-space and \(f\in C(X)_{\mathcal{P}}\). Then there exists \(g\in C(X)_{\mathcal{P}}\) such that \(f=f^{2}g\). Let \(P=\overline{D_{f}}\cup\overline{D_{g}}\in\mathcal{P}\). Now \(f|_{X\setminus P},(\mathbf{1}-fg)|_{X\setminus P}\) are continuous. Also, \(Z_{\mathcal{P}}(f)\setminus P=Z(f|_{X\setminus P})=(X\setminus P)\setminus Z(( \mathbf{1}-fg)|_{X\setminus P})\) is a clopen subset of \(X\setminus P\). Conversely, let the given condition hold and let
\(f\in C(X)_{\mathcal{P}}\). Then \(Z_{\mathcal{P}}(f)\setminus P\) is a clopen subset of \(X\setminus P\) for some \(P\in\mathcal{P}\). Define \(g\colon X\longrightarrow\mathbb{R}\) by \(g(x)=\begin{cases}\frac{1}{f(x)},&x\notin Z_{\mathcal{P}}(f)\\ 0,&x\in Z_{\mathcal{P}}(f)\end{cases}\). Then \(\overline{D_{g}}\subseteq\overline{P}\cup\overline{D_{f}}\in\mathcal{P} \implies\overline{D_{g}}\in\mathcal{P}\). So, \(g\in C(X)_{\mathcal{P}}\) and \(f=f^{2}g\). Hence \(X\) is a \(\mathcal{P}P\)-space.
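For instance, \(\mathbb{R}\) with the usual topology and \(\mathcal{P}=\{\emptyset,\{0\}\}\) is not a \(\mathcal{P}P\)-space: for the zero set \([0,1]\) (of a continuous function, e.g. \(x\mapsto\operatorname{dist}(x,[0,1])\)) we have
\[[0,1]\setminus\emptyset=[0,1]\quad\text{and}\quad[0,1]\setminus\{0\}=(0,1],\]
and neither is clopen in \(\mathbb{R}\), respectively in \(\mathbb{R}\setminus\{0\}\).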
## 7. \(\aleph_{0}\)-Self injectiveness of \(C(X)_{\mathcal{P}}\)
In this section, we establish conditions under which \(C(X)_{\mathcal{P}}\) is \(\aleph_{0}\)-self injective. In order to achieve this, we need the following definitions and theory.
**Definition 7.1**.: [3] A ring \(R\) is said to be \(\aleph_{0}\)-self injective if every module homomorphism \(\phi\colon I\longrightarrow R\) can be extended to a module homomorphism \(\widehat{\phi}\colon R\longrightarrow R\) where \(I\) is a countably generated ideal of \(R\).
A lattice-ordered vector space or vector lattice is a partially ordered vector space where the order structure forms a lattice.
**Definition 7.2**.: [12]
An element \(x\) of a vector lattice \(X\) is called a weak order unit in \(X\) if \(x\geq 0\) and also for all \(y\in X\), \(\inf\{x,|y|\}=0\) implies \(y=0\).
**Definitions 7.3**.: [12] _By a lattice-ordered ring \((A,+,.,\vee,\wedge)\), we mean a lattice-ordered group that is a ring in which the product of positive elements is positive. If, in addition, \(A\) is a (real) vector lattice, then \(A\) is said to be a lattice-ordered algebra._
_A lattice-ordered ring \(A\) is said to be Archimedean if, for each non-zero element \(a\in A\), the set \(\{na\colon n\in\mathbb{Z}\setminus\{0\}\}\) is unbounded._
_By a \(\phi\)-algebra \(A\), we mean an Archimedean, lattice-ordered algebra over the real field \(\mathbb{R}\) which has identity element \(1\) that is a weak order unit in \(A\)._
_A \(\phi\)-algebra \(A\) of real valued functions is said to be uniformly closed if it is closed under uniform convergence._
**Definitions 7.4**.: [11] _Let \(A\) be a \(\phi\)-algebra. We denote \(\mathcal{M}(A)\) as the compact space of maximal absolutely convex ring ideals of \(A\) carrying Stone topology. Further, we denote \(\mathcal{R}(A)\) to be the set of all real ideals of \(A\)._
**Definitions 7.5**.: [3] _For a subset \(Q\) of a ring \(R\), \(Ann(Q)=\{r\in R\colon qr=0\) for all \(q\in Q\}\). A subset, \(P\) of a ring \(R\) is said to be orthogonal if the product of any two distinct elements of \(P\) is zero. Suppose \(P\) and \(Q\) are disjoint subsets of \(R\) whose union is an orthogonal subset of \(R\). Then, an element \(a\in R\) is said to separate \(P\) from \(Q\) if_
1. \(p^{2}a=p\) _for all_ \(p\in P\)_, and_
2. \(a\in Ann(Q)\)_._
We shall use Theorem 2.3 in [11] to show that \(C(X)_{\mathcal{P}}\) is isomorphic to an algebra of measurable functions.
We assume \(X\) to be a \(\mathcal{P}P\)-space and a \(\tau\mathcal{PU}\)-space, so that \(C(X)_{\mathcal{P}}\) is regular and closed under uniform convergence. Further, for any \(f\in C(X)_{\mathcal{P}}\setminus\{\mathbf{0}\}\), there exists \(x\in X\) such that \(f(x)\neq 0\), so the set \(\{\boldsymbol{n}f\colon n\in\mathbb{Z}\setminus\{0\}\}\) is unbounded; that is, \(C(X)_{\mathcal{P}}\) is Archimedean. We have already seen that \(C(X)_{\mathcal{P}}\) is a lattice ordered group.
It is also easy to see that it forms a real vector space and for any two positive elements \(f,g\in C(X)_{\mathcal{P}}\), \(fg\) is also positive. Also, \(C(X)_{\mathcal{P}}\) has the identity element \(\mathbf{1}\) which is clearly a weak order unit. Thus, \(C(X)_{\mathcal{P}}\) is a \(\phi\)-algebra which is closed under uniform convergence. That is, \(C(X)_{\mathcal{P}}\) forms a uniformly closed \(\phi\)-algebra.
We have also seen that all maximal ideals of \(C(X)_{\mathcal{P}}\) are \(z_{\mathcal{P}}\)-ideals, which are in turn absolutely convex. Therefore, all maximal ideals of \(C(X)_{\mathcal{P}}\) are in \(\mathcal{M}(C(X)_{\mathcal{P}})\).
Define for each \(p\in X\), \(M_{p}=\{f\in C(X)_{\mathcal{P}}\colon f(p)=0\}\). Then, \(C(X)_{\mathcal{P}}/M_{p}\) is isomorphic to \(\mathbb{R}\), for each \(p\in X\). Thus, \(M_{p}\) is a real maximal ideal, for each \(p\in X\) and is thus a member of \(\mathcal{R}(C(X)_{\mathcal{P}})\).
**Theorem 7.6**.: _[_11_, Theorem 2.3]_ _The following conditions on the \(\phi\)-algebra \(A\) are equivalent. (a) \(A\) is uniformly closed, regular, and \(\bigcap\{M\colon M\in\mathcal{R}(A)\}=\{0\}\). (b) \(A\) is isomorphic to an algebra of measurable functions._
We have
\[\bigcap_{p\in X}M_{p}=\{\mathbf{0}\}\implies\bigcap\{M\colon M\in\mathcal{R}( C(X)_{\mathcal{P}})\}=\{\mathbf{0}\}\]
From the above theorem, we have \(C(X)_{\mathcal{P}}\) is isomorphic to an algebra of measurable functions.
Next we show that \(\aleph_{0}\)-self injectiveness of a ring is invariant under ring isomorphism.
**Theorem 7.7**.: _If a ring \(R\) is isomorphic to an \(\aleph_{0}\)-self injective reduced ring \(S\), then \(R\) is also \(\aleph_{0}\)-self injective._
Proof.: We shall use Theorem 2.2 of [10] to prove the result.
Let \(\psi\colon S\longrightarrow R\) be the given isomorphism. It is easy to see that, since \(S\) is reduced, so is \(\psi(S)=R\).
Further, by Theorem 2.2 of [10], \(S\) is regular. Therefore, \(R=\psi(S)\) is also a regular ring.
Let us now consider two disjoint subsets of \(R\), \(P\) and \(Q\) whose union is a countable orthogonal subset of \(R\). Then, \(\psi^{-1}(P)\) and \(\psi^{-1}(Q)\) are disjoint and their union is countable. Also, for any \(s,s^{\prime}\in\psi^{-1}(P)\cup\psi^{-1}(Q)\) with \(s\neq s^{\prime}\), \(\psi(s),\psi(s^{\prime})\in P\cup Q\) with \(\psi(s)\neq\psi(s^{\prime})\) (since \(\psi\) is injective). As \(P\cup Q\) is orthogonal,
\[\psi(s)\psi(s^{\prime})=0\implies\psi(ss^{\prime})=0\implies ss^{\prime}=0 \text{, since $\psi$ is injective.}\]
Thus, \(\psi^{-1}(P)\cup\psi^{-1}(Q)\) is orthogonal. As \(S\) is \(\aleph_{0}\)-self injective, by Theorem 2.2 of [10], there exists \(a\in S\) that separates \(\psi^{-1}(P)\) from \(\psi^{-1}(Q)\).
We now show that \(\psi(a)\) separates \(P\) from \(Q\).
1. Let \(p\in P\). Then \(\psi^{-1}(p^{2}\psi(a))=(\psi^{-1}(p))^{2}a=\psi^{-1}(p)\), as \(a\) separates \(\psi^{-1}(P)\) from \(\psi^{-1}(Q)\). It follows from the injectivity of \(\psi^{-1}\) that \(p^{2}\psi(a)=p\). Thus, \(p^{2}\psi(a)=p\) for all \(p\in P\).
2. Let \(q\in Q\). Then \(\psi^{-1}(q)\in\psi^{-1}(Q)\). As \(a\) separates \(\psi^{-1}(P)\) from \(\psi^{-1}(Q)\), \(\psi^{-1}(q)a=0\), which shows that \(q\psi(a)=0\). Thus, \(\psi(a)\in Ann(Q)\).
Thus, we get an element in \(R\) (\(\psi(a)\)) that separates \(P\) from \(Q\). It follows from Theorem 2.2 of [10] that \(R\) is \(\aleph_{0}\)-self injective.
Finally we use the above theory to establish the following theorem.
**Theorem 7.8**.: _For a \(\tau\mathcal{PU}\)-space, the following conditions are equivalent. (a) \(X\) is a \(\mathcal{P}P\)-space. (b) \(C(X)_{\mathcal{P}}\) is isomorphic to an algebra of measurable functions. (c) \(C(X)_{\mathcal{P}}\) is \(\aleph_{0}\)-self injective._
Proof.: (a) \(\implies\) (b) follows from the above discussions. (b) \(\implies\) (c) can be seen from Theorem 7 of [3] and the above theorem. Finally, (c) \(\implies\) (a) follows directly from Theorem 2.2 of [10].
We cannot omit the condition that \(X\) is a \(\tau\mathcal{PU}\)-space. This can be seen from the following example.
**Counter Example 7.9**.: _Let us consider_
\[X=\mathbb{N}\cup\bigcup_{k\in\mathbb{N}}\{\frac{1}{n}+k\colon n\in\mathbb{N}\}\]
_endowed with the subspace topology inherited from \(\mathbb{R}_{u}\). Also let \(\mathcal{P}=\mathcal{P}_{f}\), that is, the ideal of all finite subsets of \(X\). Then, \(C(X)_{\mathcal{P}}=C(X)_{F}\), which is not uniformly closed (by Theorem 2.9 in [13]). Now, we consider the following subsets of \(X\):_
\[A=\bigcup_{k\in\mathbb{N}}\{\frac{1}{2n}+k\colon n\in\mathbb{N}\}\text{ and }B=\bigcup_{k\in\mathbb{N}}\{\frac{1}{2n-1}+k\colon n\in\mathbb{N}\}.\]
_Define \(P=\{\chi_{\{x\}}\colon x\in A\}\) and \(Q=\{\chi_{\{x\}}\colon x\in B\}\). Then, \(P\) and \(Q\) are disjoint and \(P\cup Q\) is countable and orthogonal. If there exists an \(f\in\mathbb{R}^{X}\) that separates \(P\) from \(Q\), then \(f(A)=\{1\}\) and \(f(B)=\{0\}\). Thus, every point in \(\mathbb{N}\) is a point of discontinuity of \(f\). Therefore \(f\notin C(X)_{F}\). It then follows from Theorem 2.2 of [10] that \(C(X)_{F}\) is not \(\aleph_{0}\)-self injective._
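As a quick numerical illustration of the discontinuity argument (a minimal sketch; the ranges are arbitrary): the points of \(A\), where a separating \(f\) must equal \(1\), and the points of \(B\), where \(f\) must equal \(0\), both accumulate at every \(k\in\mathbb{N}\).

```python
# For each k in N, the A-points 1/(2n)+k and the B-points 1/(2n-1)+k
# both converge to k, so a function equal to 1 on A and 0 on B jumps
# at every k and is therefore discontinuous on the infinite set N.
for k in [1, 2, 3]:
    a_pts = [1 / (2 * n) + k for n in range(1, 6)]      # f = 1 here
    b_pts = [1 / (2 * n - 1) + k for n in range(1, 6)]  # f = 0 here
    print(k, [round(p, 4) for p in a_pts], [round(p, 4) for p in b_pts])
```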
|
2301.02285 | Primary Decompositions of Regular Sequences | Let $R$ be a Noetherian ring and $x_1,\ldots,x_t$ a permutable regular
sequence of elements in $R$. Then there exists a finite set of primes $\Lambda$
and natural number $C$ so that for all $n_1,\ldots,n_t$ there exists a primary
decomposition $(x_1^{n_1},\ldots,x_t^{n_t})=Q_1\cap \cdots \cap Q_\ell$ so that
$\sqrt{Q_i}\in \Lambda$ and $\sqrt{Q_i}^{C(n_1+\cdots + n_t)}\subseteq Q_i$ for
all $1\leq i\leq \ell$. | Thomas Polstra | 2023-01-05T20:25:52Z | http://arxiv.org/abs/2301.02285v1 | # Primary decompositions of regular sequences
###### Abstract.
Let \(R\) be a Noetherian ring and \(x_{1},\ldots,x_{t}\) a permutable regular sequence of elements in \(R\). Then there exists a finite set of primes \(\Lambda\) and natural number \(C\) so that for all \(n_{1},\ldots,n_{t}\) there exists a primary decomposition \((x_{1}^{n_{1}},\ldots,x_{t}^{n_{t}})=Q_{1}\cap\cdots\cap Q_{\ell}\) so that \(\sqrt{Q_{i}}\in\Lambda\) and \(\sqrt{Q_{i}}^{C(n_{1}+\cdots+n_{t})}\subseteq Q_{i}\) for all \(1\leq i\leq\ell\).
Polstra was supported in part by NSF Grant DMS #2101890.
## 1. Introduction
Primary decompositions of ideals in commutative algebra correspond to decompositions of closed subschemes into irreducible subspaces in algebraic geometry. Let \(R\) be a Noetherian ring and \(I\subseteq R\) an ideal. By the main result of [10] there exists an integer \(C\) so that for every \(n\in\mathbb{N}\) there exists a primary decomposition
\[I^{n}=Q_{1}\cap\cdots\cap Q_{\ell}\]
so that \(\sqrt{Q_{i}}^{Cn}\subseteq Q_{i}\) for all \(1\leq i\leq\ell\). Swanson's theorem has applications to multiplicity theory [12, 13, 14, 15, 16], the uniform symbolic power topology problem [10, 1, 11, 12], and localization problems in tight closure theory [15, 16, 17, 18].
Swanson's proof proceeds by first reducing to the scenario that \(I\) is principally generated by a nonzerodivisor. Indeed, the extended Rees algebra \(S:=R[It,t^{-1}]\) enjoys the property that \(t^{-1}\) is a nonzerodivisor and \(I^{n}=(t^{-1}S)^{n}\cap R\). If \((t^{-1}S)^{n}=Q_{1}\cap\cdots\cap Q_{\ell}\) is a suitable primary decomposition of \((t^{-1}S)^{n}\) then \(I^{n}=(Q_{1}\cap R)\cap\cdots\cap(Q_{\ell}\cap R)\) is a primary decomposition of \(I^{n}\) with the desired properties. Our main result extends Swanson's Theorem to ideals generated by a permutable regular sequence.
**Theorem 1.1**.: _Let \(R\) be a Noetherian ring and \(x_{1},\ldots,x_{t}\) a permutable regular sequence. There exists a finite set of primes \(\Lambda\) and a constant \(C\) so that for every \(n_{1},\ldots,n_{t}\in\mathbb{N}\) there exists a primary decomposition_
\[(x_{1}^{n_{1}},\ldots,x_{t}^{n_{t}})=Q_{1}\cap\cdots\cap Q_{\ell}\]
_so that \(\sqrt{Q_{i}}\in\Lambda\) and \(\sqrt{Q_{i}}^{C(n_{1}+\cdots+n_{t})}\subseteq Q_{i}\) for all \(1\leq i\leq\ell\)._
The methodology of [10] is akin to the techniques of Huneke's Uniform Artin-Rees Theorem, [15]. Others have re-proven Swanson's theorem without relying on such technicalities; see [10, 11, 12, 13] for more general decomposition statements involving products of powers of ideals \(I_{1}^{n_{1}}\cdots I_{t}^{n_{t}}\) and their integral closures. Similar to their methods, the present article fundamentally depends only upon the standard Artin-Rees Lemma [1, Proposition 10.9] and the theory of injective hulls [1, Section 3.2].
## 2. Primary Decompositions of Regular Sequences
Let \(P\subseteq R\) be an ideal and \(x\in R\) an element. By the Artin-Rees Lemma there exists a constant \(C\) so that \((x)\cap P^{h+C}=((x)\cap P^{C})P^{h}\subseteq xP^{h}\) for all \(h\), see [1, Proposition 10.9].
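For a concrete instance of such a constant, take \(R=k[x,y]\) (with \(k\) a field), the element \(x\), and \(P=(x,y)\). Since \(P^{m}\) is spanned by the monomials of degree at least \(m\), one has \(xg\in P^{h+1}\) exactly when every monomial of \(g\) has degree at least \(h\), and so the choice \(C=1\) works:
\[(x)\cap P^{h+1}=xP^{h}\quad\text{for all}\ h\in\mathbb{N}.\]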
**Lemma 2.1**.: _Let \(R\) be a Noetherian ring and \(x\in R\) a non-unit. Let \(P\in\operatorname{Spec}(R)\) and \(E=E_{R}(R/P)\). Let \(C\) be chosen such that \((x)\cap P^{h+C}\subseteq xP^{h}\) for all \(h\in\mathbb{N}\). If \(\varphi:R\xrightarrow{}E\) is an \(R\)-linear map with the property that \(P^{h}\varphi=0\) then there exists an \(R\)-linear map \(\psi:R\xrightarrow{}E\) such that_
* \(\varphi=x\psi\)_;_
* \(P^{h+C}\psi=0\)_._
Proof.: We are assuming that \(C\) is chosen such that \((x)\cap P^{h+C}\subseteq xP^{h}\) for all integers \(h\). In particular, there are natural surjections
\[\frac{R}{(x)\cap P^{h+C}}\xrightarrow{}\frac{R}{xP^{h}}.\]
Therefore there are inclusions
\[\operatorname{Hom}_{R}\left(\frac{R}{xP^{h}},E\right)\xrightarrow{} \operatorname{Hom}_{R}\left(\frac{R}{(x)\cap P^{h+C}},E\right).\]
Equivalently,
\[(0:_{E}xP^{h})\subseteq(0:_{E}((x)\cap P^{h+C})).\]
Even further, there are natural inclusions
\[\frac{R}{(x)\cap P^{h+C}}\subseteq\frac{R}{(x)}\oplus\frac{R}{P^{h+C}}.\]
Therefore there are natural surjections
\[(0:_{E}(x))+(0:_{E}P^{h+C})\twoheadrightarrow(0:_{E}((x)\cap P^{h+C})).\]
In conclusion, if
\[\lambda\in\operatorname{Hom}_{R}(R/xP^{h},E)\cong(0:_{E}xP^{h})\subseteq(0:_ {E}((x)\cap P^{h+C}))\]
then there exists
\[\lambda^{\prime}\in\operatorname{Hom}_{R}(R/(x),E)\cong(0:_{E}(x))\]
and
\[\psi\in\operatorname{Hom}_{R}(R/P^{h+C},E)\cong(0:_{E}P^{h+C})\]
such that \(\lambda=\lambda^{\prime}+\psi\).
The module \(E\) is injective and therefore there exists \(\lambda\colon R\xrightarrow{}E\) such that \(\varphi=x\lambda\).
Since \(P^{h}\varphi=0\) we have that \(xP^{h}\lambda=0\). We can therefore write \(\lambda=\lambda^{\prime}+\psi\) so that \(x\lambda^{\prime}=0\) and \(P^{h+C}\psi=0\). Therefore \(\varphi=x\lambda=x\psi\) and \(\psi:R\xrightarrow{}E\) enjoys the desired properties.
Adopt the following notation: Let \(\underline{x}=x_{1},\ldots,x_{t}\) be a sequence of elements of a Noetherian ring \(R\) and \(\underline{n}=(n_{1},\ldots,n_{t})\in\mathbb{N}^{\oplus t}\).
* \(\underline{x^{\underline{n}}}=x_{1}^{n_{1}},\ldots,x_{t}^{n_{t}}\);
* \(e_{i}\in\mathbb{N}^{\oplus t}\) is the element with a \(1\) in the \(i\)th coordinate and \(0\)'s elsewhere;
* \(\underline{1}=(1,\ldots,1)\in\mathbb{N}^{\oplus t}\);
* If \(\underline{n}^{\prime}\in\mathbb{N}^{\oplus t}\) then \(\underline{n}\cdot\underline{n}^{\prime}\) denotes the dot product of \(\underline{n}\) and \(\underline{n}^{\prime}\). In particular, the element \(\underline{n}-(\underline{n}\cdot e_{i}-1)e_{i}\) is the element of \(\mathbb{N}^{\oplus t}\) obtained by replacing the \(i\)th coordinate of \(\underline{n}\) with the number \(1\).
Observe that if \(x_{1},\ldots,x_{t}\) is a permutable regular sequence and \(\underline{n}\in\mathbb{N}^{\oplus t}\) then \((\underline{x^{\underline{n}+e_{i}}}):x_{i}=(\underline{x^{\underline{n}}})\).
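As a sanity check of this identity in the simplest case, take \(R=k[x_{1},x_{2}]\) with the permutable regular sequence \(x_{1},x_{2}\); the usual colon rule for monomial ideals gives
\[\left(x_{1}^{n_{1}+1},x_{2}^{n_{2}}\right):x_{1}=\left(x_{1}^{n_{1}},x_{2}^{n_{2}}\right),\]
which is the case \(t=2\), \(i=1\) of the displayed observation.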
**Theorem 2.2**.: _Let \(R\) be a Noetherian ring and \(\underline{x}=x_{1},\ldots,x_{t}\) a permutable regular sequence. Fix a finite list of prime ideals \(\Lambda\), allowing for the possibility of repeated primes in \(\Lambda\), and an embedding_
\[\frac{R}{(\underline{x})}\overset{\varphi_{\underline{1}}}{\hookrightarrow} \bigoplus_{P\in\Lambda}E_{R}(R/P).\]
_Let \(C\) be chosen large enough so that \(P^{C|\underline{1}|}\varphi_{\underline{1}}=0\) and \((x_{i})\cap P^{h+C}\subseteq x_{i}P^{h}\) for all \(P\in\Lambda\), \(h\in\mathbb{N}\), and \(1\leq i\leq t\). Then for all \(\underline{n}\in\mathbb{N}^{\oplus t}\) there exists an embedding_
\[\frac{R}{(\underline{x^{\underline{n}}})}\overset{\varphi_{\underline{n}}}{ \hookrightarrow}E_{\underline{n}}\]
_such that \(E_{\underline{n}}\cong\left(\bigoplus_{P\in\Lambda}E_{R}(R/P)\right)^{ \ell_{\underline{n}}}\) for some integer \(\ell_{\underline{n}}\) and \(P^{C|\underline{n}|}\varphi_{\underline{n}}=0\) for all \(P\in\Lambda\)._
Proof.: By induction, we may suppose that for all \(\underline{n}\leq\underline{n}^{\prime}\) we have constructed the injective modules \(E_{\underline{n}}\) and maps \(\varphi_{\underline{n}}:R/(\underline{x^{\underline{n}}})\hookrightarrow E_{\underline{n}}\) such that \(P^{C|\underline{n}|}\varphi_{\underline{n}}=0\); we now construct \(E_{\underline{n}^{\prime}+e_{i}}\) and a map \(\varphi_{\underline{n}^{\prime}+e_{i}}:R/(\underline{x^{\underline{n}^{\prime}+e_{i}}})\hookrightarrow E_{\underline{n}^{\prime}+e_{i}}\) such that \(P^{C|\underline{n}^{\prime}+e_{i}|}\varphi_{\underline{n}^{\prime}+e_{i}}=0\). Even further, we suppose that \(E_{\underline{n}}\) is a direct sum of copies of \(\bigoplus_{P\in\Lambda}E_{R}(R/P)\) for all \(\underline{n}\leq\underline{n}^{\prime}\).
Because \(\underline{x}\) is a permutable regular sequence, for each \(i\) there is a short exact sequence
\[0\longrightarrow\frac{R}{(\underline{x^{\underline{n}^{\prime}}})}\xrightarrow{\ \cdot x_{i}\ }\frac{R}{(\underline{x^{\underline{n}^{\prime}+e_{i}}})}\xrightarrow{\ \pi\ }\frac{R}{(\underline{x^{\underline{n}^{\prime}-(\underline{n}^{\prime}\cdot e_{i}-1)e_{i}}})}\longrightarrow 0.\]
Lemma 2.1 applied to each of the irreducible direct summands of \(E_{\underline{n}^{\prime}}\) produces a map \(\psi_{\underline{n}^{\prime}}:R/(\underline{x^{\underline{n}^{\prime}+e_{i}}})\xrightarrow{}E_{\underline{n}^{\prime}}\) such that \(P^{C|\underline{n}^{\prime}+e_{i}|}\psi_{\underline{n}^{\prime}}=0\) and \(\psi_{\underline{n}^{\prime}}\circ(\cdot x_{i})=\varphi_{\underline{n}^{\prime}}\).
It is straightforward to verify that \(\varphi_{\underline{n}^{\prime}+e_{i}}:=(\psi_{\underline{n}^{\prime}},\varphi_{\underline{n}^{\prime}-(\underline{n}^{\prime}\cdot e_{i}-1)e_{i}}\circ\pi)\) is an injective map and that \(P^{C|\underline{n}^{\prime}+e_{i}|}\varphi_{\underline{n}^{\prime}+e_{i}}=0\).
**Corollary 2.3** (Swanson's Theorem for regular sequences).: _Let \(R\) be a Noetherian ring and \(\underline{x}=x_{1},\ldots,x_{t}\) a permutable regular sequence. There exists a finite set of primes \(\Lambda\) and a constant \(C\) such that for all \(\underline{n}\in\mathbb{N}^{\oplus t}\) there exists a primary decomposition_
\[(\underline{x^{\underline{n}}})=Q_{1}\cap\cdots\cap Q_{\ell}\]
_such that \(\sqrt{Q_{i}}\in\Lambda\) and \(\sqrt{Q_{i}}^{C|\underline{n}|}\subseteq Q_{i}\) for all \(1\leq i\leq\ell\)._
Proof.: Fix \(\underline{n}\in\mathbb{N}^{\oplus t}.\) By Theorem 2.2 there exists a constant \(C\), not depending on \(\underline{n}\in\mathbb{N}^{\oplus t}\), and a finite set of primes \(\Lambda_{\underline{n}}\), allowing for the possibility of repeated primes in \(\Lambda_{\underline{n}}\), and an embedding
\[\varphi_{\underline{n}}:\frac{R}{(\underline{x}^{\underline{n}})}\hookrightarrow \bigoplus_{P\in\Lambda_{\underline{n}}}E(R/P)\]
such that \(P^{C|\underline{n}|}\varphi_{\underline{n}}=0\) for all \(P\in\Lambda_{\underline{n}}\). Let \(\pi:R\xrightarrow{}R/(\underline{x}^{\underline{n}})\) and \(\pi_{P}:\bigoplus_{P\in\Lambda_{\underline{n}}}E(R/P)\xrightarrow{}E(R/P)\) be the natural surjections. Then
\[(\underline{x}^{\underline{n}})=\bigcap_{P\in\Lambda_{\underline{n}}}\operatorname{Ker}(\pi_{P}\circ\varphi_{\underline{n}}\circ\pi)\]
is a primary decomposition of \((\underline{x}^{\underline{n}})\) as there are embeddings
\[\frac{R}{\operatorname{Ker}(\pi_{P}\circ\varphi_{\underline{n}}\circ\pi)} \hookrightarrow E(R/P).\]
Furthermore, \(P^{C|\underline{n}|}\subseteq\operatorname{Ker}(\pi_{P}\circ\varphi_{\underline{n}}\circ\pi)\) since \(P^{C|\underline{n}|}\varphi_{\underline{n}}=0\).
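To see the corollary in a small example where the decomposition has more than one component, take \(R=k[x,y]\) and the length-one regular sequence \(x_{1}=xy\). For every \(n\),
\[\left((xy)^{n}\right)=(x^{n})\cap(y^{n}),\qquad\sqrt{(x^{n})}=(x),\quad\sqrt{(y^{n})}=(y),\]
so the conclusion holds with \(\Lambda=\{(x),(y)\}\) and \(C=1\), since \((x)^{n}=(x^{n})\) and \((y)^{n}=(y^{n})\).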
|
2306.09377 | Language Aligned Visual Representations Predict Human Behavior in
Naturalistic Learning Tasks | Humans possess the ability to identify and generalize relevant features of
natural objects, which aids them in various situations. To investigate this
phenomenon and determine the most effective representations for predicting
human behavior, we conducted two experiments involving category learning and
reward learning. Our experiments used realistic images as stimuli, and
participants were tasked with making accurate decisions based on novel stimuli
for all trials, thereby necessitating generalization. In both tasks, the
underlying rules were generated as simple linear functions using stimulus
dimensions extracted from human similarity judgments. Notably, participants
successfully identified the relevant stimulus features within a few trials,
demonstrating effective generalization. We performed an extensive model
comparison, evaluating the trial-by-trial predictive accuracy of diverse deep
learning models' representations of human choices. Intriguingly,
representations from models trained on both text and image data consistently
outperformed models trained solely on images, even surpassing models using the
features that generated the task itself. These findings suggest that
language-aligned visual representations possess sufficient richness to describe
human generalization in naturalistic settings and emphasize the role of
language in shaping human cognition. | Can Demircan, Tankred Saanum, Leonardo Pettini, Marcel Binz, Blazej M Baczkowski, Paula Kaanders, Christian F Doeller, Mona M Garvert, Eric Schulz | 2023-06-15T08:18:29Z | http://arxiv.org/abs/2306.09377v1 | # Language Aligned Visual Representations Predict Human Behavior in Naturalistic Learning Tasks
###### Abstract
Humans possess the ability to identify and generalize relevant features of natural objects, which aids them in various situations. To investigate this phenomenon and determine the most effective representations for predicting human behavior, we conducted two experiments involving category learning and reward learning. Our experiments used realistic images as stimuli, and participants were tasked with making accurate decisions based on novel stimuli for all trials, thereby necessitating generalization. In both tasks, the underlying rules were generated as simple linear functions using stimulus dimensions extracted from human similarity judgments. Notably, participants successfully identified the relevant stimulus features within a few trials, demonstrating effective generalization. We performed an extensive model comparison, evaluating the trial-by-trial predictive accuracy of diverse deep learning models' representations of human choices. Intriguingly, representations from models trained on both text and image data consistently outperformed models trained solely on images, even surpassing models using the features that generated the task itself. These findings suggest that language-aligned visual representations possess sufficient richness to describe human generalization in naturalistic settings and emphasize the role of language in shaping human cognition.
## Introduction
Generalization is a notoriously difficult challenge. Especially in realistic environments, where there are infinitely many different features to describe objects, extracting the dimensions that matter becomes difficult. Nonetheless, humans and other animals can generalize efficiently from experience to novel situations, and adapt their representations to the task at hand. Consider an apple as an example. There can be many features describing an apple, such as its color, taste, shape, or brand. People can use these features and make predictions about how tasty that apple could be, the environmental impacts of growing it, or the significance of it in different mythological and religious settings. How can we describe this kind of generalization behavior across realistic stimuli? And which representations underlie this ability in the first place?
One way to adapt sensory representations to different situations is to learn functions over them. Cognitive psychologists have studied how people learn these functions using simple learning paradigms. In these tasks, participants are repeatedly shown abstract stimuli, such as geometric objects of different colors and shapes, and they are instructed to learn through feedback what kinds of objects are more rewarding or belong to a certain category [3, 4, 5]. This approach has been fruitful for understanding the kinds of learning strategies people use, such as rule-based learning [6], similarity-based learning [7], or heuristic strategies [8]. However, some important aspects of real-life learning have been ignored in these tasks. First, in real life, associations are often not learned purely through repeated trial and error, but generalizing to novel situations is needed [9]. Second, in this line of work, how to represent the stimuli has largely been ignored, given that the stimuli are simple and therefore easy to represent. Therefore, how well humans can generalize in learning tasks that involve naturalistic stimuli and how humans represent these stimuli in the first place remain unknown.
To address these gaps, we designed a category learning and a reward learning task. These two tasks did not involve any repeating stimuli, requiring generalization for any successful decision. Additionally, we used naturalistic images, which include features humans acquired throughout their lifespans. Naturalistic stimuli are high dimensional [10], and it may be challenging to assign credit to the relevant stimulus dimension. We used these rich naturalistic tasks to study whether and how fast humans can discover the relevant stimulus features from limited experience and exploit this knowledge for novel decisions.
To test how people represent naturalistic stimuli, we turn to deep neural networks (DNNs). Across various domains of cognitive science, these models have
been successful in predicting behavior and neural data in response to naturalistic stimuli. For example, when prompted with the same images, the representational hierarchy of DNNs has been shown to be similar to that of the ventral visual stream in primates [(11, 12)]. These models have also been shown to behave similarly to humans in categorization tasks [(13, 14)]. In the auditory domain, a similar hierarchical correspondence of representations has been found between DNNs and the auditory hierarchy in the human brain [(15)]. Moving up to higher cognition, it has been shown that representations of large language models can predict brain activity in response to natural language as well as human reading times [(16)]. Whether and what kind of DNN representations can be useful for predicting human behavior in high-dimensional learning tasks that require generalization remains untested.
The stimuli in our category and reward learning tasks were sampled from the THINGS database [(10)]. To assign rewards and category membership to these images, we constructed sparse linear functions over an embedding built to capture human similarity judgment of the THINGS stimuli [(1)]. Participants were exposed to a novel stimulus on each trial and could therefore not solve the task by associating specific stimuli with specific outcomes. Instead, they had to apply their knowledge about regularities between the stimuli they had seen already to solve the task efficiently by means of generalization. We found that humans learned to do this surprisingly quickly, suggesting that they could identify relevant stimulus dimensions within just a few trials and use this knowledge to guide choices. To understand the nature of the representations participants used when solving this task, we first extracted representations from DNNs of various architectures that were trained under different regimes and different modalities of data. Then, we trained linear models over these representations to predict humans' learning trajectories. While all DNN representations could predict human behavior, language-aligned visual representations were consistently better at predicting human choices than their uni-modal counterparts. Our modeling results show that human learning can be modeled using simple strategies when provided with sufficiently rich representations and that DNNs trained on multi-modal data can provide these rich representations. Taken together, our paradigm and behavioral results pave the way to a deeper understanding of representational structure in the human mind.
## Results
### Category Learning
Human participants (\(n=91\)) completed \(120\) trials of an online category learning task, where they were presented with a novel image in each trial. They were asked to deliver these images to one of two dinosaurs, _Jully_ or _Fully_, using key presses. Participants were told that the two dinosaurs had completely non-overlapping preferences for what gifts they enjoyed. After each trial, we gave participants feedback on whether their choice of delivery was correct. An example trial from the task is shown in Fig. 1A.
Participants were assigned to one of three conditions, where in each condition the category boundary was dependent on a different rule. These rules depended on a feature of an embedding that reflected human similarity judgments on the THINGS database images [(1, 10)]. Specifically, the authors built an embedding to predict human choices in an odd-one-out task and found that an embedding with \(49\) human interpretable features described human choices the best. The three chosen features were those that explained the most variance in the embedding and are displayed in Fig. 1C. For each participant, \(120\) unique stimuli from the THINGS database were sampled. A median split over the assigned feature
of the sampled stimuli determined the category boundary.

Figure 1: Task descriptions. (_A_) An example trial from the category learning task, where an incorrect decision is made. (_B_) An example trial from the reward learning task where the best option is chosen and highlighted in orange. (_C_) Examples of where different images fall in the three features of the embedding used to generate the task and how these features relate to category membership and associated rewards in our tasks. The feature labels are those used by [(1)] who constructed the embedding. The original images are replaced with copyright-free alternatives from the THINGSplus database [(2)].
We analyzed participants' behavior (Fig. 2A), using mixed-effects logistic regression. We predicted whether a participant made the correct choice using the trial number as a fixed effect. Additionally, we fitted participant-specific random intercepts, as well as random slopes for the trial number and the assigned experimental condition of participants. We found that participants performed the task above chance level (\(\hat{\beta}=1.14\pm 0.09\), \(z=13.18\), \(P<.001\)) and that their performance improved over trials (\(\hat{\beta}=0.32\pm 0.05\), \(z=6.89\), \(P<.001\)), indicating a learning effect (Fig. 2B). This suggests that humans can very efficiently extract the relevant feature dimension in high-dimensional naturalistic environments despite seeing each individual stimulus only once. To characterize how quickly participants learn the task, we compared the accuracy of participants at each trial against chance level using a one-sided one-sample t-test. We found that participants performed above chance level starting from trial number \(7\), \(t(90)=2.25\), \(p=.01\). See the _Supplementary Information_ for details of testing and Fig. S1 for the results for all trials.
To gain an understanding of the representation that was guiding choices in this task, we assessed what kind of deep neural network representations can best describe humans' choices in our task. We tested representations from \(48\) different models in addition to the task embedding (17-38). The models were trained on either text, images, or the two combined in order to solve common machine learning tasks such as text generation, text representation, or image recognition. For the models that were trained on image data, we make the additional distinction between self-supervised and supervised learning paradigms. This is because they are
shown to learn different representations even when they share the same architecture [(39)]. In order to extract representations from models trained on image and multi-modal data, we provided the images used in the task to these models. To extract representations from models trained on text, we provided them with the prompt This is the image of a \(X\), where \(X\) was the category label of the task images. We trained logistic regression models in a sequential manner on the different representations and used the predictions of these models to model participants' choices (see _Methods_ for details on representation extraction and modeling). The simulated behavior of these models is shown in Fig. _2C_.

Figure 2: Category learning results. (_A_) Participants’ performance across trials. Shaded lines indicate \(95\%\) confidence intervals. (_B_) Coefficient values from the mixed-effects logistic regression analysis to predict participant choice. Error bars indicate standard errors. (_C_) Performance of the computational models. For each category, we show the learning trajectory of the model predicting participant behavior the best. The curves are smoothed every 10th trial. (_D_) Cross-validated negative log likelihoods of each representation. Lower values indicate better fits to human behavior. The dashed horizontal line indicates chance level performance.
First, we observed that all the representations we tested can do our task and predict human behavior above chance level. This is remarkable given that these representations were obtained through training regimes that were independent of our task, and the semantic representations we used to generate the task were unknown to the other representations. Additionally, we found that \(7\) of the \(48\) candidate representations described participant behavior better than representations used to generate the task. Of these \(7\) representations, one was a large vision transformer, trained in a supervised manner. The other \(6\) were different variants of a self-supervised model, Contrastive Language-Image Pre-training (CLIP) [(17)], that was trained to represent pairs of images and text as similarly as possible. We saw that regardless of encoder architecture choice, this multi-modal training regime produced representations describing human choices in our task exceptionally well. The rest of the supervised and self-supervised vision models, as well as the language models, had a heterogeneous distribution in how well they predicted human behavior, as visualized in Fig. _2D_. The descriptions of the models we extracted representations from are available in the Supplementary Information.
### Reward Learning
To test whether our behavioral and modeling findings can generalize across learning tasks, we designed a second learning task. In our second task, human participants (\(n=82\)) completed \(60\) trials of a reward learning paradigm, in which they were asked to maximize their accumulated reward over the course of the task. In each trial, participants were presented with two images and were asked to select one using key presses. After making a choice, the associated reward with each option was shown. An example trial from the task is shown in Fig. _1B_.
Participants were assigned to one of the three conditions as in the category learning task. Stimuli were sampled in the same way as the category learning task. For each participant, the values of the task-relevant feature were re-scaled linearly between \(0\) and \(100\).
We again analyzed participants' behavior (Fig. 3A) using mixed-effects logistic regression. We predicted whether a participant chose the image on the right using the reward difference between the two options, the trial number, and the interaction of the two terms as fixed effects. We additionally fitted participant-specific random slopes for these terms and for the assigned experimental condition of participants. We found that participants used the reward difference between the options (\(\hat{\beta}=0.89\pm 0.07\), \(z=12.56\), \(P<.001\)). While the trial number did not predict which option the participants chose (\(\hat{\beta}=0.002\pm 0.03\), \(z=0.09\), \(P=.93\)), the interaction of the two terms revealed that participants used the reward differences between the options more effectively over trials (\(\hat{\beta}=0.34\pm 0.04\), \(z=9.30\), \(P<.001\); Fig. 3B). These results suggest that the generalization effect we found in the category-learning task transfers to other naturalistic learning tasks as well. To test how quickly participants learn the task, we compared the accuracy of participants at each trial against chance level using a one-sided one-sample t-test. We found that participants performed above chance level starting from trial number \(6\), \(t(81)=3.01\), \(p=.002\). See the _Supplementary Information_ for details of testing and Fig. S1 for the results for all trials.
The same representations extracted for the category learning task were used in the reward learning task. We trained linear regression models (again in a sequential manner) on the observations of the participants to predict the associated reward with novel images. We then regressed the reward estimates of the linear models onto participant choice in a mixed-effects logistic regression model (see _Materials and Methods_ for details), whose simulated behavior is shown in Fig. 3C.
The modeling results for this task were similar to those of the category learning task. All the models described participant behavior above chance level. Again, \(7\) models (a large vision transformer and all variants of CLIP) predicted participant behavior better than the task features, and we observed heterogeneity in how well language models, supervised vision models, and self-supervised vision models predicted human behavior. The complete modeling results are shown in Fig. _3D_.
### Representational Similarity Analyses
Our modeling results showed that CLIP representations are robustly better than their uni-modal counterparts in describing human behavior, suggesting that they capture essential features of object representations. As the next step, we studied what makes these representations particularly successful. To get a better understanding of these results, we conducted two representational similarity analyses (RSA) [(40)] where we investigated the representations for all the images in the THINGS database. First, we computed Centered Kernel Alignment (CKA) [(41)] using a linear kernel between the task embedding and every representation we tested. While a CKA score of \(0\) indicates maximal dissimilarity between representations, a score of \(1\) indicates maximal similarity. We found that all of the \(6\) different CLIP representations are more similar to the embedding that was used to generate the task than any other representation we tested (\(CKA=0.61\pm.008\)) (see Fig. 4A for the full comparison). This similarity shows they provide a good representational basis to arrive at meaningful solutions in our tasks.
Does the high similarity between the CLIP representations and the task embedding mean that CLIP is simply a good enough approximation of the task embedding, or are there also meaningful differences between the two? To answer this question, we calculated the CKA between CLIP representations and every other representation, as well as the CKA between the task embedding and every representation except CLIP. Then, we subtracted the CKA values for CLIP from those for the task embedding. Here, while a positive value would indicate a representation being more similar to the task embedding than it is to CLIP representations, a negative value would indicate the opposite relationship. We found a clear pattern, where language representations were consistently more similar to the task embedding than they were to CLIP representations. On the contrary, visual representations were more similar to CLIP representations than they were to the task embedding (Fig. 4B). These findings show that, in addition to the high similarity between the task embedding and the CLIP representations, there is a meaningful difference: CLIP representations are better aligned with visual representations. This difference plays an important role in predicting participant behavior. To provide better intuition for the differences between the two models and the better predictive accuracy of CLIP models, we show example trials from the last \(30\) trials of the reward learning task in Fig. 5. In these trials, participants and models trained on CLIP representations made the same decision that differed from the decision of models
trained on the task embedding. These examples illustrate how CLIP representations can capture human intuition in decisions where the task embedding failed to do so. We ruled out alternative explanations for our findings, such as the type of learning algorithm used and the size of the representational space, in extra analyses as shown in Fig. S2, Fig. S3, and Fig. S4.

Figure 3: Reward learning results. (_A_) Participants’ performance across trials. Shaded lines indicate \(95\%\) confidence intervals. (_B_) Coefficient values from the mixed-effects logistic regression analysis to predict participant choice. Error bars indicate standard errors. (_C_) Performance of the computational models. For each category, we show the learning trajectory of the model predicting participant behavior the best. The curves are smoothed every 5th trial. (_D_) Cross-validated negative log likelihoods of each representation. Lower values indicate a better fit for human behavior. The dashed horizontal line indicates chance level performance.
## Discussion
We developed novel category learning and reward learning tasks to test people's abilities to generalize in high-dimensional spaces. The tasks required participants to identify relevant stimulus dimensions from feedback and use this knowledge to make correct subsequent decisions. Previous work has shown that humans can exploit relational structure for guiding choice in low dimensional physical spaces (5). Here, we observed that humans can exploit much higher dimensional abstract relational structures to make decisions. Furthermore, participants did not require any repetition of the stimuli to generalize effectively.
We believe that the basis of this behavior lies within rich and expressive sensory representations. Over such representations, simple function learning mechanisms can identify relevant stimulus features. We trained linear models over several DNN representations to test what kind of DNN representations contain this richness and can predict human behavior. All \(48\) representations we tested predicted human behavior above chance level. Previous work has shown that DNN representations can predict human similarity judgments (42), performance in psychophysical tasks (43), and visual selectivity tasks (44). This line of work has been generalized to the auditory domain (45), as well as to language (16). We extended this literature by showing that DNN representations can predict human decisions in naturalistic learning tasks. We find it surprising that these representations can reveal task-relevant, semantically meaningful stimulus features after a few observations. First, because most of these representations contain thousands of features and learning which ones are relevant over
a few observations is a challenging learning problem. Second, it was unexpected that DNNs contained information about the tasks' generative features since they were trained on objectives independent of our experimental tasks.

Figure 4: Representational similarity analyses (RSA). (_A_) Linear Centered Kernel Alignment (CKA) similarity between the task embedding and every representation tested. (_B_) Difference between the CKA of the task embedding and tested representations to the CKA of the CLIP representations and the other tested representations. Individual points indicate values for different CLIP representations. Error bars indicate \(95\%\) confidence intervals.
Another interesting finding was the predictive success of the multi-modal representations we tested. As our tasks only used images, visual representations alone should be sufficient to solve these tasks. However, models trained on text and image data combined consistently outperformed vision-only and text-only models. Furthermore, multi-modal representations predicted participant behavior even better than the generative features of the task. We believe the multi-modal representations predicted behavior better than the generative task features because of the way in which the generative features were derived. An unsupervised similarity judgment was used to generate the task features, which does not require as fine-grained consideration as our tasks did. Additionally, our RSA results showed that multi-modal representations were better aligned with visual representations than the generative task features. These results together indicate that grounding in visual information is necessary but insufficient. We conclude that aligning visual representations with language gives rise to rich representations. These representations can then be adapted to generalize in naturalistic learning tasks as humans can. The success of the multi-modal representations offers novel evidence for the importance of language in shaping cognition, which has previously been shown through other methods [46, 47, 48, 49].
An additional important takeaway from our findings is that simple learning strategies can be very effective when modeling human learning in naturalistic cognitive tasks. It has been shown in previous work that linear learning strategies can successfully predict participant behavior in learning tasks that use simple stimuli [4, 50]. We showed that the same strategies can generalize to higher dimensional settings as well, indicating that simple learning strategies can be effective in naturalistic settings too.
Our findings have implications both for cognitive psychology and for computer science. By showing that people can do learning tasks with naturalistic stimuli and that we can model these processes, our findings create the opportunity to study exploration-exploitation [51], contextual learning [52], and learning functions of different structures [53] in more naturalistic settings. In computer science, the attempt to build models aligned with humans has been increasing [34, 54, 44, 55]. Our tasks and modeling approach offer a new way to measure the human alignment of DNN representations and to use this as a metric while building human-aligned DNNs. Previous work in this domain has focused either strictly on psychophysical tasks [43, 56] or on similarity judgments between stimuli [42, 57]. We suggest that representations should not only be aligned at these levels but also translate to higher-level cognition, as measured by our tasks. Alignment at this level can pave the way for artificial systems that can generalize across semantically rich tasks, making them more robust and powerful.
Our work on understanding human representations in cognitive psychology tasks can be extended in multiple ways. First, we have only tested two paradigms and focused on learning problems. Within the learning paradigms, the same approach can be used to test whether these representations can predict human behavior when task rules are determined through non-linear functions over the embedding. Moving beyond learning paradigms, our approach can be used to test and model other cognitive functions such as memory, attention, and inhibition control among others. Second, we have focused on learning from visual observations. Future work can provide participants with text descriptions instead of images and test whether multi-modal representations are still needed to predict behavior or whether text-based representations are sufficient if there are no images. Lastly, in our modeling work, we have considered models of different architectures, different modalities of training data, and different regimes. However, we have not considered the effect of different learning rules DNNs use on the representations. This is a factor that can be investigated in future work.
Figure 5: Example trials showing the similarity between CLIP and human decisions that disagree with the task embedding. Each row shows three trials from a different condition. Orange highlighted text shows the option chosen by all CLIP models and the human participant, whereas grey text shows the decision made by the task embedding. As the tasks were generated using the task embedding, all the choices shown here made by CLIP and humans are sub-optimal. Shown examples are from the second half of the task, so as to eliminate the learning process as a confound. The original images are replaced with copyright-free alternatives from the THINGSplus database [2].
Previous work has investigated decision-making in naturalistic settings, such as purchasing decisions and ratings of goods. These studies have improved our understanding of various questions such as how to explore in the real world [58] and how to represent tabular data [59]. We extend our understanding of naturalistic decision-making, by showing that people can quickly adapt the relational knowledge they have about natural objects. This allows them to learn very quickly in naturalistic learning tasks and generalize effectively. We also show that DNNs are powerful tools for modeling the structure of these representations. This could open up the door for a whole new cognitive psychology that uses naturalistic tasks and environments and thereby increase the validity of the cognitive sciences more generally.
## Methods
### Participants
For the category learning task, we recruited \(98\) participants (\(48\) females, \(50\) males, mean age\(=28.92\)y, SD\(=7.32\)) on the Prolific platform. Participants with less than \(50\%\) accuracy were excluded from the analyses, leaving us with \(91\) participants. A base payment of €\(1.50\) was made, and participants could earn an additional bonus of €\(6.00\). The median completion time was \(12\) minutes and \(38\) seconds. The inclusion criteria included having a minimum approval rate of \(97\%\) and a minimum number of \(15\) previous submissions on Prolific. Participation in the reward learning study was an exclusion criterion. For the reward learning task, \(99\) participants were recruited (\(49\) females, \(49\) males, \(1\) other, mean age\(=27.9\) y, SD\(=9.13\)). After applying the \(50\%\) accuracy criterion, we were left with \(82\) participants. A base payment of €\(2.00\) was made, and an additional performance-dependent bonus of €\(4.00\) was offered. The median completion time was \(9\) minutes and \(26\) seconds. The inclusion criteria included having a minimum approval rate of \(95\%\) and a minimum number of \(10\) previous submissions on Prolific. All participants agreed to their anonymized data being used for research. The study was approved by the ethics committee of the medical faculty of the University of Tübingen (number 701/2020BO). Participants gave consent for their data to be anonymously analyzed by agreeing to a data protection sheet approved by the data protection officer of the MPG (Datenschutzbeauftragte der MPG, Max-Planck-Gesellschaft zur Förderung der Wissenschaften).
### Tasks and Stimuli
Both tasks were run online in forced full-screen mode. Participants were shown written instructions and were asked to complete comprehension check questions before they could start the tasks. In both tasks, participants were given unlimited time to make decisions. In the category learning task, binary (correct versus wrong) feedback was given for \(2s\). In the reward learning task, the associated reward with the stimuli was shown for \(1.5s\), and there was an inter-trial interval of \(1s\) where participants were shown a blank screen. Throughout both tasks, the estimated total payment of participants was shown on the upper part of the screen. At the end of the tasks, participants were asked whether they think their data should be used for analysis. Across both tasks, all but one participant responded that their data should be used; that participant's data was in any case excluded due to poor performance. The category learning task was programmed using jsPsych [60], whereas the reward learning task was programmed in plain JavaScript.
For each participant, \(120\) stimuli were sampled independently from the THINGS database. Because the loadings of the features were not uniformly distributed, we made \(5\) equally sized bins of the loadings for the assigned feature and sampled object categories uniformly from these bins. From these object categories, the specific images were assigned randomly. For details on the used features and the embedding, see Hebart et al. [1].
### Extracting Representations
All the visual and multi-modal models were given the task images as inputs. The representations were extracted from the penultimate layer if the models had a classification layer, and from the final layer otherwise. For the vision transformer (ViT) models, the [class] token representations were extracted. We used the THINGSVision [39] toolbox for the steps above.
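As a rough sketch of this extraction step (not the authors' exact pipeline, which used the THINGSVision toolbox; the model choice and image path below are placeholders), penultimate-layer activations can be read out by replacing a network's classification head with the identity:

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Pretrained ResNet-50 with its classification layer removed, so the forward
# pass returns the penultimate-layer (2048-dimensional) representation.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = torch.nn.Identity()
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("stimulus.jpg").convert("RGB")  # placeholder path
with torch.no_grad():
    features = model(preprocess(image).unsqueeze(0))  # shape: (1, 2048)
```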
To extract representations from language models, we provided them with the prompt A photo of a \(X\) where \(X\) was the category label of the task image. fastText was only provided with the category label instead. ada-002 and fastText provided representations as outputs, whereas from the other models, we extracted the [class] token representations. All the resulting representational matrices had different observations as rows and different features as columns.
### Modeling
To model participants' learning trajectories, we trained linear models on the different representations separately. The models were conditioned on the observations \(\mathbf{X}\) made until trial \(t-1\), and outputted estimates for the novel observation \(\mathbf{x}\) on trial \(t\). For both tasks, we report results from L2 regularised models because overall they provided the best fit for human data. See Fig. S2 for results from sparse linear models.
For the category learning task, we used the following linear model to estimate the probability that a given observation \(\mathbf{x}\) belonged to the category \(C=1\).
\[p(C=1)=h(\mathbf{x})=\frac{1}{1+e^{-\boldsymbol{\beta}^{T}\mathbf{x}}} \tag{1}\]
where \(\boldsymbol{\beta}\) was estimated by minimising the loss:
\[\mathcal{L}=-\sum_{i=1}^{t-1}\left[y_{i}\log h(\mathbf{x}_{i})+(1-y_{i})\log(1-h(\mathbf{x}_{i}))\right]+\alpha||\boldsymbol{\beta}||^{2} \tag{2}\]
The penalty term \(\alpha\) was determined with grid search to maximize the task performance of each model on a participant basis. Up until observations from both categories were made, the models were conditioned on a single uninformative pseudo-observation for each category. We regressed the probability estimates of the linear learning model onto participant choice using a mixed-effects logistic regression model in order to estimate participants' policies. The probability estimates were the only fixed and random predictors used. The negative log likelihoods were obtained through leave-one-trial-out cross-validation, where each choice in the task served once as the test set.
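A minimal sketch of this sequential fitting scheme (a simplification of the procedure above: we return chance-level estimates until both categories have been observed, rather than conditioning on pseudo-observations; `X` is the trials-by-features representation matrix and `y` the binary category labels):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def sequential_choice_probs(X, y, alpha=1.0):
    """P(C=1) for each trial from a model fit only on the preceding trials."""
    probs = np.full(len(y), 0.5)
    for t in range(1, len(y)):
        if len(np.unique(y[:t])) < 2:
            continue  # chance level until both categories have been seen
        clf = LogisticRegression(penalty="l2", C=1.0 / alpha, max_iter=1000)
        clf.fit(X[:t], y[:t])  # condition on observations up to trial t-1
        probs[t] = clf.predict_proba(X[t:t + 1])[0, 1]
    return probs
```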
For the reward learning task, we modeled data using a probabilistic linear model with centered spherical Gaussian priors over model weights, scaled by \(\lambda\). The reward estimate \(\hat{r}\) was computed as follows:
\[\hat{r}(\mathbf{x})=\left(\sigma^{-2}\left(\sigma^{-2}\mathbf{X}^{T}\mathbf{X }+\lambda\mathbf{I}\right)^{-1}\mathbf{X}^{T}\mathbf{r}\right)^{T}\mathbf{x} \tag{3}\]
where \(\lambda\) and observation noise \(\sigma\) were fitted to maximize the log marginal likelihood of the task performance. We then regressed the reward estimate differences between the left and the right options onto participant choice. This was the only fixed and random predictor in the model. For all learning models and mixed-effects models, we centered the training data and divided it by its standard deviation, and we applied the same scaling parameters to the test data. The learning models were constructed using scikit-learn (61), and the mixed-effects models were fitted using lme4 (62).
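Equation 3 is the posterior-mean prediction of Bayesian (ridge) linear regression and translates directly into NumPy; the names below are ours, with `X` and `r` holding the past representations and rewards and `x` a novel stimulus representation:

```python
import numpy as np

def reward_estimate(X, r, x, sigma=1.0, lam=1.0):
    """Posterior-mean reward prediction of Eq. 3 for a new representation x."""
    d = X.shape[1]
    A = (X.T @ X) / sigma**2 + lam * np.eye(d)  # posterior precision matrix
    w = np.linalg.solve(A, X.T @ r) / sigma**2  # posterior mean of the weights
    return w @ x
```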
### RSA
We calculated pairwise similarities between different representations. First, the representations were mean-centered. Then, the linear CKA between two representations \(\mathbf{A}\) and \(\mathbf{B}\) was calculated as follows:
\[CKA(\mathbf{A},\mathbf{B})=\frac{||\mathbf{B}^{T}\mathbf{A}||_{F}^{2}}{|| \mathbf{A}^{T}\mathbf{A}||_{F}||\mathbf{B}^{T}\mathbf{B}||_{F}} \tag{4}\]
where \(||\cdot||_{F}\) denotes the Frobenius norm.
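Equation 4 amounts to a few lines of NumPy; `A` and `B` are the stimuli-by-features representation matrices:

```python
import numpy as np

def linear_cka(A, B):
    """Linear Centered Kernel Alignment between two representation matrices."""
    A = A - A.mean(axis=0)  # mean-center each feature
    B = B - B.mean(axis=0)
    numerator = np.linalg.norm(B.T @ A, "fro") ** 2
    return numerator / (np.linalg.norm(A.T @ A, "fro")
                        * np.linalg.norm(B.T @ B, "fro"))
```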
### Data, Materials, and Software Availability
The code for the current study is available through the GitHub repository [https://github.com/candemircan/NaturalCogSci](https://github.com/candemircan/NaturalCogSci). The data are available on the OSF from the following link: [https://osf.io/h3t52/](https://osf.io/h3t52/).
|
2307.09738 | A discretization-invariant extension and analysis of some deep operator
networks | We present a generalized version of the discretization-invariant neural
operator and prove that the network is a universal approximation in the
operator sense. Moreover, by incorporating additional terms in the
architecture, we establish a connection between this discretization-invariant
neural operator network and those discussed before. The
discretization-invariance property of the operator network implies that
different input functions can be sampled using various sensor locations within
the same training and testing phases. Additionally, since the network learns a
``basis'' for the input and output function spaces, our approach enables the
evaluation of input functions on different discretizations. To evaluate the
performance of the proposed discretization-invariant neural operator, we focus
on challenging examples from multiscale partial differential equations. Our
experimental results indicate that the method achieves lower prediction errors
compared to previous networks and benefits from its discretization-invariant
property. | Zecheng Zhang, Wing Tat Leung, Hayden Schaeffer | 2023-07-19T03:33:24Z | http://arxiv.org/abs/2307.09738v1 | # A discretization-invariant extension and analysis of some deep operator networks
###### Abstract
We present a generalized version of the discretization-invariant neural operator in [43] and prove that the network is a universal approximation in the operator sense. Moreover, by incorporating additional terms in the architecture, we establish a connection between this discretization-invariant neural operator network and those discussed in [7] and [26]. The discretization-invariance property of the operator network implies that different input functions can be sampled using various sensor locations within the same training and testing phases. Additionally, since the network learns a "basis" for the input and output function spaces, our approach enables the evaluation of input functions on different discretizations. To evaluate the performance of the proposed discretization-invariant neural operator, we focus on challenging examples from multiscale partial differential equations. Our experimental results indicate that the method achieves lower prediction errors compared to previous networks and benefits from its discretization-invariant property.
## 1 Introduction
Operator learning [7, 26, 43, 21] is an approach to approximate mappings between two function spaces, and can be seen as a generalization of the standard machine learning architectures. These approaches have gained significant attention in recent years, particularly for their applicability to scientific computing applications that require approximations of solution operators for spatiotemporal dynamics [26, 43, 21]. Operator networks have been used to approximate solutions to parametric partial differential equations (PDEs) [27, 43, 16, 2]. Furthermore, operator learning has been employed for modeling control problems in dynamical systems [24, 31]. Additionally, operator learning can be used to train models with varying levels of fidelity [14, 28, 13] and for data-driven prediction [33, 5].
Operator learning was initially proposed in [7, 6] using a shallow network architecture and was shown to be a universal approximation for nonlinear continuous operators. Building upon this work, the Deep Operator Neural Network (DON) was developed in [26, 16], extending the network in [7] to deep architectures. In particular, [26] extended the two-layer operator network to networks of arbitrary depth, while [16] generalized the operator network to handle multi-input and multi-output problems. Convergence analysis of DON can be found in [19, 20]. Additionally,
[23, 25] investigated operator learning in the presence of noise and proposed accelerated training methodologies for DON. Another approach, known as the Fourier neural operator (FNO), was introduced and analyzed in [21, 18, 42, 45]. FNO uses the Fourier transform and its inverse within a kernel integral approximation to approximate an operator. A comparison between DON and FNO in terms of theory and computational accuracy is presented in [27]. Additional noteworthy operator learning frameworks include [43, 32, 34, 45].
The choice of discretizations and domains is crucial in operator learning, as both the input and output are functions. A neural operator is said to be input (output) discretization-invariant if the network can handle varying discretizations for the input (output) function. This means that the input (or output) functions can be evaluated on different grids during both training and testing phases [43, 21, 22]. This property is sometimes referred to as resolution-invariant or a non-uniform mesh approach. A discretization-invariant method is one that does not require (1) the input discretization to be fixed, (2) the output discretization to be fixed, and (3) the input and output spaces to be the same [43]. The Basis Enhanced Learning operator network (BelNet) was developed as a discretization-invariant approach. BelNet shares similarities with DON [30] as it learns a representation for the output function space through the _construction net_ (see Figure 3). However, BelNet also learns the projection of the input functions onto a set of "basis" terms obtained during training using the _projection net_. The architecture of BelNet resembles an encoder-decoder, where the projection net acts as an encoder and the subsequent layers in the network decode the reduced-order model produced by the earlier layers. Numerical experiments presented in [43] demonstrate that BelNet can approximate nonlinear operators without relying on fixed grids, although this property has not been formally proven.
In this work, we present a generalization of BelNet and provide a proof of the universal approximation theorem in the operator sense. We introduce a sub-network structure called the _nonlinear net_, which enhances the flexibility of the architecture. This nonlinear net is motivated by the universal approximation theorem proposed by [7]. Our proof strategy involves establishing connections between various discretizations through a proxy sampling that is fixed, thus allowing us to leverage existing approximation theorems. Furthermore, our proof introduces a more comprehensive approach through the encoder-decoder structure. Specifically, we demonstrate that our model obtains a reduced-order model within a subnetwork, which, while often stated in the literature, has not been shown. To distinguish our proposed network from previous work, we refer to the BelNet framework introduced in [43] as "vanilla BelNet," while our new network is referred to as "BelNet." The proposed network is related to the parallel work of [15], who developed a similar structure based on a universal approximation result for operators with varying sensors. However, the analysis in [15] relies on having access to the continuous inner product layer, which does not imply that the conclusions hold for the discrete (approximated) network.
Data plays a pivotal role in augmenting the learning process for physical systems. Notably, one can gain insights into the underlying principles governing physical phenomena through data-discovery [41, 3, 4, 37, 36, 29, 39, 35, 38, 40]. By employing data-driven approaches, these works have effectively extracted and learned valuable information about the intricate dynamics of the physical systems or governing model. This is one potential of data-driven methods for scientific enhancement and for modeling of complex physical processes. Recently, related methods for solving multiscale problems were proposed, wherein real observation data is employed to enhance a coarse-scale multiscale model [44]. The approach involves training an operator that can map the coarse-scale solution (input function) to a finer-scale solution (output function) using the finer-scale solution obtained at specific locations within the domain.
To evaluate the performance of BelNet, we examine its effectiveness in solving the viscous Burgers' equation and learning the mapping between two multiscale models. Previous work [44]
demonstrated the concept of mapping between coarse and fine-scale solutions using operator learning, showing its applicability in various examples using DON. In our work, we show that BelNet offers more flexibility in selecting observation points for the coarse-scale input functions. To introduce additional complexity to the problem, we employ random sensors to sample the input functions. Detailed results are provided in Section 4.
### Contributions
We summarize the key contributions in this work below.
1. We introduce a generalization of the vanilla BelNet [43], a neural operator that is discretization invariant, and a proof of the universal approximation property for this extended model.
2. The new BelNet extends the universal approximation results of [7, 6, 26].
3. We show a learning approach to map between two multiscale models on different grids and showcase the effectiveness of BelNet in handling challenging observation data.
The rest of the paper is organized as follows. In Section 2, we review DON, the vanilla BelNet, and the extended BelNet. We then present the universal approximation analysis in Section 3. In Section 4, we present numerical experiments.
## 2 Preliminary Results and Important Lemmata
Let \(Y\) be a Banach space and assume that \(K_{1}\subset Y\) and \(K_{2}\subset\mathbb{R}\) are both compact. Also, let \(V\subset C(K_{1})\) be compact and \(G:V\to C(K_{2})\) be a continuous and nonlinear operator. DON (see Figure 1) approximates the operator \(G\) using a deep neural network. Specifically, in [7] it was shown that for any \(\epsilon>0\), there exist positive integers \(M,N,K\), constants \(c_{i}^{k},\zeta_{k},\theta_{i}^{k},\varepsilon_{ij}^{k}\in\mathbb{R}\), points \(\omega_{k}\in\mathbb{R}^{d}\), \(y_{j}\in K_{1}\), where \(i\in[M]\), \(k\in[K]\), \(j\in[N]\) such that
\[\left|G(u)(x)-\sum_{k=1}^{K}\sum_{i=1}^{M}c_{i}^{k}\,g\left(\sum_{j=1}^{N} \varepsilon_{ij}^{k}u(y_{j})+\theta_{i}^{k}\right)\,g(\omega_{k}\cdot x+\zeta _{k})\right|<\epsilon\]
holds for all \(u\in V\) and \(x\in K_{2}\).
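For readers who prefer code, the following NumPy sketch evaluates the stacked-DON approximant appearing in the bound above. The array shapes mirror the constants \(M,N,K\); the choice of \(g=\tanh\) and the toy shapes are our illustrative assumptions, not prescribed by [7].

```python
import numpy as np

def don_eval(u_vals, x, c, eps, theta, w, zeta, g=np.tanh):
    """Evaluate the stacked-DON approximant from the bound above.

    u_vals : (N,) input function u evaluated at the fixed sensors y_j.
    x      : (d,) evaluation point in K_2.
    c      : (K, M) coefficients c_i^k;  theta : (K, M) biases theta_i^k.
    eps    : (K, M, N) weights eps_ij^k acting on the sensor values.
    w      : (K, d) trunk weights omega_k;  zeta : (K,) trunk biases.
    """
    branch = g(np.einsum("kmn,n->km", eps, u_vals) + theta)  # (K, M)
    trunk = g(w @ x + zeta)                                  # (K,)
    # sum over i and k of c_i^k * branch_i^k * trunk_k
    return np.sum(c * branch * trunk[:, None])
```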
Figure 1: Stacked version of DON. \(\bigotimes\) denotes the inner product in \(\mathbb{R}^{K}\).
The sensors are denoted by \(y_{i}\in K_{1}\) and the input function \(u\) is evaluated on the sensors. That is, \(y=[y_{1},...,y_{N}]^{\intercal}\) is the collection of points that represent the discrete grid for the input functions. Theorem 5 in [7] states that the sensors for all input functions \(u\) must be the same (i.e. the input discretization must be fixed). This constraint imposes limitations on the applicability of the DON framework in the regime where one does not have control over the input functions or the sensor locations. Ideally, it would be desirable for the operator learning approach to allow different input functions \(u\) to have varying or non-uniform discretizations, i.e. to be a discretization-invariant method.
The vanilla BelNet, as displayed in Figure 2, learns the basis functions for both the input and output function spaces. The authors provide an explanation of the network structure by examining a special case (linear) and validating the discretization-invariant property through various numerical experiments. Mathematically, let us introduce weights and biases, \(q^{k}\in\mathbb{R}^{d}\), \(W_{y}^{1,k}\in\mathbb{R}^{N_{1}\times N}\), \(W_{y}^{2,k}\in\mathbb{R}^{N\times N_{1}}\), \(b_{x}^{k}\in\mathbb{R}\), and \(b_{y}^{k}\in\mathbb{R}^{N_{1}}\), where \(k=1,...,K\), and activation functions \(a_{x}\), \(a_{y}\) and \(a_{u}\), then the vanilla BelNet, denoted by \(N_{\theta}\), approximates the operator \(G\) as follows,
\[G(u)(x)\approx N_{\theta}(u(y),y)(x)=\sum_{k=1}^{K}a_{x}\left((q^{k})^{ \intercal}x+b_{x}^{k}\right)\,a_{u}\left(\hat{u}^{\intercal}W_{y}^{2,k}\left( a_{y}(W_{y}^{1,k}y+b_{y}^{k})\right)\right), \tag{1}\]
for \(x\in K_{2}\subset\mathbb{R}\), \(u\in V\), and where \(y=[y_{1},...,y_{N}]^{\intercal}\in K_{1}^{N}\) and \(\hat{u}=[u(y_{1}),...,u(y_{N})]^{\intercal}\). The network structure is also displayed in Figure 2. We do not assume that the sensors \(y_{i}\in K_{1}\) are the same for all input functions (i.e. they are not fixed).
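A minimal NumPy sketch of the forward pass in Equation (1) is given below. The shapes follow the weights introduced above; the use of \(\tanh\) for the activations \(a_{x},a_{y},a_{u}\) is our illustrative assumption.

```python
import numpy as np

def vanilla_belnet(u_hat, y, x, Q, bx, W1, W2, by,
                   a_x=np.tanh, a_y=np.tanh, a_u=np.tanh):
    """Forward pass of Equation (1), vanilla BelNet.

    u_hat : (N,) values u(y_1), ..., u(y_N);  y : (N,) sensor locations.
    x     : (d,) query point;  Q : (K, d) with rows q^k;  bx : (K,).
    W1    : (K, N1, N);  W2 : (K, N, N1);  by : (K, N1).
    """
    # projection nets: W_y^{2,k} a_y(W_y^{1,k} y + b_y^k), one per k
    proj = np.einsum("knm,km->kn",
                     W2, a_y(np.einsum("kmn,n->km", W1, y) + by))  # (K, N)
    coeff = a_u(proj @ u_hat)      # scalar projection of u per "basis" k
    return np.sum(a_x(Q @ x + bx) * coeff)
```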
The motivation behind the vanilla BelNet stems from Mercer's theorem, which involves approximating a linear operator through a kernel integral formulation [43]. However, in order to introduce nonlinearity, the activation function \(a_{u}\) is incorporated. For flexibility and expressiveness, the authors in [43] included an extra trainable layer prior to the activation function, denoted by the term \(a_{u}(WPu)\). Here, \(W\) represents a trainable matrix of the appropriate dimension.
In this work, we generalize the vanilla BelNet and prove the universal approximation theorem. Instead of applying an activation function, we design a network to enforce the nonlinearity, see
Figure 2: Plot of the vanilla BelNet structure. Projection nets are \(K\) independent fully connected neural networks with weights and biases \(W_{y}^{2,k}\in\mathbb{R}^{N\times N_{1}}\), \(W_{y}^{1,k}\in\mathbb{R}^{N_{1}\times N}\) and \(b_{y}^{k}\in\mathbb{R}^{N_{1}}\). The construction net is a fully connected neural network with weights and biases \(Q\in\mathbb{R}^{K\times d}\) and \(b_{x}\in\mathbb{R}^{K}\). Here \(Q=[q^{1},q^{2},...,q^{K}]^{\intercal}\), where the \(q^{i}\in\mathbb{R}^{d}\) are defined in Equation (1). In addition, \(a_{x},a_{y},a_{u}\) are activation functions.
Figure 3. The additional subnetwork is theoretically consistent with an operator approximation and is partly motivated through the analysis detailed in Section 3.
## 3 Main Results
In this section, we prove the universal approximation theorem of BelNet in the sense of operators.
**Definition 3.1**.: If a function \(g:\mathbb{R}\to\mathbb{R}\) (continuous or discontinuous) satisfies that all linear combinations \(\sum_{i=1}^{N}c_{i}g(\lambda_{i}x+\theta_{i})\) are dense in \(C[a,b]\), where \(c_{i},\lambda_{i},\theta_{i}\in\mathbb{R}\), then \(g\) is called a _Tauber-Wiener (TW) function_.
Theorem 3 from [7] proves the universal approximation for functions. Unlike the universal approximation theorems from [11, 1, 17], the approximation coefficients \(c_{k}(f)\) are functionals which depend on the input function \(f\).
**Lemma 3.2** (Theorem 3 from [7]).: _Suppose \(H\subset\mathbb{R}^{d}\) is compact, \(V\subset C(H)\) is also compact, and \(g\in TW\). Then for any \(\epsilon>0\), there exist an integer \(K>0\), points \(w_{k}\in\mathbb{R}^{d}\), \(b_{k}\in\mathbb{R}\), and continuous linear functionals \(c_{k}\) on \(V\), all independent of \(f\), such that_
\[\left|f(y)-\sum_{k=1}^{K}c_{k}(f)g(w_{k}\cdot y+b_{k})\right|<\epsilon,\]
_for all \(y\in H\) and \(f\in V\)._
The following two topological lemmata are used to construct the input function approximation \(u_{k}\) detailed in Equation (3).
**Lemma 3.3** (Lemma 5 from [7]).: _Let \(Y\) be a Banach space and \(H\subset Y\), then \(H\) is compact if and only if the following two statements are true:_
1. \(H\) _is closed._
2. _For any_ \(\eta>0\)_, there is an_ \(\eta\)_-net_ \(N(\eta)=\{y_{1},...,y_{m(\eta)}\}\)_, i.e., for any_ \(y\in H\)_, there is_ \(y_{k}\in N(\eta)\) _such that_ \(\|y-y_{k}\|<\eta\)_._
We recall some properties of compact subsets of continuous functions.
**Lemma 3.4** (Lemma 6 from [7]).: _If \(V\subset C(H)\) is a compact set in \(C(H)\), then it is uniformly bounded and equicontinuous, i.e.,_
1. _There is a constant_ \(A>0\) _such that_ \(\|u\|_{C(H)}\leq A\) _for all_ \(u\in V\)_._
2. _For all_ \(\epsilon>0\)_, there exists_ \(\delta>0\) _such that_ \(|u(y^{\prime})-u(y^{\prime\prime})|<\epsilon\) _for all_ \(u\in V\) _provided_ \(\|y^{\prime}-y^{\prime\prime}\|<\delta\)_._
**Remark 1**.: _Let \(f\) be a continuous functional on \(V\). Pick a sequence \(\epsilon_{1}>\epsilon_{2}>...>\epsilon_{n}\to 0\), there exists another sequence \(\delta_{1}>\delta_{2}>...>\delta_{n}>0\), such that, \(|f(u)-f(v)|<\epsilon_{k}\), for all \(|u-v|<\delta_{k}\). By Lemma 3.4, there exists a sequence \(\eta_{1}>\eta_{2}>...>\eta_{n}\to 0\) such that \(|u(y^{\prime})-u(y^{\prime\prime})|<\delta_{k}\) for all \(\|y^{\prime}-y^{\prime\prime}\|<\eta_{k}\) and \(u\in V\)._
We can find a sequence \(\{z_{i}\}_{i=1}^{\infty}\subset H\) and a sequence \(m(\eta_{1})<m(\eta_{2})<...<m(\eta_{n})\) such that the first \(m(\eta_{k})\) elements \(N(\eta_{k})=\{z_{1},...,z_{m(\eta_{k})}\}\) form an \(\eta_{k}\)-net of \(H\). For each \(\eta_{k}\)-net and \(z_{j}\in N(\eta_{k})\), define a function,
\[T_{k,j}^{*}(y)=\begin{cases}1-\frac{\|y-z_{j}\|_{H}}{\eta_{k}},&\|y-z_{j}\|_{H}\leq\eta_{k},\\ 0,&\text{otherwise},\end{cases}\]
where \(y\in H\). Next, we define,
\[T_{k,j}(y)=\frac{T_{k,j}^{*}(y)}{\sum_{j=1}^{m(\eta_{k})}T_{k,j}^{*}(y)}, \tag{2}\]
and a matrix \(T^{k}\in\mathbb{R}^{m(\eta_{k})\times m(\eta_{k})}\), where the \((i,j)^{\text{th}}\)-entry of \(T^{k}\) is \(T_{k,i}(z_{j})\). For any \(u\in V\), we define a function,
\[u_{k}(y)=\sum_{j=1}^{m(\eta_{k})}u(z_{j})T_{k,j}(y), \tag{3}\]
and set \(\hat{u}_{z}^{k}=[u(z_{1}),...,u(z_{m(\eta_{k})})]^{\intercal}\). Furthermore, we define \(V_{k}=\{u_{k}:u\in V\}\) and \(\tilde{V}=V\cup\left(\bigcup_{k=1}^{\infty}V_{k}\right)\). The next lemma establishes the approximation of \(u\) by \(u_{k}\).
**Lemma 3.5** (Lemma 7 from [7]).: _For any \(u\in V\subset C(K_{1})\) and \(\delta_{k}>0\), there exists an \(\eta_{k}\)-net \(N(\eta_{k})\subset K_{1}\), and \(u_{k}\) defined as in equation (3), such that,_
\[\|u-u_{k}\|_{C(K_{1})}<\delta_{k}.\]
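The construction in Equations (2)-(3) is easy to check numerically. The short NumPy sketch below builds the normalized hat functions on a uniform 1D \(\eta_{k}\)-net and verifies that the sup-norm error of \(u_{k}\) shrinks as the net is refined; the choices \(H=[0,1]\) and \(u(y)=\sin(2\pi y)\) are illustrative assumptions.

```python
import numpy as np

def pou_interpolant(u, z, eta, y):
    """Evaluate u_k(y) from Equation (3) on a 1D eta-net z."""
    T_star = np.clip(1.0 - np.abs(y[:, None] - z[None, :]) / eta, 0.0, None)
    T = T_star / T_star.sum(axis=1, keepdims=True)   # normalization (2)
    return T @ u(z)                                  # interpolant (3)

u = lambda y: np.sin(2.0 * np.pi * y)
y = np.linspace(0.0, 1.0, 1001)
for m in (5, 20, 80):                    # refine the eta-net
    z = np.linspace(0.0, 1.0, m)
    eta = 1.0 / (m - 1)
    err = np.max(np.abs(u(y) - pou_interpolant(u, z, eta, y)))
    print(m, err)    # sup-norm error decreases, as in Lemma 3.5
```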
The next lemma establishes the universal approximation of continuous functionals using a two-layer network.
**Lemma 3.6** (Theorem 4 from [7]).: _Suppose that \(g\in TW\), \(Y\) is a Banach space, \(K_{1}\subset Y\) is compact, and \(V\subset C(K_{1})\) is also compact. Let \(f\) be a continuous functional on \(V\). For any \(\epsilon>0\), there exist integers \(I,C>0\), weights and biases \(W\in\mathbb{R}^{I\times C}\), \(c\in\mathbb{R}^{I}\), \(b\in\mathbb{R}^{I}\), and \(\hat{z}=[z_{1},...,z_{C}]\) with \(z_{i}\in K_{1}\) such that,_
\[|f(u)-c^{\intercal}g(W\hat{u}_{z}+b)|<\epsilon,\]
_for all \(u\in V\), where \(\hat{u}_{z}=[u(z_{1}),...,u(z_{C})]^{\intercal}\)._
**Remark 2**.: _The proof appears in Theorem 4 from [7], and we discuss an important remark regarding the theorem. The sensors \(\{z_{i}\}_{i=1}^{C}\) are the evaluation points for the input function space \(V\). They form an \(\eta_{k}\)-net of \(K_{1}\), i.e., \(\{z_{i}\}_{i=1}^{C}=N(\eta_{k})=\{z_{1},...,z_{m(\eta_{k})}\}\), where we denote \(C=m(\eta_{k})\). The sensors \(\{z_{i}\}_{i=1}^{C}\), the constant \(C\), and the \(\eta_{k}\)-net are determined as follows. For any \(\epsilon>0\), choose \(m(\eta_{k})\) large enough such that \(\|u-u_{k}\|_{C(K_{1})}<\delta_{k}\) implies \(|\tilde{f}(u)-\tilde{f}(u_{k})|<\epsilon/2\). Here \(\tilde{f}\) is the extension of \(f\) to \(\tilde{V}\) given by the Tietze Extension Theorem, i.e.,_
\[f(w)=\tilde{f}(w),\forall w\in V.\]
_One can then define \(\eta_{k}\) and the \(\eta_{k}\)-net (the sensors \(\{z_{i}\}_{i=1}^{C}\)) as in Remark 1. We will use the sensors \(\{z_{i}\}_{i=1}^{C}\) from [7] to establish the universal approximation theorem for BelNet._
To prove the universal approximation theorem of BelNet, the key is to show there is a neural network that can map the function values at arbitrary sensors to \(\hat{u}_{z}\). In Lemma 3.7, we show a strategy for selecting a set of appropriate sensors and then prove the existence of the neural network.
**Lemma 3.7**.: _Let \(\hat{z}\) and \(C\) be defined from Lemma 3.6. For any \(\epsilon_{u}>0\), there exist integers \(N,I>0\), a subset \(K_{y}\subset K_{1}^{N}\), and neural networks \(\mathcal{N}^{i}:K_{y}\rightarrow\mathbb{R}^{C}\),_
\[\mathcal{N}^{i}(\hat{y})=W_{2}^{i}a(W_{1}^{i}\hat{y}+b_{1}^{i}), \quad i\in[N]\]
_and \(\mathcal{N}:K_{y}\rightarrow\mathbb{R}^{C\times N}\) defined as \(\mathcal{N}(\hat{y})=[\mathcal{N}^{1}(\hat{y}),...,\mathcal{N}^{N}(\hat{y})]\), such that_
\[\|\hat{u}_{z}-\mathcal{N}(\hat{y})u(\hat{y})\|_{F}<\epsilon_{u},\]
_where \(W_{1}^{i}\in\mathbb{R}^{I\times N}\), \(W_{2}^{i}\in\mathbb{R}^{C\times I}\), \(u(\hat{y})=[u(y_{1}),...,u(y_{N})]^{\intercal}\), and the bound holds for any \(\hat{y}=[y_{1},...,y_{N}]^{\intercal}\in K_{y}\)._
Proof.: For any \(\delta>0\), by Lemma 3.5, there is a sufficiently large integer \(C_{\delta}\) such that \(\|u-u_{k}\|_{C(K_{1})}<\delta\). Here \(u_{k}(y)=\sum\limits_{j=1}^{m(\eta_{k})}u(r_{j})T_{k,j}(y)\) is defined as in (3) and \(m(\eta_{k})=C_{\delta}\). Moreover, we denote \(\hat{r}=[r_{1},...,r_{C_{\delta}}]^{\intercal}\).
For any \(N>0\) and \(\hat{y}=[y_{1},\ldots,y_{N}]^{\intercal}\in K_{1}^{N}\) we can define two continuous operators \(T_{y}:K_{1}^{N}\rightarrow\mathbb{R}^{N\times C_{\delta}}\) and \(T_{z}:K_{1}^{C}\rightarrow\mathbb{R}^{C\times C_{\delta}}\) as
\[T_{y}(\hat{y})=\begin{pmatrix}T_{k,1}(y_{1})&...&T_{k,C_{\delta} }(y_{1})\\...&...&...\\ T_{k,1}(y_{N})&...&T_{k,C_{\delta}}(y_{N})\end{pmatrix},\quad T_{z}(\hat{z})= \begin{pmatrix}T_{k,1}(z_{1})&...&T_{k,C_{\delta}}(z_{1})\\...&...&...\\ T_{k,1}(z_{C})&...&T_{k,C_{\delta}}(z_{C})\end{pmatrix},\]
where \(T_{k,j}\) is defined in Equation (2). For any fixed \(\epsilon_{u}\), \(\delta\), and \(N\), we want to construct a subset \(K_{y}\subset K_{1}^{N}\), a continuous \(v:K_{y}\rightarrow\mathbb{R}^{C\times N}\), such that,
\[v(\hat{y})T_{y}(\hat{y})=T_{z}(\hat{z}), \tag{4}\]
Let us define \(M(\hat{y})=T_{y}^{\intercal}(\hat{y})T_{y}(\hat{y})\) and set
\[v(\hat{y})=T_{z}(\hat{z})M^{-1}(\hat{y})T_{y}^{\intercal}(\hat{y})\]
for any \(\hat{y}\in K_{y}\). We then define a subset \(K_{y}\subset K_{1}^{N}\) as,
\[K_{y}=\left\{\hat{y}\in K_{1}^{N},M(\hat{y})\text{ is invertible and }\|v(\hat{y})\|\leq\frac{\epsilon_{u}}{2\sqrt{C\delta^{2}}}-1\right\}, \tag{5}\]
where \(\|\cdot\|\) is the matrix operator norm. We remark that, for fixed \(\epsilon_{u}\) and \(C\), the set \(K_{y}\) is nonempty when \(\delta>0\) is sufficiently small and \(N\) is sufficiently large (see Remark 3).
Denote \(C_{v}=\sup_{u\in V}\|u\|_{C(K_{1})}\). It follows from the universal approximation theorem for functions [11] that, for any \(\frac{\epsilon_{u}}{2\sqrt{NC_{v}^{2}}}>0\), there exist neural networks \(\mathcal{N}^{i}\) of the form \(W_{2}^{i}a(W_{1}^{i}\hat{y}+b_{1}^{i})\) and \(\mathcal{N}(\hat{y})=[\mathcal{N}^{1}(\hat{y}),...,\mathcal{N}^{N}(\hat{y})]\) such that,
\[\|v(\hat{y})-\mathcal{N}(\hat{y})\|_{C(K_{y})}<\frac{\epsilon_{u}}{2\sqrt{NC_{v}^{2}}}. \tag{6}\]
For \(\hat{y}=[y_{1},...,y_{N}]^{\intercal}\in K_{y}\) and \(u_{k}(y)=\sum\limits_{j=1}^{C_{\delta}}u(r_{j})T_{k,j}(y)\), applying \(v(\hat{y})\) to \(u_{k}(\hat{y})\), it follows that,
\[v(\hat{y})u_{k}(\hat{y})=\sum\limits_{j=1}^{C_{\delta}}u(r_{j})v(\hat{y})T_{k,j}(\hat{y})=\sum\limits_{j=1}^{C_{\delta}}u(r_{j})T_{k,j}(\hat{z})=u_{k}(\hat{z}). \tag{7}\]
By equation (7) and the Cauchy-Schwarz inequality, we have the bound:
\[\|\mathcal{N}(\hat{y})u(\hat{y})-u(\hat{z})\|_{F} =\left\|\big{(}\mathcal{N}(\hat{y})-v(\hat{y})+v(\hat{y})\big{)}u (\hat{y})-u(\hat{z})\right\|_{F}\] \[=\left\|\big{(}\mathcal{N}(\hat{y})-v(\hat{y})\big{)}u(\hat{y})+ v(\hat{y})\big{(}u(\hat{y})-u_{k}(\hat{y})+u_{k}(\hat{y})\big{)}-u(\hat{z}) \right\|_{F}\] \[\leq\left\|\big{(}\mathcal{N}(\hat{y})-v(\hat{y})\big{)}u(\hat{y} )\right\|_{F}+\left\|v(\hat{y})\big{(}u(\hat{y})-u_{k}(\hat{y})\big{)}\right\| _{F}+\left\|u_{k}(\hat{z})-u(\hat{z})\right\|_{F}\] \[\leq\|\mathcal{N}(\hat{y})-v(\hat{y})\|\|u(\hat{y})\|_{F}+(\|v( \hat{y})\|+1)\sqrt{C\delta^{2}}.\]
Utilizing (5) and (6), the estimation follows.
**Remark 3**.: _We present one example to show \(K_{y}\) is non-empty. Let \(\hat{y}=[\hat{r},\hat{r},...,\hat{r}]\), where we repeat \(\hat{r}\) a total of \(n\) times, and define \(T_{y}(\hat{y})\) by,_
\[T_{y}(\hat{y})=\begin{pmatrix}T_{r}\\...\\ T_{r}\end{pmatrix},\text{ where }T_{r}=T_{y}(\hat{r}).\]
_We have \(M=T_{y}^{\intercal}T_{y}=nT_{r}^{\intercal}T_{r}\). Thus if \(T_{r}\) has full column rank \(C_{\delta}\), then the matrix \(M\) is invertible and it follows that_
\[M^{-1}T_{y}^{\intercal}=\frac{1}{n}[(T_{r}^{\intercal}T_{r})^{-1}T_{r}^{ \intercal},...,(T_{r}^{\intercal}T_{r})^{-1}T_{r}^{\intercal}].\]
_We estimate the operator norm of \(M^{-1}T_{y}^{\intercal}\) by studying its largest singular value \(\sigma_{1}\). We have \(M^{-1}T_{y}^{\intercal}(M^{-1}T_{y}^{\intercal})^{\intercal}=\frac{1}{n}(T_{r}^{\intercal}T_{r})^{-1}\), which implies that \(\|M^{-1}T_{y}^{\intercal}\|=\sigma_{1}\leq\sqrt{\frac{1}{n}\|(T_{r}^{\intercal}T_{r})^{-1}\|}\). By letting \(n\) be large enough, \(\|v\|\leq\|T_{z}\|\|M^{-1}T_{y}^{\intercal}\|\) can be made sufficiently small, and thus \(K_{y}\) is non-empty._
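The stacking construction of Remark 3 can be verified numerically. In the NumPy sketch below, a random matrix with full column rank stands in for \(T_{y}(\hat{r})\); the sizes are illustrative assumptions, and the printed operator norm of \(M^{-1}T_{y}^{\intercal}\) decays like \(1/\sqrt{n}\), as the remark predicts.

```python
import numpy as np

rng = np.random.default_rng(0)
C_delta, rows = 6, 10
T_r = rng.normal(size=(rows, C_delta))     # assumed full column rank

for n in (1, 4, 16, 64):
    T_y = np.vstack([T_r] * n)             # repeat r-hat n times
    M = T_y.T @ T_y                        # equals n * T_r^T T_r
    pinv = np.linalg.solve(M, T_y.T)       # M^{-1} T_y^T
    print(n, np.linalg.norm(pinv, ord=2))  # decays like 1 / sqrt(n)
```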
**Remark 4**.: _Since \(M(y)=T_{y}^{\intercal}(y)T_{y}(y)\), we have \(\mathrm{rank}(M)=\mathrm{rank}(T_{y})\leq\min(C_{\delta},N)\). As \(M\in\mathbb{R}^{C_{\delta}\times C_{\delta}}\), \(M\) is singular if \(N<C_{\delta}\)._
**Remark 5**.: \(\mathcal{N}\) _is the projection net in Figure 3. \(\mathcal{N}(\hat{y})u(\hat{y})\) gives the projection coefficients of \(u\) onto a set of implicitly learned "basis" functions._
**Theorem 3.8** (**Universal Approximation Theorem for BelNet**).: _Suppose that \(a\in TW\), \(Y\) is a Banach space, and \(K_{1}\subset Y\), \(K_{2}\subset\mathbb{R}\) are all compact. \(V\subset C(K_{1})\) is compact and \(G:V\to C(K_{2})\) is continuous and nonlinear. For any \(\epsilon>0\), there exist integers \(N,C,K,I\), weights and biases \(W_{x}^{k}\in\mathbb{R}^{d}\), \(b_{x}^{k}\in\mathbb{R}\), \(W^{k}\in\mathbb{R}^{I\times C}\), \(b^{k}\in\mathbb{R}^{I}\), \(c^{k}\in\mathbb{R}^{I}\), a subset of sensors \(K_{y}\subset K_{1}^{N}\), and a trainable network \(\mathcal{N}:K_{y}\rightarrow\mathbb{R}^{C\times N}\) specified in Lemma 3.7, where \(K_{y}\) satisfies Equation (5). Then the following inequality holds_
\[\left|G\left(u\right)(x)-\sum_{k=1}^{K}a(W_{x}^{k}\cdot x+b_{x}^{k})(c^{k})^{ \intercal}a\big{(}W^{k}\mathcal{N}(\hat{y})u(\hat{y})+b^{k}\big{)}\right|<\epsilon,\]
_for all \(x\in K_{2}\), \(\hat{y}=[y_{1},y_{2},...,y_{N}]^{\intercal}\in K_{y}\), and \(u\in V\)._
Proof.: Since \(G\) is continuous and \(V\subset C(K_{1})\) is compact, the range \(G(V)\) is also compact in \(C(K_{2})\). By Lemma 3.2, for any \(\epsilon>0\), there exist a positive integer \(K\), linear continuous functionals \(L_{k}\), and \(W_{x}^{k}\in\mathbb{R}^{d}\), \(b_{x}^{k}\in\mathbb{R}\) such that
\[\left|G\left(u\right)(x)-\sum_{k=1}^{K}L_{k}\big{(}G(u)\big{)}a(W_{x}^{k} \cdot x+b_{x}^{k})\right|<\frac{\epsilon}{3},\]
for all \(x\in K_{2}\) and \(u\in V\). By Lemma 3.6, for all \(k\), there exist integers \(C,I\), \(\hat{z}=[z_{1},...,z_{C}]^{\intercal}\) with \(z_{i}\in K_{1}\), \(c^{k}\in\mathbb{R}^{I}\), \(W^{k}\in\mathbb{R}^{I\times C}\), \(b^{k}\in\mathbb{R}^{I}\) such that
\[\left|L_{k}\left(G(u)\right)-(c^{k})^{\intercal}a(W^{k}\hat{u}_{z}+b^{k})\right|<\frac{\epsilon}{3C_{u}K},\]
where \(\hat{u}_{z}=[u(z_{1}),...,u(z_{C})]^{\intercal}\) and \(C_{u}=\max\limits_{k,x\in K_{2}}|a(W_{x}^{k}\cdot x+b_{x}^{k})|\). Therefore, we obtain an approximation to \(G(u)(x)\) as in [7] defined by
\[\mathcal{G}\left(u(\hat{z})\right)(x)=\sum_{k=1}^{K}a(W_{x}^{k}\cdot x+b_{x}^ {k})(c^{k})^{\intercal}a(W^{k}u(\hat{z})+b^{k}) \tag{8}\]
with \(\left|\mathcal{G}\left(u(\hat{z})\right)(x)-G(u)(x)\right|<\frac{2\epsilon}{3}\). Since \(a\) is continuous and, by Lemma 3.4, the vectors \(\hat{u}\) range over a compact subset of \(\mathbb{R}^{C}\), we can define uniformly continuous functions \(a_{k}\) on that subset:
\[a_{k}(\hat{u})=a(W^{k}\hat{u}+b^{k}).\]
Thus, there is an \(\epsilon_{u}>0\), such that
\[\left|a_{k}(\hat{u}^{\prime})-a_{k}(\hat{u}^{\prime\prime})\right|<\frac{ \epsilon}{3KLC_{u}} \tag{9}\]
for all \(\|\hat{u}^{\prime}-\hat{u}^{\prime\prime}\|_{F}<\epsilon_{u}\), where \(L=\max\limits_{k}\|c^{k}\|_{\ell^{1}}\).
By Lemma 3.7, there exist \(N\), \(\mathcal{N}\), and \(K_{y}\subset K_{1}^{N}\) such that
\[\|u(\hat{z})-\mathcal{N}(\hat{y})u(\hat{y})\|_{F}<\epsilon_{u}\quad\text{ for any }\hat{y}\in K_{y}. \tag{10}\]
Letting \(\hat{y}=[y_{1},...,y_{N}]^{\intercal}\in K_{y}\), the difference is bounded by
\[\left|G\left(u\right)(x)-\sum_{k=1}^{K}a(W_{x}^{k}\cdot x+b_{x}^{k} )(c^{k})^{\intercal}a\big{(}W^{k}\mathcal{N}(\hat{y})u(\hat{y})+b^{k}\big{)}\right|\] \[\leq\underbrace{\left|G\left(u\right)(x)-\mathcal{G}(u(\hat{z})) (x)\right|}_{\mathcal{E}_{1}}\] \[+\underbrace{\left|\mathcal{G}(u(\hat{z}))(x)-\sum_{k=1}^{K}a(W _{x}^{k}\cdot x+b_{x}^{k})(c^{k})^{\intercal}a\big{(}W^{k}\mathcal{N}(\hat{y} )u(\hat{y})+b^{k}\big{)}\right|}_{\mathcal{E}_{2}}.\]
By Equations (8), (9), and (10), the second term \(\mathcal{E}_{2}\) is controlled by:
\[\mathcal{E}_{2}=\left|\sum_{k=1}^{K}a(W_{x}^{k}\cdot x+b_{x}^{k})(c^{k})^{ \intercal}\left(a_{k}(u(\hat{z}))-a_{k}(\mathcal{N}(\hat{y})u(\hat{y}))\right) \right|<\frac{\epsilon}{3}, \tag{11}\]
and since \(\mathcal{E}_{1}<2\epsilon/3\), the total approximation bound follows.
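To summarize the architecture certified by Theorem 3.8 in code, the NumPy sketch below evaluates the extended BelNet: the projection nets \(\mathcal{N}^{i}\) form \(\mathcal{N}(\hat{y})\), the reduced-order coefficients \(\mathcal{N}(\hat{y})u(\hat{y})\) are fed to the nonlinear net, and the construction net supplies the \(x\)-dependent factors. The shapes and the \(\tanh\) activation are illustrative assumptions.

```python
import numpy as np

def belnet(u_hat, y_hat, x, Wx, bx, W, b, c, W1, W2, b1, a=np.tanh):
    """Forward pass of the extended BelNet in Theorem 3.8.

    u_hat : (N,) values u(y_i);  y_hat : (N,) sensors, free to vary.
    Wx : (K, d), bx : (K,)                         construction net.
    W1 : (N, I, N), W2 : (N, C, I), b1 : (N, I)    projection nets N^i.
    W  : (K, J, C), b : (K, J), c : (K, J)         nonlinear net.
    """
    hidden = a(np.einsum("pin,n->pi", W1, y_hat) + b1)   # (N, I)
    N_mat = np.einsum("pci,pi->pc", W2, hidden).T        # (C, N) = N(y_hat)
    r = N_mat @ u_hat             # reduced-order coefficients, (C,)
    branch = a(np.einsum("kjc,c->kj", W, r) + b)         # (K, J)
    return np.sum(a(Wx @ x + bx) * np.einsum("kj,kj->k", c, branch))
```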
## 4 Numerical Experiments
We apply our approach to a nonlinear scalar PDE and multiscale PDE problems. Specifically, we first test the proposed BelNet extension on the viscous Burgers' equation. Then we show that BelNet can be used to address some of the difficulties associated with learning multiscale operators. The code and examples will be available when the work is published.
### Parametric Viscous Burgers' Equation
Consider the viscous Burgers' equation with periodic boundary conditions:
\[\frac{\partial u_{s}}{\partial t}+\frac{1}{2}\frac{\partial(u_{s} ^{2})}{\partial x}=\alpha\frac{\partial^{2}u_{s}}{\partial x^{2}},\,\,\,x\in[ 0,2\pi],\,t\in[0,0.3]\] \[u_{s}(x,0)=u_{s}^{0}(x),\] \[u_{s}(0,t)=u_{s}(2\pi,t),\]
where \(u_{s}^{0}(x)\) is the initial condition that depends on the parameter \(s\) and the viscosity is set to \(\alpha=0.1\). We consider the operator that maps from the initial condition to the terminal solution at \(t=0.3\).
**Training Data:** In order to obtain more variability between initial samples for the training phase and to include different levels of steepness in the derivative of the initial data, we generate the initial conditions as follows. We first compute a short-time solution (\(t=0.1\)) to Burgers' equation using the periodic boundary conditions, set the viscosity to zero, and use the initial condition \(s\sin(x)\) where \(s\in[0,4]\). The solution of the system at \(t=0.1\) is then used as the initial condition \(u_{s}^{0}\) (resetting time to zero); see the yellow and blue curves in Figure 4.
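A sketch of this initialization procedure is given below: \(s\sin(x)\) is evolved under the inviscid Burgers' equation to \(t=0.1\) with a first-order Lax-Friedrichs scheme on a periodic grid. The grid size, CFL number, and the scheme itself are our illustrative choices; the authors' solver is not specified here.

```python
import numpy as np

def burgers_ic(s, nx=256, t_end=0.1, cfl=0.4):
    """Evolve s*sin(x) under inviscid Burgers to t_end; return (x, u)."""
    x = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)
    u = s * np.sin(x)
    dx, t = x[1] - x[0], 0.0
    while t < t_end:
        dt = min(cfl * dx / (np.abs(u).max() + 1e-12), t_end - t)
        f = 0.5 * u ** 2
        # Lax-Friedrichs numerical fluxes with periodic wrap-around
        fp = 0.5 * (f + np.roll(f, -1)) - 0.5 * dx / dt * (np.roll(u, -1) - u)
        u = u - dt / dx * (fp - np.roll(fp, 1))
        t += dt
    return x, u

x, u0 = burgers_ic(s=3.0)   # steepened profile used as u_s^0
```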
The mesh for the input data is as follows. Each initial condition (input function) has 25 sensors, and we used a total of 200 initial conditions for training. For each initial condition, the true system is evolved up to time \(t=0.3\), and a total of 5 time-stamps are collected (the terminal time is not included). Therefore the space-time mesh contains 25-by-5 total sample locations for each initial condition.
**Testing and observations:** We train 100 independent models on the same training dataset, test each on the same test set of 500 samples, and compute the average relative error over the 100 predictions. To test the neural operators' ability to forecast future states, we do not include the solution at the terminal time \(t=0.3\) in the training dataset. For testing, we use solutions from 500 initial conditions and test each neural operator on the solution at the terminal time with a finer mesh of 151 grid points. We present the relative errors in Table 1. We compare BelNet with the vanilla BelNet, which was previously shown to be more accurate than comparable models [43]. With fewer trainable parameters, as listed in Table 1, BelNet obtains a smaller prediction error than the vanilla BelNet.
### Multiscale Operator Learning
We test BelNet's performance on the multiscale operator learning problem. In particular, we apply BelNet to improve a coarse-scale (low-accuracy) solution from a multiscale PDE solver. This problem was introduced in [44] with the DON framework. Let \(u_{0}\) denote a low-accuracy coarse-scale solution of a given PDE; the target is to construct an operator \(G\) such that \(G(u_{0})(\cdot)\) is a fine-scale solution of the PDE.
To learn the operator, we assume that some observed fine-scale solution data is available. If we denote an approximation to the input function \(u_{0}\) as \(\hat{u}_{0}\) and \(u(x_{i})\) as the fine-scale observed solution at \(x_{i}\), the dataset for training can then be denoted as \(\left\{x_{i},\hat{u}_{0},u(x_{i})\right\}_{i=1}^{N_{p}}\). We can then
| Model | Relative Error | Parameter Count |
| --- | --- | --- |
| vanilla BelNet | 1.42% | 102.93K |
| BelNet | 1.32% | 96.5K |

Table 1: Relative errors and trainable parameter counts for the viscous Burgers' equation. The top row is the vanilla BelNet, while the second row is BelNet. We perform 100 independent experiments and present the average relative errors.
Figure 4: Plots of two solutions to the viscous Burgers’ equation with our initialization procedure. Note that each example’s sampling points (i.e. the sensors represented by the black dots) for the initial condition differ. The yellow curves are used to generate the initial conditions for the model problem (viscous Burgers’ equation). The initial conditions for the viscous Burgers’ equation are displayed in blue.
construct the loss function as,
\[\sum_{i=1}^{N_{p}}\|u(x_{i})-G_{\theta}(\hat{u}_{0})(x_{i})\|^{2}, \tag{12}\]
where \(G_{\theta}\) denotes the neural network with trainable parameter \(\theta\).
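A minimal PyTorch-style sketch of minimizing (12) is shown below. Here `G_theta` is assumed to be any operator network mapping a coarse-scale sample and a query point to a prediction; the data layout, function names, and Adam settings are our illustrative assumptions.

```python
import torch

def train_operator(G_theta, data, epochs=500, lr=1e-3):
    """Minimize the empirical loss (12) over the trainable parameters.

    data: list of (x_i, u0_i, u_fine_i) torch tensors, one triple per
    observation point; the names and optimizer settings are ours.
    """
    opt = torch.optim.Adam(G_theta.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = sum((u_fine - G_theta(u0, x)) ** 2
                   for x, u0, u_fine in data)
        loss.backward()
        opt.step()
    return G_theta
```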
Since DON is not discretization-invariant, i.e., the input function must be discretized in the same way for every sample, \(\hat{u}_{0}\) is sampled identically for all training samples. This limits the achievable accuracy, as seen in numerical tests. To address this, the authors of [44] used a localized patch discretization of \(u_{0}\), which was shown to improve the performance of the DON approximation. However, this is theoretically inconsistent with the DON framework.
Since BelNet is discretization-invariant, a local patch discretization of \(u_{0}\) is theoretically consistent. Let us denote \(P_{i}\) as a patch (neighborhood) around \(x_{i}\), which is the observed solution coordinate. For example, a three-point patch for a \(1d\) problem is \(P_{i}=\{x_{i}-h_{i}^{1},x_{i},x_{i}+g_{i}^{1}\}\), where \(h_{i}^{1}\) and \(g_{i}^{1}\) are real numbers. The patch is used to discretize the input function \(u_{0}\), i.e., \(\hat{u}_{i}=u_{0}|_{P_{i}}=[u_{0}(x_{i}-h_{i}^{1}),u_{0}(x_{i}),u_{0}(x_{i}+g_ {i}^{1})]^{\intercal}\) is the local discretization of \(u_{0}\). To make the problem more challenging, we assume \(h_{i}\) and \(g_{i}\) are different for all \(i\). We present a 2D display of a patch and the candidate sensors' positions in Figure 5, and a code sketch of the randomized 1D patch sampling below.
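The following NumPy sketch generates such a randomized 1D patch; the jitter range and patch spacing are illustrative assumptions chosen to mimic the randomization in Figure 5.

```python
import numpy as np

def random_patch_1d(x_i, n_side, h, rng):
    """Sample sensor locations in a randomized patch around x_i.

    Returns x_i plus jittered offsets, so that the offsets h_i^j, g_i^j
    differ across samples and no two inputs share a discretization.
    """
    k = n_side // 2
    offsets = np.arange(-k, k + 1) * h            # nominal patch grid
    jitter = rng.uniform(-0.4 * h, 0.4 * h, size=offsets.shape)
    jitter[k] = 0.0                               # keep the center at x_i
    return x_i + offsets + jitter

rng = np.random.default_rng(0)
sensors = random_patch_1d(x_i=0.5, n_side=5, h=0.02, rng=rng)
```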
#### 4.2.1 One dimensional elliptic equation
We first study a 1D example for which we can obtain an exact homogenized solution \(u_{0}\). In particular, let us consider the following equation:
\[-\frac{d}{dx}\left(\kappa(x/\epsilon)\frac{du}{dx}\right)=f,\quad x \in[0,1],\] \[u(0)=u(1)=0,\]
where \(\kappa(x/\epsilon)=0.5\sin(2\pi\frac{x}{\epsilon})+0.8\) and \(f(x)=0.5\). We plot the multiscale permeability \(\kappa\) and the reference solution in Figure 6.
Figure 5: Plot of a \(5\times 5\) patch (red dots) centered at an observation point (black dot). To make the problem more challenging, we randomize the sensor positions. Specifically, we randomly place a sensor in a neighborhood centered at each red dot. Blue crosses are all candidate locations to place sensors for the yellow dot; we uniformly pick one blue 'x' to place one sensor.
The relative error of the homogenized solution is \(0.07\%\); we use exact solutions \(u(x_{i})\) as the observations, where the \(x_{i}\) are uniformly distributed and \(N_{p}=16\) points are used in total. We use the oversampling trick, which employs a patch of coarse-scale solutions to capture the input function (see Figure 5), and the result is presented in Figure 7.
**Settings, Observations, and Comments:** We use \(N_{p}=16\) exact solutions uniformly distributed in the domain to improve \(u_{0}\). It is important to note that our focus does not involve studying the decay of the error in relation to the number of observation points; for a comprehensive investigation, please refer to the work of [44].
As the size of the patch expands, there is increased flexibility in terms of sensor placement for sampling the input function. Specifically, we consider the placement of 1, 3, 5, 7, and 9 sensors, respectively. The locations of the sensors are randomized for each local patch, denoted as \(h_{i}^{j}\) and \(g_{i}^{j}\), and vary across different samples of \(u_{0}(x_{i})\). Refer to Figure 5 for a display of this randomization process. With an increasing number of sensors, the possible discretizations of each sample \(u_{0}\) become more numerous, resulting in a more challenging training scenario when dealing with larger patches. However, the results presented in Figure 7 demonstrate that BelNet
Figure 6: 1D elliptic. Left: permeability \(\kappa\). Right: reference solution.
Figure 7: Relative errors with respect to different patch sizes. For each patch size, we perform 100 independent experiments and present the average relative error. The input functions (low-accuracy solutions) do not share a discretization, and the discretizations of the same input function differ across the 100 independent experiments. The larger the patch size, the more flexibility in sampling the input function, and hence the more challenging the training. We do not observe an increase of the relative error with respect to the patch size, which also implies that BelNet is discretization-invariant for this example.
exhibits discretization invariance: the accuracy is not degraded and in fact improves with patch size.
#### 4.2.2 2D elliptic equation with one fast variable
We consider the following 2D elliptic equation:
\[-\nabla\cdot(\kappa(x/\epsilon)\nabla u)=f,x\in\Omega=[0,1]^{2}, \tag{13}\] \[u(x)=0,x\in\partial\Omega, \tag{14}\]
where \(\kappa(x/\epsilon)=2+\sin(2\pi x/\epsilon)\cos(2\pi y/\epsilon)\) and \(\epsilon=\frac{1}{8}\). We display \(\kappa\) in Figure 8.
We measure the input function (the low-accuracy solution) by sampling it in a neighborhood around the observation sensor. In order to obtain the low-accuracy solution, we employ mesh-dependent solvers that define all solutions on grid points. As the neighborhood around the grid point sensor expands, there is an increase in the degrees-of-freedom available for sampling the input function. It is important to highlight that the utilization of different discretizations for the input functions poses difficulties during training. Therefore, we require a discretization-invariant tool such as BelNet to address this issue.
**Settings and comments on the results:** We increase the patch size, perform 100 independent experiments for each patch size and compute the average relative error. In each experiment, the input function value \(u_{0}\), representing the low-accuracy solution, was measured by sampling random points within the patch (neighborhood) surrounding the observation point \(x_{i}\).
The results are displayed in Figure 9. When the patch size is 1, only a single point is used to sample \(u_{0}\), rendering it insufficient for an accurate approximation. As the patch size increases, training becomes more challenging due to the increased freedom in sensor placement for sampling \(u_{0}\). However, the approximation error decreases in trend, indicating that BelNet can effectively handle problems with varying input function meshes.
Figure 8: 2D elliptic with 1 fast variable. Left: permeability \(\kappa\). Right: reference solution.
#### 4.2.3 2D elliptic multiscale PDE
We consider the same equations (13)-(14) but with a different permeability \(\kappa\):
\[\kappa(x,y)=1+\frac{\sin(2\pi\frac{x}{\epsilon_{0}})\cos(2\pi\frac{y}{\epsilon_{ 1}})}{2+\cos(2\pi\frac{x}{\epsilon_{2}})\sin(2\pi\frac{y}{\epsilon_{3}})}+ \frac{\sin(2\pi\frac{x}{\epsilon_{4}})\cos(2\pi\frac{y}{\epsilon_{5}})}{2+\cos (2\pi\frac{x}{\epsilon_{6}})\sin(2\pi\frac{y}{\epsilon_{7}})},\]
where \(\epsilon_{0}=\frac{1}{5}\), \(\epsilon_{1}=\frac{1}{4}\), \(\epsilon_{2}=\frac{1}{25}\), \(\epsilon_{3}=\frac{1}{16}\), \(\epsilon_{4}=\frac{1}{16}\), \(\epsilon_{5}=\frac{1}{32}\), \(\epsilon_{6}=\frac{1}{3}\), and \(\epsilon_{7}=\frac{1}{9}\). We plot the permeability and the solution in Figure 10.
We obtain the coarse-scale solution by the multiscale finite element method with one local basis [12, 9, 10, 8]. We conduct six sets of experiments with patch sizes \(1\times 1\), \(3\times 3\), \(5\times 5\), \(7\times 7\), \(9\times 9\), and \(11\times 11\). We train 100 models for each set of experiments and compute the average relative errors of the last 100 epochs of the 100 models. The results are shown in Figure 11.
Figure 10: 2D elliptic multiscale PDE. Left: permeability \(\kappa\). Right: reference solution.
Figure 9: Relative errors with respect to different patch sizes. For each patch size, we conducted 100 independent experiments and calculated the average relative error. It is important to note that the input functions (representing low-accuracy solutions) used in the experiments do not share the same discretization. Furthermore, the discretization of each input function varies across the 100 independent experiments.
**Settings, Observations, and Comments:** We used a set of 16 fine-scale observations, denoted as \(u(x_{i})\), which are uniformly distributed, to improve the initial function \(u_{0}\). When the patch size is 1, all input functions are sampled as \(u_{0}(x_{i})\). However, this sampling strategy does not yield a satisfactory approximation to \(u_{0}\), resulting in predictions slightly less accurate than the coarse-scale solution. As the patch size increases from \(1\times 1\) to \(11\times 11\), we incorporate a larger number of sensors. Specifically, the number of sensors used is 1, 9, 25, 49, 81, and 121, respectively. Despite the numerical challenge of using more non-overlapping sensors, the prediction accuracy does not deteriorate significantly. The error increases slightly between the \(5\times 5\) and \(9\times 9\) patches, which may indicate a saturation of the error within this patch-size window (the error remains around 3.7%). The relative error decreases again as the patch size increases further.
## 5 Conclusion
We generalize the vanilla BelNet architecture proposed in [43] by adding a trainable nonlinear layer to the network. We prove the universal approximation theorem of BelNet in the sense of operators, extending the results of [6, 7, 26]. In particular, we show that BelNet can be viewed as a discretization-invariant extension of the operator networks in [7], which allows for several new applications, particularly to multiscale PDEs. The discretization-invariance property allows the input functions to be observed at different sensor locations, which is often the case in applications where the sensor locations move in time or where fluctuations in data acquisition occur. For multiscale problems, the randomization of patch location and neighboring points necessitates the use of discretization-invariant learning. We test the performance on high-contrast and multiscale parametric PDEs. Our experiments show that BelNet typically obtains about a 1.2\(\times\) to 2\(\times\) improvement in relative error over the coarse-scale solution without needing to fully resolve the multiscale problem. Lastly, it is worth noting that part of the theoretical analysis shows that the network obtains a (trained) reduced-order model and projection. This is a useful result for assessing the contributions of individual subnetworks within the full operator learning architecture, i.e. peering into the black-box of deep networks for PDEs.
Figure 11: Relative errors with respect to different patch sizes. For each patch size, we perform 100 independent experiments and present the average relative error. It should be noted that the input functions (representing the low accuracy solution) used in the experiments do not share the same discretization. Additionally, the discretization of each input function varies among the 100 independent experiments. Even as the patch size expands, BelNet maintains stability.
## Acknowledgement
Z. Zhang was supported in part by AFOSR MURI FA9550-21-1-0084. H. Schaeffer was supported in part by AFOSR MURI FA9550-21-1-0084 and an NSF CAREER Award DMS-2331100.
|
2307.11177 | Development of a CsI Calorimeter for the Compton-Pair (ComPair)
Balloon-Borne Gamma-Ray Telescope | There is a growing interest in astrophysics to fill in the observational
gamma-ray MeV gap. We, therefore, developed a CsI:Tl calorimeter prototype as a
subsystem to a balloon-based Compton and Pair-production telescope known as
ComPair. ComPair is a technology demonstrator for a gamma-ray telescope in the
MeV range that is comprised of 4 subsystems: the double-sided silicon detector,
virtual Frisch grid CdZnTe, CsI calorimeter, and a plastic-based
anti-coincidence detector. The prototype CsI calorimeter is composed of thirty
CsI logs, each with a geometry of $1.67 \times 1.67 \times 10 \ \mathrm{cm^3}$.
The logs are arranged in a hodoscopic fashion with 6 in a row that alternate
directions in each layer. Each log has a resolution of around $8 \%$
full-width-at-half-maximum (FWHM) at $662 \ \mathrm{keV}$ with a dynamic energy
range of around $250\ \mathrm{keV}-30 \ \mathrm{MeV}$. A $2\times2$ array of
SensL J-series SiPMs read out each end of the log to estimate the depth of
interaction and energy deposition with signals read out with an IDEAS ROSSPAD.
We also utilize an Arduino to synchronize with the other ComPair subsystems
that comprise the full telescope. This work presents the development and
performance of the calorimeter, its testing in thermal and vacuum conditions,
and results from irradiation by $2-25 \ \mathrm{MeV}$ monoenergetic gamma-ray
beams. The CsI calorimeter will fly onboard ComPair as a balloon experiment in
the summer of 2023. | Daniel Shy, Richard S. Woolf, Clio C. Sleator, Eric A. Wulf, Mary Johnson-Rambert, Emily Kong, J. Mitch Davis, Thomas J. Caligiure, J. Eric Grove, Bernard F. Phlips | 2023-07-20T18:35:20Z | http://arxiv.org/abs/2307.11177v2 | # Development of a CsI Calorimeter for the Compton-Pair (ComPair) Balloon-Borne Gamma-Ray Telescope
###### Abstract
There is a growing interest in astrophysics to fill in the observational gamma-ray MeV gap. We, therefore, developed a CsI:Tl calorimeter prototype as a subsystem to a balloon-based Compton and Pair-production telescope known as ComPair. ComPair is a technology demonstrator for a gamma-ray telescope in the MeV range that is comprised of 4 subsystems: the double-sided silicon detector, virtual Frisch grid CdZnTe, CsI calorimeter, and a plastic-based anti-coincidence detector. The prototype CsI calorimeter is composed of thirty CsI logs, each with a geometry of \(1.67\times 1.67\times 10~{}\mathrm{cm}^{3}\). The logs are arranged in a hodoscopic fashion with 6 in a row that alternate directions in each layer. Each log has a resolution of around \(8\%\) full-width-at-half-maximum (FWHM) at \(662~{}\mathrm{keV}\) with a dynamic energy range of around \(250~{}\mathrm{keV}-30~{}\mathrm{MeV}\). A \(2\times 2\) array of SensL J-series SiPMs read out each end of the log to estimate the depth of interaction and energy deposition with signals read out with an IDEAS ROSSPAD. We also utilize an Arduino to synchronize with the other ComPair subsystems that comprise the full telescope. This work presents the development and performance of the calorimeter, its testing in thermal and vacuum conditions, and results from irradiation by \(2-25~{}\mathrm{MeV}\) monoenergetic gamma-ray beams. The CsI calorimeter will fly onboard ComPair as a balloon experiment in the summer of 2023.
## I Introduction
There are several space-based gamma-ray telescopes in the concept and development phase to address the MeV gap in astronomical observations. These efforts include the Compton Spectrometer and Imager (COSI) [1], Galactic Explorer with a Coded aperture mask Compton telescope (GECCO) [2], e-ASTROGAM [3], Advanced Particle-astrophysics Telescope (APT) [4], SMILE-2+ [5], and the Allsky Medium Energy Gamma-ray Observatory (AMEGO) [6], which is now adapted to AMEGO-X [7]. This work presents the development of a thallium-doped cesium iodide (CsI:Tl) based calorimeter in support of the ComPair balloon-based telescope [8], which serves as a prototype to AMEGO and will be flown in the summer of 2023. The AMEGO concept consists of four subsystems: the double-sided Silicon strip detectors (DSSD) [9], virtual Frisch-grid CdZnTe calorimeter [10], CsI calorimeter, and a plastic-based anti-coincidence detector (ACD) [8]. The main concept behind this 'hybrid' telescope design is to maintain high detection sensitivity and good imaging performance across a large energy range that includes the Compton and pair-creation regimes. The tracker layer has the dual objective of measuring the energy of a gamma-ray Compton scatter as well as tracking electromagnetic showers during a pair-creation event. Next, the CZT, with its high resolution in the region below 10 MeV, could capture the scattered photon. Higher energy events may pass through the CZT to be detected by the CsI calorimeter. The concept therefore combines Compton and pair imaging, spanning the COSI/COMPTEL region and that of the _Fermi_ Large Area Telescope.
Fig. 1 presents a computer aided design (CAD) model of the ComPair balloon instrument with the different subsystems labeled. The CsI calorimeter box is at the bottom of the stack, colored in orange. The ComPair CsI calorimeter inherits its design and concept from the calorimeter on board the _Fermi_ Large Area Telescope (LAT) [11, 12, 13]. The major difference between the LAT calorimeter and that of ComPair is the usage of SiPMs rather than PIN diodes. The diodes utilized in the LAT calorimeter did not meet the low-energy threshold required for ComPair, so SiPMs were explored instead.
This manuscript presents an expansion and finalization of
Fig. 1: Cutaway view of the CAD showing ComPair’s subsystems and their dimensions. Figure is adapted from [8].
2303.03084 | On Regression in Extreme Regions | The statistical learning problem consists in building a predictive function
$\hat{f}$ based on independent copies of $(X,Y)$ so that $Y$ is approximated by
$\hat{f}(X)$ with minimum (squared) error. Motivated by various applications,
special attention is paid here to the case of extreme (i.e. very large)
observations $X$. Because of their rarity, the contributions of such
observations to the (empirical) error is negligible, and the predictive
performance of empirical risk minimizers can be consequently very poor in
extreme regions. In this paper, we develop a general framework for regression
on extremes. Under appropriate regular variation assumptions regarding the pair
$(X,Y)$, we show that an asymptotic notion of risk can be tailored to summarize
appropriately predictive performance in extreme regions. It is also proved that
minimization of an empirical and nonasymptotic version of this 'extreme risk',
based on a fraction of the largest observations solely, yields good
generalization capacity. In addition, numerical results providing strong
empirical evidence of the relevance of the approach proposed are displayed. | Nathan Huet, Stephan Clémençon, Anne Sabourin | 2023-03-06T12:55:38Z | http://arxiv.org/abs/2303.03084v2 | # On Regression in Extreme Regions
###### Abstract
In the classic regression problem, the value of a real-valued random variable \(Y\) is to be predicted based on the observation of a random vector \(X\), taking its values in \(\mathbb{R}^{d}\) with \(d\geq 1\) say. The statistical learning problem consists in building a predictive function \(\hat{f}:\mathbb{R}^{d}\rightarrow\mathbb{R}\) based on independent copies of the pair \((X,Y)\) so that \(Y\) is approximated by \(\hat{f}(X)\) with minimum error in the mean-squared sense. Motivated by various applications, ranging from environmental sciences to finance or insurance, special attention is paid here to the case of extreme (_i.e._ very large) observations \(X\). Because of their rarity, they contribute in a negligible manner to the (empirical) error and the predictive performance of empirical quadratic risk minimizers can be consequently very poor in extreme regions. In this paper, we develop a general framework for regression in the extremes. It is assumed that \(X\)'s conditional distribution given \(Y\) belongs to a non parametric class of heavy-tailed probability distributions. It is then shown that an asymptotic notion of risk can be tailored to summarize appropriately predictive performance in extreme regions of the input space. It is also proved that minimization of an empirical and non asymptotic version of this 'extreme risk', based on a fraction of the largest observations solely, yields regression functions with good generalization capacity. In addition, numerical results providing strong empirical evidence of the relevance of the approach proposed are displayed.
**Keywords:** Statistical Learning Theory, Extreme Value Theory, Nonparametric Regression, Conditional Regular Variation
## 1 Introduction
Regression is a predictive problem of crucial importance in statistical learning, covering a wide variety of applications. In the standard setup, \((X,Y)\) is a pair of random variables defined on the same probability space \((\Omega,\ \mathcal{A},\ \mathbb{P})\) with distribution \(P\), where \(Y\) is a square integrable real-valued r.v. (the output) and \(X\) is a random vector with marginal distribution \(\rho\) taking its values in some measurable space \(\mathcal{X}\) modelling some input information hopefully useful to predict \(Y\). The predictive learning problem consists in building, from a training dataset \(\mathcal{D}_{n}=\{(X_{1},Y_{1}),\ \ldots,\ (X_{n},Y_{n})\}\) composed of \(n\geq 1\) independent copies of \((X,Y)\), a mapping \(f:\mathcal{X}\rightarrow\mathbb{R}\) in order to compute a 'good' prediction \(f(X)\) for \(Y\), with the quadratic risk
\[R_{P}(f)=\mathbb{E}\left[(Y-f(X))^{2}\right] \tag{1}\]
as close as possible to that of \(f^{*}(X)=\mathbb{E}[Y\mid X]\), which obviously minimizes (1) over the space \(L_{2}(\rho)\) of square integrable functions of \(X\):
\[R_{P}^{*}:=R_{P}(f^{*})=\min_{f\in L_{2}(\rho)}R_{P}(f).\]
A natural strategy consists in solving the Empirical Risk Minimization problem (ERM in abbreviated form) \(\min_{f\in\mathcal{F}}R_{\hat{P}_{n}}(f)\), where \(\mathcal{F}\subset L_{2}(\rho)\) is a closed and convex class of functions sufficiently rich to include a reasonable approximant of \(f^{*}\) and \(\hat{P}_{n}\) is a statistical version of the unknown distribution \(P\), typically the raw empirical distribution \((1/n)\sum_{i\leq n}\delta_{(X_{i},Y_{i})}\), denoting by \(\delta_{a}\) the Dirac mass at any point \(a\). The performance of predictive functions \(\hat{f}_{n}\) obtained this way, by _least-square regression_, has been extensively investigated in the statistical learning literature Gyorfi et al. (2002), Massart (2007). Under the assumption that the tails of the random pairs \((f(X),Y)\) are subgaussian
and appropriate complexity conditions are satisfied by the class \(\mathcal{F}\), confidence upper bounds for the excess of quadratic risk \(R_{P}(\hat{f}_{n})-R_{P}^{*}=\mathbb{E}[(Y-\hat{f}_{n}(X))^{2}\mid\mathcal{D} _{n}]-R_{P}^{*}\) have been established in Lecue and Mendelson (2016) by means of concentration inequalities for empirical processes Boucheron et al. (2013). We now place ourselves in the situation where \(\mathcal{X}=[0,\ +\infty)^{d}\). Observations are considered as extreme when their norm exceeds some (asymptotically) large threshold \(t_{n}>0\). The threshold \(t_{n}\) depends on the number \(n\geq 1\) of observations, since 'large' should be naturally understood as large w.r.t. the vast majority of the data observed. Hence, extreme observations are rare by nature and severely underrepresented in the training dataset \(\mathcal{D}_{n}\) with overwhelming probability. Consequently, the impact of prediction errors in extreme regions of the input space on the global regression error of \(\hat{f}_{n}\) is generally negligible. Indeed, the law of total probability yields
\[R_{P}(f)=\mathbb{P}[\|X\|>t_{n}]\mathbb{E}\left[(Y-f(X))^{2}\mid\|X\|>t_{n} \right]+\mathbb{P}[\|X\|\leq t_{n}]\mathbb{E}\left[(Y-f(X))^{2}\mid\|X\|\leq t _{n}\right]. \tag{2}\]
Because the order of magnitude of \(\mathbb{P}[\|X\|>t_{n}]\) (as well as that of its empirical version with high probability) is extremely small, there is no guarantee that the standard ERM strategy produces a predictive function \(\hat{f}\) that is nearly optimal in the extreme region \(\{x:\ \|x\|>t_{n}\}\), _i.e._ is such that the conditional error \(\mathbb{E}[(Y-\hat{f}(X))^{2}\mid\|X\|>t_{n}]\) is nearly minimum. However, accurate prediction in extreme regions turns out to be crucial in certain practical (safety) applications, in environmental sciences, dietary risk analysis or finance/insurance for instance. It is thus the purpose of the subsequent analysis to investigate the problem of building a prediction function that asymptotically minimizes the first term of the decomposition (2), and thus the quantity referred to as the _quadratic conditional risk_ and given by
\[R_{t}(f):=\mathbb{E}\left[(Y-f(X))^{2}\mid\|X\|>t\right]=R_{P_{t}}(f), \tag{3}\]
denoting by \(P_{t}\) the conditional distribution of \((X,Y)\) given \(\|X\|>t\). In order to develop a framework showing that empirical quadratic conditional risk minimization leads to predictive rules with good generalization capacities in extreme regions, (nonparametric) assumptions related to the tail behavior of the conditional distribution of \(X\) given \(Y\) are required. Multivariate regular variation hypotheses are very flexible in the sense that they correspond to a large nonparametric class of (heavy-tailed) distributions. They are frequently used in applications where the impact of extreme observations should be enhanced, or at the very least not neglected. This hypothesis has been used to formulate unsupervised learning problems in the extremes: anomaly detection in Thomas et al. (2017) and dimensionality reduction in Goix et al. (2017). It is also used in Jalalzai et al. (2018) in order to develop a framework for binary classification in extreme regions: precisely, it is assumed therein that the two class distributions belong to the maximal domain of attraction of multivariate extreme value distributions with the same tail/shape index. Here we propose to work under a nonparametric assumption related to the joint distribution of \((X,Y)\), ensuring in particular that \(X\)'s conditional distribution given \(Y\) belongs to the maximal domain of attraction of a multivariate extreme value distribution with probability one and guaranteeing that the conditional quadratic risk (3) may converge to a risk functional referred to as _asymptotic quadratic conditional risk_ and describing asymptotic predictive performance in the extremes. Under mild assumptions, we prove that a predictive rule using the angular information only, _i.e._ of the form \(f(X)=f_{\Theta}(X/\|X\|)\), where \(f_{\Theta}\) is a real-valued function defined on the intersection of the hypersphere and the positive orthant \(\mathbb{S}=\{x\in[0,+\infty)^{d}:\ \|x\|=1\}\), just like the minimizer of the asymptotic conditional quadratic risk, learned by minimizing an empirical version of (12) based on a fraction \(k/n\) of the training dataset \(\mathcal{D}_{n}\) (the observations corresponding to the largest \(\|X\|\)'s), is nearly optimal w.r.t. the asymptotic conditional quadratic risk. Precisely, nonasymptotic bounds for its excess of asymptotic conditional quadratic risk are established. Beyond these theoretical guarantees, the performance of empirical risk minimization in the extremes is supported by various numerical experiments that have been carried out.
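In code, the learning strategy analyzed in this paper amounts to keeping the \(k\) observations with largest norm and regressing on angles only. The NumPy sketch below makes this explicit; the plug-in least-squares learner at the end is an illustrative assumption, not the only admissible choice.

```python
import numpy as np

def fit_extreme_angular(X, Y, k, fit):
    """Empirical risk minimization restricted to the k largest points.

    X : (n, d) nonnegative inputs;  Y : (n,) outputs;
    fit : any regression routine mapping (angles, targets) to a predictor.
    """
    norms = np.linalg.norm(X, axis=1)
    idx = np.argsort(norms)[-k:]           # the k most extreme samples
    angles = X[idx] / norms[idx, None]     # Theta(X) = X / ||X||
    return fit(angles, Y[idx])             # f(X) = f_Theta(X / ||X||)

# e.g. a linear angular predictor via least squares (illustrative)
lstsq = lambda A, y: np.linalg.lstsq(A, y, rcond=None)[0]
```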
The paper is organized as follows. In Section 2, key notions pertaining to multivariate extreme value theory (MEVT) are briefly recalled for clarity's sake and the probability framework we consider for regression in extreme regions is described at length. The approach we propose for regression in the extremes is detailed in Section 3, together with the dedicated validity framework we develop. Illustrative experimental results are displayed in Section 4, while some concluding remarks are collected in Section 5. Due to space limitations, certain technical details are deferred to the Supplementary Material.
## 2 Background and Preliminaries
We start with recalling some concepts in heavy-tail analysis, involved in the formulation of the statistical problem given next. Here and throughout, \((X,Y)\) is a pair of random variables defined on a probability space \((\Omega,\ \mathcal{A},\ \mathbb{P})\) with distribution \(P\), where \(Y\) is real-valued with marginal distribution \(G\) and \(X=(X^{(1)},\ \dots,\ X^{(d)})\) takes its values in \([0,\ +\infty)^{d}\), \(d\geq 1\), with marginal distribution \(\rho\). We denote by \(F_{j}\) the (univariate) cumulative distribution function (cdf) of the component \(X^{(j)}\), \(j=1,\ \dots,\ d\), by \(\mathds{1}\{\mathcal{E}\}\) is meant the indicator function of any event \(\mathcal{E}\), the integer part of any \(u\in\mathbb{R}\) is denoted by \([u]\) and \(\|\cdot\|\) refers to a norm on \(\mathbb{R}^{d}\). Let \(E=[0,+\infty]^{d}\setminus\{(0,\ \dots,\ 0)\}\) be the punctured positive orthant in \(\mathbb{R}^{d}\) and denote by \(\mathcal{B}(E)\) its Borel \(\sigma\)-algebra. The boundary and the closure of any subset \(B\) of the topological
space \(\mathbb{R}^{d}\) are respectively denoted by \(\partial B\) and \(\bar{B}\) and we set \(tB=\{tx:\ x\in B\}\) for all \(t\in\mathbb{R}\). We also consider the set \(\mathbb{S}=\{x\in[0,+\infty)^{d}:\ \|x\|=1\}\) and denote by \(\mathcal{B}(\mathbb{S})\) its Borel \(\sigma\)-algebra.
### Heavy Tails - Multivariate Regular Variation
The goal of heavy-tail analysis is to study phenomena that are not ruled by averaging effects but determined by extreme values. To investigate the behavior of a r.v. \(X\) far from the center of its mass, a usual assumption is that \(X\)'s distribution is _multivariate regularly varying_ with tail index \(\alpha>0\), _i.e._ there exist a non-zero Borel measure \(\mu\) on \(E\), finite on all Borel measurable subsets of \(E\) bounded away from zero, and a _regularly varying_1 function \(b(t)\) with index \(\alpha\) such that
Footnote 1: Recall that a positive Borel measurable function \(b:\mathbb{R}\to\mathbb{R}_{*}\) is said to be regularly varying with index \(\zeta\in\mathbb{R}\) iff \(b(tu)/b(u)\to t^{\zeta}\) as \(u\to+\infty\) for all \(t>0\).
\[b(t)\mathbb{P}\left\{t^{-1}X\in B\right\}\to\mu(B)\text{ as }t\to+\infty, \tag{4}\]
for any Borel measurable set \(B\subset E\) bounded away from zero and s.t. \(\mu(\partial B)=0\). The limit measure \(\mu\) is referred to as the _exponent measure_ of the r.v. \(X\) and is provably homogeneous of degree \(-\alpha\): \(\mu(tB)=t^{-\alpha}\mu(B)\) for all \(t>0\) and Borel set \(B\subset E\) bounded away from the origin. One may refer to Resnick [2013] for alternative formulations/characterizations of the regular variation property and its application to MEVT. It follows from the homogeneity property that the pushforward measure of \(\mu\) by the polar coordinates transformation \(x\in E\mapsto(\|x\|,\Theta(x))\), where \(\Theta(x):=x/\|x\|\) for all \(x\in E\), is the tensor product given by:
\[\mu\left(\{x\in E:\ \|x\|\geq r,\ \Theta(x)\in A\}\right)=r^{-\alpha}\Phi(A),\]
for all \(A\in\mathcal{B}(\mathbb{S})\) and \(r\geq 1\), where \(\Phi\) is a finite positive measure on \(\mathbb{S}\), referred to as the _angular measure_ of the heavy-tailed r.v. \(X\). The regular variation assumption on the r.v. \(X\) implies that the conditional distribution of \((\|X\|/t,\ \Theta(X))\) given \(\|X\|>t\) converges as \(t\to+\infty\): for all \(r\geq 1\) and \(A\in\mathcal{B}(\mathbb{S})\) with \(\Phi(\partial A)=0\),
\[\mathbb{P}\left\{t^{-1}\|X\|\geq r,\ \Theta(X)\in A\ \big{|}\ \|X\|>t\right\}\underset{t\to+\infty}{\longrightarrow}c\,r^{-\alpha}\Phi(A), \tag{5}\]
where \(c=\Phi(\mathbb{S})^{-1}=\mu(E\setminus\mathbb{B})^{-1}\) with \(\mathbb{B}:=\{x\in E,\|x\|\leq 1\}\). Hence, the radial and angular components of the r.v. \(X\) are asymptotically independent, with a standard Pareto distribution of parameter \(\alpha\) and the normalized angular measure \(c\Phi\) as respective asymptotic marginal distributions. The angular measure \(\Phi\) describes exhaustively the dependence structure of \(X\)'s components \(X^{(j)}\) in the extremes, _i.e._ the directions \(\Theta(X)\) in which extremes occur with largest probability.
Heavy-tailed models have been the subject of much attention in the machine-learning literature. Among many other works, the regular variation assumption is used in Ohannessian and Dahleh [2012] for rare event probability estimation, in Bubeck et al. [2013], Carpentier and Valko [2014] or Achab et al. [2017] in the context of stochastic bandit problems, in Goix et al. [2015] for the statistical recovery of the dependence structure in the extremes, in Goix et al. [2017] for dimensionality reduction in extreme regions and in Brownlees et al. [2015] for predictive problems with heavy-tailed losses.
Here and throughout, for simplicity, we assume that \(\alpha=1\) rather than performing a standardization of \(X\)'s components to unit-Pareto margins, _i.e._ replacing \(X^{(j)}\) by \(1/(1-F_{j}(X^{(j)}))\) for \(j\in\{1,\ \ldots,\ d\}\), see Remark 1 below.
**Remark 1**: (On marginal standardization) _We point out that, by means of the marginal standardization described above, the case of non-standard regular variation, e.g. allowing for different tail indices \(\alpha_{j}\neq 1\) for the different components \(X^{(j)}\), can also be reduced to the framework considered here. However, the marginal transformations are unknown in practice, just like the \(F_{j}\)'s and must be replaced with their empirical counterparts, see Section 3. As will be discussed therein, this leads to technical difficulties inherent to the dependence of the resulting pseudo-standardized observations Clemencon et al. [2021]._
In order to formulate the regression problem in the extremes, a specific, though general/nonparametric, hypothesis related to the distribution of the pair \((X,Y)\) is required, extending the usual multivariate regular variation property. It is detailed and discussed in the following subsection.
### Regression in the Extremes - The Framework
We now describe rigorously the framework we consider for regression in extreme regions. For simplicity, we suppose that \(Y\) is bounded throughout this paper. This assumption can be naturally relaxed at the price of additional technicalities.
**Assumption 1**: _The r.v. \(Y\) is bounded: there exists \(M<+\infty\) s.t. \(|Y|\leq M\) almost-surely._
The hypothesis below, related to the asymptotic behavior of \((X,Y)\)'s conditional joint distribution (suitably renormalized) given \(\|X\|>t\) as \(t\to+\infty\), can be viewed as an extension of (4).
**Assumption 2**: _There exist a transition kernel from \([-M,M]\) to \(E\) given by \((y,B)\in[-M,M]\times\mathcal{B}(E)\mapsto\mu_{y}(B)\), with \(\mu_{y}\) a non-null Radon measure on \(\mathcal{B}(E)\) for every \(y\in[-M,M]\), and a regularly varying function \(b\) with index \(1\) such that_
\[\lim_{t\to+\infty}b(t)\mathbb{P}\left\{t^{-1}X\in B\mid Y=y\right\}=\mu_{y}(B), \tag{6}\]
_for all \(y\in[-M,M]\) and \(B\in\mathcal{B}(E)\) bounded away from zero and s.t. \(\mu_{y}(\partial B)=0\)._
_In addition, \(\sup_{t>1}b(t)\mathbb{P}\left\{t^{-1}X\in B\mid Y\right\}\) is an integrable random variable._
For all \(y\in[-M,M]\), the limit measure \(\mu_{y}\) is referred to as the _conditional exponent measure_ of \(X\) given \(Y=y\). As can be shown by means of a straightforward dominated convergence argument, under Assumption 2, for all \(B\in\mathcal{B}(E)\) bounded away from zero s.t. \(\mu_{y}(\partial B)=0\), the mapping \(y\mapsto\mu_{y}(B)\) is integrable w.r.t. \(G\) and \(X\)'s marginal distribution is regularly varying with exponent measure \(\mu(B)=\lim_{t\to+\infty}b(t)\mathbb{P}\left\{X\in tB\right\}=\int_{-M}^{M}\mu_{y}(B)\,dG(y)\). Alternative formulations of the _conditional regular variation_ hypothesis above are given in the Supplementary Material. One may also consider the _conditional angular measure_ of \(X\) given \(Y=y\), denoted by \(\Phi_{y}\), given by
\[\frac{\mathbb{P}[\Theta(X)\in A,\|X\|\geq tr\mid Y=y]}{\mathbb{P}[\|X\|\geq t \mid Y=y]}\underset{t\to+\infty}{\longrightarrow}c_{y}r^{-1}\Phi_{y}(A), \tag{7}\]
with \(c_{y}=\Phi_{y}(\mathbb{S})^{-1}=\mu_{y}(E\setminus\mathbb{B})^{-1}\), for all \(A\in\mathcal{B}(\mathbb{S})\), \(r\geq 1\) and \(y\in[-M,M]\). In particular, the two conditional limit measures are linked through the relation
\[\mu_{y}(\{x\in E,\|x\|\geq r,\Theta(x)\in A\})=r^{-1}\Phi_{y}(A) \tag{8}\]
for all \(y\in[-M,M]\), \(r\geq 1\) and \(A\in\mathcal{B}(\mathbb{S})\). Observe that
\[\lim_{t\to+\infty}\frac{\mathbb{P}\left\{\Theta(X)\in A,\|X\|\geq t\mid Y=y \right\}}{\mathbb{P}\left\{\|X\|>t\mid Y=y\right\}}=c_{y}\Phi_{y}(A),\]
for all \(y\in[-M,M]\), so that \(c_{y}\Phi_{y}\) is the asymptotic conditional probability distribution of \(\Theta(X)\) given \(Y=y\) and \(\|X\|>t\) as \(t\to+\infty\). Notice also that \(X\)'s angular measure can be written \(\Phi(A)=\int_{-M}^{M}\Phi_{y}(A)\,dG(y)\). Denote by \(G_{t}(y)=\mathbb{P}[Y\leq y\mid\|X\|>t]\) the conditional cdf of \(Y\) given \(\|X\|>t\). As \(t\to+\infty\), under Assumption 2, a dominated convergence argument shows that it converges everywhere to the cdf
\[G_{\infty}(y):=\Phi(\mathbb{S})^{-1}\int_{-M}^{y}\Phi_{u}(\mathbb{S})dG(u), \ \ y\in[-M,M]. \tag{9}\]
Let \(P_{\infty}(dx\times dy)\) be the probability distribution on \(\mathcal{B}(E\setminus\mathbb{B})\times[-M,M]\) determined by its second marginal distribution \(G_{\infty}\) and \(c_{y}\mu_{y}\) its conditional distribution given \(y\). Then, using (9), we have: \(\forall B\in\mathcal{B}(E\setminus\mathbb{B})\), \(\forall y\in[-M,M]\),
\[P_{\infty}\left\{B\times[-M,y]\right\}=\int_{-M}^{y}c_{u}\mu_{u}(B)dG_{\infty} (u)=\int_{-M}^{y}\frac{\mu_{u}(B)}{\Phi_{u}(\mathbb{S})}\frac{\Phi_{u}(\mathbb{ S})}{\Phi(\mathbb{S})}dG(u)=\lim_{t\to+\infty}\mathbb{P}\{t^{-1}X\in B,Y\leq y \mid\|X\|\geq t\}. \tag{10}\]
The limit result above describes the asymptotic behavior of the conditional distribution of \((X/t,Y)\) given \(\|X\|>t\) as \(t\to+\infty\). Here and throughout, \((X_{\infty},Y_{\infty})\) shall denote a pair of r.v. with distribution \(P_{\infty}\). Observe that, under Assumptions 1 and 2, the r.v. \(Y_{\infty}\) is almost-surely bounded in amplitude by \(M<+\infty\).
The example below describes a typical situation, where Assumption 1 and Assumption 2 are satisfied. Refer also to the Supplementary Material for an additional generic example (_i.e._ multiplicative noise model).
**Example 1**: (Additive noise model with heavy-tailed random design) _Suppose that \(X\) is a heavy-tailed random vector, independent from a real-valued r.v. \(\varepsilon_{0}\), bounded and centered, modelling some noise and drawn from a distribution with continuous density function \(g_{0}\) w.r.t. the Lebesgue measure, and that the r.v._
\[Y=f^{*}(X)+\varepsilon_{0} \tag{11}\]
_is observed, where \(f^{*}:\mathbb{R}^{d}\to\mathbb{R}\) is a Borel-measurable and bounded mapping. Assume also that there exists a continuous (necessarily bounded) function \(f^{*}_{\Theta}:\mathbb{S}\to\mathbb{R}\) s.t._
\[\sup_{x\in E:\|x\|>t}\big{|}f^{*}(x)-f^{*}_{\Theta}(x/\|x\|)\big{|}\to 0\text{ as }t\to+\infty.\]
_The random pair \((X,Y)\) can be shown to fulfill Assumption 1 and conditional regular variation of Assumption 2, refer to the Supplementary Material for further details._
Here, by _regression in the extremes_ is meant the usual regression problem in regions far from the origin. More precisely, the objective pursued is the construction of a real-valued, bounded and Borel-measurable function \(f(x)\) minimizing the _asymptotic conditional quadratic risk_, defined as
\[R_{\infty}(f):=\limsup_{t\to+\infty}R_{t}(f)=\limsup_{t\to+\infty}\mathbb{E}\left[\left(Y-f(X)\right)^{2}\ \mid\ \left\|X\right\|\geq t\right]. \tag{12}\]
It is immediate to see that any function that coincides with \(f^{*}(x)=\mathbb{E}\left[Y\mid X=x\right]\) on the region \(\{x\in E:\ \left\|x\right\|>t\}=t(E\setminus\mathbb{B})\) minimizes the risk functional \(R_{t}\), _i.e._ the quadratic risk taken w.r.t. distribution \(P_{t}\). Set \(R_{t}^{*}:=R_{t}(f^{*})=\inf_{f}R_{t}(f)\) for all \(t>0\) and observe that \(f^{*}\) also minimizes the asymptotic conditional quadratic risk functional: \(R_{\infty}^{*}:=R_{\infty}(f^{*})=\inf_{f}R_{\infty}(f)\).
As will be shown in the next section, the objective formulated above can be connected to a classical regression problem related to the pair of random variables \((X_{\infty},Y_{\infty})\) with distribution \(P_{\infty}\) defined in (10) by means of Assumptions 1 and 2 and describing the limit behavior of the conditional distribution of the observable pair \((X/t,Y)\) given \(\left\|X\right\|>t\) as \(t\to+\infty\). Precisely, given a class \(\mathcal{F}\) of real-valued Borel measurable and bounded functions defined on \(E\setminus\mathbb{B}\), the goal is to find \(f\) in \(\mathcal{F}\) that minimizes the quantity
\[R_{P_{\infty}}(f):=\mathbb{E}\left[\left(Y_{\infty}-f(X_{\infty})\right)^{2} \right], \tag{13}\]
referred to as the _extreme quadratic risk_. The best predictive performance one can hope for is achieved by the function \(f_{P_{\infty}}^{*}\) defined by \(Y_{\infty}\)'s conditional expectation given \(X_{\infty}\), \(f_{P_{\infty}}^{*}(X_{\infty}):=\mathbb{E}[Y_{\infty}\mid X_{\infty}]\) almost-surely, and is denoted by \(R_{P_{\infty}}^{*}:=R_{P_{\infty}}(f_{P_{\infty}}^{*})=\inf_{f}R_{P_{\infty}}(f)\). Based on its connection to extreme quadratic risk minimization, an algorithmic approach to regression in the extremes can be designed. Before detailing it and developing a sound theoretical framework to guarantee its validity, a few remarks are in order.
**Remark 2**: (Heavy-tailed input vs heavy-tailed output) _Attention should be paid to the fact that the heavy-tail assumption is here on the distribution of the input/explanatory r.v. \(X\), in contrast to other works devoted to regression such as Brownlees et al. (2015); Lugosi and Mendelson (2016) or Mendelson (2017), where it is the loss/response that is supposedly heavy-tailed._
**Remark 3**: (Alternative to ERM) _In the case where the output/response variable \(Y\) is heavy-tailed (or possibly contaminated by a heavy-tailed noise), alternatives to the ERM approach should be considered, see e.g. Lugosi and Mendelson (2016). Extending the present approach to this setting is beyond the scope of this paper but will be the subject of a forthcoming work._
## 3 Least Squares Regression in the Extremes
Under the assumptions previously listed, we now develop a rigorous framework for least squares regression in extreme regions. We first establish probabilistic results that connect the (asymptotic) conditional quadratic risk and the extreme quadratic risk, as well as their minimizers. Based on the latter, a statistical learning strategy to solve regression in the extremes is next devised, and a nonasymptotic upper confidence bound for the excess of \(R_{\infty}\)-risk of the predictive function thus constructed is established.
### Extreme Quadratic Risk Minimization through Regression in Extreme Regions
Here, we shall establish the existence of _angular predictive functions_ (_i.e._ functions of the form \(f(x)=(h\circ\Theta)(x)\), where \(h:\mathbb{S}\to\mathbb{R}\) is a Borel measurable function) that asymptotically minimize the \(R_{\infty}\)-risk defined in (12). Based on this result, we shall next propose a practical method for regression in the extremes (see Algorithm 1) based solely on the angular information \(\Theta(X_{i})\) provided by a fraction of the largest observations \(X_{i}\) (_i.e._ those with the largest amplitude \(\left\|X_{i}\right\|\)) and prove its accuracy (see subsection 3.2). Observe first that minimizers of the extreme quadratic risk (13) are of angular type. Indeed, it follows from the definition (10) of the limit distribution \(P_{\infty}\) and the homogeneity of the \(\mu_{y}\)'s that the r.v. \(Y_{\infty}\) and \(\left\|X_{\infty}\right\|\) are independent, so that: \(f_{P_{\infty}}^{*}(X_{\infty})=\mathbb{E}\left[Y_{\infty}\mid X_{\infty}\right]=\mathbb{E}\left[Y_{\infty}\mid\Theta_{\infty}\right]\) almost-surely, where \(\Theta_{\infty}:=\Theta(X_{\infty})\). Hence, the sole part of the information carried by \(X_{\infty}\) that is useful to predict \(Y_{\infty}\) is its angular component \(\Theta_{\infty}\). In order to express the optimal solution \(f_{P_{\infty}}^{*}\) in terms of the conditional angular probability distributions, observe that \(\Phi_{Y}\) is absolutely continuous w.r.t. the marginal angular measure \(\Phi\) with probability one and set \(\varphi_{y}(\theta)=(d\Phi_{y}/d\Phi)(\theta)\) for all \(\theta\in\mathbb{S}\) and \(y\in[-M,M]\).
**Proposition 1**: _Suppose that Assumptions 1 and 2 are satisfied. For all \(x\in E\), the conditional distribution of \(Y_{\infty}\) given \(X_{\infty}=x\) (resp. given \(\Theta_{\infty}=\Theta(x)\)) is absolutely continuous w.r.t. \(G_{\infty}\) with density_
\[g_{Y_{\infty}\mid X_{\infty}=x}(y)=\frac{\varphi_{y}(\Theta(x))/\Phi_{y}(\mathbb{S})}{\int_{-M}^{M}\varphi_{u}(\Theta(x))/\Phi_{u}(\mathbb{S})\,dG_{\infty}(u)},\ \ y\in[-M,M].\]
_In addition, the minimizer of the extreme quadratic risk can be expressed as \(f^{*}_{P_{\infty}}(x)=f^{*}_{\Theta}(\Theta(x))\), where: \(\forall\theta\in\mathbb{S}\),_
\[f^{*}_{\Theta}(\theta)=\frac{\int_{-M}^{M}u\varphi_{u}(\theta)/\Phi_{u}(\mathbb{S})\,dG_{\infty}(u)}{\int_{-M}^{M}\varphi_{u}(\theta)/\Phi_{u}(\mathbb{S})\,dG_{\infty}(u)}=\frac{\int_{-M}^{M}u\varphi_{u}(\theta)\,dG(u)}{\int_{-M}^{M}\varphi_{u}(\theta)\,dG(u)}.\]
The proof mainly follows from the homogeneity property of \(\mu_{y}\), \(y\in[-M,M]\), and is detailed in the Supplementary Material.
The conditional expectations and variances below are involved in what follows: \(\forall x\in E\), \(\forall\theta\in\mathbb{S}\),

\[m_{P_{\infty}}(\theta)=\mathbb{E}\left[Y_{\infty}^{2}\mid\Theta_{\infty}=\theta\right],\qquad V_{P_{\infty}}(\theta)=\mathrm{Var}\left[Y_{\infty}\mid\Theta_{\infty}=\theta\right],\]
\[m(x)=\mathbb{E}\left[Y^{2}\mid X=x\right],\qquad V(x)=\mathrm{Var}\left[Y\mid X=x\right].\]
Equipped with these notations, the minimum extreme quadratic risk can be expressed as follows:
\[R^{*}_{P_{\infty}}=R_{P_{\infty}}(f^{*}_{P_{\infty}})=\mathbb{E}\Big{[}\mathrm{Var}\left[Y_{\infty}\mid\Theta_{\infty}\right]\Big{]}.\]
Observe also that: \(\forall x\in E\), \(\forall\theta\in\mathbb{S}\),
\[V_{P_{\infty}}(\theta)=m_{P_{\infty}}(\theta)-\left(f^{*}_{\Theta}(\theta)\right)^{2},\qquad V(x)=m(x)-\left(f^{*}(x)\right)^{2}.\]
The (mild) technical hypothesis below is also required.
**Assumption 3**: _The functions \(f^{*}_{\Theta}\) and \(m_{P_{\infty}}\) are continuous on \(\mathbb{S}\). In addition, we have, as \(t\) tends to infinity,_

\[\sup_{x\in E:\ \|x\|\geq t}\left|f^{*}(x)-f^{*}_{\Theta}(\Theta(x))\right|\to 0\ \text{ and }\ \sup_{x\in E:\ \|x\|\geq t}\left|m(x)-m_{P_{\infty}}(\Theta(x))\right|\to 0. \tag{14}\]
We point out that Assumption 3 is not very restrictive. Indeed, it is automatically fulfilled as soon as additional classic mild conditions are satisfied by the conditional density functions \(\varphi_{y}\), see De Haan and Resnick (1987). Precisely, if for all \(y\in[-M,M]\) the conditional distribution of \(X\) given \(Y=y\) is absolutely continuous w.r.t. Lebesgue measure on \(\mathbb{R}^{d}\) with density \(g_{X|Y=y}\) such that
\[\sup_{\theta\in\mathbb{S}}|b(t)t^{d}g_{X|Y=y}(t\theta)-\varphi_{y}(\theta)| \underset{t\rightarrow+\infty}{\longrightarrow}0, \tag{15}\]
then the uniform convergences in (14) hold true. Refer to Appendix D and Appendix E in the Supplementary Material for further details. Obviously, under Assumption 3, \(V_{P_{\infty}}\) is also continuous on \(\mathbb{S}\) and the uniform convergence below holds true:
\[\sup_{x\in E:\ \|x\|\geq t}\left|V_{P_{\infty}}(\Theta(x))-V(x)\right|\underset{t\rightarrow+\infty}{\longrightarrow}0. \tag{16}\]
By relating asymptotic conditional quadratic risk minimization to extreme quadratic risk minimization, the next proposition suggests restricting the search for minimizers of the \(R_{\infty}\)-risk to a class of angular predictive functions.
**Proposition 2**: _Suppose that Assumptions 1, 2 and 3 are satisfied. Then, the following assertions hold true._
* _We have:_ \(R^{*}_{t}\underset{t\rightarrow+\infty}{\longrightarrow}R^{*}_{P_{\infty}}\) _and, in particular,_ \(R^{*}_{\infty}=R^{*}_{P_{\infty}}\)_._
* _The regression function_ \(f^{*}_{P_{\infty}}=f^{*}_{\Theta}\circ\Theta\) _minimizes the asymptotic conditional quadratic risk:_ \[R^{*}_{\infty}=R_{\infty}(f^{*}_{P_{\infty}}).\]
The proposition stated above is proved in the Supplementary Material. It reveals that the solution \(f^{*}_{P_{\infty}}\) of the extreme risk minimization problem, which is of angular type, is also a minimizer of the asymptotic conditional quadratic risk (and that the minima coincide). Beyond the assertions in Proposition 2, for any continuous function \(h:\mathbb{S}\rightarrow\mathbb{R}\), we also have: \(\lim_{t\rightarrow+\infty}R_{t}(h\circ\Theta)=R_{\infty}(h\circ\Theta)=R_{P_{\infty}}(h)\), see Appendix A in the Supplementary Material. The next result, relying on the same argument, shows that the conditional quadratic risk of the angular predictive function \(f^{*}_{P_{\infty}}\) converges to the minimum asymptotic conditional quadratic risk.
**Corollary 1**: _Suppose that Assumptions 1, 2 and 3 are satisfied. We have \(R_{t}(f^{*}_{\Theta}\circ\Theta)\underset{t\rightarrow+\infty}{\longrightarrow} R^{*}_{\infty}\) and, consequently,_
\[0\leq\inf_{h}R_{t}(h\circ\Theta)-R^{*}_{t}\to 0\text{ as }t\rightarrow+\infty,\]
_where the infimum is taken over the class of all bounded Borel measurable functions \(h:\mathbb{S}\rightarrow\mathbb{R}\)._
The corollary above encourages us to replace the original minimization problem \(\min R_{\infty}(f)\) by
\[\min_{h}R_{t}(h\circ\Theta) \tag{17}\]
with a threshold \(t\) large enough, the bias resulting from the restriction to the class of angular predictive functions being asymptotically negligible as \(t\) tends to \(+\infty\). For \(t\) suitably chosen (_i.e._ an appropriate order statistic related to the \(\|X_{i}\|\)'s), an empirical version of the problem (17) can be formulated. A practical and generic approach to regression in the extremes then naturally consists in solving it, as described by Algorithm 1.
```
Input: Training data set \(\mathcal{D}_{n}=\{(X_{1},Y_{1}),\ \ldots,\ (X_{n},Y_{n})\}\) with \(X_{i}=(X_{i}^{(1)},\ \ldots,\ X_{i}^{(d)})\in[0,+\infty)^{d},\ i=1,\ \ldots,\ n\); class \(\mathcal{H}\) of predictive functions on \(\mathbb{S}\); number \(k\leq n\) of "extreme" observations selected. Standardization: Standardize the input vectors by means of the transformation \(\hat{V}_{i}=\hat{T}(X_{i})\) for \(i=1,\ \ldots,\ n\), where \(\hat{T}(x)=(1/(1-\hat{F}_{1}(x^{(1)})),\ \ldots,\ 1/(1-\hat{F}_{d}(x^{(d)})))\), with \(\hat{F}_{j}(x^{(j)})=(1/(n+1))\sum_{i=1}^{n}\mathds{1}\{X_{i}^{(j)}\leq x^{(j)}\}\) for \(1\leq j\leq d\) and all \(x\in\mathbb{R}^{d}\). Truncation: Sort the training data by decreasing order of magnitude of the transformed input, \(\|\hat{V}_{(1)}\|\geq\ldots\geq\|\hat{V}_{(n)}\|\), and form the set of \(k\) _extreme training observations_
\[\left\{\left(\hat{V}_{(1)},Y_{(1)}\right),\ \ldots,\ \left(\hat{V}_{(k)},Y_{(k)} \right)\right\}. \tag{18}\]
Empirical quadratic risk minimization: Solve the optimization problem
\[\min_{h\in\mathcal{H}}\frac{1}{k}\sum_{i=1}^{k}\left(Y_{(i)}-h\left(\Theta\left(\hat{V}_{(i)}\right)\right)\right)^{2}, \tag{19}\]
producing the solution \(\hat{h}(\theta)\). Output: Predictive function \((\hat{h}\circ\Theta)(x)\).
```
**Algorithm 1** Least Squares Minimization in the Extremes
Notice that any algorithm for quadratic risk minimization can be used to solve (19), refer to _e.g._ Gyorfi et al. (2002).
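For concreteness, here is a minimal Python sketch of Algorithm 1 (ours), with scikit-learn's Random Forest standing in for the generic quadratic risk minimizer of the last step (any regression algorithm could be substituted); the \(1/(n+1)\) factor in the rank transform is the one discussed in Remark 4 below.

```python
# A minimal sketch of Algorithm 1 (ours); any quadratic risk minimizer can
# replace the Random Forest used in the last step.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fit_extreme_regressor(X, Y, k):
    """Least squares regression in the extremes via the angles of the k largest points."""
    n, d = X.shape
    X_sorted = np.sort(X, axis=0)

    def pareto_standardize(Z):
        # Empirical rank transform to unit-Pareto margins; the 1/(n+1) factor
        # (Remark 4) avoids division by zero at the largest order statistic.
        F = np.stack([np.searchsorted(X_sorted[:, j], Z[:, j], side="right")
                      for j in range(d)], axis=1) / (n + 1.0)
        return 1.0 / (1.0 - F)

    V = pareto_standardize(X)                        # step 1: standardization
    norms = np.linalg.norm(V, axis=1)
    extreme = np.argsort(norms)[-k:]                 # step 2: truncation
    Theta = V[extreme] / norms[extreme, None]        # angles on the sphere S
    h = RandomForestRegressor(random_state=0).fit(Theta, Y[extreme])  # step 3

    def predict(X_new):                              # output: x -> h(Theta(V(x)))
        V_new = pareto_standardize(X_new)
        return h.predict(V_new / np.linalg.norm(V_new, axis=1, keepdims=True))

    return predict

# Usage, with the rule-of-thumb choice k = floor(sqrt(n)) of Remark 5 below:
# f_hat = fit_extreme_regressor(X_train, Y_train, k=int(np.sqrt(len(X_train))))
```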
**Remark 4**: (Handling undefined quantities) _In Algorithm 1, the empirical estimator of the cdf \(F_{j}\) is not exactly the usual version: the factor \(1/n\) is replaced by \(1/(n+1)\) in order to avoid a possible division by zero in the marginal transformation._
**Remark 5**: (Choosing the fraction \(k\) of extreme observations) _Choosing the number of observations of a sample considered as extreme, i.e. the hyper-parameter \(k\) in Algorithm 1, is a recurrent issue in EVT and a variety of methods to select the optimal value of \(k\) have been proposed in the literature, see e.g. Goix et al. (2016); Goix et al. (2017) or Nakamura (2021) among others. However, rules of thumb are generally used in practice, with the default choice \(k=\lfloor\sqrt{n}\rfloor\) in particular._
In the next subsection, a nonasymptotic analysis of the performance of the approach proposed for regression in the extremes is carried out. An upper confidence bound for the excess of \(R_{\infty}\)-risk of a solution of (19) is established, when the class \(\mathcal{H}\) over which empirical minimization is performed is of controlled complexity. As previously mentioned, because the goal of this paper is to explain the main ideas to tackle the problem of regression in the extremes, for simplicity the input 1-d marginal distributions \(F_{j}\) are supposed to be unit Pareto. Hence, in order to avoid a tedious technical analysis, the effect of step 1 of Algorithm 1 (empirical standardization) is neglected here. It is however possible to incorporate it in the rate bound analysis by means of the concentration results in Clemencon et al. (2021), at the price of a significant amount of additional technicalities; see the dedicated discussion in the Supplementary Material.
### Theoretical Guarantees - Generalization Bounds
The rationale behind the approach proposed above to find a predictive function that nearly minimizes the asymptotic conditional quadratic risk \(R_{\infty}\) (12) consists in solving an empirical version of the nonasymptotic optimization problem
\[\min_{h\in\mathcal{H}}R_{t}\left(h\circ\Theta\right),\]
where the minimum is taken over a class \(\mathcal{H}\) of continuous bounded Borel measurable functions on \(\mathbb{S}\) of controlled complexity (see Assumption 4 below) but hopefully rich enough to contain a reasonable approximant of \(f_{\Theta}^{*}\) and \(t\) is a large threshold. Large being naturally meant in comparison with the observations of the training dataset \(\mathcal{D}_{n}\), the latter
are first sorted by decreasing order of magnitude of the \(\|X_{i}\|\)'s, \(\|X_{(1)}\|>\ldots>\|X_{(n)}\|\), and one next sets \(t=\hat{t}_{n,k}:=\|X_{(k)}\|\). The threshold involved in the empirical risk minimization problem is the statistical counterpart of \(t_{n,k}\), the quantile of level \(1-k/n\) of \(\|X\|\)'s distribution. The empirical version of the \(R_{t_{n,k}}\)-risk of a predictive mapping of the form \(h\circ\Theta\) is
\[\hat{R}_{t_{n,k}}(h\circ\Theta):=\frac{1}{k}\sum_{i=1}^{k}\left(Y_{(i)}-h( \Theta(X_{(i)}))\right)^{2}. \tag{20}\]
We point out that the statistic above is not an average of independent random variables and that investigating its concentration properties is far from straightforward.
**Assumption 4**: _The class \(\mathcal{H}\) is a set of continuous real-valued functions on \(\mathbb{S}\) of VC dimension \(V_{\mathcal{H}}<+\infty\), uniformly bounded: \(\exists M<+\infty\) s.t. \(\forall h\in\mathcal{H}\), \(\forall\theta\in\mathbb{S}\), \(|h(\theta)|\leq M\)._
Note that 'VC dimension' refers here to the generalization of the usual VC dimension for collections of sets to classes of real-valued functions, sometimes also called _Pollard's pseudo-dimension_, see Pollard (1990). Under the complexity hypothesis above, the following result provides an upper confidence bound for the maximal deviations between the conditional quadratic risk \(R_{t_{n,k}}\) and its empirical version, uniformly over the class \(\mathcal{H}\). It also shows that the bias inherent in the approximation of the limit risk by a nonasymptotic version \(R_{t}\) vanishes as \(t\to+\infty\), provided that the density of the conditional distribution of \(\Theta(X)\) given \(\|X\|\geq t\), denoted by \(\phi_{t}\), w.r.t. \(\Phi_{\Theta(X)}\), the marginal distribution of \(\Theta(X)\), is uniformly bounded. Refer to the Supplementary Material for its technical proof.
**Proposition 3**: _Suppose that Assumptions 1-4 hold true._
1. _Let_ \(\delta\in(0,1)\)_. We have with probability larger than_ \(1-\delta\)_:_ \[\sup_{h\in\mathcal{H}}\left|\hat{R}_{t_{n,k}}(h\circ\Theta)-R_{t_{n,k}}(h \circ\Theta)\right|\leq\frac{4M^{2}}{\sqrt{k}}\Big{(}C\sqrt{V_{\mathcal{H}}}+2 \sqrt{2\log(3/\delta)}\Big{)}+\frac{8M^{2}\log(3/\delta)}{3k},\] _where_ \(C\) _is a universal constant._
2. _Suppose that_ \(\sup_{t\geq 1,\ \theta\in\mathbb{S}}\phi_{t}(\theta):=D<+\infty\)_. We have:_ \[\sup_{h\in\mathcal{H}}|R_{t}(h\circ\Theta)-R_{\infty}(h\circ\Theta)|\to 0\text{ as }t\to+\infty.\]
The corollary below provides an upper confidence bound for the excess of \(R_{\infty}\)-risk of any solution \(\hat{f}_{\Theta,k}\) of the problem
\[\min_{h\in\mathcal{H}}\hat{R}_{t_{n,k}}(h\circ\Theta).\]
It immediately follows from the obvious bound
\[R_{\infty}(\hat{f}_{\Theta,k}\circ\Theta)-R_{\infty}^{*}\leq 2\sup_{h\in\mathcal{H}}\left|\hat{R}_{t_{n,k}}(h\circ\Theta)-R_{t_ {n,k}}(h\circ\Theta)\right|+\left(\inf_{h\in\mathcal{H}}R_{t_{n,k}}(h\circ \Theta)-R_{t_{n,k}}^{*}\right)+\left(R_{t_{n,k}}^{*}-R_{\infty}^{*}\right)\] \[+R_{\infty}(\hat{f}_{\Theta,k}\circ\Theta)-R_{t_{n,k}}(\hat{f}_{ \Theta,k}\circ\Theta),\]
combined with Proposition 3.
**Corollary 2**: _Suppose that Assumptions 1-4 are satisfied. Let \(\delta\in(0,1)\). We have with probability larger than \(1-\delta\):_
\[R_{\infty}(\hat{f}_{\Theta,k}\circ\Theta)-R_{\infty}^{*}\leq \frac{8M^{2}}{\sqrt{k}}\Big{(}C\sqrt{V_{\mathcal{H}}}+2\sqrt{2 \log(3/\delta)}\Big{)}+\frac{16M^{2}\log(3/\delta)}{3k}+\left(\inf_{h\in \mathcal{H}}R_{t_{n,k}}(h\circ\Theta)-R_{t_{n,k}}^{*}\right)+\left(R_{t_{n,k}}^ {*}-R_{\infty}^{*}\right) \tag{21}\] \[+\sup_{h\in\mathcal{H}}\left|R_{t_{n,k}}(h\circ\Theta)-R_{\infty} (h\circ\Theta)\right|. \tag{22}\]
As expected, \(k\) being the number of extreme observations used to implement the learning procedure, the stochastic error term in the bound stated above is of order \(O(1/\sqrt{k})\). Two types of bias error are also involved: one is due to the restriction to the class \(\mathcal{H}\) (model bias), while the other results from the substitution of the conditional quadratic risk for its asymptotic limit. In particular, under the assumptions stipulated, as \(n\) and \(k\) tend to \(+\infty\) so that \(k/n\to 0\), we have \(t_{n,k}\sim n/k\to+\infty\) and the last two terms on the right hand side of (21) vanish. A quantification of their decay rate would require extending the conditional multivariate regular variation property introduced in subsection 2.2 and specifying second-order conditions, as in de Haan and Resnick (1996).
## 4 Numerical Experiments
We now investigate the performance of the approach previously described and theoretically analyzed for regression in the extremes from an empirical perspective, on several simulated and real datasets. The MSE in extreme regions of the angular regression functions output by specific implementations of Algorithm 1 is compared to that of classic regression functions, learned in a standard fashion. As a first go, we present the simulation results for an additive noise model with heavy-tailed design (see Example 1): \(Y=f^{\circ}(X)+\varepsilon_{0}\), where \(f^{\circ}(x)=\beta^{T}\Theta(x)(1-1/2\sqrt{\|x\|})\) for \(x\in\mathbb{R}^{d}\) and \(\|\cdot\|=\|\cdot\|_{2}\), the design \(X\) is generated according to a multivariate logistic distribution with dependence parameter equal to \(\xi=1\) (which means that extreme observations occur along the axes, Stephenson (2003)), the input 1-d marginals are standard Pareto with shape parameter \(\alpha=3\), and \(\varepsilon_{0}=Z_{0}\mathds{1}\{|Z_{0}|\leq 1\}\) is the noise, where \(Z_{0}\) is a centered Gaussian r.v. with standard deviation \(\sigma=0.1\). The simulated data are of dimension \(d=10\); the size of the training dataset is \(n_{train}=10,000\), truncated so as to keep only the \(k_{train}=223\) (\(=\lfloor\sqrt{n_{train}}\rfloor\)) largest observations to learn regression functions in the extremes. The size of the test dataset is \(n_{test}=100,000\) and the \(k_{test}=316\) (\(=\lfloor\sqrt{n_{test}}\rfloor\)) largest instances are used to evaluate predictive performance in the extremes.
We implemented four different regression algorithms by means of _scikit-learn_ (Pedregosa et al., 2011) with the default parameters, namely Ordinary Least Squares (OLS), Support Vector Regression (SVR) (with a linear kernel for the additive noise model and a Gaussian kernel for the multiplicative one), CART (tree) and Random Forest (RF). Predictive functions have been learned using the full training dataset, the truncated version composed of the \(k_{train}\) largest observations, and the angles of the truncated version; \(e=20\) independent replications of the experiment have been performed. Table 1 displays the performance of the algorithms considered for the simulated additive noise model. A similar experiment has been carried out for the multiplicative noise model with heavy-tailed design \(Y=\varepsilon_{1}f^{\circ}(X)\), where \(f^{\circ}(x)=\cos(1/\|x\|)\sum_{i=1}^{d/2}\Theta(x)_{2i-1}\sin(\Theta(x)_{2i}\pi)\), the design \(X\) is generated according to a multivariate logistic distribution with dependence parameter equal to \(\xi=0.7\), the input 1-d marginals are standard Pareto with shape parameter \(\alpha=2\), and \(\varepsilon_{1}=Z_{1}\mathds{1}\{0\leq Z_{1}\leq 2\}\) is the noise, \(Z_{1}\) being a Gaussian r.v. with mean \(\mu=1\) and standard deviation \(\sigma=0.1\). We consider the case of dimension \(d=16\), with \(n_{train}=10,000\), \(n_{test}=100,000\) and then \(k_{train}=223\), \(k_{test}=316\), and \(e=20\). The averaged MSE are displayed in Table 1. The latter shows that, in both experiments, the approach we promote for regression in the extremes and analyzed in the previous section clearly outperforms its competitors, no matter the algorithm (_i.e._ the model bias) considered. This paper being the first to consider regression in the extremes (see Remark 3 for a description of regression problems of a different nature with heavy-tailed data), no other alternative approach is documented in the literature. Encouraged by this first agreement between theoretical and numerical results, experiments on real data have been conducted. Four datasets are considered: _dataset_sales_ (10738 instances and 15 attributes) from the OpenML repository ([https://www.openml.org/](https://www.openml.org/)), _bank32NH_ (8192 instances and 32 attributes) from the LIACC repository ([https://www.dcc.fc.up.pt/](https://www.dcc.fc.up.pt/) ltorgo/Regression), _CCPP_ (9568 instances and 4 attributes) and _CASP_ (45730 instances and 9 attributes) from the UCI repository ([https://archive.ics.uci.edu/](https://archive.ics.uci.edu/)). Each dataset has been randomly split 20 times into a test set of size \(n_{test}\), one third of the original dataset, and a train set of size \(n_{train}\) with the rest. The numbers of training and test extreme observations are \(k_{train}=\lfloor 0.1\cdot n_{train}\rfloor\) and \(k_{test}=\lfloor 0.1\cdot n_{test}\rfloor\). Results are summarized in Table 2. Our approach to regression in the extremes generally surpasses standard regression or regression using the truncated sample, except in a few situations: when the "extreme regime" is not fully reached, regression based on \(X\mid\|X\|\) large may outperform regression based on \(\Theta\mid\|X\|\) large (see the results for _bank32NH_), and the impact of the model bias, reflected in the bound of Corollary 2, should not be ignored (see the results for _CASP_, best for the SVR algorithm).
| Datasets/Models | Train on \(X\) | Train on \(X\mid\|X\|\) large | Train on \(\Theta\mid\|X\|\) large |
| --- | --- | --- | --- |
| Add. model: OLS | \(0.359_{\pm 0.182}\) | \(0.0518_{\pm 0.0285}\) | \(\mathbf{0.0034_{\pm 0.0003}}\) |
| SVR | \(0.618_{\pm 0.279}\) | \(0.0627_{\pm 0.0429}\) | \(\mathbf{0.0041_{\pm 0.0006}}\) |
| tree | \(0.059_{\pm 0.012}\) | \(0.0267_{\pm 0.0066}\) | \(\mathbf{0.0203_{\pm 0.0102}}\) |
| RF | \(0.053_{\pm 0.012}\) | \(0.0189_{\pm 0.0051}\) | \(\mathbf{0.0110_{\pm 0.0031}}\) |
| Mult. model: OLS | \(1.72_{\pm 4.40}\) | \(0.1402_{\pm 0.2462}\) | \(\mathbf{0.0029_{\pm 0.0005}}\) |
| SVR | \(0.0091_{\pm 0.0020}\) | \(0.0072_{\pm 0.0017}\) | \(\mathbf{0.0067_{\pm 0.0006}}\) |
| tree | \(0.014_{\pm 0.004}\) | \(0.011_{\pm 0.004}\) | \(\mathbf{0.003_{\pm 0.002}}\) |
| RF | \(0.008_{\pm 0.001}\) | \(0.006_{\pm 0.001}\) | \(\mathbf{0.002_{\pm 0.001}}\) |

Table 1: Average MSE (and standard deviation) for regression functions trained using all observations, extreme observations, and angles of extreme observations, over 20 independent replications of the dataset generated by the additive/multiplicative noise models.
Practical application of the methodology proposed/analyzed here in high-dimensional situations is of crucial importance, though beyond the scope of the present paper. As the approach promoted can be combined with any regression method, it should then be naturally coupled with appropriate algorithms, based on regularization techniques in particular.
## 5 Conclusion
We have provided a sound strategy to solve 'regression in the extremes'. The asymptotic framework we have developed crucially relies on the (novel) notion of _conditional regular variation_. When the distribution of the input \(X\) conditioned upon the output \(Y\) is regularly varying, the problem can be stated and analyzed in a rigorous manner. We have described the optimal solution and proved that it can be nearly recovered, with nonasymptotic guarantees, by implementing a variant of the ERM principle based on the angular information carried by a fraction of the largest observations only. We have also carried out numerical experiments to support the approach promoted, highlighting the necessity of using a dedicated methodology to perform regression in the extremes with guarantees.
| Datasets/Models | Train on \(X\) | Train on \(X\mid\|X\|\) large | Train on \(\Theta\mid\|X\|\) large |
| --- | --- | --- | --- |
| dataset_sales: OLS | 65.9\({}_{\pm 7.8}\) | 57.7\({}_{\pm 6.0}\) | **39.1\({}_{\pm 4.9}\)** |
| SVR | 65.5\({}_{\pm 10.0}\) | 70.5\({}_{\pm 9.4}\) | **46.8\({}_{\pm 6.2}\)** |
| tree | 59.9\({}_{\pm 8.7}\) | **37.8\({}_{\pm 7.0}\)** | 46.1\({}_{\pm 8.2}\) |
| RF | 31.7\({}_{\pm 3.1}\) | **24.2\({}_{\pm 3.7}\)** | 27.4\({}_{\pm 3.7}\) |
| bank32NH: OLS | 0.019\({}_{\pm 0.001}\) | 0.018\({}_{\pm 0.002}\) | **0.014\({}_{\pm 0.001}\)** |
| SVR | 0.021\({}_{\pm 0.001}\) | 0.015\({}_{\pm 0.001}\) | **0.014\({}_{\pm 0.001}\)** |
| tree | 0.040\({}_{\pm 0.003}\) | **0.021\({}_{\pm 0.004}\)** | 0.022\({}_{\pm 0.002}\) |
| RF | 0.021\({}_{\pm 0.001}\) | **0.010\({}_{\pm 0.001}\)** | 0.011\({}_{\pm 0.001}\) |
| CCPP: OLS | **22.4\({}_{\pm 3.1}\)** | 348.8\({}_{\pm 13.9}\) | 59.5\({}_{\pm 4.5}\) |
| SVR | 310.8\({}_{\pm 15.8}\) | 271.0\({}_{\pm 14.8}\) | **60.9\({}_{\pm 4.9}\)** |
| tree | 28.3\({}_{\pm 5.4}\) | **22.4\({}_{\pm 4.5}\)** | 73.5\({}_{\pm 12.3}\) |
| RF | 15.1\({}_{\pm 2.9}\) | **13.4\({}_{\pm 2.7}\)** | 41.5\({}_{\pm 4.4}\) |
| CASP: OLS | **14.5\({}_{\pm 1.3}\)** | 43.7\({}_{\pm 1.2}\) | 15.7\({}_{\pm 1.2}\) |
| SVR | 13.5\({}_{\pm 1.0}\) | 28.9\({}_{\pm 1.3}\) | **11.9\({}_{\pm 1.1}\)** |
| tree | **14.2\({}_{\pm 2.0}\)** | 14.5\({}_{\pm 1.6}\) | 19.1\({}_{\pm 2.0}\) |
| RF | 8.0\({}_{\pm 0.7}\) | **7.7\({}_{\pm 0.8}\)** | 9.5\({}_{\pm 1.1}\) |

Table 2: Average MSE (and standard deviation) for predictive functions learned using all observations, extremes, and angles of the extreme observations, over 20 random splits of each of the 4 datasets.
2304.07245 | Machine Learning-Based Multi-Objective Design Exploration Of Flexible
Disc Elements | Design exploration is an important step in the engineering design process.
This involves the search for design/s that meet the specified design criteria
and accomplishes the predefined objective/s. In recent years, machine
learning-based approaches have been widely used in engineering design problems.
This paper showcases Artificial Neural Network (ANN) architecture applied to an
engineering design problem to explore and identify improved design solutions.
The case problem of this study is the design of flexible disc elements used in
disc couplings. We are required to improve the design of the disc elements by
lowering the mass and stress without lowering the torque transmission and
misalignment capability. To accomplish this objective, we employ ANN coupled
with genetic algorithm in the design exploration step to identify designs that
meet the specified criteria (torque and misalignment) while having minimum mass
and stress. The results are comparable to the optimized results obtained from
the traditional response surface method. This can have huge advantage when we
are evaluating conceptual designs against multiple conflicting requirements. | Gehendra Sharma, Sungkwang Mun, Nayeon Lee, Luke Peterson, Daniela Tellkamp, Anand Balu Nellippallil | 2023-04-14T16:48:51Z | http://arxiv.org/abs/2304.07245v1 | # DRAFT: MACHINE LEARNING-BASED MULTI-OBJECT DESIGN EXPLORATION OF FLEXIBLE DISC ELEMENTS
###### Abstract
Design exploration is an important step in the engineering design process. This involves the search for design/s that meet the specified design criteria and accomplishes the predefined objective/s. In recent years, machine learning-based approaches have been widely used in engineering design problems. This paper showcases Artificial Neural Network (ANN) architecture applied to an engineering design problem to explore and identify improved design solutions. The case problem of this study is the design of flexible disc elements used in disc couplings. We are required to improve the design of the disc elements by lowering the mass and stress without lowering the torque transmission and misalignment capability. To accomplish this objective, we employ ANN coupled with genetic algorithm in the design exploration step to identify designs that meet the specified criteria (torque and misalignment) while having minimum mass and stress. The results are comparable to the optimized results obtained from the traditional response surface method. This can have huge advantage when we are evaluating conceptual designs against multiple conflicting requirements.
Keywords: Design Exploration; Artificial Neural Network; Disc Design Optimization
## Nomenclature
ANN Artificial Neural Network
RSM Response Surface Model
GA Genetic Algorithm
FEA Finite Element Analysis
ML Machine Learning
## 1 Problem Statement: Design of Flexible Disc Elements
Couplings are mechanical components that join two rotating parts to transmit mechanical power. While transmitting mechanical power, a coupling is also required to offer torque resilience to resist the torsional forces caused by the two rotating parts [1]. The inability to handle misalignment between the rotating parts and system loads causes early torsional failures [2]. A disc coupling is one kind of coupling that uses flexible disc elements in its design. By using flexible disc elements, disc couplings can transmit higher torque, operate at higher speeds, and compensate for misalignment better than other designs do [3]. The disc elements, as shown in Figure 1, are generally stacked together and connected between the rotating parts. As the disc elements play the role of providing torque and taking misalignment, the design of these flexible disc elements is critical in designing a disc coupling.
Despite being one of the important transmission components, the lack of published resources limits designers' ability to contribute to the improvement of disc design. In this paper, we detail the challenges and considerations to be taken into account when designing disc elements. We utilize this knowledge in formulating a design problem aimed at improving the disc design. We showcase the design improvement using two approaches, that is, coupled ANN-GA and coupled RSM-GA.
### Torque Transmission through Discs
In Figure 2 is shown a disc design with 6 links. Each hole location, labeled from 1 to 6 in Figure 2, is referred to as a joint. The segment between consecutive joints is referred to as a link. For example, the segment joining Joint 1 and Joint 6 is called Link 1_6, and so on. In detail, the discs are assembled in couplings in such a way that Joints 1, 3, and 5 are connected to one side of the rotating part while Joints 2, 4, and 6 are connected to the other side (Figure 2). When torque transmission takes place through these discs, the torsional forces exerted on the discs cause deformation, as shown in Figure 2, causing Joint 1 to shift to 1'. It should be noted that this is a very small shift. As a reaction to this shift, reaction forces are created in Link 1_2 and Link 1_6. Link 1_6 is under compressive force while Link 1_2 is under tensile force. In Figure 2, F2 represents the reaction offered by Link 1_6 and F1 represents the reaction offered by Link 1_2. As these discs are thin structures, compressive force induces buckling in the disc links. Therefore, the buckling load that Link 1_6 can take limits the maximum reaction force F2 that the link can offer. It is to be noted that the discs are to be designed to have better buckling resistance to improve the torque-carrying capability, as buckling is one of the critical forces causing failure in the discs [4]. Another way to improve torque transmission capability is adding more discs. While adding more discs improves torque transmission capability, we restrict the scope of this work to the improvement of the disc design.
As a result of the minute shift from of Joint 1 from 1 to 1', one link gets stretched and the other link gets compressed.
\[\text{Elongation of link 1\_2}=\text{Length of 1'\_2}-\text{Length of 1\_2}\]
\[\text{Compression of link 1\_6}=\text{Length of 1\_6}-\text{Length of 1'\_6}\]
As the shift is very small, these are nearly equal; hence, F1 = F2. As the disc has 6 links, \(\Theta=60^{\circ}\) and hence we can establish that the resultant force (F) at Joint 1 is
\[F=\sqrt{F1^{2}+F2^{2}+2(F1)(F2)\cos\Theta}\qquad\text{Equation 1}\]
Substituting \(\Theta\) in Equation 1, we get,
\[F=\sqrt{3}\ F2\qquad\text{Equation 2}\]
The disc has 3 links that undergo buckling and 3 links that undergo tensile stretch. Hence, the force resolution shown in Equation 2 can also be resolved at Joint 3 and Joint 5. As the disc has a Pitch Circle Diameter (PCD) equal to d, the maximum torque (T) that the disc can transmit without buckling is given by:
\[T=(\text{Number of buckling links})\times F\times\frac{d}{2}\qquad\text{Equation 3}\]

\[T=3\sqrt{3}\,F2\times\frac{d}{2}\qquad\text{Equation 4}\]
As discussed previously, buckling will cause these thin structures to fail during torque transmission (assuming there is no misalignment). By improving the disc design to take more buckling load, we can enhance the torque transmission capability.
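As a worked illustration of Equations 2 and 4 (with assumed values for the buckling limit F2 and the pitch circle diameter d, chosen purely for illustration):

```python
# Worked numeric illustration of Equations 2 and 4; F2 and d are assumed values,
# not figures from this study.
import math

F2 = 1200.0   # assumed buckling load limit of one link (N)
d = 0.080     # assumed pitch circle diameter (m)

F = math.sqrt(3) * F2              # resultant force at a joint, Equation 2
T = 3 * math.sqrt(3) * F2 * d / 2  # maximum torque without buckling, Equation 4
print(f"F = {F:.0f} N, T = {T:.1f} N.m")   # -> F = 2078 N, T = 249.4 N.m
```

Doubling the buckling resistance of the links would double the torque capacity, which is the design lever exploited in this work.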
### Reaction Forces Due to Axial Misalignment in Discs
Figure 1: Flexible Disc Element
Figure 2: Disc Load During Torque Transmission
The axial misalignment causes bending in the discs, as shown in Figure 3. This deformation causes high flexural stresses at the ends, which are the reason that the discs fail. The stresses induced by torque transmission and axial misalignment are almost steady; however, tilting movement induces fluctuating stresses [4]. In this paper, we have considered torque and axial misalignment and hence focus on steady stresses. However, tilting of the discs can be added to the formulation by taking appropriate safety factors against fatigue failure.
There are two fundamental sources of loads/stresses in discs, i.e., torque and misalignment. Primarily, the torque induces buckling in the disc links (Figure 4), while the misalignment induces high bending stresses (Figure 5). While avoiding failure due to buckling and bending, we are interested in a disc design that has minimal mass and stress while being able to take the specified axial misalignment and buckling load.
Due to the disc's radial symmetry, Finite Element Analysis (FEA) is carried out on a one-third disc segment to save computational time. Two different disc designs, as shown in Figure 6 and Figure 7, are selected as base designs that need to be improved.
The design of flexible disc elements (Design A and Design B) to be used in disc couplings is required. The disc has a fixed number of links, that is, 6 (see Figure 1). The objective is to improve the quality of the disc design while ensuring an ability to tolerate an axial misalignment of 0.25 mm. The quality of the design is measured in terms of the design objectives. Specifically, we need a design that has lower mass and stress without any sign of buckling in the discs due to the torque load. Alloy steel with the following material properties (Table 1) is used in the design.
| **Material Properties** | **Values** |
| --- | --- |
| Elastic Modulus | 210 GPa |
| Density | 7700 kg/m\({}^{3}\) |
| Yield Strength | 620 MPa |
| Poisson's Ratio | 0.28 |

Table 1: Considered Material Properties of Alloy Steel for Disc Designs A and B
Figure 4: Buckling in Discs
Figure 5: Bending in Discs
Figure 3: Deformation of the Link Due to Axial Misalignment in Discs
Figure 6: Design A
Figure 7: Design B
## 2 Introduction
In this section, we present a review of machine learning-based methods applied to engineering design problems. Subsequently, a brief introduction to the deep learning techniques and approaches that give context to the work presented in this paper is provided. It includes a brief summary of the neural network architecture and learning strategies.
The availability of data, computational power, and new methods has resulted in ML being applied to engineering design problems in several domains, such as materials design [5, 6], manufacturing design [7], topology optimization [8], etc. A novel deep learning-based method to carry out optimal design without an iterative scheme is proposed by Yu and co-authors [9]. A framework to generate new structural and topology designs in an iterative fashion using generative adversarial networks is proposed by Oh and co-authors [10]. An ML-based method for real-time structural design optimization is proposed by Lei and co-authors [11]. Yang and co-authors used generative adversarial networks to generate synthetic images of material microstructures [12]. Liu and Wang [13] proposed a multi-fidelity physics-constrained neural network (MF-PCNN) for material modeling. A parametric level set method for topology optimization based on a deep neural network is proposed by Deng and Albert [8]. All these applications demonstrate the success of ML tools and methods in addressing engineering problems across a wide range of applications.
Deep learning is a powerful tool capable of processing a huge amount of unstructured information to generate insightful results. These impressive results come from the nonlinear processing of data in multiple layers [14]. Deep learning models have various architectures and can perform supervised, semi-supervised, and unsupervised learning [15]. The work presented in this paper uses Artificial Neural Networks (ANN) to carry out supervised learning on simulation data. Specifically, besides the input and output layers, the networks include one or more intermediate layers between them, called hidden layers. A neuron or node in a hidden layer can have one or more inputs \(x_{i}\), which come from the neurons in the input layer or a previous layer. These inputs are multiplied by weights \(w_{i}\), summed, and shifted by a bias \(b\), resulting in an intermediate single-valued result that passes through an activation function \(f(\cdot)\), which smoothly maps the intermediate result into the desired range, e.g., 0 to 1 or -1 to 1. This sequence of calculations, multiplication-sum-shift-mapping, generates an output \(h\) and is performed once for each neuron in the next layer. Likewise, all outputs in a layer repeatedly become the inputs to the neurons in the subsequent layers until the output layer is reached, for as many hidden layers as have been predefined. The construction of the network is quite flexible in that the number of neurons and layers can be independently varied. The use of multiple hidden layers and activation functions in their connections helps in modeling complex nonlinear relationships. The activation function can take various differentiable forms such as linear, hyperbolic tangent, and sigmoid. The overall equation of one output neuron computed from all neurons at the current layer and an illustration of a Deep Neural Network (DNN) with hidden layers are shown in Equation 5 and Figure 8, respectively.
\[h=f\Bigg{(}b+\sum_{i=1}^{n}\mathbf{w}_{i}\mathbf{x}_{i}\Bigg{)}\] Equation 5
In Figure 8, \(n\) is the number of nodes in the current layer. For example, \(n=3\) for the input layer if the control variables are three, such as length, width, and thickness. It is essential to perform a training process before deploying the DNN for prediction. During training, the weights and biases of all neurons are iteratively improved through an optimization process whose objective is to minimize the error in predicting the desired output for the input vectors of a training dataset. Gradient descent is the most common optimization algorithm [16]. Often, the trained network does not properly respond to novel, unobserved inputs outside of the training dataset; these are the so-called "underfitting" and "overfitting" issues. They can be addressed by constructing a properly sized network and by hyperparameter tuning. Furthermore, regularization techniques can be applied during training not only to minimize the training error but also to keep the weights themselves small, under the assumption that the true underlying function has a degree of smoothness. In this work, we employed the Bayesian regularization technique, which seeks a balance between the two objectives of minimizing the training error while keeping the weights small by estimating the objective function parameters based on Bayes' rule, leading to better predictions than non-regularized optimizations [17]. Details on developing deep neural network architectures and training them are available in [18-21].
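As a concrete illustration of Equation 5, the following minimal numpy sketch (our own, not the Matlab code used in this work, and with illustrative layer sizes) implements the forward pass of a DNN with hyperbolic tangent hidden layers and a linear output layer:

```python
# A minimal numpy sketch of the forward pass of Equation 5: tanh hidden layers
# and a linear output layer. Layer sizes below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
sizes = [3, 20, 20, 3]   # (l, b, t) -> (mass, stress, buckling load)

# One (weights, bias) pair per pair of consecutive layers
params = [(rng.standard_normal((m, n)) * 0.1, np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]

def forward(x):
    h = x
    for i, (W, b) in enumerate(params):
        z = W @ h + b                                   # multiply-sum-shift
        h = z if i == len(params) - 1 else np.tanh(z)   # map (linear at output)
    return h

print(forward(np.array([20.0, 10.0, 0.5])))  # untrained prediction for one design
```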
In the following section, we present the design problem. The capability of this method is demonstrated through a problem of the design of flexible disc elements in disc couplings. Deep learning is applied to the data generated through FEA simulations. Optimization is then carried out on the deep learning models for exploring optimal design solutions.

Figure 8: Deep Neural Network with Multiple Hidden Layers, n Input Units and m Output Units with L Neurons of the First Hidden Layer with Weights w and Bias b. The Activation Shown Here is Hyperbolic Tangent Function. Some Connections are Omitted for Better Visualization.
## 3 Materials and Methods
### Simulation Setup
On both designs (Design A and Design B), FEA simulations are carried out with a tangential load (which accounts for the torque load) and an axial misalignment of 0.25 mm. The design variables are length (l), width (b), and thickness (t). Design A has the same width along its length, while Design B has a variable width as a result of its circular arc profile. Hence, for Design B, the width (b) represents the width across the center of the link, as shown in Figure 9.
With alloy steel as the material and with the previously defined loading conditions (torque and misalignment), simulations were carried out to capture the performance change as a result of changes in the design variables. The design variables, their bounds, and the responses of interest for both designs are tabulated in Table 2.
SolidWorks is utilized for carrying out the FEA simulations. From the simulation, for a given set of input variables (length, width, and thickness), the output responses of interest (mass, stress, and buckling load) were recorded. All training and predictions were carried out using Matlab software. All output responses of the dataset were normalized to have zero mean and unit standard deviation in order to make the responses consistent in terms of magnitude and thereby facilitate the training process. Also, the prediction results were scaled back to the original data range by the inverse operation. As to the neural network, the hyperbolic tangent is used as the activation function for all hidden layers and a linear function for the output layer, because this demonstrated the best prediction performance.
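A minimal sketch of this normalization step and its inverse (our illustration, with made-up response values) is given below:

```python
# A minimal sketch of the response normalization described above and its inverse;
# the response values below are made up for illustration.
import numpy as np

def normalize(Y):
    mu, sigma = Y.mean(axis=0), Y.std(axis=0)
    return (Y - mu) / sigma, mu, sigma          # zero mean, unit std per response

def denormalize(Y_scaled, mu, sigma):
    return Y_scaled * sigma + mu                # back to the original units

Y = np.array([[0.10, 300.0, 1500.0],            # illustrative (mass, stress, buckling) rows
              [0.12, 280.0, 1700.0],
              [0.09, 350.0, 1300.0]])
Y_scaled, mu, sigma = normalize(Y)
assert np.allclose(denormalize(Y_scaled, mu, sigma), Y)
```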
### Predicting and Optimizing Designs
The data generated are used in developing models that are to be applied in predicting and optimizing design performances. For the purpose of demonstration, mass and stress are selected as performance parameters, while length (l), width (b), and thickness (t) are performance predictors. The goal is to design discs with minimal stress and mass that satisfy the torque and misalignment requirements while avoiding failure due to buckling. Two methods are simultaneously developed and implemented from the same data for the purpose of comparison, as shown in Figure 10.
A statistical method is applied to develop response surface models from the generated data. Equations 6, 7, and 8 are the response surface models for Design A. Equations 9, 10, and 11 are the response surface models for Design B. The response surface models are used as the models for optimization. All these models have \(R^{2}\geq 0.93\). Similarly, the deep learning method is applied to the same data to develop models with deep neural network configurations. Statistical and deep learning models are developed to predict mass, stress, and buckling load as functions of length (l), width (b), and thickness (t). Using these predictive models, a method is developed to optimize design performances (mass and stress in this case) against design requirements and failure criteria.
**DESIGN A:**

\[\text{Mass}=0.00199\,lbt-0.00371\,bt+0.00369\qquad\text{Equation 6}\]
\[\text{Stress}=263.3+1065.3\,t^{2}-0.47\,lb-25.1\,l^{2}\qquad\text{Equation 7}\]
\[\text{Buckling load}=-0.995\,l^{2}bt^{3}+2075.19\,bt^{3}\qquad\text{Equation 8}\]

**DESIGN B:**

\[\text{Mass}=0.00153\,lbt+0.01613\,lt-0.262\,t+0.00044\qquad\text{Equation 9}\]
\[\text{Stress}=292.9+769.3\,t^{2}-5.17\,l-17.52\,l^{2}\qquad\text{Equation 10}\]
\[\text{Buckling load}=-1.47792\,lbt^{3}+3078.22\,bt^{3}\qquad\text{Equation 11}\]
For design optimization in both methods (see the optimization formulation in Table 3), the non-dominated sorting genetic algorithm II (NSGA-II) [22] is considered in this work. NSGA-II is a popular multi-objective optimization approach owing to its fast non-dominated sorting approach, fast crowded-distance estimation procedure, and simple crowded-comparison operator. NSGA-II follows the general outline of a genetic algorithm with modified mating and survival selection. The initial population (a set of points) is selected, evaluated, and sorted based on non-domination into multiple fronts. Each individual in each front is assigned a rank (fitness) value and a parameter called the crowding distance, which quantifies the proximity of an individual to its neighbors. A low rank and a large crowding distance give better fitness and more diversity in the population, respectively. Based on the rank and crowding distance, individuals are selected to generate offspring (the next set of points for further evaluation) through a process called binary tournament mating selection. The evaluation results from the offspring are sorted again based on non-domination for the next selection process, until the maximum number of generations (iterations) is reached. The total population is maintained with the best results throughout the procedure. A typical run of the algorithm is shown in Figure 11 for the first few generations and the last one, with the Pareto front (also called the Pareto frontier), a set of optimal solutions that usually forms a curve for a problem with two objectives and a surface for three or more objectives.
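The sketch below illustrates the RSM-GA route for Design A, coupling the response surface models of Equations 6-8 with NSGA-II; it assumes the open-source _pymoo_ library (version 0.6 or later), and the variable bounds and the 500 N buckling requirement are placeholder assumptions, not values from this work:

```python
# Sketch of the RSM-GA route for Design A (assumes pymoo >= 0.6); the variable
# bounds on (l, b, t) and the 500 N buckling requirement are placeholder
# assumptions, not values from this study.
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

class DiscDesignA(ElementwiseProblem):
    def __init__(self):
        super().__init__(n_var=3, n_obj=2, n_ieq_constr=1,
                         xl=np.array([10.0, 5.0, 0.3]),   # assumed lower bounds
                         xu=np.array([30.0, 15.0, 1.0]))  # assumed upper bounds

    def _evaluate(self, x, out, *args, **kwargs):
        l, b, t = x
        mass = 0.00199*l*b*t - 0.00371*b*t + 0.00369           # Equation 6
        stress = 263.3 + 1065.3*t**2 - 0.47*l*b - 25.1*l**2    # Equation 7
        buckling = -0.995*l**2*b*t**3 + 2075.19*b*t**3         # Equation 8
        out["F"] = [mass, stress]              # minimize mass and stress
        out["G"] = [500.0 - buckling]          # feasible when buckling >= 500 N

res = minimize(DiscDesignA(), NSGA2(pop_size=100), ("n_gen", 200), seed=1)
print(res.X)   # Pareto-optimal (l, b, t) candidates
print(res.F)   # their (mass, stress) values, i.e., the Pareto front
```

The ANN-GA route is identical except that the trained network replaces Equations 6-8 as the surrogate evaluated inside `_evaluate`.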
## 4 Results and Discussion
### Parametric Study on Network Size
The total number of data samples prepared for this work was 127 for Design A and 128 for Design B. In order to measure the prediction capability of the neural network, networks of different sizes were trained with 100 training samples for both designs, while the remaining 27 samples for Design A and 28 samples for Design B were reserved for testing, so that the training-to-test ratio is roughly 8 to 2. For each network, the performance was measured ten times independently, with randomly drawn samples for the training and test datasets and random initial values for the weights and biases, to account for the stochastic nature of the training. As an example, the following figure (Figure 12) shows an instance of random sampling when 100 and 27 samples are drawn for training and test, respectively.
Figure 11: Example of a two-objective optimization using NSGA-II over multiple generations
The error metric used here is mean absolute percent error.
\[e=\left[\frac{1}{N}\sum_{j}\left|\frac{\hat{y}_{j}-y_{j}}{y_{j}}\right|\right]\cdot 100\qquad\text{Equation 12}\]

where \(y_{j}\) is the ground-truth response scalar value of data sample \(j\), \(\hat{y}_{j}\) is the prediction value by the trained network, and \(N\) is the total number of data samples. Note that this equation is for one response, e.g., mass or stress. All of the percent errors for each response of each network were then averaged. Table 4 and Table 5 show the performance of different sizes of the network, i.e., different numbers of layers and neurons, in terms of prediction error for two cases: when only the test dataset is used, and when both test and training datasets are used altogether.
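A minimal Python implementation of this metric, for a single response, could look as follows:

```python
import numpy as np

def mean_abs_percent_error(y_true, y_pred):
    """Mean absolute percent error of Equation 12, for one response."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean(np.abs((y_pred - y_true) / y_true)) * 100.0
```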
Several observations can be made. First, increasing the number of neurons generally improves the accuracy, but the differences are marginal if 20 or more neurons are used. Second, adding more layers does not guarantee better accuracy, as seen in the inconsistent results of the three-layer networks. Third, the overall low standard deviations (less than one percent) indicate that the trained networks are stable and robust to the random initialization and the random selection of samples for the training dataset. Finally, the overall error when using all data is not much different from that when using only test data, even with the increased number of parameters, indicating the models are well-generalized with the help of the Bayesian regularization. In other words, they are expected to respond correctly to novel unseen data. Therefore, we can safely choose any network with 1 to 2 layers and 20-40 neurons. In this work, we chose 2 hidden layers with 20 neurons each (2\(\times\)20) because it is the simplest model that gives better or comparable predictions.
### Parametric Study on Training Dataset Size
In this section, we studied the effect of the size of the training dataset. Different training set sizes are used for training while the network is fixed at 2\(\times\)20, as chosen in the previous section. The change in the size of the training data results in different sizes of the test data. As in the previous section, 10 random trials were averaged for statistical analysis. Also, predictions were performed for test-only data and for test-and-training data to show how well the models generalize. As seen in the tables (Table 6 and Table 7), the overall errors go down as more training data are added, but the differences converge. One thing to note is that the standard deviation of the Design B test prediction with 120 training samples is higher than that with a smaller number of training samples, indicating that the training data were overfitted.
**Comparison with conventional surrogate modeling:**
In Figure 13, we show the visual comparison of the prediction results using the ANN, specifically the 2\(\times\)20 architecture, and the response surface model (RSM) for Designs A and B when all training and test data are utilized. In the figure, stress is plotted as a function of mass. Unlike the RSM, the ANN predictions are averaged results, so they are plotted with crossbars for the standard deviation. Both approaches generally agree with the ground truth data, but the ANN predicts better for the extreme cases where data are fewer. The overall narrow crossbars indicate that, with any random initialization of parameters and randomly selected training data, the developed networks reliably predict the results. It is also shown that the same 2\(\times\)20 architecture works equally well for both Designs A and B. The overall results, along with the results of the parametric study, give designers confidence that they can choose a reasonably well-trained model without having to tune the model too much for a particular problem.
Finally, we performed design optimization through the genetic algorithm NSGA-II. A Matlab implementation of NSGA-II was utilized for the simulations [23]. The original codes were modified to work seamlessly with our DNN models. The codes were also changed to use more vectorization, which improved the computational performance by a factor of two. Stress and mass were simultaneously optimized for designs that are able to withstand a buckling load of at least 150 N. 150 N is selected
\begin{table}
\begin{tabular}{c||c|c|c|c|c} \hline \hline \multirow{2}{*}{Design B} & \multicolumn{5}{c}{Number of Training Data} \\ \cline{2-6} & 40 & 60 & 80 & 100 & 120 \\ \hline Test & 4.6\(\pm\)1.4 & 3.4\(\pm\)1.0 & 2.1\(\pm\)0.5 & **1.6\(\pm\)0.5** & 1.4\(\pm\)0.7 \\ All & 3.3\(\pm\)0.9 & 2.0\(\pm\)0.6 & 1.0\(\pm\)0.2 & **0.6\(\pm\)0.1** & 0.3\(\pm\)0.1 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Parametric Study On Training Data Size For Design B (Percent Error With Std. Deviation). “All” Data Include Training and Test Data. Note: The Number of Test Data = 128 Total Data – The Number of Training Data.
Figure 13: Prediction comparison with Response Surface Model (RSM). (a) 2\(\times\)20 ANN and (b) RSM for Design A, (c) 2\(\times\)20 ANN and (d) RSM for Design B. Circles represent ground truth data and dots predictions. For the ANN, the crossbar represents the standard deviation of 10 different networks and data compositions.
to ensure that each feasible disc design has the capability to transmit at least 30 Nm of torque (derived using Equation 4 and the variable bounds shown in Table 2). The simulations were performed with a population of 500 for 300 generations (iterations). The optimization results from the genetic algorithms are compared in Figure 14, Table 8, and Table 9. The solutions of the final generations are plotted with two extreme solutions and one optimum solution, defined as the point obtained when neither objective is given preference over the other. Specifically, the "optimum" solution is the one that has the minimum distance between the zero-reference point and all possible optimal solutions for mass and stress in the Pareto front. For the distance measure, all solutions were normalized to zero mean and unit standard deviation.
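As an illustration of this optimization setup (minimize mass and stress subject to a buckling load of at least 150 N), the following Python sketch uses the NSGA-II implementation in the pymoo library (assuming pymoo >= 0.6) together with the `rsm_design_a` function transcribed earlier. The variable bounds are placeholders, since Table 2 is not reproduced here; the paper itself used a Matlab implementation [23]:

```python
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize

class DiscDesignA(ElementwiseProblem):
    """Minimize [mass, stress] subject to buckling load >= 150 N."""

    def __init__(self):
        # Placeholder bounds on (l, b, t); the actual bounds are in Table 2.
        super().__init__(n_var=3, n_obj=2, n_ieq_constr=1,
                         xl=np.array([1.0, 0.5, 0.1]),
                         xu=np.array([5.0, 2.0, 0.5]))

    def _evaluate(self, x, out, *args, **kwargs):
        l, b, t = x
        mass, stress, buckling = rsm_design_a(l, b, t)
        out["F"] = [mass, stress]        # both objectives are minimized
        out["G"] = [150.0 - buckling]    # feasible when <= 0

res = minimize(DiscDesignA(), NSGA2(pop_size=500), ("n_gen", 300),
               seed=1, verbose=False)
pareto_F, pareto_X = res.F, res.X        # objectives and design variables
```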
In Table 8, it can be seen that the statistical method optimizes mass for Design A to 0.05 g while the deep learning method optimizes mass for Design A to 0.04 g. Similarly, the statistical method optimizes stress for Design A to 106.72 MPa while the deep learning method optimizes stress for Design A to 118.68 MPa. In Table 9, it can be seen that both the statistical and deep learning methods optimize mass for Design B to 0.10 g. Similarly, the statistical method optimizes stress for Design B to 92.39 MPa while the deep learning method optimizes stress for Design B to 95.05 MPa.
## 5 Conclusion
In recent years, machine learning-based predictive models have been used to generate solutions to complex engineering problems. In this paper, we couple predictive models developed via machine learning with a genetic algorithm to identify optimal design solutions for flexible disc elements. The data generated from the simulations are used to train the machine learning models. A genetic algorithm utilizes these trained models to navigate the design space and identify optimal design solutions. To demonstrate the efficacy of the method, we compare it with results obtained from the statistical method.
Another important aspect of this paper is to enable disc coupling designers to leverage machine learning methods in identifying optimal design solutions. First, we establish the necessary foundation required for designing flexible discs for disc couplings. Starting with two different disc designs, we showcase how we navigate through the design space and identify optimal solutions. Our intention is to establish the value of the proposed method as a generic method that can be applied to a variety of engineering design problems.
## Acknowledgements
We thank the Center for Advanced Vehicular Systems (CAVS) at Mississippi State University for supporting this effort.
|
2301.03825 | An Analytical Theory for the Growth from Planetesimals to Planets by
Polydisperse Pebble Accretion | Pebble accretion is recognized as a significant accelerator of planet
formation. Yet, only formulae for single-sized (monodisperse) distribution have
been derived in the literature. These can lead to significant underestimates
for Bondi accretion, for which the best accreted pebble size may not be the one
that dominates the mass distribution. We derive in this paper the polydisperse
theory of pebble accretion. We consider a power-law distribution in pebble
radius, and we find the resulting surface and volume number density
distribution functions. We derive also the exact monodisperse analytical pebble
accretion rate for which 3D and 2D accretion are limits. In addition, we find
analytical solutions to the polydisperse 2D Hill and 3D Bondi limits. We
integrate the polydisperse pebble accretion numerically for the MRN
distribution, finding a slight decrease (by an exact factor 3/7) in the Hill
regime compared to the monodisperse case. In contrast, in the Bondi regime, we
find 1-2 orders of magnitude higher accretion rates compared to monodisperse,
also extending the onset of pebble accretion to 1-2 order of magnitude lower in
mass. We find Myr-timescales, within the disk lifetime, for Bondi accretion on
top of planetary seeds of masses $10^{-6}-10^{-4} M_\oplus$, over a significant
range of the parameter space. This mass range overlaps with the high mass end
of the planetesimal initial mass function, and thus pebble accretion is
possible directly following formation by streaming instability. This alleviates
the need for mutual planetesimal collisions as a major contribution to
planetary growth. | Wladimir Lyra, Anders Johansen, Manuel H. Cañas, Chao-Chin Yang | 2023-01-10T07:42:49Z | http://arxiv.org/abs/2301.03825v1 | # An Analytical Theory for the Growth from Planetesimals to Planets by Polydisperse Pebble Accretion
###### Abstract
Pebble accretion is recognized as a significant accelerator of planet formation. Yet, only formulae for single-sized (monodisperse) distribution have been derived in the literature. These can lead to significant underestimates for Bondi accretion, for which the best accreted pebble size may not be the one that dominates the mass distribution. We derive in this paper the polydisperse theory of pebble accretion. We consider a power-law distribution in pebble radius, and we find the resulting surface and volume number density distribution functions. We derive also the exact monodisperse analytical pebble accretion rate for which 3D and 2D accretion are limits. In addition, we find analytical solutions to the polydisperse 2D Hill and 3D Bondi limits. We integrate the polydisperse pebble accretion numerically for the MRN distribution, finding a slight decrease (by an exact factor 3/7) in the Hill regime compared to the monodisperse case. In contrast, in the Bondi regime, we find 1-2 orders of magnitude higher accretion rates compared to monodisperse, also extending the onset of pebble accretion to 1-2 order of magnitude lower in mass. We find Myr-timescales, within the disk lifetime, for Bondi accretion on top of planetary seeds of masses \(10^{-6}-10^{-4}M_{\oplus}\), over a significant range of the parameter space. This mass range overlaps with the high mass end of the planetesimal initial mass function, and thus pebble accretion is possible directly following formation by streaming instability. This alleviates the need for mutual planetesimal collisions as a major contribution to planetary growth.
Pebble accretion, planet formation.
Footnote †: journal: ApJ
Wladimir Lyra, Anders Johansen, Manuel H. Cañas, Chao-Chin Yang
## 1 Introduction
Despite significant theoretical and observational advances in the past decade, a comprehensive theory of planet formation still remains elusive. Planet formation starts from the accumulation of sub-\(\mu\)m interstellar grains, growing by means of coagulation, in hit-and-stick low-velocity collisions (Safronov, 1972; Nakagawa et al., 1981; Tominaga et al., 2021). Laboratory experiments (Blum and Wurm, 2008; Guttler et al., 2010) and numerical simulations (Guttler et al., 2009; Geretshauser et al., 2010; Zsom et al., 2010) provide evidence that this process is efficient in growing solid grains up to mm and cm radius (hereafter called "pebbles") with growth beyond this size being unlikely, due to bouncing, fragmentation, and drift (Dullemond and Dominik, 2005; Brauer et al., 2008; Krijt et al., 2015), unless the possibility of very high porosities is introduced (Suyama et al., 2008, 2012).
The streaming instability (Youdin and Goodman, 2005; Youdin and Johansen, 2007; Johansen and Youdin, 2007; Kowalik et al., 2013; Lyra and Kuchner, 2013; Krapp et al., 2019; Squire and Hopkins, 2020; Schafer et al., 2020; Paardekooper et al., 2020; Chen and Lin, 2020; McNally et al., 2021; Lin, 2021; Flock and Mignone, 2021; Zhu and Yang, 2021; Yang and Zhu, 2021), whereby the drift of grains through the gas is unstable, has been established as a mechanism to produce the first planetesimals (Johansen et al., 2007; Yang and Johansen, 2014; Carrera et al., 2015; Simon et al., 2016; Yang et al., 2017; Schaffer et al., 2018; Nesvorny et al., 2019; Li et al., 2019; Klahr and Schreiber, 2021; Visser et al., 2021; Li and Youdin, 2021), through concentration of pebbles into dense filaments that display a fractal structure with large overdensities reached at the smallest scales of the simulations (Johansen et al., 2015). Yet, growth by binary accretion of planetesimals into progressively larger objects, while able to explain the growth of a giant planet's core at 5 AU (if migration is ignored, Pollack et al., 1994), is already not viable at the orbital position of Saturn, Uranus, or Neptune (Thommes et al., 2003; Johansen and Bitsch, 2019).
This shortcoming of planetesimal accretion motivated the search for other avenues of planetary growth. Fast accretion rates of marginally coupled solids up to planetary masses were first seen in the simulations of Lyra et al. (2008). In that model, vortices trap pebbles and collapse them into Moon-mass objects via direct gravitational instability, which scoop up the remaining pebbles at a vertiginous rate, achieving Mars and Earth mass within a few hundred orbits. Whereas this growth was assisted by vortices, it illustrates that gas-assisted accretion of pebbles is potentially much faster than planetesimal accretion, due to the presence of gas drag as a dissipative mechanism. A similar result was found by Johansen and Lacerda (2010), showing fast accretion rates onto a 100 km seed, highlighting the importance of pebble accretion for planetary growth, and suggesting for the first time that a significant fraction of the accretion of planetary bodies proceeds via pebbles (as opposed to planetesimals), before the dissipation of the gas disk.
An analytical theory of pebble accretion was later developed by Ormel and Klahr (2010) and Lambrechts and Johansen (2012), elucidating the existence of two regimes: one for small masses, where the seed mass accretes from a pebble headwind, a process reminiscent of Bondi-Hoyle-Lyttleton accretion (Bondi and Hoyle, 1944; Hoyle and Lyttleton, 1939); and another, for higher masses, where pebbles are accreted from the whole Hill sphere of the seed. These regimes were dubbed "drift-dominated" and "shear-dominated" by Ormel and Klahr (2010), respectively, whereas Lambrechts and Johansen (2012) called them "Bondi" and "Hill". As a rule of thumb, planetesimals accrete in the Bondi regime, protoplanets in the Hill regime (Ormel, 2017; Johansen and Lambrechts, 2017), and both can yield orders-of-magnitude higher mass accretion rates than planetesimal accretion.
Since its inception, the model has quickly risen to paradigmatic status, by virtue of a number of successes. Pebble accretion explains the formation of the gas giants (Lambrechts and Johansen, 2012), of the ice giants with low gas fractions (Lambrechts et al., 2014); the preponderance of super-Earths around other stars (Lambrechts et al., 2019; Bitsch et al., 2019; Izidoro et al., 2021); it achieves a better planet population synthesis matching exoplanet populations than a planetesimal-based accretion model (Bitsch et al., 2019; Drazkowska et al., 2022), and it is also compatible with the drift-dominated evolution of dust in T-Tauri disks (a flux of \(\sim\) 100 Earth masses over the disk lifetime, Appelgren et al., 2020). Even the classical giant impact model for terrestrial planet formation (Raymond et al., 2004) is now challenged by a hybrid view where terrestrial planets accrete their mass from a combination of planetesimals and small pebbles (Johansen et al., 2015, 2021).
However, most previous works on pebble accretion considered a monodisperse distribution of pebbles. In reality, the pebbles will have a distribution of sizes, ranging from sub-\(\mu\)m to mm or cm-size. A monodisperse distribution can be a reasonable assumption because, for the interstellar grain size distribution, following a power-law of -3.5 of the grain radius (Mathis et al., 1977; Hirashita and Kobayashi, 2013, MRN henceforth) most of the mass resides in the largest pebbles; a result that stands even after dust evolution away from MRN in the protoplanetary disk is considered (Birnstiel et al., 2012). This makes the Hill regime of pebble accretion relatively insensitive to the dust spectrum, and either the dominant pebble size (Lambrechts and Johansen, 2014) or a mass weighted representative pebble size (Guilera et al., 2020; Venturini et al., 2020) yield sensible results.
Indeed, in a recent work, Andama et al. (2022), considering polydisperse Hill accretion, find larger final core masses, not because of faster accretion rates, but because the smaller grains drift more slowly, lingering around for longer times than the largest pebbles, and thus extending the duration of accretion. Drazkowska et al. (2021) also considering the Hill regime, focus on the beneficial aspects of fragmentation on keeping the pebbles sizes small, because too large pebbles accrete poorly. Both works consider a body already near the Bondi-Hill transition mass, a polydisperse size spectrum from the prescription of Birnstiel et al. (2012), and solve numerically for the mass accretion rates. Both works also highlight how the mass accretion rate is dependent on the embryo mass but not on pebble size.
In stark contrast, in the Bondi regime the size distribution should matter significantly for the mass accretion rate itself. In the Bondi regime, the best accreted pebbles are those of friction time similar to the time the pebble takes to cross the Bondi radius, i.e., the Bondi time. For small enough seed mass, the larger, cm-sized pebbles drift so fast past the protoplanet that these pebbles essentially behave like planetesimals. In this case, the cross section for accretion is geometric (for high speeds), or gravitationally focused (for low speeds), and only slightly aided by gas drag. As a result, even though these pebbles dominate the mass budget, their mass accretion rate by the planetesimal can be lower than that of the smaller pebbles for which Bondi accretion is more efficient. If that is the case, the pebble accretion rates in the Bondi regime may be underestimated by the current monodisperse prescriptions. Indeed, Lorek and Johansen (2022) recently find that planetesimal accretion is insignificant beyond 5 AU, so the onset of pebble accretion has to overlap with the high-mass end of the planetesimal mass function if planet formation is to proceed.
In this paper, we work out the polydisperse extension of pebble accretion. We find that indeed Bondi accretion is 1-2 orders of magnitude more efficient in the polydisperse case. We also find that the onset of polydisperse Bondi accretion occurs at lower masses than monodisperse, by 1-2 orders of magnitude. Hill accretion is slightly less efficient, by a factor 3/7, for the MRN distribution. We find the exact solution to the 2D-3D transition, as well as analytical expressions for the polydisperse 2D Hill and 3D Bondi accretion rates.
This paper is structured as follows. In Sect. 2 we derive the grain size distribution functions; in Sect. 3 we apply them to pebble accretion, deriving the polydisperse model, and proceeding with the analysis. In Sect. 4 we work out the analytical expressions for 2D Hill and 3D Bondi polydisperse accretion. A summary concludes the paper in Sect. 7. A table of mathematical symbols used in this work is shown in Table 1.
## 2 Distribution functions
Consider the grain size distribution
\[F(a,z)\equiv\frac{\partial n}{\partial a} \tag{1}\]
that defines the number density \(n\); here, \(a\) is the grain radius and \(z\) the vertical coordinate. We integrate it to yield
\[n(a,z)=\int_{0}^{a}F(a^{\prime},z)\ da^{\prime}, \tag{2}\]
and \(n(z)\equiv n(a_{\rm max},z)\). The volume density is found by multiplying \(F(a,z)\) by the mass \(m(a)\) of a single grain
\[\rho_{d}(a,z)=\int_{0}^{a}m(a^{\prime})\,F(a^{\prime},z)\ da^{\prime}. \tag{3}\]
and again, \(\rho_{d}(z)\equiv\rho_{d}(a_{\rm max},z)\). Due to sedimentation, we can write, for an equilibrium between diffusion and gravity (Dubrulle et al., 1995)
\[F(a,z)\equiv f(a)\ e^{-z^{2}/2H_{d}^{2}}, \tag{4}\]
defining the function \(f(a)\), which is the size distribution function in the midplane. In Eq. (4), \(H_{d}\) is the grain scale height, a function of \(a\)(Klahr & Henning, 1997; Lyra & Lin, 2013)
\[H_{d}=H_{g}\ \sqrt{\frac{\alpha}{\text{St}+\alpha}}, \tag{5}\]
where \(H_{g}\) is the gas scale height, \(\alpha\) is a dimensionless vertical diffusion parameter1, and St is the Stokes number, a non-dimensionalization of the grain radius, normalized by the grain internal density \(\rho_{\bullet}\) and the gas column density \(\Sigma_{g}\)
Footnote 1: This parameter is equivalent to the Shakura-Sunyaev parameter (Shakura & Sunyaev, 1973) for isotropic turbulence of equal diffusion of mass and momentum (Youdin & Lithwick, 2007; Yang et al., 2018).
\[\text{St}\equiv\frac{\pi}{2}\frac{a\,\rho_{\bullet}}{\Sigma_{g}}. \tag{6}\]
### The distribution function in the midplane
To find \(f(a)\), consider spherical grains
\[m(a)=\frac{4\pi}{3}a^{3}\rho_{\bullet} \tag{7}\]
and the column density
\[\Sigma_{d}(a)\equiv\int_{-\infty}^{\infty}\rho_{d}(a,z)\ dz. \tag{8}\]
Substituting Eq. (3), and integrating in \(z\), we find
\[\Sigma_{d}(a)=\frac{2^{5/2}\pi^{3/2}}{3}\int_{0}^{a}\rho_{\bullet}\,H_{d}\ a^{ \prime\,3}\ f(a^{\prime})\ da^{\prime}, \tag{9}\]
and the total column density \(\Sigma_{d}\equiv\Sigma_{d}(a_{\rm max})\). We keep the internal density \(\rho_{\bullet}\) inside the integral because it is in general a function of radius, if grains have different composition. Given
\[\Sigma_{d}(a)=\int_{0}^{a}\frac{\partial\Sigma_{d}(a^{\prime})}{\partial a^{ \prime}}da^{\prime}, \tag{10}\]
we find, equating the integrands of Eq. (9) and Eq. (10), and solving for \(f(a)\)
\[f(a)=\frac{3}{2^{5/2}\pi^{3/2}H_{g}\rho_{\bullet}}\sqrt{1+\frac{\text{St}}{ \alpha}}\ a^{-3}\frac{\partial\Sigma_{d}(a)}{\partial a}, \tag{11}\]
where we also substituted Eq. (5) for \(H_{d}\) as a function of St. The distribution is determined if we find an expression for \(\partial_{a}\Sigma_{d}(a)\).
#### 2.1.1 Sedimented and unsedimented limits
To find the general solution, we need to find the expression for \(\partial_{a}\Sigma_{d}\) in Eq. (11). We do so by realizing that even though the midplane volume density is modified by sedimentation, the column density \(\Sigma_{d}\) is not. The two limits of \(f(a)\) are, first, the "sedimented" limit, for St \(\gg\alpha\)
\[f(a)^{\rm(sed)}=\frac{3}{8\pi H_{g}\rho_{\bullet}^{1/2}\Sigma_{g}^{1/2}\alpha^{1/2}}\ a^{-5/2}\frac{\partial\Sigma_{d}(a)}{\partial a}, \tag{12}\]
and, second, the unsedimented limit, for St \(\ll\alpha\)
\[f(a)^{\rm(unsed)}=\frac{3}{2^{5/2}\pi^{3/2}H_{g}\rho_{\bullet}}\ a^{-3}\frac{\partial\Sigma_{d}(a)}{\partial a}, \tag{13}\]
where we have substituted the Stokes number given by Eq. (6). Since the column density does not change with sedimentation, we can find \(\partial_{a}\Sigma_{d}\) by either limit.
We assume a power-law dependency for the unsedimented distribution in the midplane
\[f(a)^{\text{(unsed)}}\;\propto\;a^{-k} \tag{14}\]
where \(k\) is a constant (the MRN distribution corresponds to \(k\) = 3.5). Equating Eq. (13) and Eq. (14)
\[\frac{\partial\Sigma_{d}(a)}{\partial a}\propto\rho_{\bullet}\;a^{3}\;a^{-k}. \tag{15}\]
We thus write
\[\frac{\partial\Sigma_{d}(a)}{\partial a}\propto a^{-p}; \tag{16}\] \[\rho_{\bullet}\propto a^{-q};\] (17) \[p-q=k-3. \tag{18}\]
We can then write the column density distribution as a power law
\[\frac{\partial\Sigma_{d}(a)}{\partial a}=D\;a^{-p}. \tag{19}\]
Integrating it in \(a\), equating to Eq. (10), and solving for the constant \(D\), we find
\[D=\frac{(1-p)Z\Sigma_{g}}{a_{\text{max}}^{1-p}}; \tag{20}\]
here we also substitute \(\Sigma_{d}=Z\Sigma_{g}\), where \(Z\) is the metallicity.
Considering now the variation of the internal density
\[\rho_{\bullet}(a)=\rho_{\bullet}^{(0)}\left(\frac{a}{a_{\text{max}}}\right)^{- q}, \tag{21}\]
\begin{table}
\begin{tabular}{l l l l l l} \hline Symbol & Definition & Description & Symbol & Definition & Description \\ \hline \(F\) & Eq. (1) & pebble size distribution & \(\rho_{d0}\) & Eq. (39) & dust density at midplane \\ \(a\) & & pebble radius & \(\delta v\) & Eq. (40) & approach velocity \\ \(z\) & & vertical coordinate & \(S\) & Eq. (34) & stratification integral \\ \(n\) & Eq. (2) & number density & \(\Delta v\) & & sub-Keplerian velocity reduction \\ \(m\) & Eq. (7) & pebble mass & \(\Omega\) & \(\sqrt{\frac{GM_{\odot}}{r^{3}}}\) & Keplerian frequency \\ \(\rho_{d}\) & Eq. (3) & volume density & \(\hat{R}_{\text{acc}}\) & Eq. (53) & accretion radius \\ \(H_{d}\) & Eq. (5) & pebble scale height & \(\chi\) & Eq. (41) & coefficient \\ \(f\) & Eq. (22) & pebble distribution in midplane & \(\tau_{f}\) & \(\text{St}/\Omega\) & friction time \\ \(H_{g}\) & \(c_{s}/\Omega\) & gas scale height & \(t_{p}\) & Eq. (42) & passing timescale \\ \(\alpha\) & & Shakura-Sunyaev viscosity & \(\gamma\) & Eq. (41) & coefficient \\ St & Eq. (6) & Stokes number & \(G\) & & gravitational constant \\ \(\rho_{\bullet}\) & & internal pebble density & \(M_{p}\) & & planetesimal mass \\ \(\Sigma_{g}\) & Eq. (23) & gas column density & \(R_{H}\) & Eq. (43) & Hill radius \\ \(\Sigma_{d}\) & \(Z\Sigma_{g}\) & pebble column density & \(t_{B}\) & Eq. (47) & Bondi time \\ \(k\) & Eq. (14) & power law of unsedimented distribution & \(R_{B}\) & Eq. (46) & Bondi radius \\ \(p\) & Eq. (16) & power law of column density distribution & \(M_{t}\) & Eq. (49) & transition mass \\ \(q\) & Eq. (17) & power law of internal density & \(M_{\text{HB}}\) & Eq. (48) & Hill-Bondi transition mass \\ \(D\) & Eq. (20) & coefficient of column density distribution & \(R\) & \(\left(\frac{3M_{p}}{4\pi\rho_{\bullet}}\right)^{1/3}\) & planetesimal radius \\ \(Z\) & \(\Sigma_{d}/\Sigma_{g}\) & dust-to-gas ratio & \(v_{\text{esc}}\) & \(\sqrt{\frac{2GM_{p}}{R}}\) & escape velocity \\ \(\rho_{\bullet}^{(0)}\) & & internal density of largest grain & \(\text{St}_{p}\) & Eq. (51) & Stokes number past planetesimal \\ \(r\) & & radial coordinate & \(M_{\text{BL}}\) & Eq. (52) & Bondi-geometric transition mass \\ \(r_{c}\) & Eq. (23) & cutoff radius & \(t_{\text{acc}}\) & Eq. (54) & accretion time \\ \(W\) & Eq. (27) & column density distribution & \(h\) & \(H_{g}/r\) & aspect ratio \\ \(R_{\text{acc}}\) & Eq. (41) & drag-modified accretion radius & \(m_{p}\) & & characteristic streaming instability mass \\ \(\xi\) & Eq. (28) & coefficient & \(\psi\) & Eq. (64) & shorthand \\ \(\dot{M}\) & & mass accretion rate & \(T\) & Eq. (24) & gas temperature \\ \(c_{s}\) & \(\sqrt{Tc_{p}(\Gamma-1)}\) & sound speed & \(\Gamma\) & & adiabatic index \\ \(\mu\) & & mean molecular weight & \(c_{p}\) & \(\frac{R_{\text{gas}}}{\mu}\frac{\Gamma}{(\Gamma-1)}\) & specific heat at constant pressure \\ \(R_{\text{gas}}\) & & gas constant & \(c_{v}\) & \(c_{p}/\Gamma\) & specific heat at constant volume \\ \hline \end{tabular}
\end{table}
Table 1: Symbols used in this work.
the full distribution is found at last
\[f(a)=\frac{3(1-p)Z\Sigma_{g}}{2^{5/2}\pi^{3/2}H_{g}\rho_{\bullet}^{(0)}a_{\rm max }^{4-k}}\sqrt{1+a\frac{\pi}{2}\frac{\rho_{\bullet}(a)}{\Sigma_{g}\alpha}}\ a^{-k}. \tag{22}\]
Notice that to keep \(f(a)\) positive definite, the solution requires \(p<1\). For \(q=0\), Eq. (18) constrains \(k<4\).
The gas density used is
\[\Sigma_{g}=10^{3}\,\mathrm{g}\,\mathrm{cm}^{-2}\ \left(\frac{r}{\mathrm{AU}} \right)^{-1}\ e^{-r/r_{c}} \tag{23}\]
i.e. the self-similar solution to the viscous evolution equations (Lynden-Bell & Pringle, 1974). Here \(r\) is the distance to the star, and \(r_{c}\) a truncation radius. We choose \(r_{c}=\)100 AU. For the temperature, we use the irradiated, radially optically thick, vertically optically thin model of Kusaka et al. (1970, see also Ida et al., 2016)
\[T=150\,\mathrm{K}\ \left(\frac{r}{\mathrm{AU}}\right)^{-3/7} \tag{24}\]
In addition, we assume metallicity \(Z=0.01\), adiabatic index 1.4, and mean molecular weight 2.3.
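The midplane distribution of Eq. (22) with this disk model is straightforward to evaluate numerically. The following is a minimal sketch (our own illustration, not code from the paper), in Python with cgs units, assuming \(q=0\):

```python
import numpy as np

# Sketch of the midplane distribution f(a) of Eq. (22), with St from
# Eq. (6) and the disk model of Eqs. (23)-(24); q = 0 throughout.
AU, G = 1.496e13, 6.674e-8                  # cm; cgs gravitational constant
M_sun, k_B, m_H = 1.989e33, 1.381e-16, 1.673e-24

def disk(r_AU, r_c=100.0, mu=2.3, gamma=1.4):
    Sigma_g = 1.0e3 / r_AU * np.exp(-r_AU / r_c)     # Eq. (23), g/cm^2
    T = 150.0 * r_AU**(-3.0 / 7.0)                   # Eq. (24), K
    cs = np.sqrt(gamma * k_B * T / (mu * m_H))       # adiabatic sound speed
    Omega = np.sqrt(G * M_sun / (r_AU * AU)**3)      # Keplerian frequency
    return Sigma_g, cs / Omega                       # Sigma_g and H_g

def f_midplane(a, r_AU=20.0, a_max=1.0, kexp=3.5, Z=0.01,
               rho_int=3.5, alpha=1e-4):
    Sigma_g, H_g = disk(r_AU)
    p = kexp - 3.0                                   # Eq. (18) with q = 0
    St = 0.5 * np.pi * a * rho_int / Sigma_g         # Eq. (6)
    coeff = 3 * (1 - p) * Z * Sigma_g / (
        2**2.5 * np.pi**1.5 * H_g * rho_int * a_max**(4 - kexp))
    return coeff * np.sqrt(1 + St / alpha) * a**(-kexp)   # Eq. (22)

a = np.logspace(-4, 0, 200)   # 1 micron to 1 cm
f_of_a = f_midplane(a)
```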
We plot the resulting distributions in Fig. 1, for 20 AU, maximum grain size \(a_{\rm max}=1\) cm, and \(k=3.5\). For internal density we use \(\rho_{\bullet}^{(0)}=3.5\,\mathrm{g}\,\mathrm{cm}^{-3}\) and \(q=0\). The left panel shows \(f(a)\); the right panel \(m(a)f(a)\). The functions are shown for three values of \(\alpha\) (solid lines). The unsedimented (\(\mathrm{St}\ll\alpha\), black dashed line) and sedimented (\(\mathrm{St}\gg\alpha\), dotted lines) limits are shown for comparison. We see that the sedimented distributions follow the unsedimented line for \(\mathrm{St}\lesssim\alpha\), and the sedimented line for \(\mathrm{St}\gtrsim\alpha\), as expected. The flat profile for the sedimented cases is due to the MRN exponent, coupled with the \(\sqrt{\mathrm{St}}\) factor from the sedimentation.
### Column density
For completeness, we define the vertically-integrated grain size distribution
\[W(a)\equiv\int_{-\infty}^{\infty}F(a,z)\ dz=\sqrt{2\pi}\ H_{d}\ f(a), \tag{25}\]
so that the pebble column density is
\[\Sigma_{d}(a)=\int_{0}^{a}m(a^{\prime})\ W(a^{\prime})\ da^{\prime}. \tag{26}\]
Substituting Eq. (22) in Eq. (25), we find the column density distribution function
\[W(a)=\frac{3(1-p)Z\Sigma_{g}}{4\pi\rho_{\bullet}^{(0)}a_{\rm max}^{4-k}}\ a^{-k}, \tag{27}\]
which indeed yields \(\Sigma_{d}=Z\Sigma_{g}\) when integrated according to Eq. (26).
## 3 Pebble accretion
Having found the size distribution function for the pebble density, we are in position to apply it to pebble accretion. Pebble accretion is usually split into three regimes of accretion (loosely coupled, Bondi, and Hill accretion), each with 2D and 3D limits. We start by deriving the exact solution for the 2D-3D transition.
Figure 1: _Left:_ The grain distribution function \(f(a)\) in the midplane. Integrated over \(a\), this function yields the number density \(n\) in the midplane. This model is calculated at 20 AU with density and temperature according to Eqs. (23) and (24), \(Z=0.01\), constant \(\rho_{\bullet}\), and MRN (unsedimented, \(\mathrm{St}\ll\alpha\), black dashed line). Three values of \(\alpha\) are shown (solid lines). The sedimented limits (\(\mathrm{St}\gg\alpha\), dotted lines) are shown for comparison. _Right:_ mass density distribution, i.e., the left panel multiplied by the mass of a pebble. Integrated, this function yields the grain density \(\rho_{d0}\) in the midplane. The distributions follow the unsedimented line for \(\mathrm{St}\lesssim\alpha\), and the sedimented line for \(\mathrm{St}\gtrsim\alpha\), as expected. The mass function is constant with \(a\) in the sedimented limit because of the MRN choice: the \(a^{-3.5}\) power law is canceled by the combination of the mass of the particle (\(a^{3}\)) and the extra \(\sqrt{a}\) dependency from the sedimentation. Large dots mark the point where \(\mathrm{St}=\alpha\).
### Exact solution for the monodisperse 2D-3D transition
The 3D and 2D limits of pebble accretion correspond to whether or not the accretion is embedded, i.e., if the accretion radius \(R_{\rm acc}\) exceeds the height of the pebble column. The quantity governing the transition is \(R_{\rm acc}/H_{d}\), or rather
\[\xi\equiv\left(\frac{R_{\rm acc}}{2H_{d}}\right)^{2}, \tag{28}\]
which we will show a posteriori. The monodisperse mass accretion rates in these limits are (Lambrechts & Johansen, 2012)
\[\dot{M}_{\rm 3D} =\lim_{\xi\to 0}\dot{M}=\pi R_{\rm acc}^{2}\rho_{d0}\delta v, \tag{29}\] \[\dot{M}_{\rm 2D} =\lim_{\xi\to\infty}\dot{M}=2R_{\rm acc}\Sigma_{d}\delta v, \tag{30}\]
where \(\delta v\) is the velocity at which the pebble approaches the accretor, and \(\rho_{d0}\) is the midplane density. In principle, we could apply Eq. (3) with Eq. (22) on Eq. (29); and Eq. (26) with Eq. (27) on Eq. (30), working with the two limits separately. Yet, given that \(\xi\) is a function of grain size, and there are other transitions to deal with (loose coupling/Bondi/Hill), it is preferable to work with a general expression for \(\dot{M}\), which we derive in this section.
Considering parallel horizontal chords of infinitesimal thickness in the vertical direction until the full accretion radius is taken into account, the general expression for the mass accretion rate is
\[\dot{M}=\int_{-R_{\rm acc}}^{R_{\rm acc}}2\sqrt{R_{\rm acc}^{2}-z^{2}}\ \rho_{d0}\ \exp\left(-\frac{z^{2}}{2H_{d}^{2}}\right)\delta v\ dz. \tag{31}\]
Following Johansen et al. (2015) we define the stratification integral
\[S\equiv\frac{1}{\pi R_{\rm acc}^{2}}\int_{-R_{\rm acc}}^{R_{\rm acc}}2\sqrt{R _{\rm acc}^{2}-z^{2}}\ \exp\left(-\frac{z^{2}}{2H_{d}^{2}}\right)\ dz, \tag{32}\]
so that the mass accretion rate is generalized into one expression as
\[\dot{M}=\pi R_{\rm acc}^{2}\rho_{d0}\ S\ \delta v. \tag{33}\]
While Johansen et al. (2015) use a square approximation for the accretion radius, we find the exact solution of the stratification integral
\[S=e^{-\xi}\left[I_{0}(\xi)+I_{1}(\xi)\right], \tag{34}\]
where \(I_{\nu}(\xi)\) are the modified Bessel functions of the first kind, and \(\xi\) is given by Eq. (28). The exact monodisperse accretion rate is
\[\dot{M}=\pi R_{\rm acc}^{2}\rho_{d0}\ \delta v\,e^{-\xi}\left[I_{0}(\xi)+I_{1}( \xi)\right]. \tag{35}\]
Indeed for \(\xi\to 0\), the Bessel functions tend to \(I_{0}(0)=1\), \(I_{1}(0)=0\), and we recover 3D accretion (Eq. 29). For \(\xi\to\infty\), both Bessel functions tend to \(e^{\xi}/\sqrt{2\pi\xi}\), and 2D accretion is recovered (Eq. 30). Fig. 2 shows the agreement graphically. The square approximation is shown for comparison.
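The stratification integral of Eq. (34) is easy to evaluate with standard library Bessel functions. A minimal sketch, using SciPy's exponentially scaled modified Bessel functions to avoid overflow at large \(\xi\), that also checks the two limits:

```python
import numpy as np
from scipy.special import ive  # exponentially scaled modified Bessel I_nu

def stratification(xi):
    """Exact stratification integral of Eq. (34).

    ive(nu, xi) = I_nu(xi) * exp(-xi), so the product e^{-xi} I_nu(xi)
    is evaluated without overflow even for large xi.
    """
    return ive(0, xi) + ive(1, xi)

# Limits: S -> 1 for xi -> 0 (3D, Eq. 29) and
# S -> sqrt(2/(pi*xi)) for xi -> infinity (2D, Eq. 30).
print(stratification(1e-6))                              # ~1
xi = 1e4
print(stratification(xi), np.sqrt(2.0 / (np.pi * xi)))   # nearly equal
```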
### Polydisperse prescription
To generalize Eq. (35) into a polydisperse description, we consider the integrated polydisperse accretion rate to be \(\dot{M}\equiv\dot{M}(a_{\rm max})\), where
\[\dot{M}(a)=\int_{0}^{a}\frac{\partial\dot{M}(a^{\prime})}{\partial a^{\prime} }da^{\prime}, \tag{36}\]
with
\[\frac{\partial\dot{M}(a)}{\partial a}=\pi R_{\rm acc}^{2}(a)\ \delta v(a)\ S(a)\ m(a)\ f(a). \tag{37}\]
Indeed, Eq. (36) with the integrand given by Eq. (37) is equivalent to Eq. (35) if
\[\int_{0}^{a_{\rm max}}R_{\rm acc}^{2}(a)\ \delta v(a)\ S(a)\ m(a)\ f(a)\,da=\bar{R}_{\rm acc}^{2}\ \overline{\delta v}\ \bar{S}\,\rho_{d0}, \tag{38}\]
where the overline denotes that the quantity is an "effective" quantity, independent of pebble size. If the accretion radius \(R_{\rm acc}\), the approach velocity \(\delta v\), and the stratification integral \(S\) were independent of the grain radius \(a\), Eq. (38) would be exactly equivalent to replacing the midplane dust density by the integrated grain size distribution
\[\rho_{d0}=\int_{0}^{a_{\rm max}}m(a)f(a)da, \tag{39}\]
which is intuitive. We can now use much of the formalism of pebble accretion already derived in the literature. The approach velocity \(\delta v\) is given by
\[\delta v\equiv\Delta v+\Omega R_{\rm acc}, \tag{40}\]
Figure 2: General expression for the monodisperse pebble accretion rate (Eq. 35). The 3D and 2D limits (eqs 29 and 30, respectively) are recovered. The square approximation is also shown for comparison.
where \(\Delta v\) is the sub-Keplerian velocity reduction and \(\Omega\) is the Keplerian frequency. The accretion radius is (Ormel and Klahr, 2010)
\[R_{\rm acc}\equiv\hat{R}_{\rm acc}{\rm exp}\left[-\chi(\tau_{f}/t_{p})^{\gamma} \right], \tag{41}\]
where \(\tau_{f}={\rm St}/\Omega\) is the pebble friction time, \(\chi=0.4\) and \(\gamma=0.65\) are empirically-determined coefficients, and
\[t_{p}\equiv\frac{GM_{p}}{\left(\Delta v+\Omega R_{H}\right)^{3}} \tag{42}\]
is the characteristic passing time scale. Here \(G\) is the gravitational constant, \(M_{p}\) the mass of the planetesimal, and \(R_{H}\) its Hill radius
\[R_{H}\equiv\left(\frac{GM_{p}}{3\Omega^{2}}\right)^{1/3}. \tag{43}\]
The variable \(\hat{R}_{\rm acc}\) depends on the accretion regime. For Hill accretion it is
\[\hat{R}_{\rm acc}^{\rm(Hill)}=\left(\frac{{\rm St}}{0.1}\right)^{1/3}R_{H}, \tag{44}\]
and for Bondi accretion it is
\[\hat{R}_{\rm acc}^{\rm(Bondi)}=\left(\frac{4\tau_{f}}{t_{B}}\right)^{1/2}R_{B}, \tag{45}\]
where
\[R_{B}\equiv\frac{GM_{p}}{\Delta v^{2}} \tag{46}\]
is the Bondi radius and
\[t_{B}\equiv\frac{R_{B}}{\Delta v} \tag{47}\]
is the Bondi time. The transition mass between Bondi and Hill accretion is defined by (Ormel, 2017)
\[M_{HB}=\frac{M_{t}}{8{\rm St}}, \tag{48}\]
where
\[M_{t}\equiv\frac{\Delta v^{3}}{G\Omega}. \tag{49}\]
A third regime also exists, of accretion of loosely coupled pebbles, for which the accretion radius is the physical radius \(R\) augmented by the gravitational focusing cross-section
\[R_{\rm acc}^{\rm(geo)}=R\sqrt{1+\frac{v_{\rm esc}^{2}}{\Delta v^{2}}}, \tag{50}\]
where \(v_{\rm esc}\) is the escape velocity of the planetary seed. In this regime the grains are so loosely coupled that they behave almost like planetesimals, except for small enough grains, which remain coupled to the gas and follow the gas streamlines. The quantity that defines this latter transition is (Ormel, 2017)
\[{\rm St}_{\rm p}=\frac{\Delta v\;\tau_{f}}{R}, \tag{51}\]
that is, the friction time normalized by the time to pass past the planetesimal; a planetesimal Stokes number (hence the "p" in \({\rm St}_{\rm p}\)). For \({\rm St}_{\rm p}<1\), we set \(R_{\rm acc}^{\rm(geo)}=0\). The transition mass \(M_{BL}\) between Bondi and loosely coupled accretion happens at (Ormel, 2017)
\[M_{BL}=\frac{M_{t}}{8}{\rm St}. \tag{52}\]
### Polydisperse vs Monodisperse
We show in the left panel of Fig. 3 a reproduction of the monodisperse accretion rates from Johansen and Lambrechts (2017), for \(a=10\,{\rm cm}\), and at \(5\,{\rm AU}\). Even though the observations do not support the existence of these large grains, we use it for benchmark purposes. The different lines show the pebble accretion rates in the Hill and Bondi regimes, as well as the loosely coupled regime for low masses.
The Hill limit (blue dashed line) is recovered for Eq. (35) with \(\hat{R}_{\rm acc}\) given by Eq. (44), and \(\delta v=\Omega R_{\rm acc}^{\rm(Hill)}\). The Bondi limit (red dashed line) is recovered for Eq. (35) with \(\hat{R}_{\rm acc}\) given by Eq. (45), and \(\delta v=\Delta v+\Omega R_{\rm acc}^{\rm(Bondi)}\). The actual solution (black thick line) uses
\[\hat{R}_{\rm acc}=\left\{\begin{array}{ll}\hat{R}_{\rm acc}^{\rm(Hill)}& \mbox{if $M\geq M_{\rm HB}$},\\ \hat{R}_{\rm acc}^{\rm(Bondi)}&\mbox{if $M<M_{\rm HB}$},\end{array}\right. \tag{53}\]
and the general \(\delta v\) given by Eq. (40). The mass accretion rate is then the maximum between this and the loosely coupled accretion rates. The loosely coupled regime is given by Eq. (35) with \(\delta v=\Delta v\) and \(R_{\rm acc}\) given by Eq. (50) if \({\rm St}_{\rm p}\geq 1\), and zero otherwise.
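A compact numerical sketch of this prescription (our own illustration, not the paper's code) is given below. It integrates Eq. (37) over the size distribution, selecting the Bondi or Hill branch per Eq. (53); the loosely coupled branch is omitted for brevity, and the headwind speed \(\Delta v\) is set to an assumed typical 30 m/s rather than computed from the disk model. It reuses `disk()`, `f_midplane()`, and the constants from the earlier sketch:

```python
import numpy as np
from scipy.special import ive

def mdot_poly(M_p, r_AU=20.0, a_max=1.0, alpha=1e-4, kexp=3.5,
              Z=0.01, rho_int=3.5, dv=3.0e3):
    """Integrated polydisperse accretion rate (Eq. 36), cgs units.

    dv is the sub-Keplerian headwind, here an assumed 30 m/s (3e3 cm/s).
    Only the Bondi/Hill branches of Eq. (53) are included.
    """
    Sigma_g, H_g = disk(r_AU)
    Omega = np.sqrt(G * M_sun / (r_AU * AU)**3)
    a = np.logspace(-5, np.log10(a_max), 400)        # grain radii [cm]
    St = 0.5 * np.pi * a * rho_int / Sigma_g         # Eq. (6)
    tau_f = St / Omega
    H_d = H_g * np.sqrt(alpha / (St + alpha))        # Eq. (5)

    R_H = (G * M_p / (3 * Omega**2))**(1 / 3)        # Eq. (43)
    R_B = G * M_p / dv**2                            # Eq. (46)
    t_B = R_B / dv                                   # Eq. (47)
    t_p = G * M_p / (dv + Omega * R_H)**3            # Eq. (42)
    M_t = dv**3 / (G * Omega)                        # Eq. (49)

    Rhat = np.where(M_p >= M_t / (8 * St),           # Eq. (48): Hill or Bondi
                    (St / 0.1)**(1 / 3) * R_H,       # Eq. (44)
                    np.sqrt(4 * tau_f / t_B) * R_B)  # Eq. (45)
    R_acc = Rhat * np.exp(-0.4 * (tau_f / t_p)**0.65)  # Eq. (41)

    dv_eff = dv + Omega * R_acc                      # Eq. (40)
    xi = (R_acc / (2 * H_d))**2                      # Eq. (28)
    S = ive(0, xi) + ive(1, xi)                      # Eq. (34)
    m = (4 * np.pi / 3) * a**3 * rho_int             # Eq. (7)
    dMda = np.pi * R_acc**2 * dv_eff * S * m * f_midplane(
        a, r_AU, a_max, kexp, Z, rho_int, alpha)     # Eq. (37)
    return np.trapz(dMda, a)                         # Eq. (36)
```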
The right panel of Fig. 3 shows how the accretion rates differ when we include a particle size distribution. In this panel we are showing the integrated accretion rate \(\dot{M}\equiv\dot{M}(a_{\rm max})\) given by Eq. (36). The monodisperse line is shown for comparison.
#### 3.3.1 Slightly lower efficiency in the Hill regime
From comparing the plots in Fig. 3, we see that the polydisperse accretion rate is slightly lower in the regime of Hill accretion; this occurs because, in the Hill regime, there is less mass at the biggest pebble size \(a_{\rm max}\) compared to monodisperse (where all pebbles are of \(10\,{\rm cm}\)). We work out in Sect. 4 this reduction factor to be exactly 3/7.
#### 3.3.2 Significantly higher efficiency in the Bondi regime
In the Bondi regime, conversely, there are now pebbles to accrete of friction time similar to the Bondi time.
In the monodisperse regime there were only the 10 cm pebbles that, for a very low mass seed, behave like infinite St and do not accrete well. As a result, in the polydisperse case, Bondi accretion is more efficient than loosely coupled accretion over a wider range of low seed masses. At the mass where monodisperse experiences the onset of pebble accretion (about \(10^{-4}M_{\oplus}\)), the polydisperse distribution is well into the Bondi regime, which is about 100\(\times\) more efficient. We also see that the onset of pebble accretion occurred between \(10^{-6}\) and \(10^{-5}M_{\oplus}\), i.e., between 100-200 km. This is a significantly earlier onset of pebble accretion, which may eliminate the need for planetesimal accretion to bridge the gap between the largest masses formed by streaming instability and the onset of efficient pebble accretion (Johansen et al., 2015; Schafer et al., 2017; Li et al., 2019).
We plot in Fig. 4 the differential mass accretion rate as a function of pebble size (horizontal axis) and seed mass (vertical axis). The left panel shows the polydisperse mass accretion rate \(\partial_{\ln a}\dot{M}\), and the right panel shows the ratio between that and the same quantity for the largest grain size in the
Figure 4: _Left:_ The polydisperse pebble accretion rate \(\partial_{\ln a}\dot{M}\) (Eq. 37), as a function of grain radius. In the Hill accretion regime the largest pebble present dominates the mass accretion rate. Conversely, for Bondi accretion, we see that at a given seed mass the differential accretion rate is non-monotonic with grain size. For low enough seed masses, the biggest grains, although dominating the mass distribution, accrete in the loosely coupled regime. _Right:_ Same as the left plot, but normalized by the accretion rate for \(a_{\rm max}\) (proxy for monodisperse). The bright red contours are the regions where polydisperse accretion is enhanced over monodisperse. We see that it mostly corresponds to the region where monodisperse is in the loosely coupled regime, but polydisperse is already in Bondi. The best accreted pebbles are those for which the stopping time \(\tau_{f}\) equals the Bondi time \(t_{B}\). Absent in the monodisperse description, these pebbles may contribute less to the mass budget, but their enhanced accretion ends up dominating the mass accretion rate.
Figure 3: Comparison between monodisperse (left) and the integrated polydisperse (right) accretion rates (Eq. 36). The left panel uses the parameters of Fig. 4 of Johansen and Lambrechts (2017), except that we use the monodisperse general equation derived here (Eq. 35). A pebble size of 10 cm is not supported by observations but we keep this size for benchmarking purposes. The polydisperse accretion rate is reproduced in the left plot, and the monodisperse accretion rate in the right plot (grey lines), for comparison. Hill accretion yields a lower accretion rate (a factor 3/7 of the monodisperse rate) because other pebble sizes are present, not only \(a=10\) cm. The main difference is the accretion rate for polydisperse Bondi accretion being up to two orders of magnitude more efficient than monodisperse, and the onset of pebble accretion happening over one order of magnitude lower in mass. This occurs because the best-accreted pebble is not present in the monodisperse distribution, and \(a_{\rm max}\) is too loosely coupled, accreting poorly. Notice the smooth transition from Bondi to Hill accretion with the exact 2D-3D transition.
distribution, which we take as a proxy for monodisperse. The three accretion regimes are labeled in the left plot; one sees the smooth transition between Hill and Bondi accretion, and the discontinuous transition from Bondi to loosely coupled. It is seen that, at a given mass, Hill accretion is monotonic with particle size, but Bondi accretion is not. A local maximum of the mass accretion rate occurs, corresponding to the size for which \(\tau_{f}=t_{\rm Bondi}\), which in turn implies a linear dependency of the best accreted particle size on the seed mass. The bright red parts of the right plot show where Bondi accretion is more efficient than monodisperse. It is the more efficient accretion of these grains that boosts the Bondi accretion rates in the polydisperse case. We see that it corresponds chiefly to the region of the parameter space for which monodisperse accretion was in the loosely coupled regime, but the polydisperse is well within Bondi. This confirms that it is indeed the accretion of the smaller, Bondi-optimal pebbles that increases the accretion rate.
### Effect of distance
We explore now the parameter space of stellocentric distance; the results are shown in Fig. 5, showing the accretion rates at 10, 25, and 40 AU (notice also that we decreased \(a_{\rm max}\) to 1 cm). The left plots show the integrated mass accretion rates \(\dot{M}\), the middle plots the distribution \(\partial_{\ln a}\,\dot{M}\), and the right plots the distribution normalized by the accretion rate for \(a_{\rm max}\). The Hill accretion rate decreases only slightly with distance for this model, because the drop in \(\Omega\) and \(\Sigma_{d}\) with distance is equally compensated by the increase in the Hill radius.
As for the Bondi regime, we see that at the grain size where monodisperse would transition to loosely coupled, polydisperse is still about two orders of magnitude more efficient, over all distances considered. The seed mass for the onset of pebble accretion is also pushed down 1 order of magnitude, from \(\sim 5\times 10^{-5}\) to \(\sim 5\times 10^{-6}M_{\oplus}\) at 10 AU. This is about 100-200 km radius (for internal densities 3.5 and 0.5 g/cm\({}^{3}\), respectively), reaching the range where pebble accretion onto the direct products of streaming instability is possible. At 40 AU the onset of pebble accretion is pushed from \(\sim 10^{-3}M_{\oplus}\) in monodisperse to \(\sim 10^{-4}M_{\oplus}\) in polydisperse. A significant reduction, but still in the mass range of planetary embryos, so planetesimals formed at that distance should remain planetesimals. This is in accordance with the solar system constraint given by the existence of the cold classical Kuiper Belt objects at 40-50 AU, presumably undisturbed planetesimals.
As distance increases, both the accretion rate and the size of the best accreted pebble decreases. While at 10 AU the best accreted size for a \(10^{-5}M_{\oplus}\) seed (150-300 km radius) is 1 mm, at 40 AU it decreases to 10 \(\mu\)m. This has implications for the densities of formed objects if the smaller pebbles have different composition, e.g. the smaller ones being silicate in nature and the larger ones being icy. Then a planetesimal seed will preferentially accrete pebbles of rocky composition until it grows enough in mass to start accreting ices efficiently.
The left panel of Fig. 6 shows the integrated polydisperse pebble accretion rate as a function of distance, from 1 to 100 AU. The mass accretion rate of a \(10^{-4}M_{\oplus}\) seed at 20 AU is about \(10^{-10}M_{\oplus}\)yr\({}^{-1}\). The thick black dashed line shows the typical mass of objects formed by streaming instability (Liu et al., 2020; Lorek & Johansen, 2022). The thick grey dashed line shows 10 times that mass, proxy for the most massive objects formed directly by streaming instability.
In the right panel we show the accretion time
\[t_{\rm acc}\equiv\frac{M_{p}}{\dot{M}}, \tag{54}\]
along with the same curves for objects formed by streaming instability. The plot shows that a 0.1 Pluto mass (\(2\times 10^{-4}M_{\oplus}\)) seed has e-folding growth time of 1 Myr at 20 AU, and 10 Myr at 30 AU; that is, a Charon-mass planetary embryo can efficiently increase its mass by Bondi accretion during the lifetime of the disk. This implies that the formation of Pluto in the solar Nebula as far as 30 AU is possible by Bondi accretion of 10-100 \(\mu\)m grains onto a 0.1 Pluto mass seed.
The plot also shows that up to 20 AU, the objects typically formed by streaming instability (thick black dashed line) have growth times up to 3 Myr, within the lifetime of the nebula. Notice that, in the inner solar system, Bondi accretion on \(10^{-6}M_{\oplus}\) seeds (\(\approx 100\) km radius) at 3 Myr timescale is possible up to 3 AU. We conclude that Bondi accretion directly on planetesimals is possible in the inner solar system, dismissing the need for mutual planetesimal collisions as a major contribution to planetary growth.
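With the integration sketch above, Eq. (54) is a one-liner; for instance, for the 0.1 Pluto mass seed at 20 AU discussed here (the exact number depends on the assumed headwind and disk parameters):

```python
M_earth, year = 5.972e27, 3.156e7         # g; s
M_p = 2e-4 * M_earth                      # ~0.1 Pluto mass
t_acc = M_p / mdot_poly(M_p, r_AU=20.0)   # Eq. (54), in seconds
print(t_acc / year, "yr")                 # e-folding growth time
```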
#### 3.4.1 Effect of maximum grain size
In Fig. 7 we show the model for 3 different maximum grain sizes, from left to right: 3 mm, 1 mm, and 0.3 mm. The main feature is that, as the maximum grain size decreases, the mass accretion rate (accretion time) for given seed mass at a given distance decreases (increases).
The 3 Myr contour reaches \(10^{-6}M_{\oplus}\) at 3 AU for \(a_{\rm max}\) = 3 mm, \(10^{-6}M_{\oplus}\) at 2 AU for 1 mm, and \(10^{-4}M_{\oplus}\) at 10 AU for 0.3 mm. The conclusion is similar: Myr-timescale Bondi accretion on top of 100 km seeds (\(10^{-6}M_{\oplus}\)) is possible in the inner solar system. Except for the model with \(a_{\rm max}\) = 0.3 mm, the typical products of streaming instability can grow by pebble accretion on 3 Myr timescales.
We also calculate a 10\(\times\) more massive model. The higher dust mass also comes with a higher gas mass, and thus a reduction in Stokes number for the same pebble size. It is unclear a priori which effect dominates. In Fig. 8 we show the formation times for the model, using \(a_{\rm max}=1\,\)cm. The formation times are overall shorter compared to the right panel of Fig. 6, pushing the 3 Myr e-folding contour to double the distance vis-a-vis the lower mass model (7 AU for 100 km, 30 AU for \(10^{-2}\) Pluto mass, and 60 AU for \(10^{-1}\) Pluto mass). Even in this higher mass model, a 100 km seed has an e-folding growth time of over 100 Myr at 40 AU, and such seeds should remain planetesimals, as expected.
#### 3.4.2 Effect of sedimentation
In Fig. 9 we show the e-folding growth times for the planetary seeds formed by streaming instability (typical objects and most massive objects), as a function of the turbulent viscosity parameter \(\alpha\). Its only function in the model is to control the degree of sedimentation. The grey dotted line in the plot marks the threshold of 3 Myr. For moderately high turbulence (\(\alpha=10^{-3}\)), the typical seeds have growth times longer than 3 Myr already beyond 6 AU. For lower turbulence, \(\alpha=10^{-5}\), as most pebbles are sedimented, the distance within which growth occurs in 3 Myr increases to 40 AU. The most massive objects, well into the Bondi regime, all have fast growth times.
## 4 Analytical solutions
In this section we derive the analytical solutions in the relevant limits of 2D Hill accretion and 3D Bondi accretion. In a polydisperse distribution, the pebble scale height is a function of pebble radius, so the pebbles are not necessarily all in the 2D regime or all in the 3D regime. Also, because the transitions between loosely coupled and Bondi, and from Bondi to Hill are St-dependent, the pebbles are not all in the same regime of accretion either.
Yet, in practice these limits still yield reasonably accurate accretion rates. Because the distribution is top-heavy, the 2D Hill regime is applicable for large seed masses, which accrete in this regime the biggest pebbles, responsible for most of the mass accretion rate. The 3D Bondi
Figure 5: _Left:_ Same as Fig. 3, right plot, but for the density and temperature of Eqs. (23) and (24), Z=0.01, and at different distances. Hill accretion is not much affected by distance, but Bondi accretion becomes increasingly less efficient as distance increases. Yet, the general trend remains, of polydisperse pebble accretion being 1-2 orders of magnitude more efficient than monodisperse at maximum, and showing an earlier onset in mass also by 1-2 orders of magnitude. _Middle and Right:_ same as Fig. 4, at different distances. The pebble size that maximizes Bondi accretion decreases as distance increases. This has interesting implications, because in the outer disk, the seeds, presumably icy, should accrete small grains, presumably silicates. This implies the possibility of a two-mode formation of Kuiper belt objects: icy planetesimals produced by streaming instability of larger grains, followed by pebble accretion of smaller, silicate grains.
regime is applicable as long as \(R_{\rm acc}<2H_{d}\) (Eq. 28), which solving for mass yields
\[M_{p}<\frac{\Delta\nu\Omega H_{g}^{2}\alpha}{G\rm St(St+\alpha)}. \tag{55}\]
Normalizing by the transition mass \(M_{t}\), we find
\[\frac{M_{p}}{M_{t}}\lesssim\frac{\alpha}{h^{2}\rm St(St+\alpha)}, \tag{56}\]
where \(h\equiv H_{g}/r\) is the disk aspect ratio. For \(\alpha\sim 10^{-4}\) and \(h\sim 10^{-2}\), 3D Bondi accretion should apply close to the transition mass, except for big enough pebbles, as expected, because these are too sedimented. Yet, as we have established, these pebbles contribute poorly to the mass accretion rate. For particles of \(\tau_{f}=t_{B}\), and assuming \(\rm St\gg\alpha\), we find
\[\frac{M_{p}}{M_{t}}\lesssim\left(\frac{\alpha}{h^{2}}\right)^{1/3}, \tag{57}\]
i.e., within the expected ranges of \(\alpha\) and \(h\), the seed mass for which \(\tau_{f}=t_{B}\) is within a factor of order unity from the transition mass. We conclude that a 3D approximation for the Bondi regime should lead to acceptable results.
We now work out the analytical expressions in these limits.
### Analytical Polydisperse 2D Hill accretion
Figure 6: _Left_: Integrated polydisperse pebble mass accretion rate, as a function of distance. The model uses the density and temperature of Eqs. (23) and (24), Z=0.01, and \(\rho_{\bullet}\) constant. The thick black dashed line shows the characteristic size of the planetesimals formed by streaming instability (Liu et al., 2020; Lorek and Johansen, 2022); the grey line represents bodies of \(10\times\) the typical mass. _Right:_ Accretion times \(M/\dot{M}\) for the same model. The contour of 6.5 (3Myr) marks the boundary where accretion during the lifetime of the nebula is feasible by pebble accretion, without the need for planetesimal accretion. That contour corresponds to 3 AU, 10 AU, and 30 AU, for \(10^{-6}M_{\oplus}\), \(2\times 10^{-5}M_{\oplus}\), and \(2\times 10^{-4}M_{\oplus}\), respectively. These masses correspond to 100 km radius, \(10^{-2}\) and \(10^{-1}\) Pluto masses, respectively. The typical products of streaming instability have \(<\)3 Myr growth times up to 30 AU.
Figure 7: Same as Fig. 6, but exploring the parameter space of maximum grain radius \(a_{\rm max}\), from left to right: 3 mm, 1 mm, and 0.3 mm. Upper plots show the mass accretion rate, lower plots the accretion times. The trend seen is that Bondi accretion rates decrease with \(a_{\rm max}\) for the same seed mass and distance. The contour of 6.5 (3 Myr) marks the boundary where accretion during the lifetime of the nebula is feasible by pebble accretion, without the need for planetesimal accretion. This translates into \(\approx 3\) AU for 100 km seeds (\(10^{-6}M_{\oplus}\)), 10 AU for 0.01 Pluto mass (\(2\times 10^{-5}M_{\oplus}\)), and up to 30 AU for 0.1 Pluto mass (\(2\times 10^{-4}M_{\oplus}\)), for the first two models. The typical products of streaming instability grow on Myr timescales except for the last model, with maximum grain size 0.3 mm.
We can integrate the polydisperse Hill regime analytically in the 2D limit by generalizing Eq. (30) with \(\varSigma_{d}\) given by Eq. (26)
\[\dot{M}_{\rm 2D,Hill}=2\times 10^{2/3}\varOmega R_{H}^{2}\int_{0}^{a_{\rm max}} \mathrm{St}(a)^{2/3}\,m(a)\,W(a)\,da. \tag{58}\]
Given the scalings \(\mathrm{St}\propto a^{1-q}\), \(m\propto a^{3-q}\), and \(W\propto a^{-k}\), the dependency of the integrand of Eq. (58) on \(a\) is
\[\left.\frac{\partial\dot{M}(a)}{\partial a}\right|_{\rm 2D,Hill}\propto a^{(11-5q-3k)/3}. \tag{59}\]
Integrating it in \(a\), we find the exact solution
\[\dot{M}_{\rm 2D,Hill}=\frac{6(1-p)}{14-5q-3k}\left(\frac{\mathrm{St}_{\rm max}}{ 0.1}\right)^{2/3}\varOmega R_{H}^{2}\,Z\,\varSigma_{g}. \tag{60}\]
Eq. (60) differs from the monodisperse case (Eq. 30) by an efficiency factor
\[\left(\frac{\dot{M}_{\rm poly}}{\dot{M}_{\rm mono}}\right)_{\rm 2D,Hill}= \frac{3(1-p)}{14-5q-3k}\left(\frac{\mathrm{St}_{\rm max}}{\mathrm{St}}\right) ^{2/3}. \tag{61}\]
For MRN (\(k=3.5\)), \(q=0\), and \(\mathrm{St}=\mathrm{St}_{\rm max}\), this yields
\[\left(\frac{\dot{M}_{\rm poly}}{\dot{M}_{\rm mono}}\right)_{\rm 2D,Hill}^{k=3.5,q=0}=\frac{3}{7}, \tag{62}\]
that is, about \(43\%\) of the monodisperse value. Deviations from this number are due to not all pebbles being in the 2D Hill regime. For large enough seed mass, the deviations should be small, as is indeed seen in the plots of Figs. 3 and 5.
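The efficiency factor of Eq. (61) is simple to evaluate; the sketch below is illustrative only, and note that reproducing Eq. (62) requires the normalization factor \((1-p)\) to equal \(1/2\) for these parameter values:

```
# Evaluating the polydisperse-to-monodisperse efficiency factor of
# Eq. (61) for the 2D Hill regime; notation follows the text.
def hill_efficiency(one_minus_p, q, k, st_ratio=1.0):
    """Eq. (61), with st_ratio = St_max/St."""
    return 3.0 * one_minus_p / (14.0 - 5.0 * q - 3.0 * k) * st_ratio ** (2.0 / 3.0)

# MRN slope k = 3.5 with q = 0 and St = St_max; Eq. (62) implies
# (1 - p) = 1/2 for these values.
print(hill_efficiency(one_minus_p=0.5, q=0.0, k=3.5))  # -> 3/7 ~ 0.4286
```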
### Analytical Polydisperse 3D Bondi accretion
The Bondi accretion rate in the 3D limit is found by generalizing Eq. (29) with \(\rho_{d0}\) given by Eq. (39):
\[\dot{M}_{\rm 3D,Bondi} = \frac{4\pi R_{B}\Delta v^{2}}{\varOmega}\int_{0}^{a_{\rm max}}\mathrm{St}\,e^{-2\psi}\,m(a)\,f(a)\left[1+2\left(\mathrm{St}\frac{\varOmega R_{B}}{\Delta v}\right)^{1/2}e^{-\psi}\right]da, \tag{63}\]
where we use the shorthand notation
\[\psi\equiv\chi[\mathrm{St}/(\varOmega t_{p})]^{\gamma}. \tag{64}\]
We will split Eq. (63) into two integrals
\[\dot{M}_{\rm 3D,Bondi} = \frac{4\pi R_{B}\Delta v^{2}}{\varOmega}\left[\int_{0}^{a_{\rm max}}e^{-2\psi}\,\mathrm{St}\,m(a)\,f(a)\,da\right. \tag{65}\] \[\left.+\,2\left(\frac{\varOmega R_{B}}{\Delta v}\right)^{1/2}\int_{0}^{a_{\rm max}}e^{-3\psi}\,\mathrm{St}^{3/2}\,m(a)\,f(a)\,da\right].\]
The function \(f(a)\) has a dependency on \(\sqrt{1+\mathrm{St}/\alpha}\), which makes these integrals non-integrable except in specific cases. We
Figure 8: Same as the right panel of Fig. 6, but for 10 times the disk mass. Although the Stokes number decreases for the same particle radius, the increase in dust mass is the dominant effect, and accretion times decrease for the same seed mass and distance. Compared to the lower-mass model, the line of 3 Myr e-folding growth time is pushed to about twice the distance, allowing for pebble accretion on top of 100 km seeds (\(10^{-6}M_{\oplus}\)) up to 7 AU. 200 km objects (\(10^{-5}M_{\oplus}\)) can accrete pebbles efficiently up to 30 AU. At 40 AU accretion on 100 km seeds takes over 100 Myr and they should remain planetesimals, consistent with evidence from the Solar System.
Figure 9: Polydisperse pebble accretion timescales for different \(\alpha\) values for the typical masses produced by streaming instability (solid lines), and ten times this mass (dashed lines), taken as proxy for the end of the streaming instability mass function. The grey dotted line marks 3 Myr. For \(\alpha=10^{-3}\), the typical seeds only grow within the lifetime of the nebula in the inner solar system, up to \(\approx\)5-10 AU. For lower turbulence, \(\alpha=10^{-5}\), as most pebbles are sedimented, the distance increases to 40 AU.
will thus use the following approximation, valid at \(x\to 0\) and \(x\to\infty\)
\[\sqrt{1+x}\approx 1+\sqrt{x}. \tag{66}\]
While the error incurred with this approximation at \(x\approx 1\) can be large, we are interested in the definite integral from \(0\) to \(x_{\rm max}\). In this case, the error decreases if the range of integration is large enough, tending to zero for \(x_{\rm max}\to\infty\), as shown in Fig. 10.
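As a numerical check of this claim — a minimal sketch using numpy and scipy — we can compare the two definite integrals directly:

```
# Error of replacing sqrt(1+x) by 1 + sqrt(x) inside a definite integral
# from 0 to x_max (cf. Eq. 66 and Fig. 10). The pointwise error near
# x ~ 1 is large, but the integrated error decays as x_max grows.
import numpy as np
from scipy.integrate import quad

for x_max in (1.0, 10.0, 100.0, 1000.0):
    exact, _ = quad(lambda x: np.sqrt(1.0 + x), 0.0, x_max)
    approx, _ = quad(lambda x: 1.0 + np.sqrt(x), 0.0, x_max)
    print(f"x_max = {x_max:7.1f}  relative error = "
          f"{abs(approx - exact) / exact:.3e}")
```

Confident in the accuracy of Eq. (66), we write the approximate solution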
\[\dot{M}_{\rm 3D,Bondi} \approx \frac{3(1-p)Z\varSigma_{g}R_{B}\Delta v^{2}}{\sqrt{2\pi}H_{g}\varOmega\rho_{\bullet}^{(0)}a_{\rm max}^{4-k}}\times \tag{67}\] \[\left[\int_{0}^{a_{\rm max}}e^{-2\psi}\,{\rm St}\,m(a)\,a^{-k}\,da\right.\] \[+\,\alpha^{-1/2}\int_{0}^{a_{\rm max}}e^{-2\psi}\,{\rm St}^{3/2}\,m(a)\,a^{-k}\,da\] \[+\,2\left(\frac{\varOmega R_{B}}{\Delta v}\right)^{1/2}\int_{0}^{a_{\rm max}}e^{-3\psi}\,{\rm St}^{3/2}\,m(a)\,a^{-k}\,da\] \[\left.+\,2\left(\frac{\varOmega R_{B}}{\alpha\Delta v}\right)^{1/2}\int_{0}^{a_{\rm max}}e^{-3\psi}\,{\rm St}^{2}\,m(a)\,a^{-k}\,da\right].\]
The four integrals are of the form below, for which there is an analytical solution in terms of lower incomplete gamma functions
\[\int_{0}^{a_{\rm max}}e^{-ja^{s}}a^{b}\,da=\frac{\gamma_{l}\left(\frac{b+1}{s},\,j\,a_{\rm max}^{s}\right)}{s\,j^{(b+1)/s}}. \tag{68}\]
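This identity is straightforward to verify numerically; in the sketch below, note that scipy's `gammainc` is the regularized lower incomplete gamma function, so it is multiplied by \(\Gamma((b+1)/s)\) to recover \(\gamma_{l}\):

```
# Numerical verification of Eq. (68) against direct quadrature.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, gammainc

def lhs(a_max, j, s, b):
    val, _ = quad(lambda a: np.exp(-j * a**s) * a**b, 0.0, a_max)
    return val

def rhs(a_max, j, s, b):
    nu = (b + 1.0) / s
    # gammainc is regularized: gamma_l(nu, x) = gammainc(nu, x) * Gamma(nu)
    return gammainc(nu, j * a_max**s) * gamma(nu) / (s * j**nu)

a_max, j, s, b = 0.3, 2.5, 0.7, 0.5   # arbitrary positive test values
print(lhs(a_max, j, s, b), rhs(a_max, j, s, b))  # the two should agree
```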
We thus write the solution of Eq. (67)
\[\dot{M}_{\rm 3D,Bondi} \approx C_{1}\frac{\gamma_{l}\left(\frac{b_{1}+1}{s},j_{1}a_{\rm max}^{s}\right)}{sj_{1}^{(b_{1}+1)/s}}+C_{2}\frac{\gamma_{l}\left(\frac{b_{2}+1}{s},j_{2}a_{\rm max}^{s}\right)}{sj_{2}^{(b_{2}+1)/s}}+ \tag{69}\] \[C_{3}\frac{\gamma_{l}\left(\frac{b_{3}+1}{s},j_{3}a_{\rm max}^{s}\right)}{sj_{3}^{(b_{3}+1)/s}}+C_{4}\frac{\gamma_{l}\left(\frac{b_{4}+1}{s},j_{4}a_{\rm max}^{s}\right)}{sj_{4}^{(b_{4}+1)/s}},\]
where the coefficients are
\[s = \gamma(1-q) \tag{70}\] \[b_{1} = 4-2q-k \tag{71}\] \[b_{2} = b_{3} = (9-5q-2k)/2 \tag{72}\] \[b_{4} = 5-3q-k \tag{73}\] \[{\rm St}^{\prime} = \frac{\pi}{2\varSigma_{g}}\rho_{\bullet}^{(0)}a_{\rm max}^{q} \tag{74}\] \[j^{\prime} = \chi\left(\frac{{\rm St}^{\prime}}{\varOmega t_{p}}\right)^{\gamma} \tag{75}\] \[j_{1} = j_{2} = 2j^{\prime} \tag{76}\] \[j_{3} = j_{4} = 3j^{\prime} \tag{77}\] \[m^{\prime} = \frac{4\pi}{3}\rho_{\bullet}^{(0)}a_{\rm max}^{q} \tag{78}\] \[K = \frac{3(1-p)Z\varSigma_{g}R_{B}\Delta v^{2}}{\sqrt{2\pi}H_{g}\varOmega\rho_{\bullet}^{(0)}a_{\rm max}^{4-k}} \tag{79}\] \[C_{1} = K{\rm St}^{\prime}m^{\prime} \tag{80}\] \[C_{2} = K{\rm St}^{\prime 3/2}m^{\prime}\alpha^{-1/2} \tag{81}\] \[C_{3} = 2K{\rm St}^{\prime 3/2}m^{\prime}\left(\frac{\varOmega R_{B}}{\Delta v}\right)^{1/2} \tag{82}\] \[C_{4} = 2K{\rm St}^{\prime 2}m^{\prime}\left(\frac{\varOmega R_{B}}{\alpha\Delta v}\right)^{1/2} \tag{83}\]
Fig. 11 shows that the agreement between the numerical integration of Eq. (37) and the analytical solutions (Eq. (60) and Eq. (69)) is excellent in their range of validity. Having Eq. (60) and Eq. (69) as analytical expressions is of great interest for future studies that include pebble accretion analytically, instead of having to integrate the mass accretion rates numerically over the particle size distributions.
Figure 10: Approximating \(\sqrt{1+x}\) by \(1+\sqrt{x}\) (the asymptotic expansion for \(x\to 0\) and \(x\to\infty\)), to make the sedimented midplane distribution integrable analytically. As long as the function is integrated to a large value of \(x_{\rm max}\), the error incurred is small.
## 5 Effect of slope \(k\) of grain size distribution
So far we have considered only the MRN value for the index \(k\) of the grain size distribution (Mathis et al., 1977), but this index should depend on the collisional evolution, velocities, and material strength of the pebbles (Kobayashi & Tanaka, 2010; Kobayashi et al., 2016). Having found the analytical solution for the accretion rates, we can more easily determine the impact of varying this parameter, which we show in Fig. 12. As the slope steepens, the mass accretion rate decreases in the Hill regime. Compared to the monodisperse case, the effect is small, but it accelerates as \(k\) approaches 4. This insensitivity is expected, as the Hill regime is dominated by the larger grains. The effect on the mass accretion rate is more pronounced for the Bondi regime, as expected, since the amount of mass in different grain sizes affects this accretion regime more strongly. As the slope steepens and more mass becomes available in small grain sizes, the accretion rates onto smaller planetesimal seeds increase, although the effect is nonlinear.
## 6 Limitations
We are limited in this work by the vast expanse of the parameter space and by the circular restricted 3-body problem solution that forms the underlying assumption for the gas and pebble flow. While exploring the former exhaustively would be a valiant endeavour, it is not the scope of this work to derive results applicable to all possible situations, but to derive the model in the first place. As such, we kept our equations general in metallicity, internal density, and grain size distribution, but apply them mostly for \(Z=0.01\), \(\rho_{\bullet}^{(0)}=3.5\,\mathrm{g\,cm^{-3}}\), and \(q=0\). These parameters will vary with dust drift (which lowers the metallicity), composition (which changes the internal density between ices and silicates), and porosity.
As for going beyond the circular restricted 3-body problem, the impact of the gravity of the planetary seed on the accretion flow has recently been calculated from hydrodynamical simulations (Okamura & Kobayashi, 2021), both for the Hill regime and for the Bondi regime (Kuwahara & Kurokawa, 2020). In the Bondi regime, the trajectories are modified for \(\mathrm{St}\lesssim 10^{-3}\), with the gas flow reducing the accretion rate. Thus, Eq. (45) is overestimated for small \(\mathrm{St}\). We find that the best accreted pebbles, which give the bulk of the boost in Bondi accretion, are slightly above the \(\mathrm{St}\sim 10^{-3}\) transition found by Kuwahara & Kurokawa (2020); as such, this aspect of our results is not severely affected by the planet-induced flow.
## 7 Conclusion
In this paper, we worked out the theory of polydisperse pebble accretion, finding analytical solutions when possible. Our main findings are as follows:
* We find that polydisperse Bondi accretion is 1-2 orders of magnitude more efficient than in the monodisperse case. This is because the best-accreted pebbles in the Bondi regime are those of friction time similar to the Bondi time, not the largest pebbles present. The large pebbles, although dominating the mass budget, are weakly coupled across the Bondi radius and thus accrete poorly. The pebbles that are optimal for Bondi accretion may contribute less to the mass budget, but their enhanced accretion significantly impacts the mass accretion rate.
* The onset of polydisperse pebble accretion extends 1-2 orders of magnitude lower in seed mass compared to the monodisperse case, for the same reason. The onset of pebble accretion with Myr timescales reaches 100-350 km
Figure 11: Agreement between the numerically calculated polydisperse pebble accretion rate and the analytical solutions for 2D Hill accretion Eq. (60) and 3D Bondi accretion Eq. (69). While the Hill solution is exact, the Bondi solution is approximate. Yet, the agreement seen is excellent, because the best accreted pebbles in this regime are in the 3D range.
Figure 12: Effect of varying the slope \(k\) of the grain size distribution. The Hill regime is relatively insensitive to \(k\), as this regime is dominated by the largest grains. For Bondi accretion, the mass accretion rates increase significantly as the slope steepens.
sized objects depending on stellocentric distances and disk model. For the model considered, Bondi accretion on Myr timescales, within the lifetime of the disk, is possible on top of \(10^{-6}M_{\oplus}\) (100 km) seeds up to 4 AU, on top of \(10^{-5}M_{\oplus}\) (200 km) seeds up to 10 AU, and on \(10^{-4}M_{\oplus}\) (350 km) seeds up to 30 AU. A model 10 times more massive doubles these distances.
* In all models considered, a 100 km seed at 40 AU has a growth time over 100 Myr, and such seeds should thus remain planetesimals, in accordance with the existence of the cold classical Kuiper Belt population, presumably undisturbed planetesimals.
* We find the analytical solution of the stratification integral, and thus the exact solution for the 3D-2D transition (Eq. 35).
* We find analytical solutions for the polydisperse 2D Hill (Eq. 60) and 3D Bondi regimes (Eq. 69). For the MRN distribution, Hill accretion is a factor of 3/7 (about 43%) as efficient in the polydisperse case as in the monodisperse case.
The fact that Myr growth timescales, within the lifetime of the disk, are possible for polydisperse pebble accretion onto 100-350 km seeds over a significant range of the parameter space has significant implications. This mass range overlaps with the high-mass end of the planetesimal initial mass function (Johansen et al., 2015; Schafer et al., 2017; Li et al., 2019), and thus pebble accretion is possible directly following formation by streaming instability, removing the need for planetesimal accretion. This conclusion is supported by the lack of craters generated by 1-2 km impactors on Pluto (Singer et al., 2019), and by recent findings by Lorek & Johansen (2022) that planetesimal accretion is not able to sustain accretion rates beyond 5 AU.
While we do most of our numerical solutions with constant \(\rho_{\bullet}\), we keep the analytical solutions general for varying this parameter, expecting that smaller pebbles should be of lower density, and the bigger pebbles of higher density, reflecting different compositions (Morales et al., 2016). We notice that as the distance increases, the pebble size that maximizes pebble accretion becomes increasingly small. This implies the possibility of a two-mode formation of Kuiper belt objects: streaming instability of the largest pebbles forming icy objects of the order of \(\gtrsim 100\) km in diameter, followed by pebble accretion leading to objects of the order of 1000 km, where silicates are incorporated mostly at the pebble accretion stage, due to their low Stokes number. This scenario would lead to a different composition for the smaller objects, mostly formed by streaming instability of ices, and the larger objects, grown by ice and silicate pebble accretion on top of the icy planetesimal seeds. A continuum of rock-to-ice fractions should be produced. Indeed, a trend is clear in the Kuiper belt, of constant density around 0.5 g cm\({}^{-3}\) for the smaller objects (diameter less than 500 km), and increasing density for larger objects (Brown, 2012; Grundy et al., 2015; McKinnon et al., 2017). We will explore how our findings in this paper can reproduce this result in a future work.
WL acknowledges support from the NASA Theoretical and Computational Astrophysical Networks (TCAN) via grant 80NSSC21K0497, from the NASA Emerging Worlds program via grant 22-EW22-0005, and by NSF via grant AST-2007422. AJ is supported by the Swedish Research Council (Project Grant 2018-04867), the Danish National Research Foundation (DNRF Chair grant DNRF159), and the Knut and Alice Wallenberg Foundation (Wallenberg Academy Fellow Grant 2017.0287). A.J. further thanks the European Research Council (ERC Consolidator Grant 724 687-PLANETESYS), the Goran Gustafsson Foundation for Research in Natural Sciences and Medicine, and the Wallenberg Foundation (Wallenberg Scholar KAW 2019.0442) for research support. MHC is supported by grant 22-EW22-0005 from the NASA Emerging Worlds program. We acknowledge conversations with Andrew Youdin, Jake Simon, Orkan Umurhan, Debanjan Sengupta, and Daniel Carrera.
|
2306.00353 | Constructing Semantics-Aware Adversarial Examples with Probabilistic
Perspective | We propose a probabilistic perspective on adversarial examples. This
perspective allows us to view geometric restrictions on adversarial examples as
distributions, enabling a seamless shift towards data-driven, semantic
constraints. Building on this foundation, we present a method for creating
semantics-aware adversarial examples in a principle way. Leveraging the
advanced generalization capabilities of contemporary probabilistic generative
models, our method produces adversarial perturbations that maintain the
original image's semantics. Moreover, it offers users the flexibility to inject
their own understanding of semantics into the adversarial examples. Our
empirical findings indicate that the proposed methods achieve enhanced
transferability and higher success rates in circumventing adversarial defense
mechanisms, while maintaining a low detection rate by human observers. | Andi Zhang, Mingtian Zhang, Damon Wischik | 2023-06-01T05:16:44Z | http://arxiv.org/abs/2306.00353v2 | # Constructing Semantics-Aware Adversarial Examples with Probabilistic Perspective
###### Abstract
In this study, we introduce a novel, probabilistic viewpoint on adversarial examples, achieved through box-constrained Langevin Monte Carlo (LMC). Proceeding from this perspective, we develop an innovative approach for generating semantics-aware adversarial examples in a principled manner. This methodology transcends the restriction imposed by geometric distance, instead opting for semantic constraints. Our approach empowers individuals to incorporate their personal comprehension of semantics into the model. Through human evaluation, we validate that our semantics-aware adversarial examples maintain their inherent meaning. Experimental findings on the MNIST and SVHN datasets demonstrate that our semantics-aware adversarial examples can effectively circumvent robust adversarial training methods tailored for traditional adversarial attacks.
## 1 Introduction
The purpose of generating adversarial examples is to deceive a classifier while making minimal changes to the original data's meaning. In image classification, most existing adversarial techniques ensure the preservation of adversarial example semantics by limiting their geometric distance from the original image [18; 6; 2; 12]. These methods are able to deceive classifiers with very small geometry-based perturbations. However, when targeting robust classifiers trained using adversarial methods, an attack involving a relatively large geometric distance may be necessary. Unfortunately, these considerable distances can be so vast that they ultimately undermine the original image's semantics, going against the core objective of creating adversarial examples. As illustrated in the left portion of Figure 1, when applying the PGD attack [12] constrained by the \(L_{2}\) norm on a robust classifier, the attacked images that successfully deceive the classifier consistently lose their original meaning, which is undesirable.
To counter this problem, we propose an innovative approach for generating semantics-aware adversarial examples. Instead of being limited by geometric distance, our approach hinges on a proposed semantic divergence. Specifically, we treat generating adversarial examples as a box-constrained non-convex optimization problem. We employ box-constrained Langevin Monte Carlo (LMC) to find near-optimal solutions for this complex problem. As LMC samples converge to a stationary distribution, we gain a probabilistic understanding of the adversarial attack. Within this probabilistic perspective, the geometric constraint of the adversarial attack can be viewed as a distribution. By replacing this geometric-based distribution with a semantic-based distribution, we can define a semantics-aware adversarial attack in a principled manner. The corresponding divergence induced by the semantic-based distribution is called semantic divergence. Our semantics-aware adversarial attack is capable of deceiving robust classifiers while preserving most of the original image's semantics, as demonstrated in the right section of Figure 1.
## 2 Preliminaries
### Adversarial examples
The notion of adversarial examples was first introduced by Szegedy et al. [18]. Let's assume we have a classifier \(C:[0,1]^{n}\rightarrow\mathcal{Y}\), where \(n\) represents the dimension of the input space and \(\mathcal{Y}\) denotes the label space. Given an image \(\mathbf{x}_{\text{ori}}\in[0,1]^{n}\) and a target label \(y_{\text{tar}}\in\mathcal{Y}\), the optimization problem for finding an adversarial instance for \(\mathbf{x}_{\text{ori}}\) can be formulated as follows:
\[\text{minimize }\mathcal{D}(\mathbf{x}_{\text{ori}},\mathbf{x}_{\text{adv}}) \quad\text{ such that }C(\mathbf{x}_{\text{adv}})=y_{\text{tar}}\text{ and }\mathbf{x}_{\text{adv}}\in[0,1]^{n}\]
Here, \(\mathcal{D}\) is a distance metric employed to assess the difference between the original and perturbed images. This distance metric typically relies on geometric distance, which can be represented by \(L_{0}\), \(L_{2}\), or \(L_{\infty}\) norms.
However, solving this problem is challenging. As a result, Szegedy et al. [18] propose a relaxation of the problem:
\[\text{minimize }\mathcal{L}(\mathbf{x}_{\text{adv}},y_{\text{tar}}):=c_{1} \cdot\mathcal{D}(\mathbf{x}_{\text{ori}},\mathbf{x}_{\text{adv}})+c_{2}\cdot f (\mathbf{x}_{\text{adv}},y_{\text{tar}})\quad\text{ such that }\mathbf{x}_{\text{adv}}\in[0,1]^{n} \tag{1}\]
Figure 1: **Top left**: Targeted attack on an adversarially trained MadryNet [12] for MNIST using Projected Gradient Descent (PGD) with \(L_{2}\) norm. To ensure successful targeted attacks in most cases, we increased the \(\epsilon\) to \(5\). **Bottom left**: Targeted attack on an adversarially trained ResNet18 [8] for SVHN using PGD with \(L_{2}\) norm and \(\epsilon=5\). **Top right & Bottom right**: Our proposed method applied to targeted attacks on the same MadryNet and ResNet18 for MNIST and SVHN, respectively. A green border signifies a successful deception of the victim classifier, while a red border indicates failure. **Notably, with PGD, a successful attack often results in the alteration of the source image’s semantics, which is undesirable.** Additional PGD attack examples are provided in Appendix E.
where \(c_{1}\), \(c_{2}\) are constants, and \(f\) is an objective function closely tied to the classifier's prediction. For example, in [18], \(f\) is the cross-entropy loss function, while Carlini and Wagner [2] suggest several different choices for \(f\). Szegedy et al. [18] recommend solving (1) using box-constrained L-BFGS.
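For concreteness, a toy sketch of this attack is given below, using scipy's L-BFGS-B on the relaxed objective of (1); the linear "classifier" (`W`, `b`), the 8-dimensional "image", and the constants are illustrative placeholders, not the models considered in this paper:

```
# A toy sketch of the box-constrained L-BFGS attack on Eq. (1),
# with D = squared L2 distance and f = cross-entropy of the target.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, n_classes = 8, 3
W, b = rng.normal(size=(n_classes, n)), rng.normal(size=n_classes)
x_ori, y_tar = rng.uniform(size=n), 2
c1, c2 = 1.0, 10.0

def f_ce(x, y):
    # cross-entropy of the target class: -log softmax(g(x))[y]
    logits = W @ x + b
    return -(logits[y] - np.log(np.exp(logits).sum()))

def loss(x):
    # Eq. (1): c1 * D(x_ori, x) + c2 * f(x, y_tar)
    return c1 * np.sum((x - x_ori) ** 2) + c2 * f_ce(x, y_tar)

res = minimize(loss, x_ori.copy(), method="L-BFGS-B",
               bounds=[(0.0, 1.0)] * n)   # the box constraint [0,1]^n
x_adv = res.x
print("predicted class:", int(np.argmax(W @ x_adv + b)))  # ideally y_tar
```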
### Adversarial training
Adversarial training, a widely acknowledged method for boosting adversarial robustness in deep learning models, has been extensively studied [18; 6; 10; 12]. This technique uses adversarial samples as (part of) the training data, originating from Szegedy et al. [18], and has evolved into numerous variations. In this paper, we apply the min-max problem formulation by Madry et al. [12] to determine neural network weights, denoted as \(\theta\). They propose choosing \(\theta\) to solve:
\[\min_{\theta}\mathbb{E}_{(\mathbf{x},y)\sim p_{\text{data}}}\left[\max_{ \left\lVert\delta\right\rVert_{p}\leq\epsilon}\mathcal{L}_{\text{CE}}(\theta, \mathbf{x}+\delta,y)\right] \tag{2}\]
where \(p_{\text{data}}\) represents the data distribution, \(\mathcal{L}_{\text{CE}}\) is the cross-entropy loss, \(\left\lVert\cdot\right\rVert_{p}\) denotes the \(L_{p}\) norm, and \(\epsilon\) specifies the radius of the corresponding \(L_{p}\) ball. In what follows, we will use the term "robust classifier" to refer to classifiers that have undergone adversarial training.
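For reference, one training step of the objective (2) can be sketched in PyTorch as follows, with the inner maximization approximated by PGD on an \(L_{\infty}\) ball; the step size, radius, and iteration count are illustrative assumptions rather than the exact settings of [12]:

```
# A minimal PyTorch sketch of Eq. (2): PGD approximates the inner max,
# followed by an ordinary gradient step on the adversarial batch.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.3, alpha=0.01, steps=40):
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()           # ascent step
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # L_inf ball
        x_adv = x_adv.clamp(0.0, 1.0)                          # stay in [0,1]^n
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)   # outer min over theta
    loss.backward()
    optimizer.step()
    return loss.item()
```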
### Energy-based models (EBMs)
An Energy-based Model (EBM) [9; 4] involves a non-linear regression function, represented by \(E_{\theta}\), with a parameter \(\theta\). This function is known as the energy function. Given a data point, \(\mathbf{x}\), the probability density function (PDF) is given by:
\[p_{\theta}(\mathbf{x})=\frac{\exp(-E_{\theta}(\mathbf{x}))}{Z_{\theta}} \tag{3}\]
where \(Z_{\theta}=\int\exp(-E_{\theta}(\mathbf{x}))\mathrm{d}\mathbf{x}\) is the normalizing constant that ensures the PDF integrates to \(1\).
### Langevin Monte Carlo (LMC)
Langevin Monte Carlo (also known as Langevin dynamics) is an iterative method that could be used to find near-minimal points of a non-convex function \(g\)[13; 25; 20; 14]. It involves updating the function as follows:
\[\mathbf{x}_{0}\sim p_{0},\quad\mathbf{x}_{t+1}=\mathbf{x}_{t}-\frac{\epsilon^ {2}}{2}\nabla_{x}g(\mathbf{x}_{t})+\epsilon\mathbf{z}_{t},\quad\mathbf{z}_{t }\sim\mathcal{N}(0,I) \tag{4}\]
where \(p_{0}\) could be a uniform distribution. Under certain conditions on the drift coefficient \(\nabla_{x}g\), it has been demonstrated that the distribution of \(\mathbf{x}_{t}\) in (4) converges to its stationary distribution [3; 14], also referred to as the Gibbs distribution \(p(\mathbf{x})\propto\exp(-g(\mathbf{x}))\). This distribution concentrates around the global minimum of \(g\)[5; 24; 14]. If we choose \(g\) to be \(E_{\theta}\), then the stationary distribution corresponds exactly to the EBM's distribution defined in (3). As a result, we can draw samples from the EBM using LMC. By replacing the exact gradient with a stochastic gradient, we obtain Stochastic Gradient Langevin Dynamics (SGLD) [23; 19].
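A minimal sketch of the update (4) on a one-dimensional double-well \(g(x)=(x^{2}-1)^{2}\) (a toy function chosen purely for illustration) shows the iterates settling around the minima of \(g\):

```
# Langevin Monte Carlo (Eq. 4) on g(x) = (x^2 - 1)^2; the chains
# approach the Gibbs distribution ~ exp(-g), peaked at x = +/-1.
import numpy as np

rng = np.random.default_rng(0)
eps, steps, n_chains = 0.1, 5000, 1000

grad_g = lambda x: 4.0 * x * (x**2 - 1.0)

x = rng.uniform(-2.0, 2.0, size=n_chains)   # x_0 ~ p_0 (uniform)
for _ in range(steps):
    x = x - 0.5 * eps**2 * grad_g(x) + eps * rng.normal(size=n_chains)

print("fraction of chains near a minimum:",
      np.mean(np.abs(np.abs(x) - 1.0) < 0.5))
```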
### Training EBM
To train an EBM, we aim to minimize the minus expected log-likelihood of the data, represented by
\[\mathcal{L}_{\text{EBM}}=\mathbb{E}_{X\sim p_{d}}[-\log p_{\theta}(X)]=\mathbb{E}_{X\sim p_{d}}[E_{\theta}(X)]+\log Z_{\theta}\]
where \(p_{d}\) is the data distribution. The gradient is
\[\nabla_{\theta}\mathcal{L}_{\text{EBM}}=\mathbb{E}_{X\sim p_{d}}[\nabla_{\theta}E_{\theta}(X)]+\nabla_{\theta}\log Z_{\theta}=\mathbb{E}_{X\sim p_{d}}[\nabla_{\theta}E_{\theta}(X)]-\mathbb{E}_{X\sim p_{\theta}}[\nabla_{\theta}E_{\theta}(X)] \tag{5}\]
(see [16] for derivation). The first term of \(\nabla_{\theta}\mathcal{L}_{\text{EBM}}\) can be easily calculated as \(p_{d}\) is the distribution of the training set. For the second term, we can use LMC to sample from \(p_{\theta}\)[9].
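Schematically, one training step can be sketched in PyTorch as below, where the model samples are drawn with a short Langevin chain and the surrogate loss is constructed so that its gradient reproduces Eq. (5); `energy_net` is any network mapping inputs to scalar energies, and the chain length and step size are illustrative:

```
# A sketch of one EBM training step using Eq. (5): the gradient is the
# energy difference between data samples and (detached) model samples.
import torch

def langevin_samples(energy_net, shape, steps=60, eps=0.01):
    x = torch.rand(shape)                      # x_0 ~ uniform in [0,1]^n
    for _ in range(steps):
        x.requires_grad_(True)
        grad = torch.autograd.grad(energy_net(x).sum(), x)[0]
        x = x.detach() - 0.5 * eps**2 * grad + eps * torch.randn_like(x)
        x = x.clamp(0.0, 1.0)
    return x.detach()

def ebm_training_step(energy_net, optimizer, x_data):
    x_model = langevin_samples(energy_net, x_data.shape)
    optimizer.zero_grad()
    # E_{p_d}[E_theta] - E_{p_theta}[E_theta]: its gradient is Eq. (5)
    loss = energy_net(x_data).mean() - energy_net(x_model).mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```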
Effective training of an energy-based model (EBM) typically requires the use of techniques such as sample buffering and regularization. For more information, refer to the work of Du and Mordatch [4].
## 3 Generating semantics-aware adversarial examples
In this section, we introduce a probabilistic approach to understanding adversarial examples. Through this lens, we establish the concept of semantic divergence, offering an alternative to conventional geometric distance. This concept of semantic divergence enables individuals to integrate their unique understanding of semantics into the model, thereby facilitating the creation of semantics-aware adversarial examples.
### A probabilistic perspective on adversarial examples
LMC and SGLD are not directly applicable to the optimization problem presented in (1) due to their incompatibility with box-constrained optimization problems. To overcome this limitation, Lamperski [11] proposed Projected Stochastic Gradient Langevin Algorithms (PSGLA). By employing PSGLA to generate samples near the solution of the optimization problem specified in (1), we obtain the subsequent update rule:
\[\mathbf{x}_{0}\sim p_{0},\quad\mathbf{x}_{t+1}=\Pi_{[0,1]^{n}}\left(\mathbf{x }_{t}-\frac{\epsilon^{2}}{2}\nabla_{x}\mathcal{L}(\mathbf{x}_{t},y_{\text{tar}} )+\epsilon\mathbf{z}_{t}\right),\quad\mathbf{z}_{t}\sim\mathcal{N}(0,I) \tag{6}\]
where \(\Pi_{[0,1]^{n}}\) is a clamp projection that enforces the constraints within the \([0,1]^{n}\) interval. We refer to the stationary distribution of PSGLA as the adversarial distribution \(p_{\text{adv}}(\mathbf{x};y_{\text{tar}})\propto\exp(-\mathcal{L}(\mathbf{x},y_{\text{tar}}))\), since samples drawn from this distribution are in close proximity to the optimal value of the optimization problem presented in (1).
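Concretely, the update (6) can be sketched in PyTorch as follows, with `loss_fn` standing for \(\mathcal{L}\) in (1) and the clamp implementing \(\Pi_{[0,1]^{n}}\); the step size and chain length are illustrative:

```
# A sketch of the projected Langevin update of Eq. (6): a Langevin step
# on L(x, y_tar), followed by the clamp projection onto [0,1]^n.
import torch

def psgla_sample(loss_fn, y_tar, shape, steps=500, eps=0.01):
    x = torch.rand(shape)                      # x_0 ~ p_0
    for _ in range(steps):
        x.requires_grad_(True)
        # loss_fn is assumed to return per-sample losses
        grad = torch.autograd.grad(loss_fn(x, y_tar).sum(), x)[0]
        x = x.detach() - 0.5 * eps**2 * grad + eps * torch.randn_like(x)
        x = x.clamp(0.0, 1.0)                  # projection Pi_{[0,1]^n}
    return x.detach()
```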
Then by definition of \(\mathcal{L}\), the adversarial distribution can be represented as a product of expert distributions [9]:
\[p_{\text{adv}}(\mathbf{x}_{\text{adv}};\mathbf{x}_{\text{ori}},y_{\text{tar}} )\propto p_{\text{vic}}(\mathbf{x}_{\text{adv}};y_{\text{tar}})p_{\text{dis}}( \mathbf{x}_{\text{adv}};\mathbf{x}_{\text{ori}}) \tag{7}\]
where \(p_{\text{vic}}(\mathbf{x}_{\text{adv}};y_{\text{tar}})\propto\exp(-c_{2}\cdot f (\mathbf{x}_{\text{adv}},y_{\text{tar}}))\) denote the victim distribution and \(p_{\text{dis}}(\mathbf{x}_{\text{adv}};\mathbf{x}_{\text{ori}})\propto\exp(-c_ {1}\cdot\mathcal{D}(\mathbf{x}_{\text{ori}},\mathbf{x}_{\text{adv}}))\) represent the distance distribution.
The victim distribution \(p_{\text{vic}}\) is dependent on the victim classifier. As suggested by Szegedy et al. [18], \(f\) could be the cross-entropy loss of the classifier. We can sample from this distribution using Langevin dynamics. Figure 2(a) presents samples drawn from \(p_{\text{vic}}\) when the victim classifier is subjected to standard training, exhibiting somewhat indistinct shapes of the digits. This implies that the classifier has learned the semantics of the digits to a certain degree, but not thoroughly. In contrast, Figure 2(b) displays samples drawn from \(p_{\text{vic}}\) when the victim classifier undergoes adversarial training. In this scenario, the shapes of the digits are clearly discernible. This observation suggests that we can obtain meaningful samples from adversarially trained classifiers, indicating that such classifiers depend more on semantics, which corresponds to the fact that an adversarially trained classifier is more difficult to attack. A similar observation concerning the generation of images from an adversarially trained classifier has been reported by Santurkar et al. [15].
The distance distribution \(p_{\text{dis}}\) relies on \(\mathcal{D}(\mathbf{x}_{\text{ori}},\mathbf{x}_{\text{adv}})\), representing the distance between \(\mathbf{x}_{\text{adv}}\) and \(\mathbf{x}_{\text{ori}}\). By its nature, samples that are closer to \(\mathbf{x}_{\text{ori}}\) may yield a higher \(p_{\text{adv}}\), which is consistent with
Figure 2: **(a) and (b) display samples drawn from \(p_{\text{vic}}(\cdot;y_{\text{tar}})\) with the victim classifier being non-adversarially trained and adversarially trained, respectively. (c) showcases samples from \(p_{\text{dis}}(\cdot;\mathbf{x}_{\text{ori}})\) when \(\mathcal{D}\) is the square of \(L_{2}\) norm. (d) illustrates \(t(\mathbf{x}_{\text{ori}})\) for \(t\sim\mathcal{T}\), where \(\mathcal{T}\) represents a distribution of transformations, including TPS (see Section 4.2), scaling, rotation, and cropping. The \(\mathbf{x}_{\text{ori}}\)s in (c) and (d) consist of the first 36 images from the MNIST test set.**
the objective of generating adversarial samples. Moreover, if \(\mathcal{D}\) represents the square of the \(L_{2}\) norm, then \(p_{\text{dis}}\) becomes a Gaussian distribution with a mean of \(\mathbf{x}_{\text{ori}}\) and a variance determined by \(c_{1}\). Figure 2(c) portrays samples drawn from \(p_{\text{dis}}\) when \(\mathcal{D}\) is the square of the \(L_{2}\) distance. The samples closely resemble the original images, \(\mathbf{x}_{\text{ori}}\)s, from the MNIST testset, because each sample is positioned near an optimal point, and these optimal points are the original images, \(\mathbf{x}_{\text{ori}}\)s.
### From Geometric Distance to Semantic Divergence
Based on the probabilistic perspective, we propose a semantic divergence, denoted by a non-symmetric divergence \(\mathcal{D}_{\text{sem}}(\mathbf{x}_{\text{adv}},\mathbf{x}_{\text{ori}}):=E( \mathbf{x}_{\text{adv}};\mathbf{x}_{\text{ori}})\), where \(E(\cdot;\mathbf{x}_{\text{ori}})\) represents the energy of an energy-based model trained on a dataset consisting of \(\{t_{1}(\mathbf{x}_{\text{ori}}),t_{2}(\mathbf{x}_{\text{ori}}),\dots\}\). Here, \(t_{i}\sim\mathcal{T}\), and \(\mathcal{T}\) is a distribution of transformations that do not alter the original image's semantics. In practice, the choice of \(\mathcal{T}\) depends on human subjectivity related to the dataset. Individuals are able to incorporate their personal comprehension of semantics into the model by designing their own \(\mathcal{T}\). For instance, in the case of the MNIST dataset, the transformations could include scaling, rotation, distortion, and cropping, as illustrated in Figure 2(d). We assume that such transformations do not affect the semantics of the digits in the MNIST dataset. Consequently, our proposed semantic divergence induces the corresponding distance distribution \(p_{\text{dis}}(\mathbf{x}_{\text{adv}};\mathbf{x}_{\text{ori}})\propto\exp(-c_ {1}\cdot E(\mathbf{x}_{\text{adv}};\mathbf{x}_{\text{ori}}))\).
We claim that, given an appropriate \(\mathcal{T}\), semantic divergence can surpass geometric distance. Empirically, maintaining the semantics of the original image by limiting the geometric distance between the adversarial image and the original image when deceiving a robust classifier is challenging: as shown in Figure 1 and Figure 3, it is difficult to preserve the semantics of the original images. The attacked images either display a 'shadow' of the target digits or reveal conspicuous tampering traces, such as in Figure 3(c), where the attacked digit turns gray. This phenomenon was empirically observed and tested by Song et al. [17] through an A/B test. Conversely, as depicted in Figure 4, the samples from \(p_{\text{adv}}\) neither exhibit the 'shadow' of the target digits nor any obvious traces indicating an adversarial attack. While semantic divergence cannot entirely prevent the generation of a sample resembling the target class, as shown in Figure 4(a), we discuss certain techniques to mitigate this issue in Section 4.1.
A plausible explanation for this is that the utilization of geometric distance causes \(p_{\text{dis}}(\cdot,\mathbf{x}_{\text{ori}})\) to overly focus on \(\mathbf{x}_{\text{ori}}\). However, when applying semantic divergence induced by a suitable \(\mathcal{T}\), the density of the distance distribution \(p_{\text{dis}}(\cdot,\mathbf{x}_{\text{ori}})\) spreads out relatively more, resulting in a higher overlap between \(p_{\text{dis}}(\cdot,\mathbf{x}_{\text{ori}})\) and \(p_{\text{vic}}\). This, in turn, provides more opportunities for their product \(p_{\text{adv}}\) to reach a higher value.
## 4 Deceiving robust classifiers
In this section, we present several techniques that enhance the performance of our proposed method in generating high-quality adversarial examples.
### Victim distributions
The victim distribution \(p_{\text{vic}}\propto\exp(-c_{2}\cdot f(\mathbf{x}_{\text{adv}},y_{\text{tar}}))\) is influenced by the choice of function \(f\). Let \(g_{\phi}:[0,1]^{n}\rightarrow\mathbb{R}^{|\mathcal{Y}|}\) be a classifier that produces logits as output, with \(\phi\) representing the neural network parameters, \(n\) denoting the dimensions of the input, and \(\mathcal{Y}\) being the set of labels. Szegedy et al. [18] suggested using cross-entropy as the function \(f\), which can be expressed as
\[f_{\text{CE}}(\mathbf{x},y_{\text{tar}}):=-g_{\phi}(\mathbf{x})[y_{\text{tar}} ]+\log\sum_{y}\exp(g_{\phi}(\mathbf{x})[y])=-\log\sigma(g_{\phi}(\mathbf{x}))[ y_{\text{tar}}]\]
where \(\sigma\) denotes the softmax function.
Carlini and Wagner [2] explored and compared multiple options for \(f\). They found that, empirically, the most efficient choice of their proposed \(f\)s is:
\[f_{\text{CW}}(\mathbf{x},y_{\text{tar}}):=\max(\max_{y\neq y_{\text{tar}}}g_{ \phi}(\mathbf{x})[y]-g_{\phi}(\mathbf{x})[y_{\text{tar}}],0).\]
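Both objectives are straightforward to implement for a batch of logits \(g_{\phi}(\mathbf{x})\); the PyTorch sketch below follows the definitions above:

```
# f_CE and f_CW for a batch of logits of shape (batch, |Y|) and a
# batch of target labels y_tar of shape (batch,).
import torch
import torch.nn.functional as F

def f_ce(logits, y_tar):
    # -log softmax probability of the target class
    return -F.log_softmax(logits, dim=1).gather(1, y_tar[:, None]).squeeze(1)

def f_cw(logits, y_tar):
    tar = logits.gather(1, y_tar[:, None]).squeeze(1)
    # highest non-target logit: mask the target entry out with -inf
    masked = logits.scatter(1, y_tar[:, None], float("-inf"))
    # clamps at zero once the classifier is already fooled
    return (masked.max(dim=1).values - tar).clamp_min(0.0)
```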
From Figure 3 and Figure 4, we observe that \(f_{\text{CW}}\) outperforms \(f_{\text{CE}}\) when the \(p_{\text{dis}}\) depends on either geometric distance or semantic divergence. A potential explanation for this phenomenon is that, according to its definition, \(f_{\text{CW}}\) becomes \(0\) if the classifier is successfully deceived during the iteration process. This setting ensures that the generator does not strive for a relatively high softmax probability for the target class; it simply needs to reach a point where the victim classifier perceives the image as belonging to the target class. Consequently, after the iteration, the victim classifier assigns a relatively low predictive probability to the target class \(\sigma(g_{\phi}(\mathbf{x}_{\text{adv}}))[y_{\text{tar}}]\), as demonstrated in Figure 3(d) and Figure 4(d).
In this study, we introduce two additional choices for the function \(f\). Although these alternatives are not as effective as \(f_{\text{CW}}\), we present them in Appendix C for further exploration.
### Data Augmentation by Thin Plate Splines (TPS) Deformation
Thin-plate-spline (TPS) [1] is a commonly used image deforming method. Given a pair of control points and target points, TPS computes a smooth transformation that maps the control points to the target points, minimizing the bending energy of the transformation. This process results in localized deformations while preserving the overall structure of the image, making TPS a valuable tool for data augmentation.
Figure 4: **(a) & (c): Samples from \(p_{\text{adv}}(\cdot\,;\mathbf{x}_{\text{ori}},y_{\text{tar}})\propto\exp(-c_{1}\cdot\mathcal{D}(\mathbf{x}_{\text{ori}},\mathbf{x}_{\text{adv}}))\exp(-c_{2}\cdot f(\mathbf{x}_{\text{adv}},y_{\text{tar}}))\), where \(\mathbf{x}_{\text{ori}}\) refers to the original image of digit “7” shown in Figure 1 and \(y_{\text{tar}}\) refers to class 9. \(\mathcal{D}\) represents our proposed semantic divergence. In (a), \(f\) is the cross-entropy \(f_{\text{CE}}\), while in (c), \(f\) is \(f_{\text{CW}}\). Constants are set as \(c_{1}=1.0\) and \(c_{2}=10^{-2}\). A green border indicates successful deception of the victim classifier, whereas a red border denotes failure. **(b) & (d)**: The predictive probability (softmax probability) of the target class, corresponding to each digit in Figures (a) and (c) on a one-to-one basis.
As introduced in Section 3.2, we aim to train an energy-based model on transformations of a single image \(\mathbf{x}_{\text{ori}}\). In practice, if the diversity of the augmentations of \(\mathbf{x}_{\text{ori}}\), represented as \(t(\mathbf{x}_{\text{ori}})\), is insufficient, the training of the probabilistic generative model is prone to overfitting. To address this issue, we use TPS as a data augmentation method to increase the diversity of \(t(\mathbf{x}_{\text{ori}})\). For each \(\mathbf{x}_{\text{ori}}\), we set a \(5\times 5\) grid of source control points, \(\mathcal{P}_{\text{sou}}=\{(x^{(i)},y^{(i)})\}_{i=1}^{5\times 5}\), and defining the target points as \(\mathcal{P}_{\text{tar}}=\{(x^{(i)}+\epsilon_{x}^{(i)},y^{(i)}+\epsilon_{y}^{( i)})\}_{i=1}^{5\times 5}\), where \(\epsilon_{x}^{(i)},\epsilon_{y}^{(i)}\sim\mathcal{N}(0,\sigma^{2})\) are random noise added to the source control points. We then apply TPS transformation to \(\mathbf{x}_{\text{ori}}\) with \(\mathcal{P}_{\text{sou}}\) and \(\mathcal{P}_{\text{tar}}\) as its parameters. This procedure is depicted in Figure 5. By setting an appropriate \(\sigma\), we can substantially increase the diversity of the one-image dataset while maintaining its semantic content.
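A minimal sketch of this augmentation for a single-channel image is given below. It fits the inverse (target-to-source) map with scipy's thin-plate-spline RBF interpolator and resamples the image along it; the grid size and noise scale mirror the description above, but this is one possible TPS implementation rather than necessarily the one used in our experiments:

```
# TPS augmentation: perturb a 5x5 grid of control points with Gaussian
# noise and warp the image with a thin-plate-spline map.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.ndimage import map_coordinates

def tps_augment(img, grid=5, sigma=2.0, rng=np.random.default_rng()):
    h, w = img.shape
    ys = np.linspace(0, h - 1, grid)
    xs = np.linspace(0, w - 1, grid)
    src = np.stack(np.meshgrid(ys, xs, indexing="ij"), -1).reshape(-1, 2)
    tar = src + rng.normal(0.0, sigma, size=src.shape)  # P_tar = P_sou + noise
    # fit the inverse map (target -> source) so we can resample directly
    inv_map = RBFInterpolator(tar, src, kernel="thin_plate_spline")
    rows, cols = np.mgrid[0:h, 0:w]
    pix = np.stack([rows.ravel(), cols.ravel()], axis=1).astype(float)
    src_pix = inv_map(pix)            # where each output pixel samples from
    warped = map_coordinates(img, [src_pix[:, 0], src_pix[:, 1]],
                             order=1, mode="nearest")
    return warped.reshape(h, w)
```

For an RGB image the same warp would simply be applied per channel.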
### Rejection Sampling
Directly sampling from \(p_{\text{adv}}(\cdot;\mathbf{x}_{\text{ori}},y_{\text{tar}})\) does not guarantee the generation of samples capable of effectively deceiving the classifier. To overcome this issue, we adopt rejection sampling [22], which eliminates unsuccessful samples and ultimately yields samples from \(p_{\text{adv}}(\mathbf{x}_{\text{adv}}|\arg\max_{y}g_{\phi}(\mathbf{x}_{\text {adv}})[y]=y_{\text{tar}};\mathbf{x}_{\text{ori}},y_{\text{tar}})\).
### Sample Refinement
After rejection sampling, the samples are confirmed to successfully deceive the classifier. However, not all of them possess high visual quality, as demonstrated in Figure 4(c). To automatically obtain \(N\) semantically valid samples1, we first generate \(M\) samples from the adversarial distribution. Following rejection sampling, we sort the remaining samples and select the top \(\kappa\) percent based on the softmax probability of the original image's class, as determined by an auxiliary classifier. Finally, we choose the top \(N\) samples with the lowest energy \(E\), meaning they have the highest likelihood according to the energy-based model.
Footnote 1: In practice, we could select adversarial samples by hand, but we focus on automatic selection here.
The auxiliary classifier is trained on the data-augmented training set. We do not use the energy of the samples as the sole criterion for selection because some low-visual quality samples may also have a high likelihood. This occurrence is further explained and examined in Appendix D. The entire process of rejection sampling and sample refinement is portrayed in Algorithm 1.
```
Require: A trained energy-based model \(E(\cdot;\mathbf{x}_{\text{ori}})\) based on the original image \(\mathbf{x}_{\text{ori}}\), the victim classifier \(g_{\phi}\), an auxiliary classifier \(g_{\psi}\), number of initial samples \(M\), number of final samples \(N\), the percentage \(\kappa\).
Ensure: \(N\) adversarial samples \(\mathbf{x}\).
  \(\mathbf{x}=\emptyset\)
  for \(0\leq i<M\) do
    \(\mathbf{x}_{\text{adv}}\sim p_{\text{adv}}(\cdot;\mathbf{x}_{\text{ori}},y_{\text{tar}})\)  ▷ Sample from the adversarial distribution.
    if \(\arg\max_{y}g_{\phi}(\mathbf{x}_{\text{adv}})[y]=y_{\text{tar}}\) then  ▷ Accept if \(\mathbf{x}_{\text{adv}}\) deceives the classifier.
      \(\mathbf{x}=\mathbf{x}\cup\{\mathbf{x}_{\text{adv}}\}\)
    end if
  end for
  Sort \(\mathbf{x}\) by \(\sigma(g_{\psi}(\mathbf{x}_{i}))[y_{\text{ori}}]\) for \(i\in\{1,\dots,|\mathbf{x}|\}\) in descending order
  \(\mathbf{x}=(\mathbf{x}_{i})_{i=1}^{\lfloor\kappa|\mathbf{x}|\rfloor}\)  ▷ Select the first \(\kappa\) percent of elements from \(\mathbf{x}\).
  Sort \(\mathbf{x}\) by \(E(\mathbf{x}_{i};\mathbf{x}_{\text{ori}})\) for \(i\in\{1,\dots,|\mathbf{x}|\}\) in ascending order
  \(\mathbf{x}=(\mathbf{x}_{i})_{i=1}^{N}\)  ▷ Select the first \(N\) elements from \(\mathbf{x}\).
```
**Algorithm 1** Rejection Sampling and Sample Refinement
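The refinement half of Algorithm 1 amounts to two sorts and two truncations; below is a compact sketch, assuming the accepted samples, their auxiliary softmax scores \(\sigma(g_{\psi}(\mathbf{x}_{i}))[y_{\text{ori}}]\), and their energies have already been collected as arrays:

```
# Sample refinement: keep the top-kappa fraction by auxiliary softmax
# probability of the original class, then the N lowest-energy samples.
import numpy as np

def refine(samples, aux_prob_ori, energies, kappa=0.5, n_final=100):
    order = np.argsort(-aux_prob_ori)             # descending softmax prob
    keep = order[: max(1, int(kappa * len(order)))]
    keep = keep[np.argsort(energies[keep])]       # ascending energy
    return samples[keep[:n_final]]
```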
Figure 5: TPS as a data augmentation. **Left**: The original image \(\mathbf{x}_{\text{ori}}\) superimposed with a \(5\times 5\) grid of source control points \(\mathcal{P}_{\text{sou}}\). **Right**: The transformed image overlaid with a grid of target control points \(\mathcal{P}_{\text{tar}}\).
## 5 Experiment
### Implementation
We implemented our proposed semantics-aware adversarial attack on two datasets: MNIST and SVHN. For the MNIST dataset, the victim classifier we used was an adversarially trained MadryNet [12]. For the SVHN dataset, we utilized an adversarially trained ResNet18, in accordance with the methodology outlined by Song et al. [17]. On the distance distribution side, for every original image denoted as \(\mathbf{x}_{\text{ori}}\), we trained an energy-based model on the training set, which is represented as \(\{t_{1}(\mathbf{x}_{\text{ori}}),t_{2}(\mathbf{x}_{\text{ori}}),\dots\}\). In this case, \(t_{i}\) follows a distribution of transformations, \(\mathcal{T}\), that do not change the semantics of \(\mathbf{x}_{\text{ori}}\). For the MNIST dataset, we characterized \(\mathcal{T}_{\text{MNIST}}\) as including Thin Plate Spline (TPS) transformations, scaling, and rotation. For the SVHN dataset, we defined \(\mathcal{T}_{\text{SVHN}}\) as comprising Thin Plate Spline (TPS) transformations and alterations in brightness and hue. Detailed specifics related to our implementation can be found in Appendix A.
### Evaluation
Our method generates adversarial samples that can deceive classifiers, but it does not guarantee the preservation of the original label's semantic meaning. As such, we consider an adversarial example successful if human annotators perceive it as having the same meaning as the original label, in line with the approach by Song et al. [17]. To enhance the signal-to-noise ratio, we assign the same image to five different annotators and use the majority vote as the human decision, as done in [17]. The screenshot of the annotator's interface is in Appendix B.
In detail, we begin with an original image \(\mathbf{x}_{\text{ori}}\), its label \(y_{\text{ori}}\), and a target class \(y_{\text{tar}}\). We draw \(M=2000\) samples from \(p_{\text{adv}}(\cdot;\mathbf{x}_{\text{ori}},y_{\text{tar}})\), rejecting those that fail to deceive the victim classifier. After sample refinement, we obtain \(N=100\) adversarial examples, \(\mathbf{x}_{\text{adv}}^{(i)}\) for \(i\in\{1,\dots,N\}\). We express the human annotators' decision as a function \(h\) and derive the human decision \(y_{\text{hum}}^{(i)}=h(\mathbf{x}_{\text{adv}}^{(i)})\). As previously mentioned, an adversarial example \(\mathbf{x}_{\text{adv}}^{(i)}\) is considered successful if \(y_{\text{hum}}^{(i)}\) is equal to \(y_{\text{ori}}\). We then compute the success rate \(s\) as follows:
\[s=\frac{\sum_{i=1}^{N}\mathbbm{1}\left(y_{\text{hum}}^{(i)}=y_{\text{ori}} \right)}{N}\]
where \(\mathbbm{1}\) represents the indicator function.
We randomly select 10 digits, each representing a different class, from the MNIST/SVHN test set to serve as the original images \(\mathbf{x}_{\text{ori}}\). These are depicted on the left side of Figure 1. For each \(\mathbf{x}_{\text{ori}}\), we iterate through the target class \(y_{\text{tar}}\) ranging from 0 to 9, excluding the class \(y_{\text{ori}}\) that signifies
Figure 6: The success rates (%) of our targeted unrestricted adversarial attack. Corresponding sample examples for each grid are depicted in the top right and bottom right sections of Figure 1. Refer to Table 1 for overall success rate.
the ground-truth label of \(\mathbf{x}_{\text{ori}}\). As previously described, for every pair of \(\mathbf{x}_{\text{ori}}\) and \(y_{\text{tar}}\), we generate \(N=100\) adversarial examples post sample refinement. The result of each pair is illustrated in Figure 6. The overall success rates are reported in Table 1.
### Results
As depicted in Figure 6 and Table 1, our proposed method often succeeds in fooling robust classifiers, all the while preserving the original semantics of the input. It should be noted, however, that this does not occur in every instance.
## 6 Related work
Unrestricted adversarial examplesSong et al. [17] proposed generating unrestricted adversarial examples from scratch using conditional generative models. In their work, the term "unrestricted" indicates that the generated adversarial samples, \(\mathbf{x}_{\text{adv}}\), are not restricted by a geometric distance such as the \(L_{2}\) norm or \(L_{\infty}\) norm. The key difference between their approach and ours is that their adversarial examples \(\mathbf{x}_{\text{adv}}\) are independent of any specific \(\mathbf{x}_{\text{ori}}\), while our model generates \(\mathbf{x}_{\text{adv}}\) based on a given \(\mathbf{x}_{\text{ori}}\). By slightly modifying (7), we can easily incorporate Song's "unrestricted adversarial examples" into our probabilistic perspective:
\[p_{\text{adv}}(\mathbf{x}_{\text{adv}};y_{\text{sou}},y_{\text{tar}}):=p_{ \text{vic}}(\mathbf{x}_{\text{adv}};y_{\text{tar}})p_{\text{dis}}(\mathbf{x}_ {\text{adv}};y_{\text{sou}}) \tag{8}\]
where \(y_{\text{sou}}\) is the source class. It becomes evident that the adversarial examples generated by our \(p_{\text{adv}}(\cdot;\mathbf{x}_{\text{ori}},y_{\text{tar}})\) adhere to Song's definition when \(\mathbf{x}_{\text{ori}}\) is labeled as \(y_{\text{sou}}\).
TPS as a Data Augmentation TechniqueTo the best of our knowledge, Vinker et al. [21] were the first to employ TPS as a data augmentation method. They utilized TPS as a data augmentation strategy in their generative model for conditional image manipulation based on a single image.
## 7 Limitation
This work's foremost limitation pertains to the inherent difficulties in training energy-based models (EBMs), as underscored in the earlier studies by Du and Mordatch [4] and Grathwohl et al. [7]. The EBM training process is notoriously challenging, and a notable gap persists between the generation quality of EBMs and that of other widely-used probabilistic generative models, such as variational autoencoders and diffusion models. Consequently, we are currently unable to generate adversarial samples for images with higher resolution.
## 8 Conclusion
In this work, we present a probabilistic perspective on adversarial examples by employing Langevin Monte Carlo. Building on this probabilistic perspective, we introduce semantic divergence as an alternative to the commonly used geometric distance. We also propose corresponding techniques for generating semantically-aware adversarial examples. Human participation experiments indicate that our proposed method can often deceive robust classifiers while maintaining the original semantics of the input, although not in all cases.
| Robust Classifier | Success Rate of Song et al. [17] | Our Success Rate |
| --- | --- | --- |
| MadryNet [12] on MNIST | 85.2 | **96.2** |
| ResNet18 [8] (adv-trained) on SVHN | 84.2 | **86.3** |

Table 1: Success rate comparison between the method proposed by Song et al. [17] and ours. The results presented in this table are for reference only, as Song’s results are taken directly from their paper, and we did not use the same group of annotators for our evaluation. |
2305.06272 | FedPDD: A Privacy-preserving Double Distillation Framework for
Cross-silo Federated Recommendation | Cross-platform recommendation aims to improve recommendation accuracy by
gathering heterogeneous features from different platforms. However, such
cross-silo collaborations between platforms are restricted by increasingly
stringent privacy protection regulations, thus data cannot be aggregated for
training. Federated learning (FL) is a practical solution to deal with the data
silo problem in recommendation scenarios. Existing cross-silo FL methods
transmit model information to collaboratively build a global model by
leveraging the data of overlapped users. However, in reality, the number of
overlapped users is often very small, thus largely limiting the performance of
such approaches. Moreover, transmitting model information during training
requires high communication costs and may cause serious privacy leakage. In
this paper, we propose a novel privacy-preserving double distillation framework
named FedPDD for cross-silo federated recommendation, which efficiently
transfers knowledge when overlapped users are limited. Specifically, our double
distillation strategy enables local models to learn not only explicit knowledge
from the other party but also implicit knowledge from its past predictions.
Moreover, to ensure privacy and high efficiency, we employ an offline training
scheme to reduce communication needs and privacy leakage risk. In addition, we
adopt differential privacy to further protect the transmitted information. The
experiments on two real-world recommendation datasets, HetRec-MovieLens and
Criteo, demonstrate the effectiveness of FedPDD compared to the
state-of-the-art approaches. | Sheng Wan, Dashan Gao, Hanlin Gu, Daning Hu | 2023-05-09T16:17:04Z | http://arxiv.org/abs/2305.06272v2 | # FedPDD: A Privacy-preserving Double Distillation Framework for Cross-silo Federated Recommendation
###### Abstract
Cross-platform recommendation aims to improve recommendation accuracy by gathering heterogeneous features from different platforms. However, such cross-silo collaborations between platforms are restricted by increasingly stringent privacy protection regulations, thus data cannot be aggregated for training. Federated learning (FL) is a practical solution to deal with the data silo problem in recommendation scenarios. Existing cross-silo FL methods transmit model information to collaboratively build a global model by leveraging the data of overlapped users. However, in reality, the number of overlapped users is often very small, thus largely limiting the performance of such approaches. Moreover, transmitting model information during training requires high communication costs and may cause serious privacy leakage. In this paper, we propose a novel privacy-preserving double distillation framework named FedPDD for cross-silo federated recommendation, which efficiently transfers knowledge when overlapped users are limited. Specifically, our double distillation strategy enables local models to learn not only explicit knowledge from the other party but also implicit knowledge from its past predictions. Moreover, to ensure privacy and high efficiency, we employ an offline training scheme to reduce communication needs and privacy leakage risk. In addition, we adopt differential privacy to further protect the transmitted information. The experiments on two real-world recommendation datasets, HetRec-MovieLens and Criteo, demonstrate the effectiveness of FedPDD compared to the state-of-the-art approaches.
Footnote †: Corresponding author.
## I Introduction
Benefiting from the explosion of data, deep learning-based recommendation systems have gained significant attention by overcoming the obstacles of conventional models and achieving high recommendation quality [1]. Unfortunately, in reality, this wealth of data is often separated across different platforms and owned by different entities. For example, people can chat with friends on WhatsApp, watch their favorite videos on TikTok or Youtube, and buy what they want on Amazon. Collecting these features from different platforms can help to build a more accurate user profile and provide better recommendations. However, cross-silo collaborations among different platforms are restricted by data protection regulations such as the General Data Protection Regulation (GDPR), and data cannot be centralized for training.
To tackle the privacy issue for cross-silo recommendation, a practical solution is Federated Learning (FL) [2, 3]. FL enables multiple parties to collaboratively train a global model while private data resides locally on the data owners and therefore can largely reduce systemic privacy risks. Existing cross-silo FL methods [4, 5, 6] try to fix this problem by using the overlapped samples across participants and viewing it as a multi-view learning problem. The performance of such approaches highly relies on the number of overlapped users between parties. However, in reality, such overlapped data is often limited and thereby may cause the performance to be even worse than the locally trained models. Moreover, these approaches transmit feature or model information during training, which requires high communication costs and has serious privacy weaknesses. Recent studies [7, 8, 9] show that sharing model or feature information could still lead to private data leakage.
To address these challenges, we propose a novel privacy-preserving double distillation framework named FedPDD for cross-silo federated recommendation. We design a double distillation strategy, which enables local models to learn both implicit knowledge from themselves and explicit knowledge from the other party. Specifically, we distill implicit knowledge
Fig. 1: In the cross-device FL setting, participants are a large number of individual customers (2C) that share the same feature space. In the cross-silo FL setting, participants are a small number of business partners (2B) that have partially overlapped user spaces and different feature spaces. Here we assume that there is no feature overlapping in our cross-silo FL setting. Data in the red box are used for training.
from the past predictions of local models and distill explicit knowledge from the ensemble predictions of current local models. The key idea is that we provide multiple informative sources for local model training. Therefore, by learning from these sources, FedPDD is able to enhance model performance and generalization ability. Moreover, we employ an offline distillation strategy and only transmit model output during training. Parties only communicate with the server during the federated ensemble stage and the size of the model output is much smaller than the model itself. Accordingly, our training strategy largely reduces communication needs and limits the exposure of private information to the server. In addition, we adopt differential privacy [10] to further protect the communication process. We experiment on two real-world recommendation datasets showing that FedPDD significantly boosts local model performance by up to 3.94% and outperforms the state-of-the-arts by up to 3.98%.
Overall, our main contributions are as follows:
* We propose a novel privacy-preserving double distillation method named FedPDD for cross-silo federated recommendation. Our method enables local models to learn not only from private labels and the other party but also from themselves, which enhances model performance and generalization ability when overlapped samples are limited.
* We employ an offline training strategy to reduce communication needs and privacy leakage risk. Moreover, we adopt differential privacy to further protect the communication process and provide a theoretical privacy analysis of FedPDD.
* We conduct experiments on two public real-world datasets, and the results demonstrate the effectiveness of FedPDD, with up to 3.98% improvement over state-of-the-art approaches.
## II Background and Related Work
### _Federated Learning_
Federated Learning [2, 3] allows multiple participants to collaboratively train a global model while keeping training data local. The data resides with the data owners, which largely reduces systemic privacy risks. Depending on the scenario, FL can be divided into two settings: cross-device FL and cross-silo FL [3]. As shown in Figure 1, in the cross-device setting, participants are a large number of individual customers (2C) that share the same feature space, while in the cross-silo setting, participants are a small number of business partners (2B) that have partially overlapped user spaces and different feature spaces. In this work, we focus on the cross-silo setting in which features are not overlapped.
Liu et al. [4] first proposed a transfer learning method named federated transfer learning (FTL) to transfer knowledge through the overlapped user data between parties. They assumed that only one party owns the labels and aims to improve model performance by leveraging the knowledge (i.e., features) from other parties. Under such an assumption, their approach leverages only the overlapped users across parties. Existing cross-silo FL methods [5, 6] mostly follow this direction, which views the distributed features of overlapped users in different parties as different views of these data and regards it as a multi-view learning problem. Specifically, Feng et al. [5] established a Multi-participant Multi-class Vertical Federated Learning (MMVFL) framework. They utilized multi-view learning methods to securely share label information between participants. Kang et al. [6] proposed a self-supervised multi-view learning method called FedMVT under the cross-silo FL setting. They built a model based on the overlapped data to predict the missing features. However, these studies build models on the overlapped user data across parties, and the prediction accuracy of the global model relies heavily on the amount of overlapped data. When such data is limited, the performance of these approaches may be even worse than that of locally fine-tuned models. In contrast, FedPDD leverages both the overlapped data and the non-overlapped data to enhance model performance through knowledge distillation, and can achieve superior performance compared to the state of the art when overlapped data is limited.
### _Federated Recommendation System_
Inspired by the success of federated learning [11], federated recommendation systems have been proposed to address the privacy and data silo problems in recommendation [2]. Existing federated recommendation works such as federated matrix factorization methods [12, 13, 14, 15, 16] and federated collaborative filtering methods [17, 18] mainly focus on the cross-device FL setting. They adopt the idea of the typical federated learning algorithm FedAvg [11] and average the gradient updates from participants. However, this line of methods has serious privacy issues: sharing gradient information can lead to private data leakage, as proved by [14].
We note that there is a contemporary work [19] similar to our approach. The authors propose a cross-silo federated recommendation framework using split knowledge distillation. However, they assume that there is massive unlabeled overlapped data between parties and ignore privacy issues, which are a key concern in cross-silo FedRec. Exposing more overlapped data during training will inevitably lead to a higher risk of privacy leakage and largely increase the privacy budget.
### _Federated Knowledge Distillation_
Knowledge distillation [20] transfers knowledge from a teacher model to a student model through the teacher's soft targets. More precisely, the student learns by imitating the teacher's soft target distribution through a Kullback-Leibler (KL) divergence loss defined as:
\[L_{KD}(\mathbf{p},\mathbf{q})=T^{2}KL(\mathbf{p}||\mathbf{q}), \tag{1}\]
where \(\mathbf{p}\) and \(\mathbf{q}\) are the softened outputs of the student model and teacher model, and \(T\) is the temperature parameter. Denote the student logit as \(\mathbf{z_{s}}\) and the teacher logit as \(\mathbf{z_{t}}\). Then \(\mathbf{p}=softmax(\mathbf{z_{s}}/T)\) and \(\mathbf{q}=softmax(\mathbf{z_{t}}/T)\).
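As a concrete illustration, the loss of eq. (1) can be computed directly from the raw logits. The snippet below is a minimal numpy sketch; the function name `kd_loss` and the small stabilizing constant are our own choices, not part of the original formulation.

```python
import numpy as np
from scipy.special import softmax

def kd_loss(z_student, z_teacher, T=30.0):
    """Distillation loss of eq. (1): L_KD(p, q) = T^2 * KL(p || q)."""
    p = softmax(z_student / T, axis=-1)   # softened student output
    q = softmax(z_teacher / T, axis=-1)   # softened teacher output
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return T ** 2 * np.mean(kl)
```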
Ensemble distillation [21] ensembles knowledge from multiple teacher networks to yield better teacher knowledge. Current federated knowledge distillation methods [22, 23, 24, 25, 26] adopt this idea to fuse heterogeneous local models into a global model and share model outputs instead of gradients. FedMD [22] first trained student models on the averaged logits of each sample from a public labeled dataset to reduce communication costs during training. Along this direction, Lin et al. [23] designed a more robust model distillation framework named FedDF, which allows for heterogeneous client models and data. Chang et al. [24] used group knowledge transfer to reduce the communication and computation overhead for edge devices. Gong et al. [25] proposed to use cross-domain unlabeled public data to protect private data privacy. These studies are designed for the cross-device setting and transfer knowledge through labeled public datasets [27, 22] or unlabeled public datasets [23, 25, 26]. The improvement of their model performance unavoidably benefits from the extra public data. Li et al. [26] proposed a practical one-shot federated learning algorithm for the cross-silo FL setting. However, they still leverage an unlabeled public dataset to transfer knowledge.
In contrast to these studies, FedPDD is designed to transfer knowledge through limited overlapped user data for cross-silo FedRec. Unlike the above methods, which only explore explicit knowledge from the other party, we propose a double distillation strategy that enables local models to fully exploit both explicit knowledge from the other party and implicit knowledge from themselves.
## III Methodology
### _Problem Statement_
Consider two parties A, B and a central server. Each party has a private labeled dataset \(D^{k}:=\{\mathbf{x_{i}^{k}},y_{i}\}_{i=1}^{|D^{k}|}\) and can independently design its model \(f^{k}\), \(k=\{A,B\}\). There exists a limited set of overlapped users shared between the two parties, \(D^{c}:=\{\mathbf{x_{i}}^{A},\mathbf{x_{i}}^{B},y_{i}\}_{i=1}^{|D^{c}|}\). Let \(\alpha=\frac{|D^{c}|}{|D^{A}|+|D^{B}|}\) denote the overlapped data ratio. Our goal is to improve the performance of the local models in both parties as well as the global model on the central server without exchanging raw data. Moreover, due to privacy concerns, the private information exposed to the server or the other party should be as little as possible. We summarize the notations used in this paper in Table I.
### _Overview of FedPDD_
The overall pipeline of FedPDD consists of three steps: pretraining, federated ensemble and local training. In the first stage, local models are pretrained on the local private datasets from scratch. In the federated ensemble stage, each party makes predictions on the overlapped user data using its pretrained local model. The central server then aggregates the local predictions from the two parties to obtain the ensemble teacher knowledge and distributes it to the parties. The aggregation is protected by differential privacy, as illustrated in Section III-E. In the local training stage, we propose a double knowledge distillation strategy to fully explore both implicit knowledge from the model itself and explicit knowledge from the other party. That is to say, local models have two teachers and learn from multiple informative sources simultaneously during training. Therefore, by providing more teachers, our double distillation strategy enhances model performance and generalization ability. The details are given in Sections III-C and III-D.
We denote one local training and one federated ensemble as a round and repeat these two steps until the models converge. Note that all training is performed offline and
Fig. 2: The overview of our proposed FedPDD. During training, each party trains its local model via three kinds of knowledge from the ground truth labels, ensemble of local models and past predictions of local models.
communications are only required in the federated ensemble stage. This offline learning strategy largely decreases communication needs and privacy leakage risk. Algorithm 1 summarizes the whole training process of FedPDD.
### _Distilling Implicit Knowledge_
In order to fully explore the implicit knowledge of local models, we propose a self-distillation strategy that enables the local model to distill knowledge from itself. That is to say, the teacher is the student model itself: the student from previous rounds becomes its own teacher in the current round. The key idea is that we regard the previous student model outputs as different views of the features, which provides more information for training [28].
We design a simple but effective method to obtain the self-teacher model. We take all intermediate local models from previous rounds as candidates and select the one with the best performance as the teacher model for the current round. Since deep learning models usually have a large size, we maintain only one historical best model for each party and replace it whenever an improved one appears.
Denote the best local model of party \(k\) over the previous \(n-1\) rounds as \(f^{k}_{b_{(n-1)}}\). To learn from the implicit knowledge, we let the output of the local model \(f^{k}\) approximate the output of the teacher \(f^{k}_{b_{(n-1)}}\) in round \(n\) through the self-distillation (SD) loss \(L^{k}_{SD}\). It is given by the KL divergence between the local model output \(\mathbf{p^{k}}\) and the teacher model output \(\mathbf{p^{k}_{b}}\):
\[\mathcal{L}^{k}_{SD}(\mathbf{p^{k}},\mathbf{p^{k}_{b}})=T^{2}_{SD}KL(\mathbf{p^{k}}||\mathbf{ p^{k}_{b}}), \tag{2}\]
where \(T_{SD}\) is the self-distillation temperature.
### _Distilling Explicit Knowledge_
We adopt ensemble distillation [21] to leverage explicit knowledge from the other party through the overlapped user data. The key idea is that the ensemble of student models often yields improvements in system performance compared to the performance of individual models. To distill the explicit knowledge, we regard the ensemble results of local model predictions on the overlapped data as the ensemble teacher knowledge. By imitating this teacher knowledge, local models are able to learn from the other party.
Denote the ensemble teacher knowledge as \(\mathbf{p^{t}_{c}}\). The ensemble distillation loss is given by the KL divergence between local model output \(\mathbf{p^{k}}\) and the ensemble teacher knowledge:
\[\mathcal{L}^{k}_{KD}(\mathbf{p^{k}},\mathbf{p^{t}_{c}})=T^{2}_{ED}KL(\mathbf{p^{k}}||\mathbf{ p^{t}_{c}}), \tag{3}\]
where \(T_{ED}\) is the ensemble distillation temperature.
### _Federated Ensemble_
During the federated ensemble stage, communication is protected by two levels of privacy. First, we only send model outputs instead of model parameters or gradients. Second, if the central server is curious, directly uploading local logits may still risk privacy leakage. Inspired by PATE [29], we therefore perturb the local output logits with Gaussian noise to ensure a higher privacy guarantee.
```
Input: Local datasets D^A, D^B, overlapped dataset D^c, number of rounds n,
       temperatures T_SD, T_ED, trade-off weights beta, gamma, ensemble weight w.
Output: Best local models f^A_{b(n)}, f^B_{b(n)}
 1: Let i = 1.
 2: while i <= n do
 3:   // Perform local training
 4:   for k in {A, B} do
 5:     f^k_{b(i)} = f^k_{b(i-1)}
 6:     while not converged do
 7:       Compute the loss based on equation (10)
 8:       Compute gradients and update f^k
 9:       if f^k is better than f^k_{b(i)} then
10:         Update f^k_{b(i)} = f^k
11:       end if
12:     end while
13:   end for
14:   // Perform federated ensemble
15:   for k in {A, B} do
16:     for each overlapped sample x_c in D^c do
17:       Compute z^k_c and perturb it with Gaussian noise
18:       Send the perturbed logit z'^k_c to the server
19:     end for
20:   end for
21:   Server computes the ensemble soft targets p'^t_c and sends them back to the parties
22:   i = i + 1
23: end while
24: return best local models f^A_{b(n)}, f^B_{b(n)}
```
**Algorithm 1** Proposed FedPDD algorithm
Consider an overlapped sample \(\mathbf{x_{c}}\in D^{c}\). In round \(n\), party \(k\) first produces a local prediction through the best local model \(f^{k}_{b_{(n)}}\) obtained from local training. Denote the output logit of \(f^{k}_{b_{(n)}}\) as \(\mathbf{z^{k}_{c}}\):
\[\mathbf{z^{k}_{c}}=f^{k}_{b_{(n)}}(\mathbf{x_{c}}). \tag{4}\]
The perturbed ensemble logit \(\mathbf{z^{\prime t}_{c}}\) is a linear combination of the perturbed local output logits of the models \(f^{k}_{b_{(n)}}\). It can be expressed as:

\[\mathbf{z^{\prime k}_{c}}=f^{k}_{b_{(n)}}(\mathbf{x_{c}})+\mathcal{N}(0,\sigma^{2}) \tag{5}\] \[\mathbf{z^{\prime t}_{c}}=w\mathbf{z^{\prime A}_{c}}+(1-w)\mathbf{z^{\prime B}_{c}}, \tag{6}\]
where \(w\) is the ensemble weight and \(\sigma^{2}\) is the variance of the Gaussian noise. Then the soft target distribution \(\mathbf{p^{\prime t}_{c}}\) of \(\mathbf{x_{c}}\) can be defined as

\[\mathbf{p^{\prime t}_{c}}=\sigma_{T}(\mathbf{z^{\prime t}_{c}}) \tag{7}\] \[\sigma_{T}(\mathbf{z^{\prime t}_{c}})=\frac{\exp(\mathbf{z^{\prime t}_{c}}/T_{ED})}{\sum_{i=1}^{m}\exp(z^{\prime t}_{ci}/T_{ED})}, \tag{8}\]

where \(\sigma_{T}\) is the general softmax function tuned by the ensemble temperature \(T_{ED}\) and \(m\) is the number of classes. The standard softmax function is recovered as the special case \(T=1\).
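A minimal sketch of the server-side computation of eqs. (5)-(8) is given below, assuming both parties have already uploaded their logits for a batch of overlapped samples; the function name and default arguments are illustrative.

```python
import numpy as np
from scipy.special import softmax

def federated_ensemble(z_A, z_B, w=0.5, sigma=1.0, T_ED=30.0, rng=None):
    """Perturb each party's logits (eq. 5), combine them (eq. 6),
    and soften with the ensemble temperature (eqs. 7-8)."""
    rng = np.random.default_rng() if rng is None else rng
    z_A_pert = z_A + rng.normal(0.0, sigma, size=np.shape(z_A))
    z_B_pert = z_B + rng.normal(0.0, sigma, size=np.shape(z_B))
    z_t = w * z_A_pert + (1.0 - w) * z_B_pert
    return softmax(z_t / T_ED, axis=-1)   # soft target distribution p'^t_c
```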
### _Local Training_
We use an offline training scheme due to efficiency and privacy concerns. Offline training largely reduces communication needs and therefore limits the exposure of private information to the server. During local training, each party trains its local model \(f^{k}\) parameterized by \(\theta\). The optimal parameters \(\theta^{*}\) are obtained by minimizing the training objective function \(\mathcal{L}_{train}\):

\[\theta^{*}=argmin_{\theta}\ \mathcal{L}_{train}. \tag{9}\]
In this stage, we leverage three kinds of knowledge to enhance model performance: direct knowledge from the private labeled data, implicit knowledge from the best local model of the previous rounds, and explicit knowledge from the other party. The direct knowledge is learned through the cross-entropy loss \(\mathcal{L}_{CE}\) computed from the local model outputs \(\mathbf{p^{k}}\) and the ground truth labels \(y\). Our overall training objective \(\mathcal{L}_{train}\) is a weighted combination of three loss terms. Combining equations 2 and 3, \(\mathcal{L}_{train}\) can be written as:
\[\mathcal{L}_{train}=\mathcal{L}_{SD}+\beta\mathcal{L}_{KD}+\gamma\mathcal{L}_ {CE}, \tag{10}\]
where \(\gamma\) and \(\beta\) are the corresponding trade-off weights. We simply keep these weights unchanged during training. More experimental details are given in Section IV-C.
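For concreteness, the overall objective of eq. (10) can be sketched as follows (a hypothetical numpy helper; `y` holds integer class labels, and the default weights follow the HetRec-MovieLens setting of Section IV-C):

```python
import numpy as np
from scipy.special import softmax

def softened_kl(z, z_teacher, T):
    """T^2 * KL(p || q) with temperature-softened distributions."""
    p = softmax(z / T, axis=-1)
    q = softmax(z_teacher / T, axis=-1)
    return T ** 2 * np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1))

def train_loss(z, z_self_teacher, z_ens_teacher, y, T_SD=30.0, T_ED=30.0,
               beta=10.0, gamma=10.0):
    """Overall objective of eq. (10): L_SD + beta * L_KD + gamma * L_CE."""
    L_SD = softened_kl(z, z_self_teacher, T_SD)   # self-distillation, eq. (2)
    L_KD = softened_kl(z, z_ens_teacher, T_ED)    # ensemble distillation, eq. (3)
    p = softmax(z, axis=-1)
    L_CE = -np.mean(np.log(p[np.arange(len(y)), y] + 1e-12))
    return L_SD + beta * L_KD + gamma * L_CE
```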
Note that the local models have already converged on their private data during the pretraining stage, which means that the initial best local models contain valuable information. Therefore, both the self-teacher knowledge and the ensemble teacher knowledge are informative from the first round.
### _Inference_
In the inference phase, given a sample \(x\), a party first checks whether the test sample \(x\) is aligned with the other party. If the sample is aligned between both parties, the two parties first infer locally through their best local models and then ensemble the local predictions to give a joint prediction as the final result. Otherwise, the party directly returns the prediction of its local model as the final result.
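The inference rule admits a compact sketch (the callables `f_A`, `f_B` mapping a sample to its logits are hypothetical placeholders for the trained best local models):

```python
import numpy as np
from scipy.special import softmax

def predict(x, is_aligned, f_A, f_B, w=0.5):
    """Joint ensemble prediction on aligned samples; otherwise the
    querying party (here A) falls back to its own local model."""
    if is_aligned:
        z = w * f_A(x) + (1.0 - w) * f_B(x)
    else:
        z = f_A(x)
    return softmax(z, axis=-1)
```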
### _Communication Analysis of FedPDD_
We analyze the communication cost of FedPDD in this section. Assume that the cost of uploading or downloading a record (i.e., the logit of a local prediction for \(m\)-class classification) once is \(m\). Let \(n\) be the total number of communication rounds until the local models converge. Then the overall communication cost is \(2mn|D^{c}|\). We can see that the overall communication cost depends on three factors: the number of communication rounds, the amount of overlapped data involved in training, and the size of the updates. Our offline training strategy requires only O(1) communication rounds, as shown in Section IV-E1, and the size of the model output \(m\) is also O(1) for each record. Therefore, the communication cost of FedPDD is O(\(|D^{c}|\)). In contrast, an online training strategy requires O(100) communication rounds, and transmitting model information, which usually involves thousands of parameters, incurs far more communication overhead.
### _Privacy Analysis of FedPDD_
In this section, we follow the prior works [30, 31] and give the privacy analysis of FedPDD.
**Definition 1**: _(Differential Privacy [31]). A randomized mechanism \(M:\mathcal{X}\rightarrow\mathcal{Y}\) is \((\epsilon,\delta)\)-DP if for every pair of datasets \(X,X^{\prime}\in\mathcal{X}\) that differ in only one sample and every possible event \(E\subseteq\mathcal{Y}\), the following inequality holds:_

\[\mathbb{P}[M(X)\in E]\leq e^{\epsilon}\mathbb{P}\left[M\left(X^{\prime}\right)\in E\right]+\delta, \tag{11}\]

_where \(\epsilon,\delta\geq 0\) are the privacy loss parameters._
**Definition 2**: _(\(l_{2}-\)sensitivity). The \(l_{2}\)-sensitivity of a function \(f:\mathcal{X}\rightarrow\mathbb{R}^{d}\) is_
\[\Delta_{2}(f)=\max_{X,X^{\prime}\in\mathcal{X}}\left\|f(X)-f\left(X^{\prime} \right)\right\|_{2}. \tag{12}\]
**Definition 3**: _(Analytic Gaussian Mechanism [30]). Let \(f:\mathcal{X}\rightarrow\mathbb{R}^{d}\) be a function with global \(L_{2}\) sensitivity \(\Delta\). For any \(\epsilon\geq 0\) and \(\delta\in[0,1]\), the Gaussian output perturbation mechanism \(M(x)=f(x)+Z\) with \(Z\sim\mathcal{N}(0,\sigma^{2}I)\) is \((\epsilon,\delta)-DP\) if and only if_
\[\Phi(\frac{\Delta}{2\sigma}-\frac{\epsilon\sigma}{\Delta})-e^{\epsilon}\Phi(- \frac{\Delta}{2\sigma}-\frac{\epsilon\sigma}{\Delta})\leq\delta, \tag{13}\]
_where \(\Phi\) is the CDF of \(\mathcal{N}(0,1)\)._
**Definition 4**: _(Composition of DP Algorithms [32, 33]). Suppose \(M=(M_{1},M_{2},...,M_{k})\) is a sequence of algorithms, where \(M_{i}\) is \((\epsilon_{i},\delta_{i})\)-DP, and the \(M_{i}\)'s are potentially chosen sequentially and adaptively. Then \(M\) is \((\sum_{i=1}^{k}\epsilon_{i},\sum_{i=1}^{k}\delta_{i})\)-DP._
For a meaningful privacy guarantee, we require \(\delta=o(\frac{1}{n})\), where \(n\) is the size of the dataset. Fixing the privacy budget \(\epsilon\), we calibrate the noise with the \(\sigma\) given by Definition 3. Therefore, our method preserves \((\epsilon,\delta)\)-DP.
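In practice, the noise scale can be calibrated numerically: the left-hand side of eq. (13) decreases monotonically in \(\sigma\), so the smallest admissible \(\sigma\) can be found by root search. The sketch below (helper name and bracketing interval are our own choices) assumes scipy is available:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def analytic_gaussian_sigma(eps, delta, sensitivity=1.0):
    """Smallest sigma satisfying the condition of Definition 3."""
    def gap(sigma):
        a = sensitivity / (2.0 * sigma)
        b = eps * sigma / sensitivity
        return norm.cdf(a - b) - np.exp(eps) * norm.cdf(-a - b) - delta
    return brentq(gap, 1e-6, 1e6)   # sign change is guaranteed for delta < 1

# Example: noise scale for eps = 1, delta = 1e-5
print(analytic_gaussian_sigma(1.0, 1e-5))
```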
## IV Experiments
In this section, we evaluate our proposed FedPDD on two public real-world datasets. We aim to answer the following two questions through our experiments:
* Q1: How well does FedPDD perform compared to the SOTA baselines on the benchmark datasets?
* Q2: How does differential privacy influence the performance of FedPDD?
### _Datasets_
The statistics of two benchmark public datasets are summarized in Table III.
**HetRec-MovieLens Dataset** This dataset 1 is a heterogeneous dataset for movie recommendation with about 860,000 ratings. It is an extension of the MovieLens10M dataset, which contains personal ratings and tags for movies. The movies are linked to the Internet Movie Database (IMDb) and the RottenTomatoes movie review system, greatly extending the feature space. The dataset is converted into a classification task by taking instances with a rating lower than 3 as negative instances and the rest as positive. In our setting, party A holds 21 features mainly coming from MovieLens, including user ID, movie ID, tags, etc., while party B holds 13 features from RottenTomatoes, including movie information, critics' scores, Rotten data, etc.
**Criteo** The Criteo dataset 2 is generated from the original Criteo dataset by randomly sampling 1 million instances. The task is to predict the ad click-through rate (CTR). It consists of 13 numerical features and 26 categorical features. We randomly split the numerical and categorical features into two parts with 19 and 20 features, respectively, for the two parties.
Footnote 2: [https://labs.criteo.com/category/dataset/](https://labs.criteo.com/category/dataset/)
### _Baselines_
We compare FedPDD with the following baselines:
**Locally trained DeepFM**: DeepFM [34] can handle high-order feature interactions of user and item embeddings in a centralized manner. Each party locally trains a DeepFM model on its own private dataset and cannot make use of the private features of the other party.
**Ensemble of DeepFM**: Each party first locally trains a DeepFM model, then the local model outputs are aggregated to obtain the final joint prediction. We average the predictions from local models for model aggregation.
**FTL baseline**: The FTL [4] approach maps samples from the heterogeneous feature spaces of the two parties into a common latent space. Then, the two feature representations are concatenated and input to a classifier for prediction. We train a separate feature extractor in each party and then collaboratively train a classifier for label prediction.
**PFML baseline**: The PFML [36] approach integrates deep mutual learning [37] into the local update process in each party to improve the performance of both the global model and the personalized local models.
**FedKD baseline**: The FedKD [25] algorithm uses ensemble distillation for robust model fusion over heterogeneous local models. To transfer knowledge, an unlabeled dataset is used to sample data for all participants to compute logits and distill knowledge. We adapt this approach to our setting. In each communication round, all parties perform ensemble distillation, in which the local models are evaluated on the aligned samples to generate the logit outputs used to train each student model.
### _Experiment Settings_
For the experiments, we randomly sample 80% of the data as the training set and use the rest for testing. We then randomly split the dataset into two local datasets according to the overlapped data ratio. For the test dataset, we assume that all the data is aligned and shared by the two parties.
We adopt DeepFM [34] as the backbone, which is designed to handle the sophisticated feature interactions behind user behaviors in recommendation tasks. For training, we follow the two-stage training process described in Section III. The model is optimized by Adam [38]. We set the learning rate for both local models to 0.001, the weight decay to 0.0001, the number of communication rounds to 5, and the batch size to 1024. The training process stops when the maximum number of communication rounds is reached. For experiments on the HetRec-MovieLens dataset, we set the temperature \(T\) to 30, the trade-off loss weights \(\beta\), \(\gamma\) to 10, and the ensemble weight \(w\) to 0.5. For experiments on the Criteo dataset, we set the temperature \(T\) to 30, the trade-off loss weights \(\beta\), \(\gamma\) to 3, and the ensemble weight \(w\) to 0.5. For all ablation settings, we run the experiments three times and report the average.
We use accuracy as the evaluation metric. The closer the accuracy is to 1, the better the prediction performance.
### _Main Results_
**Comparison with local training**. To demonstrate the effectiveness of our proposed method, we first compare it with the locally fine-tuned baselines (i.e., Local A and Local B in Table II). From Table II, we observe that 1) both local models benefit significantly from our approach, with improvements of 3.26%/2.44% on HetRec-MovieLens and 2.62%/2.14% on Criteo; 2) the joint prediction brings a further performance gain to the local models of around 0.04% and 1.61%, respectively. These results show that our approach successfully transfers knowledge between the two parties and therefore improves the performance of their local models.
**Comparison with SOTAs**. FTL-based methods [4] leverage the overlapped data to make a joint prediction. Therefore, we only compare them with the joint prediction of FedPDD. From Table II, we can see that the joint prediction of FedPDD is better than FTL by 3.98% on HetRec-MovieLens and 2.78% on Criteo. These results show that our proposed framework outperforms FTL-based methods in the situation where overlapped data is limited.
We also compare FedPDD with two knowledge distillation-based federated learning strategies, PFML [36] and FedKD [25]. From Table II, we find that the local model performance of FedPDD outperforms the PFML baseline by 2.46% and 1.92% on the two datasets on average. Besides, FedPDD outperforms FedKD by an additional 1.25% and 0.99% on the two datasets on average. This indicates that our double distillation method generates better teacher logits, from not only the ensemble of cross-party local models but also the previous local models of the same party, thereby effectively enhancing local model performance.
For joint prediction on aligned test samples, the predictions of both local models are averaged as the final result. Therefore, the performance of federated joint prediction mainly depends on the performance of local models. From Table II, we can observe that FedPDD outperforms FedKD by 1.21% and 1.27% on two datasets, respectively, and outperforms PFML by 1.60% and 1.15% on two datasets, respectively. This is reasonable as the local models trained by FedPDD achieve higher accuracy than FedKD and PFML on both datasets. Meanwhile, more accurate joint predictions can in turn transfer more knowledge to both local models.
### _Hyper-parameter Tuning_
#### IV-E1 Effect of communication round \(r\)
From Figure 3, we find that FedPDD converges after 5 rounds on the HetRec-MovieLens dataset and 7 rounds on the Criteo dataset, which demonstrates that FedPDD requires only a few rounds of communication between the two parties during training.
#### IV-E2 Effect of overlapped data ratio \(\alpha\)
As mentioned previously, the major challenge of the multi-view federated learning problem is that the overlapped data is often limited. We adjust the overlapped data ratio \(\alpha\) from \(0.1\) to \(0.01\) to test the effectiveness of FedPDD. The experimental results are shown in Figure 4. We observe that when \(\alpha\) decreases to 0.01, the performance of the FTL approach drops significantly by 13.42%, while FedPDD still remains above 0.8 on the HetRec-MovieLens dataset. On the Criteo dataset, FedPDD remains almost the same while the FTL baseline drops by around 5.20% as \(\alpha\) decreases. These results demonstrate the effectiveness of FedPDD in our setting.
#### IV-E3 Effect of temperature \(T\)
In knowledge distillation, the temperature is used to soften the probability output, leading the student to pay more attention to logits with small values [20]. In this section, we conduct experiments to determine the influence of the temperature on our model. We let \(T=T_{SD}=T_{ED}\). In Table II, we set the temperature to 30 to highlight the best performance of FedPDD on the two benchmark datasets. Here we tune the temperature \(T\) over a large range from 1 to 50 in the online scheme. The results in Tables IV and V show that a careful selection of the temperature can bring a small performance enhancement to the local models.
#### IV-E4 Effect of differential privacy budget \(\epsilon\)
In Figure 5, we demonstrate the effect of differential privacy on the HetRec-MovieLens dataset. We vary \(\epsilon\) from 0.05 to 10 to explore the change in local model performance and federated model performance. It can be observed that the accuracy drops by only 1.53% for the local models of FedPDD and by 1.30% for the joint prediction of FedPDD. The performance of the local models exceeds the locally trained baselines when \(\epsilon>0.05\).
## V Conclusion
In this paper, we propose FedPDD, a novel cross-silo federated recommendation framework. We design a double distillation strategy that leverages knowledge not only from the ensemble of local models but also from previous local models to efficiently improve model performance. Besides, FedPDD largely reduces communication needs and privacy leakage risk by utilizing an offline training strategy and transmitting only model outputs during training. Additionally, differential privacy is introduced to protect the communication process with a higher level of privacy protection. Experimental
Fig. 4: The comparison between FedPDD and FTL baseline when the overlapped data ratio \(\alpha\) decreases
Fig. 5: The impact of DP parameter \(\epsilon\) on model performance on HetRec-MovieLens dataset.
Fig. 3: The relationship between communication round \(r\) and performance of FedPDD during training
results demonstrate that our approach can effectively exploit both implicit and explicit knowledge and thereby enhance the performance of both the local and the joint prediction tasks. Moreover, our framework can also be adopted to learn and predict financial risks on various internet finance platforms with heterogeneous information features and strong privacy-preserving needs.
## VI Acknowledgment
The authors gratefully acknowledge funding from Guangdong Province Focus Research Project (Grant Number: 2019KZDZX2014), Guangdong Province Research Fund (Grant Number: 2019QN01X277), National Natural Science Foundation of China (Grant Numbers: 71971106, 72001099), and Shenzhen Humanities & Social Sciences Key Research Bases. We would like to show our gratitude to Guangheng Hu, Ce Ju, Ben Tan and Prof. Qiang Yang for their advice on the earlier manuscript and we thank all the reviewers for valuable comments.
|
2307.10269 | Probing quantum chaos with the entropy of decoherent histories | Quantum chaos, a phenomenon that began to be studied in the last century,
still does not have a rigorous understanding. By virtue of the correspondence
principle, the properties of the system that lead to chaotic dynamics at the
classical level must also be present in the underlying quantum system. In the
classical case, the exponential divergence of nearby trajectories in time is
described in terms of the Lyapunov exponent. However, in the quantum case, a
similar description of chaos is, strictly speaking, impossible due to absence
of trajectories. There are different approaches to remedy this situation, but
the universal criterion of quantum chaos is absent. We propose the quantum
chaos definition in the manner similar to the classical one using decoherent
histories as a quantum analogue of trajectories. For this purpose, we consider
the model of an open quantum kicked top interacting with the environment, which
is a bosonic bath, and illustrate this idea. Here, the environment plays the
role of a trajectory recording device. For the kicked top model at the
classical level, depending on the kick strength, crossover occurs between the
integrable and chaotic regimes. We show that for such a model, the production
of entropy of decoherent histories is radically different in integrable and
chaotic regimes. Thus, the entropy of an ensemble of quantum trajectories can
be used as a signature of quantum chaos. | Evgeny Polyakov, Nataliya Arefyeva | 2023-07-17T21:57:05Z | http://arxiv.org/abs/2307.10269v3 | # Probing quantum chaos with the entropy of decoherent histories
###### Abstract
Quantum chaos, a phenomenon that began to be studied in the last century, still does not have a rigorous understanding. By virtue of the correspondence principle, the properties of the system that lead to chaotic dynamics at the classical level must also be present in the underlying quantum system. In the classical case, the exponential divergence of nearby trajectories in time is described in terms of the Lyapunov exponent. However, in the quantum case, a similar description of chaos is, strictly speaking, impossible due to the absence of trajectories. There are different approaches to remedy this situation, but a universal criterion of quantum chaos is absent. We propose a definition of quantum chaos in a manner similar to the classical one, using decoherent histories as a quantum analogue of trajectories. For this purpose, we consider the model of an open quantum kicked top interacting with an environment, which is a bosonic bath, and illustrate this idea. Here, the environment plays the role of a trajectory recording device. For the kicked top model at the classical level, depending on the kick strength, a crossover occurs between the integrable and chaotic regimes. We show that for such a model, the production of entropy of decoherent histories is radically different in the integrable and chaotic regimes. Thus, the entropy of an ensemble of quantum trajectories can be used as a signature of quantum chaos.
## I Introduction
Chaotic behavior plays a significant role in various fields of science (for example, it underlies classical thermodynamics [1; 2; 3] and hydrodynamics [4]). In classical systems, chaos is characterized by the exponential sensitivity of the evolution of the system in time to the initial conditions, but in quantum mechanics it is not possible to characterize chaos in the same way, since the concept of phase space trajectories loses its meaning due to the Heisenberg uncertainty principle. There are different approaches to the definition of quantum chaos: through the statistics of energy levels [5; 6; 7; 8]; spectral form factors [8]; the Loschmidt echo [14]; out-of-time-ordered correlators (OTOC) [15; 16; 17]; in the context of quantum modeling, through fidelity decay [13]; and others. However, the true understanding of the nature of quantum chaos and the limits of applicability of its various diagnostics, as well as the possible connections between them, is the subject of ongoing research, both theoretical and experimental. Until now, it has not been possible to present a universal criterion for determining quantum chaos or to rigorously understand this phenomenon. The methods of diagnosing quantum chaos have their drawbacks. For example, level statistics are poorly defined for small systems, and there are specific examples for which they fail [10]; OTOC does not work for billiard systems, where it cannot distinguish integrable behavior from chaotic behavior [17]. Thus, the search for universal criteria of quantum chaos for classically chaotic systems, as well as for an understanding of the nature of this phenomenon, is well motivated.
Interest in quantum chaos is driven by its wide application in explaining fundamental problems, such as the thermalization mechanism in isolated systems, for which the eigenstates of quantum chaotic systems play a significant role [9; 18; 19]; quantum information scrambling [16]; and, in relation to open quantum systems, the influence of chaos on the processes of decoherence and dissipation [22; 23; 24; 25; 26]. At present, there are various experimental realizations of chaotic behavior, for example, in spin chains implemented with cold atoms [27] or spin chains on surfaces [20].
In this work, we rely on the idea of Berry [21] that the environment is important for the emergence of quantum chaos. Quantum decoherence, which occurs in non-isolated systems, inhibits the quantum suppression of chaos (the suppression arises because quantum systems have discrete, quantized energy levels that control the evolution of dynamical quantities, so this evolution cannot be truly chaotic). Thanks to the environment, it is possible to introduce the concept of quantum trajectories of the system as a record that is stored in certain degrees of freedom of the environment [41].
We consider a model where a quantum environment is coupled to the open quantum system (OQS) and, in some of its degrees of freedom, records how the system behaved as it evolved over time. This is similar in spirit to the decoherent histories (also known as consistent histories) approach [29; 30; 36]. Therefore, we call the recorded information about the OQS a decoherent history.
To correctly determine the decoherent histories, it is necessary to identify the degrees of freedom that carry information about how the OQS moved in the past. The formalism developed in this paper consists of several stages. First, the environment's degrees of freedom (hereafter called modes) that can carry information about the OQS are determined. There are infinitely
many degrees of freedom in the environment, but only those that have significantly interacted with the OQS can carry useful information. To identify them, it is convenient to introduce the Lieb-Robinson light cone formalism [34], which describes the propagation of perturbations. The effectively interacting degrees of freedom then lie inside the light cone. Second, among these degrees of freedom, the irreversibly decoupled ones are determined, since the trajectory record should not change at future times and should not depend on the future evolution of the OQS; in other words, they must carry away information about the OQS and stop interacting. Knowing these degrees of freedom, we can measure them one after another, and the sequence of measurement results is a quantum trajectory (decoherent history).
The analogue of the trajectory appears because the system interacts with the environment (Fig. 1). The formation of quantum trajectories corresponds to the emergence of decoherent histories in the environment (Susskind, 1998).
The approach used in this work is based on the method of (Susskind, 1998), which allows one to model the dynamics of an OQS beyond the limits of applicability of the Markov approximation (Gardiner, 1993). In this work, this approach is adapted, and the modes of the environment that contain information about the motion of the OQS are microscopically derived. With the help of this, the concept of decoherent histories is constructed and the entropy of the ensemble of quantum trajectories (Gardiner, 1993) is calculated. It is reasonable to assume that the entropy of the ensemble of these quantum trajectories will be radically different in the integrable and chaotic regimes, which is demonstrated in this paper.
The paper is structured as follows. Sections II and III introduce the model under consideration. In Section IV we explain our treatment of the decoherent histories approach. Section V describes a method for deriving the degrees of freedom of the environment that contain information about the motion of the OQS. In Section VI we construct quantum trajectories (decoherent histories) and calculate the entropy of an ensemble of such trajectories. In Section VII we present our results. We conclude in Section VIII.
## II The considered chaotic system
We consider the model of a quantum kicked top as the OQS, which at the classical level exhibits chaotic behavior for certain values of the kick strength \(K\) (hereinafter, natural units with \(\hbar=1\) are used everywhere). This model is well studied in the context of quantum chaos:
\[\widehat{H}_{S}=\frac{p}{\tau}\widehat{J}_{y}+\frac{K}{2j}\left(\widehat{J}_{ z}-\beta\right)^{2}\sum_{n=-\infty}^{\infty}\delta(t-n\tau) \tag{1}\]
The system is characterized by the angular momentum \(\vec{J}=(J_{x},J_{y},J_{z})\) with the corresponding commutators \([J_{i},J_{j}]=i\epsilon_{ijk}J_{k}\) (\(i,j,k\) run over \(x,y,z\)). The classical limit is reached by taking \(j\rightarrow\infty\), \(\hbar\to 0\) while keeping \(\hbar j\) fixed. The first term is responsible for the precession around the \(y\) axis with angular frequency \(\frac{p}{\tau}\); the second one describes the periodic sequence of kicks separated by the time interval \(\tau\).
By changing \(K\), the motion of the system changes from integrable to chaotic. Fig. 2 shows the level spacing distributions for different values of the kick strength \(K\).
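The crossover of Fig. 2 can be reproduced from the one-period Floquet operator of eq. (1). Below is a minimal numpy/scipy sketch (with \(\tau=1\) and \(p=\pi/2\) assumed; for clean statistics one should additionally separate the symmetry classes of the top, which we omit here for brevity):

```python
import numpy as np
from scipy.linalg import expm

def spin_ops(s):
    """Spin-s matrices J_x, J_y, J_z (hbar = 1) in the basis |s>,...,|-s>."""
    m = np.arange(s, -s - 1, -1)
    cp = np.sqrt(s * (s + 1) - m[1:] * (m[1:] + 1))   # <m|J+|m-1> elements
    Jp = np.diag(cp, k=1).astype(complex)
    return (Jp + Jp.conj().T) / 2, (Jp - Jp.conj().T) / 2j, np.diag(m).astype(complex)

def kicked_top_spacings(s=100, K=3.0, p=np.pi / 2, beta=0.0):
    """Quasi-energy spacings of the Floquet operator of eq. (1), tau = 1."""
    Jx, Jy, Jz = spin_ops(s)
    A = Jz - beta * np.eye(int(2 * s + 1))
    U = expm(-1j * K / (2 * s) * A @ A) @ expm(-1j * p * Jy)
    phases = np.sort(np.angle(np.linalg.eigvals(U)))
    spacings = np.diff(phases)
    return spacings / spacings.mean()   # histogram these to compare with Fig. 2
```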
Figure 1: a) Classical Lyapunov exponent through classical trajectories; b) Incident field is scattered by particle. Quantum analogue of trajectory is encoded in the scattered field, which can be modeled by coupling target particle to quantum environment through environment’s operator \(\widehat{a}(t)\).
A physical implementation of this model is provided by a system of interacting spins [33].
## III Open chaotic quantum system
Our main idea is to introduce trajectories in the quantum case in order to obtain a way to diagnose quantum chaos. To do this, it is necessary to couple an environment to the chaotic system under consideration (in this work, the quantum kicked top of Section II). The role of the environment is played by a bosonic bath.
The complete Hamiltonian of the system is as follows:
\[\widehat{H}=\widehat{H}_{S}+\widehat{H}_{E}+\widehat{H}_{int} \tag{2}\]
where \(\widehat{H}_{S}\), \(\widehat{H}_{E}\), \(\widehat{H}_{int}\) are the free Hamiltonians of the OQS (1) and the environment and the Hamiltonian of the interaction between them, respectively:
\[\widehat{H}_{E}=\int\limits_{0}^{\infty}\omega\widehat{a}^{+}(\omega)\widehat{ a}(\omega)d\omega \tag{3}\]
\[\widehat{H}_{int}=\widehat{J}_{y}(\widehat{a}^{+}+\widehat{a}),\quad\widehat{ a}=\int\limits_{0}^{\infty}c(\omega)\widehat{a}(\omega)d\omega \tag{4}\]
where \(\widehat{a}^{+}(\omega)\), \(\widehat{a}(\omega)\) are the bosonic creation and annihilation operators of the environment, with \([\widehat{a}(\omega),\widehat{a}^{+}(\tilde{\omega})]=\delta(\omega-\tilde{\omega})\), and \(c(\omega)\) is the coupling. Such an interaction means that the environment records the trajectory of the \(y\)-component of the angular momentum of the kicked top.
Fig. 3 shows the behavior of the quantum kicked top in the cases of integrable and chaotic motion. In the following sections we describe how these results were obtained.
In the interaction picture with respect to the free bosonic environment:
\[\widehat{H}(t)=\widehat{H}_{S}(t)+\widehat{J}_{y}(\widehat{a}^{+}(t)+\widehat {a}(t)) \tag{5}\]
\[\widehat{a}(t)=\int\limits_{0}^{\infty}c(\omega)\widehat{a}(\omega)e^{-i \omega t}d\omega\]
In our work, it is convenient to use the equivalent chain representation of the environment [11]. This is necessary in order to introduce the concept of the Lieb-Robinson light cone [34]. For a sufficiently wide class of spectral densities, there exists a unitary operator \(U\) that takes the system into the chain representation [11]. Using this unitary operator, the environment is represented as a chain where only neighboring modes interact:
\[a_{n}^{+}=\int\limits_{0}^{\infty}U_{n}(\omega)\widehat{a}^{+}(\omega)d\omega \tag{6}\]
\[\widehat{H}(t)=\widehat{H}_{S}(t)+\widehat{J}_{y}h(\widehat{a_{0} }^{+}+\widehat{a_{0}})+\\ +\sum\limits_{n=0}^{\infty}\left(\epsilon_{n}\widehat{a}_{n}^{+} \widehat{a}_{n}+h_{n}\widehat{a}_{n+1}^{+}\widehat{a}_{n}+h_{n}\widehat{a}_{n }^{+}\widehat{a}_{n+1}\right) \tag{7}\]
with commutator \([a_{i},a_{j}^{+}]=\delta_{ij}\). Knowing the spectral density, the coefficients \(\epsilon_{n}\), \(h_{n}\), and \(h\) can be calculated by recurrence relations using orthogonal polynomials [11].
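As an illustration, a numerically equivalent route to these recurrences is a Lanczos tridiagonalization of a discretized bath. The sketch below is our own helper (real-valued couplings assumed), taking frequencies `omegas` and couplings `c` on a discretization grid and returning the overall coupling \(h\) and the chain parameters \(\epsilon_{n}\), \(h_{n}\) of eq. (7):

```python
import numpy as np

def chain_coefficients(omegas, c, n_modes):
    """Lanczos recursion starting from the coupled mode a ~ sum_w c(w) a(w)."""
    h = np.linalg.norm(c)
    v_prev, v = np.zeros_like(c), c / h    # normalized chain mode a_0
    eps, hop = [], []
    for n in range(n_modes):
        w = omegas * v                     # H_E is diagonal in the frequency basis
        eps.append(v @ w)
        w = w - eps[-1] * v - (hop[-1] * v_prev if hop else 0.0)
        if n < n_modes - 1:
            hop.append(np.linalg.norm(w))
            v_prev, v = v, w / hop[-1]
    return h, np.array(eps), np.array(hop)
```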
In the interaction picture with respect to the free bosonic environment in the chain representation, we obtain:
\[\widehat{H}(t)=\widehat{H}_{S}(t)+\widehat{J}_{y}h(\widehat{a_{0}}^{+}(t)+ \widehat{a_{0}}(t)) \tag{8}\]
## IV Quantum environment as the decoder of OQS trajectories
The main idea of this work is that we consider the environment as a recording device that stores information about the motion of the OQS in some of its degrees of freedom. Thus, in the environment there is a sequence of projection operators corresponding to these records (facts) about how the OQS moved. The definition of trajectories we introduce is related to the decoherent histories approach [29; 30; 36].
Figure 2: Crossover between integrable (Poisson statistics) and chaotic (Wigner-Dyson statistics) motion; for the left statistics \(K=2\), for right \(K=3\).
The consistent histories, also known as decoherent histories (DH), formalism was introduced by Griffiths, Omnès, Gell-Mann, and Hartle. This formalism is an interpretation of quantum mechanics that allows one to resolve, or tame, the main quantum paradoxes. The DH approach is based on the assumption of the probabilistic nature of quantum time dependence [36].
A "history" is a set of events or prepositions, represented by projection operators \(P^{1}_{\alpha_{1}},...,P^{n}_{\alpha_{n}}\) at a succession of times \(t_{1},...,t_{n}\) time-ordered with unitary evolution between each projection [29; 38]:
\[C_{\alpha_{1},...,\alpha_{n}}=P^{n}_{\alpha_{n}}(t_{n})P^{n-1}_{\alpha_{n-1}}( t_{n-1})...P^{1}_{\alpha_{1}}(t_{1}) \tag{9}\]
where
\[P^{i}_{\alpha_{i}}(t_{i})=e^{\frac{i}{\hbar}H_{E}(t_{i}-t_{i-1})}P^{i}_{ \alpha_{i}}e^{-\frac{i}{\hbar}H_{E}(t_{i}-t_{i-1})} \tag{10}\]
Here we present this approach in relation to a free bath, in contrast to the original approach formulated for the entire isolated system. We consider a bipartite system, an OQS and a bosonic bath, with density matrix \(\rho=|\Psi\rangle\langle\Psi|\) (\(|\Psi\rangle\) is the wave function of the OQS plus bath) and develop the decoherent histories approach only for the free bath.
The probability of a history is given by [38]:
\[p(\alpha_{1},...,\alpha_{n})=Tr(C_{\alpha_{1},...,\alpha_{n}}\,\rho\,C^{+}_{ \alpha_{1},...,\alpha_{n}}) \tag{11}\]
For consistency, the following conditions are necessary: (i) the sum of the probabilities of the histories is unity, \(\sum_{\alpha_{i}}P^{i}_{\alpha_{i}}=I\); (ii) two distinct histories are mutually orthogonal, \(P^{i}_{\alpha_{i}}P^{j}_{\beta_{j}}=\delta_{\alpha_{i}\beta_{j}}P^{i}_{\alpha_{i}}\). The generalization of this condition is:
\[Tr(\widehat{C}_{\alpha_{1},...,\alpha_{n}}\rho\,\widehat{C}^{+}_{\beta_{1},...,\beta_{n}})=0\,,\quad\text{for }\alpha\neq\beta \tag{12}\]
In practice, it turns out that this condition is, strictly speaking, impossible to achieve. However, it can be approximated arbitrarily well with respect to a given level of significance. Thus, we arrive at the condition of weak consistency:
\[Tr(\widehat{C}_{\alpha_{1},...,\alpha_{n}}\rho\,\widehat{C}^{+}_{\beta_{1},...,\beta_{n}})\approx 0\,,\quad\text{for }\alpha\neq\beta \tag{13}\]
Figure 3: Mean value of \(J_{y}\) versus time along the one trajectory. Top images are for regular motion \(K=1\); lower images, for \(K=-10\); the right plots enlarge the left ones. Initial condition \(|\Psi(0)\rangle=|J_{y}=0\rangle\otimes|0\rangle_{E}\), \(|0\rangle_{E}\) is vacuum state of the environment. The images are obtained with the following parameters for the environment: \(\epsilon_{n}=1\), \(h_{n}=0.2\), \(h=0.05\).
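A toy example makes these conditions concrete. For a single qubit with histories recording \(\sigma_{z}\) at two times and free evolution in between, the decoherence functional of eqs. (11)-(12) can be evaluated directly; all numerical values below are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
P = [np.diag([1.0, 0.0]).astype(complex),        # projector onto |0>
     np.diag([0.0, 1.0]).astype(complex)]        # projector onto |1>
U = expm(-1j * 0.7 * sx)                         # evolution between t_1 and t_2
rho = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)   # initial state |+><+|

def D(alpha, beta):
    """Decoherence functional Tr(C_alpha rho C_beta^+)."""
    C_a = P[alpha[1]] @ U @ P[alpha[0]]
    C_b = P[beta[1]] @ U @ P[beta[0]]
    return np.trace(C_a @ rho @ C_b.conj().T)

histories = [(i, j) for i in range(2) for j in range(2)]
print(sum(D(a, a).real for a in histories))      # probabilities of eq. (11) sum to 1
print(abs(D((0, 0), (1, 0))))                    # off-diagonal term quantifies (in)consistency
```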
In the decoherent histories approach, the question arises of how to build the projectors \(P^{i}_{\alpha_{i}}\) and which observables and time moments to consider. There is some arbitrariness in this choice [39], and constructing the projectors is a difficult problem; recently, it was proposed to search for them on a quantum computer [35]. Our approach resolves this naturally. On the one hand, we have a physical model in which the scattered field carries away information about the motion of the OQS; on the other hand, we propose a formal procedure for finding these projectors. The projectors must match the degrees of freedom of the environment and arise naturally from the properties of the environment. In the next section we derive such degrees of freedom.
## V Environment degrees of freedom which carry the information about the trajectory
In this section we describe our procedure for deriving the environment's degrees of freedom that carry useful information about the trajectory of the OQS.
### Statistically significant interacting modes
The quantum environment is treated as a recording device. Its records can be measured and decoherent histories can be obtained. Decoherent histories can only be contained within a light cone, so it is necessary to be able to evaluate it. Below is an algorithm for computing the light cone a priori.
The light cone allows one to determine which degrees of freedom are significant and which are not. The region outside the light cone consists of degrees of freedom that will only be significantly excited at future times, or will never be excited at all. In particular, for each chain site in eq.(7), there is a point in time after which it becomes statistically significant for the evolution of the system.
In order to estimate which modes the OQS has excited, it is necessary to introduce a measure of the influence of the OQS on the considered mode. For this, it is convenient to use the commutator \([\widehat{a}_{0}(t),\widehat{a}_{j}^{+}]\), which shows whether the operator \(a_{0}(t)\) affects \(a_{j}\). If the mode \(a_{j}\) is currently interacting with the OQS, then the norm of this commutator is different from zero:
\[\|[\widehat{a}_{0}(t),\widehat{a}_{j}^{+}]\|=\sqrt{Tr\left([\widehat{a}_{0}(t ),\widehat{a}_{j}^{+}][\widehat{a}_{0}(t),\widehat{a}_{j}^{+}]^{+}\right)} \tag{14}\]
where \(\widehat{a_{0}}(t)\) is the degree of freedom with which the OQS interacts at time \(t\). The operator \(\widehat{a_{0}}(t)\) in the interaction picture can be expressed in terms of the original chain operators, according to:
\[\widehat{a}_{0}(t)=\sum_{k=0}^{\infty}\phi_{k}(t)\widehat{a}_{k} \tag{15}\]
where \(\phi_{k}(t)\) is a one-particle wave function, which satisfies the following first-quantized Schrödinger equation with the initial condition corresponding to the interaction quench at time \(t=0\):
\[\begin{cases}\partial_{t}\phi_{k}(t)=\frac{1}{i}\widehat{H}_{1}\phi_{k}(t)\\ \phi_{k}(0)=\delta_{k0}\end{cases} \tag{16}\]
where
\[H_{1}=-\left(\begin{array}{ccccc}\epsilon_{0}&h_{0}&0&\ldots&\ldots\\ h_{0}&\epsilon_{1}&h_{1}&0&\ldots\\ 0&h_{1}&\epsilon_{2}&h_{2}&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&\ldots&0&h_{m(t)-1}&\epsilon_{m(t)}\end{array}\right) \tag{17}\]
Here \(m(t)\) is the number of environmental degrees of freedom (hereafter called modes) that have been excited due to co-evolution with the OQS over time \(t\). The perturbation propagates along the Lieb-Robinson light cone [34] from the zeroth site \(a_{0}\), to which the OQS is coupled. Fig. 4 shows the spread of the operator \(\widehat{a}_{0}(t)\) over the sites of the chain.
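The light-cone picture of Fig. 4 can be reproduced by integrating eq. (16) for a uniform chain. Below is a minimal sketch using the chain parameters quoted in the caption of Fig. 3 (\(\epsilon_{n}=1\), \(h_{n}=0.2\)); the grid sizes are illustrative:

```python
import numpy as np
from scipy.linalg import expm

def propagate_phi(n_sites=200, eps=1.0, h=0.2, t_max=100.0, n_steps=400):
    """phi_k(t) solving eq. (16) with the tridiagonal H_1 of eq. (17)."""
    H1 = -(np.diag(np.full(n_sites, eps))
           + np.diag(np.full(n_sites - 1, h), 1)
           + np.diag(np.full(n_sites - 1, h), -1))
    dt = t_max / n_steps
    U_dt = expm(-1j * H1 * dt)          # phi(t + dt) = exp(-i H_1 dt) phi(t)
    phi = np.zeros(n_sites, dtype=complex)
    phi[0] = 1.0                        # interaction quench: phi_k(0) = delta_k0
    history = [phi.copy()]
    for _ in range(n_steps):
        phi = U_dt @ phi
        history.append(phi.copy())
    return dt, np.array(history)        # plot |history| to reproduce Fig. 4
```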
For the simplest case of a linear environment, the commutator (14) is a c-number, since \([\widehat{a}_{i},\widehat{a}_{j}^{+}]=\delta_{ij}\). Thus, instead of the trace of the product of the commutators, one can calculate their vacuum average:
\[C_{j}(t)=\begin{cases}Tr\left([\widehat{a}_{0}(t),\widehat{a}_{j}^{+}][ \widehat{a}_{0}(t),\widehat{a}_{j}^{+}]^{+}\right)\\ \langle 0|[\widehat{a}_{0}(t),\widehat{a}_{j}^{+}][\widehat{a}_{0}(t),\widehat {a}_{j}^{+}]^{+}|0\rangle\end{cases}=\\ =M_{j}(t)\begin{cases}Tr1\\ \langle 0|0\rangle\end{cases} \tag{18}\]
This function is an OTOC [16]. The condition \(C_{j}(t)>0\) means that the mode \(\phi_{j}(t)\) is coupled to the OQS at the given time. If \(C_{j}(t)\) is negligible, then the excitation of this mode due to the OQS is also negligible.
Figure 4: Wave function \(\phi_{k}(t)\) propagating the interaction operator \(a_{0}(t)\) over time. The color matches \(|\phi_{k}(t)|\). It can be seen that the perturbation propagates along the light cone.
The light cone is determined not by the instantaneous intensity of the mode interaction, but rather by the average intensity of the mode interaction over the time interval from \(0\) to \(t\). During the time \(t\), only those modes enter the light cone that interact significantly on average over the entire interval. Therefore, it is necessary to consider only statistically significant interactions during the chosen time interval and to eliminate sudden short-term excitations of environmental modes, which make a negligible contribution. Thus, the condition is given by the OTOC averaged over time:
\[\langle C_{j}^{+}(t)\rangle=\int\limits_{0}^{t}C_{j}(\tau)d\tau \tag{19}\]
We then consider the modes that are effectively coupled to the OQS and influence their joint evolution.
### Records may be nonlocal
Information about the OQS is not necessarily carried by the local chain degrees of freedom \(a_{j}\); it can be carried by their arbitrary linear combinations, so it is necessary to take into account the statistical significance of such linear combinations (nonlocal with respect to the chain) and to determine whether they fall inside the light cone. The average statistical significance of the state \(\chi=\sum_{k=0}^{\infty}\chi_{k}|k\rangle\) (\(|k\rangle\) is the state localized at chain site \(k\)) with the corresponding creation and annihilation operators \(\widehat{\chi}^{+}=\sum_{k=0}^{\infty}\chi_{k}\widehat{a}_{k}^{+}\) is therefore:
\[\int\limits_{0}^{t}\langle 0|[\widehat{a}_{0}(\tau),\sum_{j} \chi_{j}\widehat{a}_{j}^{+}][\widehat{a}_{0}(\tau),\sum_{l}\chi_{l}\widehat{a }_{l}^{+}]^{+}|0\rangle d\tau=\] \[=\langle\chi|\int\limits_{0}^{t}|\phi(\tau)\rangle\langle\phi( \tau)|d\tau|\chi\rangle=\langle\chi|\rho_{+}(t)|\chi\rangle \tag{20}\]
with
\[\rho_{+}(t)=\int\limits_{0}^{t}|\phi(\tau)\rangle\langle\phi(\tau)|d\tau \tag{21}\]
We introduce a metric that determines whether the contribution of the \(\chi\) state is significant or not:
\[g_{+}(\chi,t)=\langle\chi|\rho_{+}(t)|\chi\rangle-a_{cut} \tag{22}\]
if \(g_{+}(\chi,t)<0\), then the contribution of this mode can be neglected at the threshold \(a_{cut}\). Modes lying inside the light cone, that is, satisfying the condition \(g_{+}(\chi,t)>0\), contain information about the OQS (the kicked top). Fig. 5 shows the modes (chain sites) coupled to the OQS over time, determined according to eq. (22).
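A sketch of the corresponding computation (continuing the `propagate_phi` helper above; the trapezoidal accumulation and the threshold value are our own choices):

```python
import numpy as np

def coupled_mode_metric(dt, history, a_cut=1e-3):
    """Accumulate rho_+(t) of eq. (21) and evaluate g_+ of eq. (22)
    for the local chain modes chi = |k>."""
    n_t, n_sites = history.shape
    rho_plus = np.zeros((n_sites, n_sites), dtype=complex)
    g_local = np.empty((n_t - 1, n_sites))
    for i in range(1, n_t):
        rho_plus += 0.5 * dt * (np.outer(history[i - 1], history[i - 1].conj())
                                + np.outer(history[i], history[i].conj()))
        g_local[i - 1] = np.diag(rho_plus).real - a_cut
        # for a nonlocal mode chi: g = (chi.conj() @ rho_plus @ chi).real - a_cut
    return rho_plus, g_local   # sites with g_local > 0 lie inside the cone
```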
There is a drawback to the light cone defined in the chain basis, namely that these degrees of freedom of the environment are not statistically independent. By analogy with the Kotelnikov theorem, truly independent degrees of freedom appear in a basis where the propagation speed of the light cone is minimal, and the intervals between the times of appearance of the modes are set by the width of the spectral density of the environment, which plays the role of the bandwidth of the environment as a recording device [12]. Since the Lieb-Robinson metric (22) is defined for an arbitrary mode, we can turn to the basis where the light cone propagates with the minimum speed, which we call the minimal light cone. Further, unless otherwise stated, we will work with it. The information about the OQS is recorded in nonlocal degrees of freedom of the environment.
A detailed algorithm for obtaining the minimal light cone is derived in the work [12]. We denote the modes coupled to the OQS during the time interval \([0,T]\) as \(\kappa_{1}^{in},...,\kappa_{m_{in}(T)}^{in}\) and the discrete times of their appearance as \(t_{1}^{in},...,t_{m_{in}(T)}^{in}\).
The total joint state of the quantum kicked top and bosonic bath \(|\Psi(t)\rangle\) effectively evolves with Hamiltonian:
\[\widehat{H}_{eff}(t)=\widehat{H}_{S}(t)\ +\] \[+\sum_{l=1}^{m_{in}(t)}\left\{\widehat{J}_{y}\langle\phi(t)| \kappa_{l}^{in}\rangle\widehat{\kappa}_{l}^{in}+\widehat{J}_{y}\langle\phi(t)| \kappa_{l}^{in}\rangle^{*}\widehat{\kappa}_{l}^{in+}\right\} \tag{23}\]
Entanglement between degrees of freedom is neglected when their statistical significance is below a threshold.
### Irreversibly decoupled modes - stable records
Records carrying information about the OQS must be stable facts, so it is necessary to consider modes that are irreversibly decoupled from the OQS.
Figure 5: The chain sites coupled to the OQS, depending on time, form a forward light cone. Coupled modes are defined according to eq.(22).
Two different cases of outgoing (decoupled) modes are possible: (a) modes that have never interacted with the OQS; (b) modes that interacted with the OQS and then irreversibly decoupled from it. Modes of type (a) do not contain any information about the OQS, and we discard them from consideration. However, it is necessary to track the evolution of modes of type (b). A mode decoupled from the OQS at time \(t_{l}^{out}\) must be a linear combination of \(\kappa_{1}^{in}\), \(\kappa_{2}^{in}\),..., \(\kappa_{m_{in}(t_{l}^{out})}^{in}\), i.e., it must lie in the subspace of modes coupled to the OQS over the time interval \([0,t_{l}^{out}]\). It is these modes that store information about the trajectory of the OQS.
By analogy with Eq. (20), for outgoing modes the measure of statistical significance at time \(t\), which determines whether a mode has decoupled from the OQS, is
\[\langle C^{-}(t,\chi)\rangle=\langle\chi|\int\limits_{t}^{T}|\phi(\tau) \rangle\langle\phi(\tau)|d\tau|\chi\rangle=\langle\chi|\rho_{-}(t)|\chi\rangle \tag{24}\]
with
\[\rho_{-}(t)=\int\limits_{t}^{T}|\phi(\tau)\rangle\langle\phi(\tau)|d\tau \tag{25}\]
A mode can be considered irreversibly decoupled if the OTOC averaged over future times is negligible.
We are interested in modes that carry information about the trajectory of the OQS, so the condition of lack of statistical significance for them is:
\[g_{-}(\kappa^{in},t)=\langle\kappa^{in}|\rho_{-}(t)|\kappa^{in}\rangle-a_{ cut}<0 \tag{26}\]
These modes can be found, by analogy with the search for coupled modes in the minimal light cone, by a unitary rotation of the basis of coupled modes \(\kappa_{1}^{in},...,\kappa_{m_{in}(T)}^{in}\). We denote the irreversibly decoupled modes over the time interval \([0,T]\) as \(\kappa_{1}^{out},...,\kappa_{m_{out}(T)}^{out}\) and the discrete times of their decoupling as \(t_{1}^{out},...,t_{m_{out}(T)}^{out}\). The information about the OQS is contained in the irreversibly decoupled modes that previously interacted with it. For more details see [12].
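A minimal sketch of this construction is given below. As an illustration it replaces the unitary-rotation procedure of [12] with a direct diagonalization of \(\rho_{-}(t)\) restricted to the coupled-mode subspace; the columns of `K_in` are assumed to hold the modes \(\kappa^{in}\) in the chain basis:

```python
import numpy as np

# Candidate irreversibly decoupled modes at time t, cf. Eq. (26): restrict
# rho_-(t) to span(kappa^in), diagonalize, and keep eigenmodes whose
# significance lies below the threshold a_cut.
def decoupled_modes(rho_minus, K_in, a_cut):
    proj = K_in.conj().T @ rho_minus @ K_in  # restriction to coupled subspace
    w, V = np.linalg.eigh(proj)              # eigenvalues = significances
    return K_in @ V[:, w < a_cut]            # columns: kappa^out candidates
```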
### Relevant modes
By the time \(t_{k}^{out}\), when the \(k\)-th mode \(\kappa_{k}^{out}\) has decoupled, \(m_{in}(t_{k}^{out})-1\) modes remain coupled. We call these modes relevant modes, because they are statistically significant for the future evolution. At time \(t_{k}^{out}\) there are \(k-1\) irreversibly decoupled modes and \(m_{in}(t_{k}^{out})\) coupled modes; their difference gives the number of relevant modes \(r(t_{k}^{out})\):
\[r(t_{k}^{out})=m_{in}(t_{k}^{out})-k+1 \tag{27}\]
The total system state \(|\Psi(t)\rangle\) evolves with the relevant modes over the time interval \([t_{k},t_{k}^{out}]\), where \(t_{k}\) is the time at which the previous mode coupled to or decoupled from the OQS.
Fig. 6 shows the number of coupled modes, irreversibly decoupled modes, and relevant modes over time. It can be seen that the number of relevant modes saturates and practically does not change during the evolution of the system.
### Relation to decoherent histories approach
The problem with the decoherent histories approach is the inability to achieve the consistency condition (12). Our approach suggests an effective solution to this problem. Since the records are contained in irreversibly decoupled degrees of freedom, and it is over them that the projectors are taken, the corresponding terms drop out of the Hamiltonian. Thus, firstly, the sum of the probabilities is always one (i), and secondly, the projectors are always orthogonal (ii).
In fact, the problem of decoherent histories lies in failing to take into account that after the interaction quench the OQS is renormalized (by analogy with the electron in high-energy physics). In this paper, we solve this problem by assuming that the renormalized OQS consists of the bare OQS and the relevant modes with which it interacts significantly at future times. The irreversibly decoupled modes (the stable records) are the degrees of freedom that actually contain information (facts) and can be measured. In this case, a weak consistency is obtained, which converges exponentially quickly in the number of relevant modes.
## VI Simulating decoherent histories
Knowing the degrees of freedom in which the environment records information about the trajectory of the kicked top, one can measure them. The measurement statistics give an ensemble of quantum trajectories -- decoherent histories.
Figure 6: The number of modes in the system over time, \(m_{in}(t)\), \(m_{out}(t)\), \(r(t)\) — coupled, irreversibly decoupled and relevant modes, respectively.
Before \(t=t_{k}^{out}\) the mode \(\kappa_{k}^{out}\) was coupled to the OQS. It was entangled with the OQS, as expressed by the Schmidt decomposition:
\[|\Psi(t_{k}^{out})\rangle=\] \[=\sum_{q}c_{q}(k)|\Psi_{coll}^{(q)}(t_{k}^{out})\rangle_{rel}\otimes |\Psi_{J}^{(q)}(t_{k}^{out})\rangle_{\kappa_{k}^{out}} \tag{28}\]
where the index \(rel\) denotes the OQS together with the relevant modes, \(\kappa_{k}^{out}\) refers to the newly formed irreversibly decoupled mode, and \(q\) enumerates the Schmidt basis elements for this mode.
Since the mode \(\kappa_{k}^{out}\) is irreversibly decoupled, the amplitudes \(c_{q}(k)\) do not depend on time; they are invariants. A flow of motion invariants arises: quantities that effectively cease to depend on time at the chosen significance threshold. Thus, the form (28) is invariant at all future times, and an invariant entanglement structure arises for the future evolution. This has also been confirmed numerically. The emerging invariant entanglement structure carries an ensemble of decoherent histories.
According to the von Neumann measurement model [37], one can collapse the wave function (28) and interpret the equation as the \(k\)-th quantum jump at time \(t=t_{k}^{out}\): \(|\Psi(t_{k}^{out})\rangle\rightarrow|\Psi_{coll}^{(q)}(t_{k}^{out})\rangle\) with probability \(|c_{q}(k)|^{2}\). Such quantum jumps are irreversible in time.
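A minimal sketch of this measurement step is given below; the subsystem dimensions `d_rel` and `d_out`, and the availability of the joint state as a dense vector, are assumptions of this illustration (the SVD of the reshaped state vector is precisely its Schmidt decomposition (28)):

```python
import numpy as np

# One quantum jump at t = t_k^out: Schmidt-decompose the joint state and
# collapse the relevant subsystem with Born probability |c_q(k)|^2.
def quantum_jump(psi, d_rel, d_out, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    M = psi.reshape(d_rel, d_out)
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    p = s**2 / np.sum(s**2)        # Born probabilities |c_q(k)|^2
    q = rng.choice(len(p), p=p)    # sampled branch q
    return U[:, q], q, p[q]        # collapsed relevant state, outcome, prob.
```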
By the time \(t\), \(m_{out}(t)\) modes have been irreversibly decoupled. Each decoupling is accompanied by a quantum jump, which is obtained by recurrently applying the measurement procedure:
\[|\Psi(t_{1}^{out})\rangle\rightarrow|\Psi_{coll}^{(q_{1})}(t_{1}^ {out})\rangle_{rel}\] \[|\Psi_{coll}^{(q_{1})}(t_{2}^{out})\rangle_{rel}\rightarrow|\Psi_{coll}^ {(q_{1}q_{2})}(t_{2}^{out})\rangle_{rel} \tag{29}\] \[|\Psi_{coll}^{(q_{1}q_{2})}(t_{3}^{out})\rangle_{rel}\rightarrow| \Psi_{coll}^{(q_{1}q_{2}q_{3})}(t_{3}^{out})\rangle_{rel}\]
Therefore, \(m_{out}(t)\) quantum jumps occur before time \(t\). They are characterized by the history of choices \(h=(q_{1},q_{2},\ldots,q_{k})=\{q_{k}\}_{k:t_{k}^{out}\leq t}\), appearing with probabilities:
\[P(q_{1},q_{2},\ldots,q_{k}) =\prod_{k:\,t_{k}^{out}\leq t}|c_{q_{k}}(k)|^{2} \tag{30}\]
This is the proposed definition of decoherent histories. The average of observables over all decoherent histories \(h\) up to the time \(t\) then reproduces the full many-particle quantum dynamics of the OQS, up to the significance threshold.
Thus, the environment projection operators (Eq. (9)) of the decoherent histories approach (Sec. IV) naturally appear as:
\[P_{\alpha_{k}}^{k}=Tr_{\kappa_{k}^{out}}(|\Psi(t_{k}^{out})\rangle\langle\Psi (t_{k}^{out})|) \tag{31}\]
### The entropy of an ensemble of decoherent histories
The statistical ensemble of quantum jump histories is encoded in the emerging invariant entanglement structure (28), which does not change at future times, in analogy with a tape recorder that records data and does not alter it afterwards.
Summarizing, to observe a trajectory a measuring device is needed. By adding the environment, considered as a recording device, information about the trajectory is recorded in the stream of irreversibly decoupled degrees of freedom.
We can introduce the definition of the entropy of an ensemble of decoherent histories (30):
\[S=-\sum_{\tilde{q}=(q_{1},\ldots q_{N})}P(\tilde{q})\ln(P(\tilde{q})) \tag{32}\]
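Since the history probability (30) is a product over jumps, the entropy (32) is additive over the per-jump branch distributions, which avoids summing over the exponentially many histories. A minimal sketch, assuming the distributions \(|c_{q}(k)|^{2}\) are stored as a list of arrays:

```python
import numpy as np

# S = -sum_h P(h) ln P(h) for the product measure of Eq. (30); additivity
# reduces the sum over histories to a sum of per-jump Shannon entropies.
def history_entropy(jump_probs):
    S = 0.0
    for p in jump_probs:
        p = p[p > 0]               # drop zero-probability branches
        S -= np.sum(p * np.log(p))
    return S
```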
## VII The entropy of decoherent histories as a marker of quantum chaos
We propose that the entropy of an ensemble of decoherent histories (32) may serve as a criterion of quantum chaos. In this section we present our main results.
The entropy was calculated under the simplifying assumption of ergodicity for quantum trajectories, in the sense that averaging over all trajectories is equivalent to averaging over all choices within one sufficiently long trajectory. It is this averaging over one trajectory that was used in our work.
As soon as an irreversibly decoupled degree of freedom appeared, a quantum jump was performed for it. In Fig. 7 the probability distribution of quantum jumps \(|c_{q}|^{2}\) (over all possible choices) is presented.
Figure 7: Quantum jump probability distribution \(|c_{q}|^{2}\) in two cases, for \(K=0\) (blue curve) and for \(K=-10\) (orange curve). Seven quanta participated in the dynamics.
The figure shows that for the integrable case at \(K=0\) the probability distribution is very narrow, while for the chaotic regime \(K=-10\) the jump probability distribution is very wide.
This procedure was repeated over the entire time interval for one random realization of the choice of quantum transitions. In Fig. 8 the instantaneous entropy production is shown as a function of the quantum jump number. In the integrable and chaotic regimes, the entropy along one trajectory behaves in radically different ways.
When \(q\) jumps have already happened and the moment of the next jump has come, we can expand the wave function of the system in the Schmidt decomposition (28) and, from the previous set of significant modes, select a new set of significant modes and a mode that is irreversibly decoupled (onto which the projection is carried out):
\[|\Psi(t)\rangle=\sum_{P_{q+1}}c_{P_{q+1}}|\Psi_{coll}(t,P_{1},...,P_{q},P_{q+1} )\rangle_{rel}\otimes|\Psi_{J}(t)\rangle_{P_{q+1}}\]
At step \(q+1\), a new distribution of quantum jumps arises (a set of alternatives). The entropy increase for one jump is:
\[\Delta S=-\sum_{P_{q+1}}|c_{P_{q+1}}|^{2}\ln(|c_{P_{q+1}}|^{2}) \tag{33}\]
The average entropy production per quantum jump along one trajectory is:
\[\langle\Delta S\rangle=\frac{1}{n}\sum_{k=1}^{n}\Delta S_{k} \tag{34}\]
where \(n\) is the total number of quantum jumps. Fig. 9 shows the average entropy production per quantum jump. Its behavior changes strongly: in the integrable case there is practically no increase in entropy, while in the chaotic case the entropy increases sharply.
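A minimal sketch of Eqs. (33)-(34), again assuming the branch distribution of each jump along the trajectory is stored as an array:

```python
import numpy as np

# Instantaneous entropy production per jump (Fig. 8) and its average over
# one trajectory, Eq. (34) (the quantity plotted in Fig. 9).
def entropy_production(jump_probs):
    dS = np.array([-np.sum(p[p > 0] * np.log(p[p > 0])) for p in jump_probs])
    return dS, dS.mean()
```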
It was confirmed that in the integrable case the trajectories behave more regularly and the entropy practically does not increase, while in the chaotic case the trajectories mix strongly and the entropy grows rather sharply. Moreover, the slope of the entropy growth increases with \(j\). Thus, the entropy production along one trajectory can serve as a criterion of quantum chaos.
## VIII Conclusions
The main idea was to introduce the definition of quantum chaos by analogy with the classical definition through the divergence of nearby trajectories.
Quantum trajectories can be introduced by connecting the system to the environment. The quantum environment in this case is analogous to a recording device. The role of the information carrier in the quantum environment is played by the degrees of freedom that are irreversibly decoupled from the OQS (the stable records), which periodically arise during the time evolution of the system.
In our work, we first offer a novel way of finding the degrees of freedom of the environment that carry information about the trajectory, by means of the averaged OTOCs (19) and (24). Second, based on this we introduce a definition of trajectories and, as a criterion of quantum chaos, propose the entropy of the ensemble of these trajectories (32).
Thus, one can consider the environment as a measuring
Figure 8: Instantaneous entropy production along one trajectory for \(K=1\) (blue curve) and for \(K=-10\) (orange curve).
Figure 9: Average entropy production (34) depending on the value of the kick strength \(K\). One can see a sharp increase in entropy production in the region of crossover between integrable and chaotic dynamics. The calculation was carried out for two different quantum numbers: \(j=20\) (orange curve) and \(j=40\) (blue curve).
device that autonomously selects the time of measurement and the preferred basis without the intervention of a human experimenter. In turn, during the real-time evolution, measuring the irreversibly decoupled modes one after another, each with a certain probability, one obtains a sequence of measurements whose result is a quantum trajectory.
It was confirmed that for regular motion the decoherent histories behave relatively regularly, while for chaotic motion the recorded particle trajectory fluctuates more and is irregular. The entropy of an ensemble of such trajectories grows faster in the chaotic case than in the integrable one. A noticeable sharp increase in entropy is also observed during the transition of the system dynamics from integrable to chaotic at values of the kick strength \(K\) from 2 to 3 (Fig. 9). Thus, this approach made it possible to capture the phenomenon of quantum chaos for the quantum kicked top model. We propose to connect any chaotic system under consideration to an environment and use the entropy of the ensemble of decoherent histories as a criterion of chaos.
|
2304.11427 | High-order implicit shock tracking boundary conditions for flows with
parametrized shocks | High-order implicit shock tracking (fitting) is a class of high-order,
optimization-based numerical methods to approximate solutions of conservation
laws with non-smooth features by aligning elements of the computational mesh
with non-smooth features. This ensures the non-smooth features are perfectly
represented by inter-element jumps and high-order basis functions approximate
smooth regions of the solution without nonlinear stabilization, which leads to
accurate approximations on traditionally coarse meshes. In this work, we
introduce a robust implicit shock tracking framework specialized for problems
with parameter-dependent lead shocks (i.e., shocks separating a farfield
condition from the downstream flow), which commonly arise in high-speed
aerodynamics and astrophysics applications. After a shock-aligned mesh is
produced at one parameter configuration, all elements upstream of the lead
shock are removed and the nodes on the lead shock are positioned for new
parameter configurations using the implicit shock tracking solver. The proposed
framework can be used for most many-query applications involving parametrized
lead shocks such as optimization, uncertainty quantification, parameter sweeps,
"what-if" scenarios, or parameter-based continuation. We demonstrate the
robustness and flexibility of the framework using a one-dimensional space-time
Riemann problem, and two- and three-dimensional supersonic and hypersonic
benchmark problems. | Tianci Huang, Charles Naudet, Matthew J. Zahr | 2023-04-22T15:13:03Z | http://arxiv.org/abs/2304.11427v1 | # High-order implicit shock tracking boundary conditions for flows with parametrized shocks
###### Abstract
High-order implicit shock tracking (fitting) is a class of high-order, optimization-based numerical methods to approximate solutions of conservation laws with non-smooth features by aligning elements of the computational mesh with non-smooth features. This ensures the non-smooth features are perfectly represented by inter-element jumps and high-order basis functions approximate smooth regions of the solution without nonlinear stabilization, which leads to accurate approximations on traditionally coarse meshes. In this work, we introduce a robust implicit shock tracking framework specialized for problems with parameter-dependent lead shocks (i.e., shocks separating a farfield condition from the downstream flow), which commonly arise in high-speed aerodynamics and astrophysics applications. After a shock-aligned mesh is produced at one parameter configuration, all elements upstream of the lead shock are removed and the nodes on the lead shock are positioned for new parameter configurations using the implicit shock tracking solver. The proposed framework can be used for most many-query applications involving parametrized lead shocks such as optimization, uncertainty quantification, parameter sweeps, "what-if" scenarios, or parameter-based continuation. We demonstrate the robustness and flexibility of the framework using a one-dimensional space-time Riemann problem, and two- and three-dimensional supersonic and hypersonic benchmark problems.
keywords: Shock fitting, high-order methods, discontinuous Galerkin, bow shocks, many-query analysis, hypersonics
## 1 Introduction
Bow shocks are strong, detached, curved shocks that frequently arise in aerospace and astrophysics applications. The strength, geometry, and stand-off distance of bow shocks are highly dependent on the problem configuration including farfield conditions, the geometry of the body, and fluid properties. As the flow speed increases, accurate resolution of the lead shock in a computational setting is paramount to predict quantities of interest integrated over the body, particularly aerodynamic heating [19; 6]. This has forced computational fluid dynamics researchers and practitioners to invest substantial effort and resources to generate hexahedral meshes with very tight grid spacing near shocks and elements aligned to the curvature of the lead shock [29; 19; 5; 20; 7]. Given the substantial amount of user-intensive effort [19; 5] required to mesh and simulate a single configuration, _many-query_ analyses such as optimization, uncertainty quantification, parameter sweeps, "what-if" scenarios, or even parameter-based continuation, remain a significant challenge because they cause the bow shock to move, which requires modifications to the mesh.
Shock capturing is a popular and effective class of approaches to stabilize higher-than-first-order methods in the vicinity of shocks on a fixed computational grid. Limiters, which are used to limit the solution gradient near shocks, are commonly used with second-order finite volume methods [37] and high-order discontinuous Galerkin (DG) methods [11]. These methods are commonly used in real-world flow simulations; however, they place stringent demands on the computational mesh for high-speed flows (hexahedral elements, tight
grid spacing near shocks, alignment of elements with lead shock) [7] and are not well-suited for many-query studies involving parametrized shocks. Weighted essentially non-oscillatory (WENO) methods [21, 30, 24] use stencil-based high-order reconstruction near shocks to mitigate spurious oscillations and can lead to crisp shocks on structured meshes. They have shown good agreement with experiments for high-speed flows [35, 28], although they are not well-suited for complex domains that require unstructured meshes. For high-order methods, artificial viscosity approaches can smoothly resolve steep gradients with sub-cell accuracy and is the preferred shock capturing approach for finite-element-based methods [33, 2, 17, 8]. The combination of high-order DG methods with artificial viscosity has even been shown to reduce the sensitivity of hypersonic flow simulations to the choice of numerical flux and grid alignment [8]. However, artificial viscosity models usually suffer from a relatively strong dependence on a large amount of empirical parameters that must be tuned [39] and require substantial grid refinement near shocks where accuracy has been dropped to first order. As such, these methods are not ideally suited for many-query analyses involving parametrized shocks.
An alternative approach is _shock tracking_ or _shock fitting_, where the computational mesh is moved to align faces of mesh elements with solution discontinuities to represent them geometrically with the inter-element jump in the solution basis without requiring additional stabilization. This leads to accurate solutions on coarse meshes when using high-order methods and avoids instabilities often associated with shock capturing methods such as carbuncles. The traditional approach to shock tracking [31, 34] largely consists of explicitly identifying shock locations and using the Rankine-Hugoniot conditions to determine the shock motion and states across the shock. Although research in this area has experienced a resurgence in recent years [9, 3, 18, 16, 1, 43, 10, 4], these methods are most suited for flows with relatively simple shock geometries because they require explicit meshing of the shocks and specialized strategies to track the shocks separately from the remainder of the flow.
A different approach to shock tracking known as _implicit shock tracking_, which includes the High-Order Implicit Shock Tracking (HOIST) method [40, 42, 41] and the Moving Discontinuous Galerkin Method with Interface Condition Enforcement (MDG-ICE) [14, 25, 26], has overcome some of the challenges of traditional approaches to shock tracking. These methods discretize the conservation law on a shock-agnostic mesh and pose an optimization problem whose solution is the discontinuity-aligned mesh and corresponding discretized flow solution. That is, shock tracking is achieved implicitly through the solution of an optimization problem. The meshing challenge from traditional shock tracking approaches is largely circumvented, which leads to a general approach that is independent of the underlying conservation law. These methods have been shown to reliably and effectively solve inviscid and viscous, steady and unsteady, inert and reacting flows of varying degrees of shock complexity [14, 41, 36, 23, 12, 15, 13]. Because these methods inherently and automatically align the computational grid with shocks in the domain, they are well-suited to many-query studies involving parametrized shocks.
We propose a novel framework based on implicit shock tracking, specifically the HOIST method, to efficiently and robustly conduct many-query analyses of problems involving parameter-dependent _lead shocks_. We define a lead shock as a shock (or, more generally, any non-smooth feature) separating the boundary state from the downstream flow. Bow shocks and primary blast waves are examples of lead shocks; however, in the current setting, even the head of a rarefaction wave in a shock tube qualifies. The approach is initialized by applying the HOIST method on a shock-agnostic mesh at one parameter configuration of interest to generate a shock-aligned mesh and the corresponding flow field. Because the grid is aligned with the lead shock, the solution in all elements upstream of the shock will be a constant equal to the boundary state. As such, all elements upstream of the shock are removed to produce a reduced mesh that will be used for all subsequent parameter configurations, with the farfield boundary condition directly applied on the new shock boundary. In our numerical experiments, this can reduce the size of the computational mesh by three-fold, which is additional computational savings on top of the coarse meshes required by implicit shock tracking approaches [23].
With only the portion of the domain downstream of the lead shock modeled, there is no reason to directly optimize for all nodal coordinates in the mesh if the lead shock is the only non-smooth feature in the domain. Instead, we only optimize the positions of the nodes on the shock boundary with all other nodal coordinates determined by boundary constraints and partial differential equation (PDE) based smoothing using the approach in [22]. For problems with secondary non-smooth features in addition to the lead shock, all downstream nodal positions are optimized to eliminate the need for nonlinear stabilization and provide highly accurate solutions. In addition to substantially reducing the overall degrees of freedom (DoFs) of the
implicit shock tracking discretization, it also improves robustness and accelerates convergence because the overall tracking problem is easier and a high-quality initial guess is provided from the solution at previous parameter configurations. The proposed framework can be used for most _many-query_ applications involving parametrized lead shocks such as optimization, uncertainty quantification, parameter sweeps, "what-if" scenarios, or parameter-based continuation. In the continuation setting, we outline and demonstrate a procedure to leverage partially converged solves at intermediate stages to improve the efficiency of the approach.
The remainder of the paper is organized as follows. Section 2 introduces the governing system of inviscid conservation laws, its reformulation on a fixed reference domain, and its discretization using a discontinuous Galerkin method. Section 3 provides a brief summary of the HOIST formulation with targeted mesh optimization proposed in [42; 23; 22]. Section 4 introduces the specialized HOIST solver for abstract many-query problems involving parametrized shocks and its specialization to parameter-based continuation with partially converged intermediate stages. Finally, Section 5 demonstrates the robustness and flexibility of the proposed approach for Mach continuation of two- and three-dimensional supersonic and hypersonic problems, and for a parameter sweep of a Riemann problem (Euler equations) parametrized by its initial condition.
## 2 Governing equations and high-order discretization
In this section, we introduce a system of steady conservation laws whose solution will be assumed to contain a lead shock (Section 2.1). Next, we transform the system of conservation laws to a reference domain such that domain deformations appear explicitly in the governing equations (Section 2.2), and discretize the transformed equations using a high-order DG method (Section 2.3).
### System of conservation laws
Consider a general system of \(m\) inviscid conservation laws over \(\Omega\subset\mathbb{R}^{d}\)
\[\nabla\cdot F(U)=S(U)\quad\text{in}\ \ \Omega, \tag{1}\]
where \(U:\Omega\to\mathbb{R}^{m}\) is implicitly defined as the solution of (1), \(F:\mathbb{R}^{m}\to\mathbb{R}^{m\times d}\) is the flux function, and \(S:\mathbb{R}^{m}\to\mathbb{R}^{m}\) is the source term. In general, the solution \(U(x)\) may contain discontinuities, in which case the conservation law (1) holds away from the discontinuities and the Rankine-Hugoniot conditions hold at discontinuities. In this work, we focus on problems containing a _lead shock_ and, for concreteness, focus on the compressible Euler equations (Section 5). However, the method developed in this work applies to any conservation law of the form (1) whose solution contains a lead shock.
### Transformed system of conservation laws on a fixed reference domain
Let \(\mathbb{G}\) be the collection of diffeomorphisms from a reference domain \(\Omega_{0}\) to the physical domain \(\Omega\), i.e., for \(\mathcal{G}\in\mathbb{G}\), we have
\[\mathcal{G}:\Omega_{0}\to\Omega,\quad\mathcal{G}:X\mapsto\mathcal{G}(X). \tag{2}\]
Following the approach in [40], for any \(\mathcal{G}\in\mathbb{G}\), the conservation law on the physical domain \(\Omega\) is transformed to a conservation law on the reference domain \(\Omega_{0}\) as
\[\bar{\nabla}\cdot\bar{F}(\bar{U};G)=\bar{S}(\bar{U};g)\quad\text{in}\ \ \Omega_{0}, \tag{3}\]
where \(\bar{\nabla}\) is the gradient operator on the reference domain, \(\bar{U}:\bar{\Omega}\to\mathbb{R}^{m}\) is the transformed solution, \(\bar{F}:\mathbb{R}^{m}\times\mathbb{R}^{d\times d}\to\mathbb{R}^{m\times d}\) is the transformed flux function, \(\bar{S}:\mathbb{R}^{m}\times\mathbb{R}\to\mathbb{R}^{m}\) is the transformed source term, and
\[G=\bar{\nabla}\mathcal{G},\qquad g=\det G \tag{4}\]
are the deformation gradient and Jacobian, respectively, of the mapping \(\mathcal{G}\in\mathbb{G}\). For any \(X\in\Omega_{0}\), the transformed and physical solution are related as
\[\bar{U}(X)=U(\mathcal{G}(X)), \tag{5}\]
and the transformed flux and source term are defined as
\[\bar{F}:(\bar{W};\Theta)\mapsto(\det\Theta)F(\bar{W})\Theta^{-T},\qquad\bar{S}:( \bar{W};q)\mapsto qS(\bar{W}). \tag{6}\]
### Discontinuous Galerkin discretization of the transformed conservation law
We use a standard nodal discontinuous Galerkin method to discretize the transformed conservation law (3). Let \(\mathcal{E}_{h}\) represent a discretization of the reference domain \(\Omega_{0}\) into non-overlapping, potentially curved, computational elements. The DG trial space of discontinuous piecewise polynomials associated with the mesh \(\mathcal{E}_{h}\) is defined as
\[\mathcal{V}_{h}^{p}=\left\{v\in[L^{2}(\Omega_{0})]^{m}\;\big{|}\;v|_{K}\in[ \mathcal{P}_{p}(K)]^{m},\;\forall K\in\mathcal{E}_{h}\right\}, \tag{7}\]
where \(\mathcal{P}_{p}(K)\) is the space of polynomial functions of degree at most \(p\geq 1\) on the element \(K\). The space of globally continuous piecewise polynomials of degree \(q\) associated with the mesh \(\mathcal{E}_{h}\) is defined as
\[\mathcal{W}_{h}=\left\{v\in C^{0}(\Omega_{0})\;\big{|}\;v|_{K}\in\mathcal{P}_{ q}(K),\;\forall K\in\mathcal{E}_{h}\right\}; \tag{8}\]
we discretize the domain mapping \((\mathcal{G}\in\mathbb{G})\) with the corresponding vector-valued space \([\mathcal{W}_{h}]^{d}\). Taking the DG test space to be \(\mathcal{V}_{h}^{p^{\prime}}\), where \(p^{\prime}\geq p\), the DG formulation is: given \(\mathcal{G}_{h}\in[\mathcal{W}_{h}]^{d}\), find \(\bar{U}_{h}\in\mathcal{V}_{h}^{p}\) such that for all \(\bar{\psi}_{h}\in\mathcal{V}_{h}^{p^{\prime}}\), we have
\[\int_{\partial K}\bar{\psi}_{h}^{+}\cdot\bar{\mathcal{H}}(\bar{U}_{h}^{+}, \bar{U}_{h}^{-},N_{h};\bar{\nabla}\mathcal{G}_{h})\,dS-\int_{K}\bar{F}(\bar{U} _{h};\bar{\nabla}\mathcal{G}_{h}):\bar{\nabla}\bar{\psi}_{h}\,dV=\int_{K}\bar {\psi}_{h}\cdot\bar{S}(\bar{U}_{h};\det(\bar{\nabla}\mathcal{G}_{h}))\,dV, \tag{9}\]
where \(N_{h}\) is the unit outward normal to element \(K\in\mathcal{E}_{h}\), \(\bar{W}_{h}^{+}\) (\(\bar{W}_{h}^{-}\)) denotes the interior (exterior) trace of \(\bar{W}_{h}\) to the element \(K\) for \(\bar{W}_{h}\in\mathcal{V}_{h}^{s}\) (any \(s\in\{0,1,\dots\}\)), and \(\bar{\mathcal{H}}\) is the numerical flux function associated with the reference inviscid flux \(\bar{F}\); see [23] for additional details.
After introducing a basis for the test, trial, and domain mapping spaces, the governing DG equations (9) with \(p^{\prime}=p\) reduces to the algebraic residual form
\[\mathbf{r}:\mathbb{R}^{N_{\mathbf{u}}}\times\mathbb{R}^{N_{\mathbf{u}}}\to\mathbb{R}^{N_{ \mathbf{u}}},\qquad\mathbf{r}:(\mathbf{u},\mathbf{x})\mapsto\mathbf{r}(\mathbf{u},\mathbf{x}), \tag{10}\]
where \(N_{\mathbf{u}}=\dim\mathcal{V}_{h}^{p}\) and \(N_{\mathbf{x}}=\dim([\mathcal{W}_{h}]^{d})\); \(\mathbf{u}\in\mathbb{R}^{N_{\mathbf{u}}}\) is the vector of flow field coefficients and \(\mathbf{x}\in\mathbb{R}^{N_{\mathbf{x}}}\) is the vector of nodal coordinates of the mesh (also called mesh DoFs). Notice that for a fixed mesh \(\mathbf{x}\), (10) is a standard DG discretization residual. In addition, we define the algebraic enriched residual associated with a test space of degree \(p^{\prime}=p+1\) as
\[\mathbf{R}:\mathbb{R}^{N_{\mathbf{u}}}\times\mathbb{R}^{N_{\mathbf{x}}}\to\mathbb{R}^{N_{ \mathbf{u}}^{\prime}},\qquad\mathbf{R}:(\mathbf{u},\mathbf{x})\mapsto\mathbf{R}(\mathbf{u},\mathbf{x}), \tag{11}\]
where \(N_{\mathbf{u}}^{\prime}=\dim\mathcal{V}_{h}^{p^{\prime}}\), which will later be used to construct the HOIST objective function.
## 3 The High-Order Implicit Shock Tracking (HOIST) method
In this section, we provide a brief summary of the HOIST method with targeted mesh optimization [42; 23], which allows only a selected portion of the mesh DoFs to be optimized while aligning mesh faces with non-smooth solution features and preserving boundaries. This is achieved by partitioning the mesh DoFs \(\mathbf{x}\in\mathbb{R}^{N_{\mathbf{x}}}\) as
\[\mathbf{x}=(\mathbf{x}_{\rm c},\mathbf{x}_{\rm u}),\qquad\mathbf{x}_{\rm u}=(\mathbf{y},\mathbf{x}_{ \rm s}) \tag{12}\]
where \(\mathbf{x}_{\rm c}\in\mathbb{R}^{N_{\mathbf{x}}^{\rm c}}\) are the constrained DoFs and \(\mathbf{x}_{\rm u}\in\mathbb{R}^{N_{\mathbf{x}}^{\rm u}}\) are the unconstrained DoFs. Following [23], \(\mathbf{x}_{\rm u}\) can be freely chosen (e.g., to align element faces with non-smooth features), whereas \(\mathbf{x}_{\rm c}\) is uniquely determined from \(\mathbf{x}_{\rm u}\). Following [22], unconstrained DoFs are further partitioned into optimized DoFs \(\mathbf{y}\in\mathbb{R}^{N_{\mathbf{y}}}\) and smoothed DoFs \(\mathbf{x}_{\rm s}\in\mathbb{R}^{N_{\mathbf{x}}^{\rm s}}\), where \(\mathbf{y}\) will be optimized for shock alignment and \(\mathbf{x}_{\rm s}\) will be determined through PDE-based smoothing. For abstraction, we let \(\mathbf{\phi}\) be a parametrization of the mesh DoFs
\[\mathbf{\phi}:\mathbb{R}^{N_{\mathbf{y}}}\to\mathbb{R}^{N_{\mathbf{x}}},\qquad\mathbf{\phi}:\mathbf{y}\mapsto\mathbf{\phi}(\mathbf{y}), \tag{13}\]
that maps the optimized mesh DoFs to all mesh DoFs, i.e., \(\mathbf{x}=\mathbf{\phi}(\mathbf{y})\). The parametrization must be constructed to ensure 1) optimized mesh DoFs (\(\mathbf{y}\)) can move freely, 2) nodes on fixed domain boundaries can only slide along those boundaries by computing the constrained mesh DoFs (\(\mathbf{x}_{\rm c}\)) from \(\mathbf{y}\), and 3) smoothed mesh DoFs (\(\mathbf{x}_{\rm s}\)) are determined through PDE-based smoothing (e.g., linear elasticity equations with pure Dirichlet boundary conditions). A complete description of the construction of \(\mathbf{\phi}\) can be found in [42; 23] (boundary preservation only) and [22] (boundary preservation and PDE-based smoothing).
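To illustrate the structure of \(\mathbf{\phi}\), the sketch below substitutes simple graph-Laplacian smoothing for the linear-elasticity PDE of [22] (an assumption of this illustration): the shock-boundary coordinates \(\mathbf{y}\) and the constrained coordinates \(\mathbf{x}_{\rm c}\) act as Dirichlet data, and the smoothed coordinates \(\mathbf{x}_{\rm s}\) solve the resulting interior problem.

```python
import numpy as np
import scipy.sparse.linalg as spla

# phi: y -> x, with x = (y on the shock boundary, x_c on fixed/sliding
# boundaries, x_s from PDE-based smoothing). `L` is a sparse mesh graph
# Laplacian and idx_y/idx_c/idx_s partition the node indices.
def phi(y, x_c, L, idx_y, idx_c, idx_s, dim=2):
    x = np.zeros((L.shape[0], dim))
    x[idx_y], x[idx_c] = y, x_c               # Dirichlet data
    idx_b = np.concatenate([idx_y, idx_c])    # all boundary-type nodes
    A = L[idx_s][:, idx_s].tocsc()            # interior operator
    b = -(L[idx_s][:, idx_b] @ x[idx_b])      # lift the boundary data
    solve = spla.factorized(A)
    for k in range(dim):                      # smoothed coordinates x_s
        x[idx_s, k] = solve(b[:, k])
    return x
```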
The HOIST method is formulated as an optimization problem over the DG solution coefficients and the optimized mesh DoFs as
\[(\mathbf{u}^{\star},\mathbf{y}^{\star}):=\operatorname*{arg\,min}_{\mathbf{u}\in\mathbb{R} ^{N_{\mathbf{u}}},\mathbf{y}\in\mathbb{R}^{N_{\mathbf{y}}}}f(\mathbf{u},\mathbf{\phi}(\mathbf{y})) \quad\text{subject to:}\quad\mathbf{r}(\mathbf{u},\mathbf{\phi}(\mathbf{y}))=\mathbf{0}, \tag{14}\]
where \(f:\mathbb{R}^{N_{\mathbf{u}}}\times\mathbb{R}^{N_{\mathbf{x}}}\to\mathbb{R}\) is the objective function defined in [42; 23] and \(\mathbf{x}^{\star}=\mathbf{\phi}(\mathbf{y}^{\star})\) are the nodal coordinates of the discontinuity-aligned mesh. The objective function consists of two terms as
\[f:(\mathbf{u},\mathbf{x})\mapsto f_{\rm err}(\mathbf{u},\mathbf{x})+\kappa^{2}f_{\rm msh}( \mathbf{x}), \tag{15}\]
where \(f_{\rm err}:\mathbb{R}^{N_{\mathbf{u}}}\times\mathbb{R}^{N_{\mathbf{x}}}\to\mathbb{R}\) is the alignment term and \(f_{\rm msh}:\mathbb{R}^{N_{\mathbf{x}}}\to\mathbb{R}\) is a mesh quality term, defined as
\[f_{\rm err}:(\mathbf{u},\mathbf{x})\mapsto\frac{1}{2}\left\|\mathbf{R}(\mathbf{u},\mathbf{x}) \right\|_{2}^{2},\qquad f_{\rm msh}:\mathbf{x}\mapsto\frac{1}{2}\left\|\mathbf{R}_{\rm msh }(\mathbf{x})\right\|_{2}^{2}, \tag{16}\]
\(\mathbf{R}_{\rm msh}:\mathbb{R}^{N_{\mathbf{x}}}\to\mathbb{R}^{|\mathcal{E}_{h}|}\) is an element-wise mesh distortion residual defined in [23], and \(\kappa\) is a penalty parameter that balances the alignment and mesh quality objectives; see [23] for a complete definition of \(\mathbf{R}_{\rm msh}\) and an adaptive algorithm to set \(\kappa\). The norm of the enriched DG residual has proven to be an effective alignment indicator [42; 23] because it penalizes non-physical oscillations that arise on meshes that do not fit solution discontinuities.
A sequential quadratic programming (SQP) method with a modified Levenberg-Marquardt Hessian approximation introduced in [14; 42] is used to solve the optimization problem (14). The SQP solver simultaneously converges the optimized mesh DoFs (\(\mathbf{y}\)) and the DG solution coefficients (\(\mathbf{u}\)) to their optimal values, i.e., a high-order DG solution (\(\mathbf{u}^{\star}\)) on a discontinuity-aligned mesh (\(\mathbf{\phi}(\mathbf{y}^{\star})\)). This is accomplished by combining the solution and optimized mesh DoFs into a single vector of optimization variables \(\mathbf{z}=(\mathbf{u},\mathbf{y})\in\mathbb{R}^{N_{\mathbf{z}}}\) (\(N_{\mathbf{z}}=N_{\mathbf{u}}+N_{\mathbf{y}}\)) and generating a sequence of iterates as
\[\mathbf{z}_{i+1}=\mathbf{z}_{i}+\alpha_{i}\Delta\mathbf{z}_{i} \tag{17}\]
for \(i=0,1,\dots\), where \(\mathbf{z}_{i}\in\mathbb{R}^{N_{\mathbf{z}}}\) is the vector of optimization variables at iteration \(i\), \(\Delta\mathbf{z}_{i}\in\mathbb{R}^{N_{\mathbf{z}}}\) is the search direction at the \(i\)th iteration, and \(\alpha_{i}\in\mathbb{R}_{>0}\) is the step length at the \(i\)th iteration. The search direction is determined at each iteration by solving a quadratic approximation to (14) at \(\mathbf{z}_{i}\) and the step length is determined via a line search of an \(\ell_{1}\) merit function. A complete description of the HOIST method and solver can be found in [42; 23].
## 4 The HOIST method for flows with parametrized shocks
In this section, we introduce a framework for solving flows with parametrized lead shocks using the HOIST method. This approach can be leveraged for _many-query_ analyses such as optimization, uncertainty quantification, parameter sweeps, or "what-if" scenarios, or to drive a continuation strategy for more complex flow regimes.
### Parametrized setting
While not introduced as such for brevity, all terms in the conservation law (1) as well as its transformed (3) and discrete variants (10)-(11) depend on _problem data_ such as boundary condition, domain geometry, material properties, etc. Many-query analyses and continuation-based solvers inherently vary one or more of these parameters for some purpose, e.g., find an optimal solution (optimization), compute a probability
distribution (uncertainty quantification), solve a complex problem using a sequence of easier ones (continuation). To handle these settings, we explicitly introduce dependence of the discretized residuals on a collection of parameters. That is, we redefine the standard DG residual in (10) as
\[\mathbf{r}:\mathbb{R}^{N_{\mathbf{u}}}\times\mathbb{R}^{N_{\mathbf{x}}}\times\mathcal{D} \rightarrow\mathbb{R}^{N_{\mathbf{u}}},\qquad\mathbf{r}:(\mathbf{u},\mathbf{x};\mathbf{\mu}) \mapsto\mathbf{r}(\mathbf{u},\mathbf{x};\mathbf{\mu}) \tag{18}\]
and the enriched DG residual in (11) as
\[\mathbf{R}:\mathbb{R}^{N_{\mathbf{u}}}\times\mathbb{R}^{N_{\mathbf{x}}}\times\mathcal{D} \rightarrow\mathbb{R}^{N^{\prime}_{\mathbf{u}}},\qquad\mathbf{R}:(\mathbf{u},\mathbf{x};\mathbf{ \mu})\mapsto\mathbf{R}(\mathbf{u},\mathbf{x};\mathbf{\mu}), \tag{19}\]
where \(\mathcal{D}\subset\mathbb{R}^{N_{\mathbf{\mu}}}\) is a space of admissible parameter configurations and \(\mathbf{\mu}\in\mathcal{D}\) is a vector of parameters that either directly or indirectly defines relevant problem data. For this work, we assume that for any \(\mathbf{\mu}\in\mathcal{D}\), the flow contains a lead shock that separates the farfield from the shock-downstream portion of the domain. Furthermore, we assume topologically equivalent or similar lead shocks for all \(\mathbf{\mu}\in\mathcal{D}\).
From these definitions, the HOIST solution in (14) becomes parameter dependent, and we let \((\mathbf{u}_{\mathbf{\mu}}^{\star},\mathbf{y}_{\mathbf{\mu}}^{\star})\subset\mathbb{R}^{N_{\mathbf{u}}}\times\mathbb{R}^{N_{\mathbf{y}}}\) denote the solutions of (14) with \(\mathbf{r}\) and \(\mathbf{R}\) replaced by \(\mathbf{r}(\,\cdot\,,\,\cdot\,;\mathbf{\mu})\) and \(\mathbf{R}(\,\cdot\,,\,\cdot\,;\mathbf{\mu})\), respectively. Because the optimization problem in (14) is non-convex, it will have multiple local minima. However, from a given starting point \((\bar{\mathbf{u}},\bar{\mathbf{y}})\), the HOIST solver [23] will return a single solution. We let \(\Upsilon:\mathbb{R}^{N_{\mathbf{u}}}\times\mathbb{R}^{N_{\mathbf{y}}}\times\mathcal{D} \rightarrow\mathbb{R}^{N_{\mathbf{u}}}\times\mathbb{R}^{N_{\mathbf{y}}}\) be an operator that maps the starting point \((\bar{\mathbf{u}},\bar{\mathbf{y}})\) and parameter configuration \(\mathbf{\mu}\) to the element of the set \((\mathbf{u}_{\mathbf{\mu}}^{\star},\mathbf{y}_{\mathbf{\mu}}^{\star})\) returned by the HOIST solver, i.e.,
\[\Upsilon:(\bar{\mathbf{u}},\bar{\mathbf{y}},\mathbf{\mu})\mapsto\Upsilon(\bar{\mathbf{u}}, \bar{\mathbf{y}},\mathbf{\mu})\in(\mathbf{u}_{\mathbf{\mu}}^{\star},\mathbf{y}_{\mathbf{\mu}}^{\star}). \tag{20}\]
### Many-query analysis
In the remainder, we consider an abstract many-query setting where our goal is to determine the solution of the conservation law for all parameter configurations in an ordered subset \(\Xi\subset\mathcal{D}\) with \(\Xi=\{\mathbf{\mu}_{1},\ldots,\mathbf{\mu}_{N}\}\). Depending on the many-query setting, the set may be known _a priori_ (e.g., parameter sweeps, static continuation, Monte Carlo approaches to uncertainty quantification), or built adaptively (e.g., optimization, adaptive continuation, "what-if" scenarios). The solutions of the conservation law will be computed sequentially according to the ordering of the parameters, where the initial guess for the flow and mesh DoFs will come from the solution of the previous parameter. Optimal ordering of the parameters will depend on the many-query setting and the problem under consideration.
For the first parameter \(\mathbf{\mu}_{1}\), we follow the standard HOIST approach (no PDE-based smoothing) [23] where the initial mesh DoFs \((\bar{\mathbf{y}})\) come directly from a mesh generator and the initial flow solution \((\bar{\mathbf{u}})\) is the first-order finite volume solution of the parametrized PDE at \(\mathbf{\mu}=\mathbf{\mu}_{1}\) (or any other reasonable initial guess). From this initial guess, we compute the HOIST solution and make the observation that the solution in all elements upstream of the shock is constant and equal to the farfield boundary condition, making this a costly waste of degrees of freedom. Therefore, we remove all elements upstream of the lead shock and directly apply the farfield boundary condition to the lead shock itself (referred to as the _shock boundary_ in the remainder). This procedure is illustrated in Figure 1.
For all subsequent parameters, only elements downstream of the lead shock remain so changes to the lead shock position and shape come from deformations to the shock boundary. For problems where the lead shock is the _only_ shock for all \(\mathbf{\mu}\in\mathcal{D}\), we take the optimized mesh DoFs (\(\mathbf{y}\)) to be all DoFs on the shock boundary and the smoothed mesh DoFs (\(\mathbf{x}_{s}\)) to be all remaining unconstrained mesh DoFs (Figure 1). In this case, there is no need to allow topological changes to the mesh (e.g., element collapses as in [23]) so the mesh parametrization \(\mathbf{\phi}\) will be fixed for all parameters \(\mathbf{\mu}_{2},\ldots,\mathbf{\mu}_{N}\). For problems with secondary shocks, all unconstrained mesh DoFs are taken as optimized mesh DoFs. For either case, nodes that lie on the intersection of the lead shock boundary and other boundaries should respect the constraints of the fixed boundaries (Figure 1). In this work, we consider problems with only lead shocks as well as problems with both lead and secondary shocks.
With the initial HOIST solve, element removal, and mesh parametrization settled, we state the many-query analysis algorithm. Let \((\mathbf{u}_{1}^{\star},\mathbf{y}_{1}^{\star})\) denote the HOIST solution after the elements upstream of the shock have been removed and the nodal coordinates are parametrized on the new mesh. Then, for \(k=2,\ldots,N\)
Figure 1: Initial guess (density) on a shock-agnostic mesh of \(M_{\infty}=2\) flow over a cylinder (_left_), the shock-aligned mesh and corresponding solution obtained using HOIST (_middle-left_), the corresponding solution and mesh extracted from downstream of the bow shock (_middle-right_), and a schematic visualizing the mesh parametrization (_right_). Legend: shock boundary nodes that move freely, outlet boundary nodes that are constrained to slide along the original boundary, cylinder boundary nodes that are fixed, and all remaining nodes (not shown for clarity) are determined from PDE-based smoothing.
we define the HOIST solution at parameter \(\mathbf{\mu}_{k}\) as
\[(\mathbf{u}_{k}^{\star},\mathbf{y}_{k}^{\star})\coloneqq\Upsilon(\mathbf{u}_{k-1}^{\star}, \mathbf{y}_{k-1}^{\star},\mathbf{\mu}_{k}). \tag{21}\]
That is, the HOIST solution for \(\mathbf{\mu}_{k}\) is initialized from the HOIST solution at \(\mathbf{\mu}_{k-1}\). Because the nodal coordinates are parametrized with optimized mesh DoFs on the lead shock boundary, the HOIST method naturally deforms that boundary to align with the new lead shock location (Section 5). The complete algorithm is summarized in Algorithm 1.
**Remark 1**.: _The number of unconstrained mesh DoFs have been significantly reduced by removing elements upstream of the shock and only including nodes on the lead shock as unconstrained mesh DoFs relative to the standard HOIST setting. This allows the lead shock to be tracked with high accuracy, but can stall deep convergence of the HOIST solver to tight optimality tolerances. Rapid, deep convergence can be restored by terminating the HOIST solver once the mesh has converged and updating \(\mathbf{u}_{k}^{\star}\) with the solution of_
\[\mathbf{r}(\,\cdot\,,\mathbf{\phi}(\mathbf{y}_{k}^{\star});\mathbf{\mu}_{k})=\mathbf{0} \tag{22}\]
_starting from the HOIST output \(\mathbf{u}_{k}^{\star}\), e.g., using Newton-Raphson iterations. That is, the HOIST flow solution is updated with a fixed-mesh DG solve. Algorithms 1-2 both include this (optional) fixed mesh solve._
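A minimal sketch of this fixed-mesh solve is given below; `res_u` and `jac_u` are assumed callables returning \(\mathbf{r}(\mathbf{u},\mathbf{\phi}(\mathbf{y}_{k}^{\star});\mathbf{\mu}_{k})\) and its Jacobian with respect to \(\mathbf{u}\).

```python
import numpy as np

# Newton-Raphson on the fixed-mesh DG residual, warm-started from the HOIST
# output u*; per Remark 1 this typically converges in a few iterations.
def fixed_mesh_solve(u, res_u, jac_u, tol=1e-14, maxit=20):
    for _ in range(maxit):
        r = res_u(u)
        if np.linalg.norm(r) < tol:
            break
        u = u - np.linalg.solve(jac_u(u), r)
    return u
```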
```
Require: Reference mesh of entire domain \(\bar{\mathcal{E}}_{h}\), parameter set \(\Xi=\{\mathbf{\mu}_{1},\ldots,\mathbf{\mu}_{N}\}\)
Ensure: HOIST solution over \(\Xi\): \(\{(\mathbf{u}_{1}^{\star},\mathbf{y}_{1}^{\star}),\ldots,(\mathbf{u}_{N}^{\star},\mathbf{y}_{N}^{\star})\}\)
1: Initial HOIST solve: Compute HOIST solution \((\tilde{\mathbf{u}}_{1}^{\star},\tilde{\mathbf{y}}_{1}^{\star})\) at \(\mathbf{\mu}_{1}\) over \(\bar{\mathcal{E}}_{h}\)
2: Reduce mesh: Create reduced mesh \(\mathcal{E}_{h}\) as \(\bar{\mathcal{E}}_{h}\) without elements upstream of the lead shock
3: Transfer solution: Transfer solution \(\tilde{\mathbf{u}}_{1}^{\star}\) to reduced mesh \(\mathbf{u}_{1}^{\star}\)
4: Parametrize reduced mesh: Determine parametrization of reduced mesh \(\mathbf{\phi}\) (Section 4.2)
5: Transfer mesh DoFs: Transfer mesh DoFs \(\tilde{\mathbf{y}}_{1}^{\star}\) to reduced mesh \(\mathbf{y}_{1}^{\star}\)
6: for \(k=2,\ldots,N\) do
7:   HOIST solve: \((\mathbf{u}_{k}^{\star},\mathbf{y}_{k}^{\star})=\Upsilon(\mathbf{u}_{k-1}^{\star},\mathbf{y}_ {k-1}^{\star},\mathbf{\mu}_{k})\)
8:   Fixed mesh solve: Solve \(\mathbf{r}(\mathbf{u},\mathbf{\phi}(\mathbf{y}_{k}^{\star});\mathbf{\mu}_{k})=\mathbf{0}\) for \(\mathbf{u}\) with initial guess \(\mathbf{u}_{k}^{\star}\), replace \(\mathbf{u}_{k}^{\star}\leftarrow\mathbf{u}\)
9: end for
```
**Algorithm 1** HOIST method for many-query analysis with parametrized lead shock
### Application: Parameter continuation with early termination
Algorithm 1 is well-suited for many-query settings that require an accurate solution for every \(\mathbf{\mu}\in\Xi\) such as parameter sweeps and "what-if" scenarios. However, applications such as optimization and continuation only need the solution at \(\mathbf{\mu}_{N}\); solutions at all previous parameters either aid in finding \(\mathbf{\mu}_{N}\) or robustly computing the solution at \(\mathbf{\mu}_{N}\). As such, the solution at intermediate parameters \(\mathbf{\mu}_{1},\ldots,\mathbf{\mu}_{N-1}\) can be computed approximately to improve computational efficiency. We focus on the continuation setting because the tolerances used for intermediate solutions in an optimization setting are intimately tied to global convergence theory [27] and beyond the scope of this work.
To this end, we introduce two convergence criteria based on (1) the number of iterations and (2) the relative reduction of the DG residual. Let \((\mathbf{u}_{k}^{i},\mathbf{y}_{k}^{i})\) denote the \(i\)th iteration of the HOIST SQP solver at \(\mathbf{\mu}_{k}\) with \((\mathbf{u}_{k}^{0},\mathbf{y}_{k}^{0})=(\mathbf{u}_{k-1}^{\star},\mathbf{y}_{k-1}^{\star})\). Then, we define the modified HOIST operator \(\Upsilon_{n,\xi}:\mathbb{R}^{N_{\mathbf{u}}}\times\mathbb{R}^{N_{\mathbf{y}}}\times \mathcal{D}\to\mathbb{R}^{N_{\mathbf{u}}}\times\mathbb{R}^{N_{\mathbf{y}}}\) for \(n>1\) and \(\xi<1\) to be
\[\Upsilon_{n,\xi}:(\mathbf{u}_{k}^{0},\mathbf{y}_{k}^{0},\mathbf{\mu}_{k})\mapsto(\tilde{\mathbf{u}}_{k}^{\star},\tilde{\mathbf{y}}_{k}^{\star})\coloneqq(\mathbf{u}_{k}^{I},\mathbf{y}_ {k}^{I}), \tag{23}\]
where \(I\leq n\) is the smallest number such that
\[\left\|\mathbf{r}(\mathbf{u}_{k}^{I},\mathbf{\phi}(\mathbf{y}_{k}^{I});\mathbf{\mu}_{k})\right\| \leq\xi\left\|\mathbf{r}(\mathbf{u}_{k}^{0},\mathbf{\phi}(\mathbf{y}_{k}^{0});\mathbf{\mu}_{k}) \right\|. \tag{24}\]
That is, \((\tilde{\mathbf{u}}_{k}^{\star},\tilde{\mathbf{y}}_{k}^{\star})\) is the output of the HOIST SQP solver at parameter \(\mathbf{\mu}_{k}\) after \(n\) iterations or when the residual converges to a relative tolerance of \(\xi\), whichever occurs first. Then, we introduce iteration limits \(n_{1}\leq n_{2}\) and tolerances \(\xi_{2}\leq\xi_{1}\), and use \(\Upsilon_{n_{1},\xi_{1}}\) for intermediate parameters \(k=1,\ldots,N-1\) and \(\Upsilon_{n_{2},\xi_{2}}\) for the last parameter \(k=N\). The complete algorithm is summarized in Algorithm 2.
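A minimal sketch of \(\Upsilon_{n,\xi}\) and the resulting continuation driver is given below; `hoist_iterate` (one SQP iteration) and `resnorm` (the DG residual norm \(\|\mathbf{r}(\mathbf{u},\mathbf{\phi}(\mathbf{y});\mathbf{\mu})\|\)) are assumed callables.

```python
# Early-terminated HOIST operator Upsilon_{n,xi} of Eqs. (23)-(24).
def upsilon(u, y, mu, n, xi, hoist_iterate, resnorm):
    r0 = resnorm(u, y, mu)
    for _ in range(n):
        u, y = hoist_iterate(u, y, mu)
        if resnorm(u, y, mu) <= xi * r0:
            break
    return u, y

# Continuation with loose tolerances (n1, xi1) at intermediate parameters
# and a tight solve (n2, xi2) at the final parameter (Algorithm 2).
def continuation(u, y, mus, hoist_iterate, resnorm,
                 n1=30, xi1=1e-4, n2=100, xi2=1e-8):
    for k, mu in enumerate(mus):
        n, xi = (n2, xi2) if k == len(mus) - 1 else (n1, xi1)
        u, y = upsilon(u, y, mu, n, xi, hoist_iterate, resnorm)
    return u, y
```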
## 5 Numerical experiments
In this section, we demonstrate the proposed HOIST framework for shock-dominated flow problems parametrized by the farfield Mach number (Section 5.1) and initial condition (Section 5.2). We examine the robustness of the framework under different solver settings to show the framework is able to accurately track the lead shock across parameter configurations and provide high-quality solutions on extremely coarse meshes.
### Mach continuation
We begin by considering a series of compressible, inviscid flows with a bow shock that separates the boundary state from the downstream flow. We consider several simple bluff body flows (Section 5.1.1-5.1.3) that only possess a bow shock to study the method and close with flow over a double wedge geometry (Section 5.1.4) that has complex shock-shock interactions downstream of the bow shock. For these problems, the flow through the domain \(\Omega\subset\mathbb{R}^{d}\) is modeled using the steady Euler equations
\[\begin{split}\frac{\partial}{\partial x_{j}}\left(\rho(x)v_{j}( x)\right)&=0,\\ \frac{\partial}{\partial x_{j}}\left(\rho(x)v_{i}(x)v_{j}(x)+P( x)\delta_{ij}\right)&=0,\\ \frac{\partial}{\partial x_{j}}\left(\left[\rho(x)E(x)+P(x) \right]v_{j}(x)\right)&=0,\end{split} \tag{25}\]
for all \(x\in\Omega\), where \(i=1,\ldots,d\) and summation is implied over the repeated index \(j=1,\ldots,d\). The density \(\rho:\Omega\rightarrow\mathbb{R}_{>0}\), velocity \(v_{i}:\Omega\rightarrow\mathbb{R}\) in the \(x_{i}\) direction for \(i=1,\ldots,d\), and total energy \(E:\Omega\rightarrow\mathbb{R}_{>0}\) of the fluid are implicitly defined as the solution of (25). We assume the fluid is an ideal gas that follows the thermal and caloric state equations
\[P=\rho RT,\qquad e=\frac{1}{\gamma-1}\frac{P}{\rho}, \tag{26}\]
where \(P:\Omega\rightarrow\mathbb{R}_{>0}\) and \(T:\Omega\rightarrow\mathbb{R}_{>0}\) are the pressure and temperature of the fluid, \(e:\Omega\rightarrow\mathbb{R}_{>0}\) is the internal energy of the fluid, \(R\) is the specific gas constant, and \(\gamma\) is the ratio of specific heats. The frozen sound speed \(c:\Omega\rightarrow\mathbb{R}_{>0}\) and Mach number \(M:\Omega\rightarrow\mathbb{R}_{>0}\) are defined as
\[c^{2}:=\frac{\gamma P}{\rho},\qquad M:=\frac{\sqrt{v_{i}v_{i}}}{c}. \tag{27}\]
From the definition of the specific total energy \(E:\Omega\to\mathbb{R}_{>0}\) and total enthalpy \(H:\Omega\to\mathbb{R}_{>0}\)
\[E:=e+\frac{v_{i}v_{i}}{2},\qquad H:=\frac{\rho E+P}{\rho}, \tag{28}\]
the pressure of the fluid, \(P:\Omega\to\mathbb{R}_{>0}\), is directly related to the conservative variables for an ideal gas as
\[P=(\gamma-1)\left(\rho E-\frac{\rho v_{i}v_{i}}{2}\right). \tag{29}\]
The relationships between the downstream stagnation pressure \(p_{02}\) and the upstream pressure \(P_{\infty}\), and between the stagnation temperature \(T_{0}\) and the upstream temperature \(T_{\infty}\), are defined through the upstream Mach number \(M_{\infty}\) and \(\gamma\) as
\[\frac{p_{02}}{P_{\infty}}=\frac{\left[\frac{(\gamma+1)M_{\infty}^{2}}{2} \right]^{\gamma/(\gamma-1)}}{\left[\left(\frac{2\gamma M_{\infty}^{2}}{\gamma +1}\right)-\left(\frac{\gamma-1}{\gamma+1}\right)\right]^{1/(\gamma-1)}}, \qquad\frac{T_{0}}{T_{\infty}}=1+\frac{\gamma-1}{2}M_{\infty}^{2}. \tag{30}\]
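A minimal numerical sketch of Eqs. (29)-(30) is given below; with the nondimensionalization \(P_{\infty}=1\) (an assumption consistent with the values quoted in Section 5.1.1) and \(\gamma=1.4\), the pitot ratio at \(M_{\infty}=3\) reproduces \(p_{02}=12.061\).

```python
import numpy as np

# Pressure from the conservative state (rho, rho*v, rho*E), Eq. (29).
def pressure(rho, rho_v, rho_E, gamma=1.4):
    return (gamma - 1.0) * (rho_E - 0.5 * np.dot(rho_v, rho_v) / rho)

# Stagnation relations of Eq. (30): p_02/P_inf and T_0/T_inf from M_inf.
def stagnation_ratios(M, gamma=1.4):
    num = (0.5 * (gamma + 1) * M**2) ** (gamma / (gamma - 1))
    den = (2 * gamma * M**2 / (gamma + 1)
           - (gamma - 1) / (gamma + 1)) ** (1 / (gamma - 1))
    return num / den, 1 + 0.5 * (gamma - 1) * M**2

# stagnation_ratios(3.0) -> (12.061..., 2.8)
```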
Finally, the Euler equations can be written as a general system of conservation laws (1) as
\[U(x)=\begin{bmatrix}\rho(x)\\ \rho(x)v(x)\\ \rho(x)E(x)\end{bmatrix},\qquad F(U(x))=\begin{bmatrix}\rho(x)v(x)^{T}\\ \rho(x)v(x)v(x)^{T}+P(x)I\\ (\rho(x)E(x)+P(x))v(x)^{T}\end{bmatrix},\qquad S(U(x))=0.\]
**Remark 2**.: _We found that it is necessary to use boundary conditions on the shock boundary that directly evaluate the physical flux at the boundary state, as opposed to boundary conditions based on Riemann solvers. We found that Riemann solver boundary conditions can fail to enforce the boundary state on the shock boundary and derail the implicit tracking process._
#### 5.1.1 Inviscid flow over cylinder, Mach continuation to \(M_{\infty}=3\)
In this problem, we solve for steady \(M_{\infty}=3\) flow over a cylinder in \(d=2\) dimensions using Mach continuation beginning at \(M_{\infty}=2\). At \(M_{\infty}=3\), we have the following stagnation quantities, which will be used to make a quantitative assessment of the accuracy of the HOIST method: \(p_{02}=12.061\), \(T_{0}=2\), and \(H_{0}=7\). Our approach begins by applying the HOIST method at \(M_{\infty}=2\) on a shock-agnostic mesh of the entire domain consisting of \(229\) elements and extracting the solution and mesh downstream of the lead shock to initialize our continuation strategy, resulting in a reduced mesh with \(172\) elements (Figure 1).
We examine the performance of the framework with different numbers of continuation stages, \(N=2,5,10,20\). The intermediate stages are solved with at most \(n_{1}=30\) iterations or to a tolerance of \(\xi_{1}=10^{-4}\), and the final stage is solved with at most \(n_{2}=100\) iterations or to a tolerance of \(\xi_{2}=10^{-8}\). The HOIST solver parameters used for all \(\Upsilon\) evaluations are \(\lambda=10^{-4}\), \(\kappa_{0}=1\), and \((\zeta,\upsilon)=(4,0.25)\)[23]. For all four cases, the SQP solver drives the DG residual \(\left\|\mathbf{r}(\mathbf{u}_{k}^{i},\mathbf{\phi}(\mathbf{y}_{k}^{i}),\mathbf{\mu}_{k})\right\|\) rapidly towards the early termination criteria for each intermediate stage; in the final stage, the enriched DG residual \(\left\|\mathbf{R}(\mathbf{u}_{N}^{i},\mathbf{\phi}(\mathbf{y}_{N}^{i}),\mathbf{\mu}_{N})\right\|\) plateaus within \(15\) iterations (Figure 2). Even though the DG residual does not reach a tight tolerance in any of the cases, all cases exhibit deep convergence, i.e., \(\left\|\mathbf{r}(\mathbf{u},\mathbf{\phi}(\mathbf{y}_{N}^{i}),\mathbf{\mu}_{N})\right\|\sim \mathcal{O}(10^{-14})\), within \(3\) Newton iterations using the HOIST output as the starting point. Thus, the shock is indeed tracked and the fully discrete PDE is satisfied on the final discontinuity-aligned mesh, which shows the continuation framework is robust with as few as two stages.
The proposed framework produces well-resolved, accurate shock profiles and temperature and pressure distributions along the cylinder (Figure 3). The HOIST solver is robust in that it does not require parameter tuning across stages despite the changing Mach number. The value and the relative error of stagnation quantities evaluated at the stagnation point \((x_{1},x_{2})=(-1,0)\) are reported in Table 1, which are all quite small, particularly considering the coarse grid used (only \(172\) quadratic elements). For the \(N=5\) case, Figure 4 shows the flow solution and mesh at all Mach numbers considered, including intermediate (partially converged) ones. In all cases, the shock is tracked with curved, high-order elements and the solution is well-resolved throughout the domain. The overall mesh quality is also well-preserved as the shock boundary
Figure 2: SQP convergence history of the DG residual \(\left\|\mathbf{r}(\mathbf{u}_{k}^{i},\mathbf{\phi}(\mathbf{y}_{k}^{i});\mathbf{\mu}_{k})\right\|\) and enriched DG residual \(\left\|\mathbf{R}(\mathbf{u}_{k}^{i},\mathbf{\phi}(\mathbf{y}_{k}^{i});\mathbf{\mu}_{k})\right\|\), and the DG residual throughout the fixed-mesh Newton iterations \(\left\|\mathbf{r}(\mathbf{u}_{i},\mathbf{\phi}(\mathbf{y}_{k}^{j});\mathbf{\mu}_{k})\right\|\) with \(N=2,5,10,20\) (_top to bottom_) continuation stages for \(M_{\infty}=2\) to \(M_{\infty}=3\) continuation (cylinder).
is compressed towards the cylinder.
#### 5.1.2 Inviscid flow over cylinder, Mach continuation to \(M_{\infty}=10\)
Next, we increase the difficulty of the previous problem by increasing the target Mach number to \(M_{\infty}=10\) using Mach continuation beginning at \(M_{\infty}=2\). In this case, the stagnation quantities are \(p_{02}=129.217\), \(T_{0}=15\), and \(H_{0}=52.5\). We use the same initial configuration as in Section 5.1.1 (after applying HOIST to the full domain at \(M_{\infty}=2\) and removing elements upstream of the bow shock), and we take the number of continuation stages to be \(N=40\) (\(5\) stages per Mach number). The intermediate stages are solved with at most \(n_{1}=30\) iterations or to a tolerance of \(\xi_{1}=10^{-4}\), and the final stage is solved with at most \(n_{2}=100\) iterations or to a tolerance of \(\xi_{2}=10^{-8}\). We use the same HOIST solver parameters as in Section 5.1.1 and they remain the same throughout the optimization. In the final stage (from Mach 9.8 to Mach 10), the enriched DG residual plateaus within \(15\) iterations (Figure 5) and the DG residual reaches a tolerance of \(10^{-14}\) with only \(4\) fixed-mesh Newton iterations.
The proposed framework accurately tracks the shock and the fully discrete PDE is satisfied to a tight tolerance on the final discontinuity-aligned mesh. The framework robustly handles the large Mach variation that causes the shock to move significantly from its position at the initial Mach number \(M_{\infty}=2\) to the target Mach number \(M_{\infty}=10\) (Figures 3-7). The peak pressure and temperature along the surface are also significantly larger than in the original \(M_{\infty}=2\) case, but are smooth and well-resolved nonetheless. The values and relative errors of the stagnation quantities evaluated at the stagnation point \((x_{1},x_{2})=(-1,0)\) are reported in Table 1, and are still quite small. Figure 7 shows the flow solution and mesh at all Mach numbers considered, including intermediate (partially converged) ones. In all cases, the shock is tracked with curved, high-order elements and the solution is well-resolved throughout the domain. The overall mesh quality is also well-preserved as the shock boundary is substantially compressed towards the cylinder.
Figure 3: Lead shock position (_top_), and pressure (_bottom left_) and temperature (_bottom right_) along the surface of the cylinder at \(M_{\infty}=3\) using Mach continuation starting from \(M_{\infty}=2\) with \(N=2\), \(N=5\), \(N=10\), and \(N=20\) stages.
Figure 4: Density distribution at all continuation stages (\(M_{\infty}=2,2.2,2.4,2.6,2.8,3\)) for \(N=5\) (_left_ to _right_).
Figure 6: Lead shock positions (_top_) for \(M_{\infty}=2,3,4,\ldots,10\), and pressure (_bottom left_) and temperature (_bottom right_) along the surface of the cylinder at \(M_{\infty}=10\) using Mach continuation starting from \(M_{\infty}=2\) with \(N=40\) stages.
Figure 7: Density distribution at selected continuation stages (\(M_{\infty}=2,3,4,\ldots,10\)) (_left_ to _right_).
#### 5.1.3 Inviscid flow over sphere, Mach continuation to \(M_{\infty}=3\)
Next, we consider steady \(M_{\infty}=3\) flow over a sphere using Mach continuation beginning at \(M_{\infty}=2\). At \(M_{\infty}=3\), we have the following stagnation quantities, which will be used to make a quantitative assessment of the accuracy of the HOIST method: \(p_{02}=12.061\), \(T_{0}=2\), and \(H_{0}=7\) (same as the 2D \(M_{\infty}=3\) case in Section 5.1.1). To reduce the computational cost, we model only an eighth of the geometry and use symmetry boundary conditions. Our approach begins by applying the HOIST method at \(M_{\infty}=2\) on a shock-agnostic mesh of the entire domain consisting of 491 elements and extracting the solution and mesh downstream of the bow shock to initialize our continuation strategy, resulting in a reduced mesh with 159 elements (Figure 8).
We apply our continuation strategy with \(N=10\) stages. The intermediate stages are solved with at most \(n_{1}=15\) iterations or to a tolerance of \(\xi_{1}=10^{-4}\), and the final stage is solved with at most \(n_{2}=100\) iterations or to a tolerance of \(\xi_{2}=10^{-8}\). We use the same HOIST solver parameters as in Section 5.1.1 for all intermediate stages, and we set \(\lambda=0.5\), \(\kappa_{0}=10^{-12}\) in the final stage to allow the solver to further improve the solution. The HOIST solver and the fixed-mesh solve in the final stage converge as rapidly as in the two-dimensional problems (Figure 9). Figure 10 shows the flow solution and mesh at the initial (\(M_{\infty}=2\)) and final (\(M_{\infty}=3\)) Mach numbers. We observe that the high-order elements conform to the curvature of the bow shock, which leads to a highly accurate solution on an extremely coarse mesh (only 159 quadratic elements) when combined with the high-order flow field approximation. The overall mesh quality is well-preserved as the shock boundary and mesh elements compress towards the sphere. The values and relative errors of the stagnation quantities evaluated at the stagnation point \((x_{1},x_{2},x_{3})=(-1,0,0)\) are reported in Table 1, which demonstrates the high accuracy per DoF of the overall approach for three-dimensional problems.
#### 5.1.4 Inviscid flow over double wedge, Mach continuation to \(M_{\infty}=6.8\)
Next, we consider steady \(M_{\infty}=6.8\) flow over a double-wedge geometry with angles \(15^{\circ}\) and \(35^{\circ}\) using Mach continuation beginning at \(M_{\infty}=2.8\). Our approach begins by applying the HOIST method at \(M_{\infty}=2.8\) on a shock-agnostic mesh of the entire domain consisting of 681 elements and extracting the solution and mesh downstream of the lead shock to initialize our continuation strategy, resulting in a reduced mesh with 576 elements (Figure 11). In this experiment, the shock boundary consists of an oblique shock followed by a bow shock, which introduces highly non-uniform mesh motion during the parameter sweep. In addition, this problem possesses complex shock-shock interactions and a supersonic jet downstream of the lead shock as \(M_{\infty}\) is increased.
Figure 8: Flow domain (_left_) and the \(M_{\infty}=2\) initial guess (density) on a shock-agnostic mesh (_middle-left_), the shock-aligned mesh and corresponding solution obtained using HOIST (_middle-right_), and the corresponding solution and mesh extracted from downstream of the bow shock (_right_).
Figure 10: Density distribution at the initial \(M_{\infty}=2\) (_left_) and final \(M_{\infty}=3\) (_right_) continuation stage with two views (_top to bottom_).
We apply our continuation strategy with \(N=121\) stages. The intermediate stages are solved with \(n_{1}=50\) iterations and the final stage is solved with \(n_{2}=100\) iterations. The HOIST solver parameters used for all \(\Upsilon\) evaluations are \(\lambda=10^{-1}\), \(\kappa_{0}=10^{-2}\), and \((\zeta,\upsilon)=(2,0.5)\)[23]. Figure 12 shows the flow solution and mesh at selected Mach numbers during the parameter sweep. The density range varies drastically from the initial Mach 2.8 flow to the final Mach 6.8 flow, and complex shock-shock interactions emerge. As the inflow Mach increases, a Type IV shock interaction [32] is observed and a thin layer of elements conforms to the curved supersonic jet.
\begin{table}
\begin{tabular}{c|c|c c|c c|c c}
 & \(N\) & \(\check{p}_{02}\) & \(|p_{02}-\check{p}_{02}|/p_{02}\) & \(\check{T}_{0}\) & \(|T_{0}-\check{T}_{0}|/T_{0}\) & \(\check{H}_{0}\) & \(|H_{0}-\check{H}_{0}|/H_{0}\) \\ \hline
\(M=3\), 2D & 2 & 12.0610 & 6.8939e-06 & 1.9999 & 5.9976e-05 & 6.9996 & 5.8529e-05 \\
 & 5 & 12.0588 & 1.7742e-04 & 1.9998 & 9.4091e-05 & 6.9994 & 9.2435e-05 \\
 & 10 & 12.0592 & 1.4654e-04 & 1.9999 & 5.3712e-05 & 6.9996 & 5.1601e-05 \\
 & 20 & 12.0587 & 1.9162e-04 & 1.9998 & 8.3470e-05 & 6.9994 & 8.1685e-05 \\ \hline
\(M=10\), 2D & 40 & 129.2054 & 8.9226e-05 & 15.0029 & 1.9480e-04 & 52.5104 & 1.9905e-04 \\ \hline
\(M=3\), 3D & 10 & 11.9799 & 6.7254e-03 & 1.9990 & 5.1980e-04 & 6.9966 & 4.8529e-04 \\
\end{tabular}
\end{table}
Table 1: Quantities of interest at the stagnation point produced by the HOIST method at the final continuation stage (\(\check{p}_{02}\), \(\check{T}_{0}\), \(\check{H}_{0}\)) and the corresponding relative errors for all numerical experiments.
Figure 11: Initial guess (density) on a shock-agnostic mesh of \(M_{\infty}=2.8\) flow over the double-wedge geometry (_left_), the shock-aligned mesh and corresponding solution obtained using HOIST (_middle_), and the corresponding solution and mesh extracted from downstream of the lead shock (_right_). The boundary conditions are inflow, slip walls, and outflow.
Figure 12: Density distribution for the double-wedge simulation (_left to right_): \(M_{\infty}=2.8\), \(M_{\infty}=3.77\), \(M_{\infty}=4.77\) (stage 60, before re-mesh), \(M_{\infty}=4.77\) (stage 60, after re-mesh), \(M_{\infty}=5.77\), and \(M_{\infty}=6.8\).
Due to the nature of this problem, the elements condense near the intersection of the two wedges at \((x_{1},x_{2})=(0.98,0.13)\), forming long and skinny triangles as the Mach number increases. These unavoidably crowded elements can degrade the solution quality, which necessitates re-meshing at an intermediate parameter stage. We choose to re-mesh at the middle of the parameter sweep without any attempt to preserve internal shock structures or even the shock boundary. After the re-mesh, we use the HOIST method with a free shock boundary to re-solve the stage to recover the appropriate lead shock position and internal shock structure. As shown in Figure 13, the initial re-meshed configuration of 837 elements is shock-agnostic with respect to the shock interactions downstream of the lead shock, and the tracked configuration of 809 elements resolves those solution features.
Despite the lead shock being accurately tracked, we observe that the complex shock interactions are poorly resolved and the contact discontinuity is not well-tracked on the coarse mesh, as shown in Figure 14. This is due to the low mesh element density in the shock interaction region. As such, we refine the mesh at stage 60 (immediately after the re-mesh and solve), which leads to a grid with 2846 elements, and use the HOIST method to compute the corresponding refined solution (Figure 13). From this mesh and solution, the Mach continuation strategy continues until \(M_{\infty}=6.8\) is reached with \(n_{1}=30\). While the lead shock and solution are visually indistinguishable, a close look at the shock interaction region shows the refined mesh tracks and resolves the Type IV interaction and contact much better. Furthermore, the refined mesh compresses more elements into the supersonic jet to better resolve it. Finally, we show the positions of the lead shock at selected parameter stages (Figure 15) and note the re-mesh at stage 60 did not disrupt the lead shock location because the shock positions overlap (1) before and after the re-mesh at stage 60 and (2) on the coarse and fine mesh at \(M_{\infty}=6.8\).
To close this study, we justify the reduced-mesh Mach continuation approach relative to directly applying the HOIST method at the parameter of interest. For this, we return to the full domain and use HOIST to directly solve the problem at \(M_{\infty}=5.8\) (stage 91). In this setting, we observe poorly conditioned elements upstream of the lead shock, numerous element collapses near the beginning of the first wedge, an inaccurate lead shock position, and under-resolved shock interactions (Figure 16). These issues can be avoided by either using a refined mesh or the proposed continuation approach (Figure 12).
### 5.2 Initial condition sweep
To demonstrate the flexibility of the proposed many-query framework beyond Mach continuation, we close by considering a parametrized Riemann problem of the Euler equations.
Figure 13: Density distributions at \(M_{\infty}=4.77\) (stage 60) for double-wedge simulation: before re-mesh (_left_), after re-mesh with a piecewise constant initial guess for the flow solution (_middle-left_), the HOIST solution after re-mesh (_middle-right_), and the HOIST solution after refinement of new mesh (_right_).
Figure 14: Density distribution at \(M_{\infty}=6.8\) for the double-wedge simulation: coarse mesh (_left_), coarse mesh with shock interaction blown up (_middle-left_), refined mesh (_middle-right_), and refined mesh with shock interaction blown up (_right_). Colorbar in Figure 12.
Figure 15: Lead shock positions for the double-wedge simulation: \(M_{\infty}=2.8\), \(M_{\infty}=3.77\), \(M_{\infty}=4.77\) (stage 60, before re-mesh), \(M_{\infty}=4.77\) (stage 60, after re-mesh), \(M_{\infty}=5.77\), \(M_{\infty}=6.8\) (final stage, coarse mesh), and \(M_{\infty}=6.8\) (final stage, fine mesh).
Figure 16: Initial guess (density) on a shock-agnostic mesh (full domain) of \(M_{\infty}=5.8\) flow over the double-wedge geometry (_left_) and the HOIST solution (_right_). Mesh motion from the shock-upstream portion of the domain introduces poorly conditioned elements.
The Euler equations model unsteady, compressible flow in a one-dimensional domain \(\Omega\subset\mathbb{R}\) and read
\[\begin{aligned}
\frac{\partial}{\partial t}\rho(x,t)+\frac{\partial}{\partial x}\left(\rho(x,t)v(x,t)\right)&=0,\\
\frac{\partial}{\partial t}\left(\rho(x,t)v(x,t)\right)+\frac{\partial}{\partial x}\left(\rho(x,t)v(x,t)^{2}+P(x,t)\right)&=0,\\
\frac{\partial}{\partial t}\left(\rho(x,t)E(x,t)\right)+\frac{\partial}{\partial x}\left(\left[\rho(x,t)E(x,t)+P(x,t)\right]v(x,t)\right)&=0,
\end{aligned}\tag{31}\]
for all \(x\in\Omega\) and \(t\in\mathcal{T}\), where \(\mathcal{T}\coloneqq(0,T]\) is the temporal domain and \(T\in\mathbb{R}_{>0}\) is the final time. The density \(\rho:\Omega\times\mathcal{T}\rightarrow\mathbb{R}_{>0}\), velocity \(v:\Omega\times\mathcal{T}\rightarrow\mathbb{R}\), and total energy \(E:\Omega\times\mathcal{T}\rightarrow\mathbb{R}_{>0}\) of the fluid are implicitly defined as the solution of (31). We assume the fluid is an ideal gas, which leads to the relationship between pressure and energy in (29). Finally, the Euler equations can be written as a general system of steady conservation laws (1) over the space-time domain \(\Omega\times\mathcal{T}\) with space-time coordinate \(z=(x,t)\) as
\[U(z)=\begin{bmatrix}\rho(x,t)\\ \rho(x,t)v(x,t)\\ \rho(x,t)E(x,t)\end{bmatrix},\qquad F(U(z))=\begin{bmatrix}\rho(x,t)v(x,t)& \rho(x,t)\\ \rho(x,t)v(x,t)^{2}+P(x,t)&\rho(x,t)v(x,t)\\ (\rho(x,t)E(x,t)+P(x,t))v(x,t)&\rho(x,t)E(x,t)\end{bmatrix},\qquad S(U(z))=0.\]
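For concreteness, a minimal sketch of this space-time flux follows; the pressure closure assumes the standard ideal-gas relation \(P=(\gamma-1)(\rho E-\tfrac{1}{2}\rho v^{2})\) with \(\gamma=1.4\), since the text's Eq. (29) is not reproduced here:

```python
import numpy as np

gamma = 1.4  # ideal gas; the pressure closure below is an assumption

def spacetime_flux(U):
    """Space-time flux F(U) for the 1D Euler equations; columns correspond
    to the (x, t) directions of the space-time coordinate z = (x, t)."""
    rho, mom, erg = U                 # U = [rho, rho*v, rho*E]
    v = mom / rho
    P = (gamma - 1.0) * (erg - 0.5 * rho * v**2)
    return np.array([[mom,           rho],
                     [mom * v + P,   mom],
                     [(erg + P) * v, erg]])
```

The first column is the usual spatial Euler flux and the second is the state itself, which is what makes the unsteady problem a steady conservation law in space-time.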
We consider a Riemann problem of the Euler equations (31) with the following parametrized initial condition
\[(\rho(x,0),v(x,0),P(x,0))=\begin{cases}(\rho_{L},0,P_{L}(\mu))&x<0.5\\ (\rho_{R}(\mu),0,P_{R}(\mu))&x\geq 0.5,\end{cases} \tag{32}\]
where \(\mu\in\mathcal{D}\coloneqq[0,1]\) and
\[P_{L}(\mu)=25\mu+(1-\mu),\qquad\rho_{R}(\mu)=0.35\mu+0.125(1-\mu),\qquad P_{ R}(\mu)=0.075\mu+0.1(1-\mu) \tag{33}\]
over the spatial domain \(\Omega\coloneqq(0,1)\) and temporal domain \(\mathcal{T}=(0,0.2]\). At \(\mu=0\), this corresponds to the canonical Sod shock tube, and it possesses similar features to the Woodward-Colella blast [38] at \(\mu=1\). The discontinuity magnitudes and propagation speeds of the waves vary significantly for \(\mu\in\mathcal{D}\). Because larger values of \(\mu\) will have faster waves, we allow the final time to vary with \(\mu\) such that the waves travel approximately the same distance in \(\mathcal{T}\). That is, we let \(T=T(\mu)\) with \(T(0)=0.2\), and the final time for other parameters is determined using the HOIST framework.
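A minimal sketch of the parametrized initial state (32)-(33); the left density \(\rho_{L}=1\) is an assumption (it is not parametrized in (33) and equals the Sod value at \(\mu=0\)):

```python
# Parametrized Riemann initial condition, Eqs. (32)-(33); rho_L = 1 assumed.
def riemann_ic(x, mu):
    P_L = 25 * mu + (1 - mu)
    rho_R = 0.35 * mu + 0.125 * (1 - mu)
    P_R = 0.075 * mu + 0.1 * (1 - mu)
    if x < 0.5:
        return (1.0, 0.0, P_L)    # left state (rho, v, P)
    return (rho_R, 0.0, P_R)      # right state

print(riemann_ic(0.25, 0.0), riemann_ic(0.75, 0.0))  # Sod states at mu = 0
```

At \(\mu=0\) this returns the classic Sod states \((1,0,1)\) and \((0.125,0,0.1)\), while \(\mu=1\) gives the much stronger, blast-like configuration.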
Following the proposed procedure, we begin with a shock-agnostic mesh of the space-time domain consisting of 82 triangles and apply the HOIST method at \(\mu=0\) (Sod) to create a shock-aligned mesh and the corresponding solution. We use linear (\(q=1\)) mesh elements and quadratic (\(p=2\)) solution approximation because all non-smooth features are straight-sided for any \(\mu\in\mathcal{D}\) and the solution inside the rarefaction is nonlinear. At \(\mu=0\), the solution possesses a shock wave, contact discontinuity, and rarefaction wave, with both the shock wave and head of the rarefaction qualifying as lead "shocks" (non-smooth features separating a boundary from the remainder of the flow). Finally, we remove all elements left of the rarefaction and right of the shock wave, except that one layer of upstream elements is retained (not shown in the figure for clarity) to improve boundary condition enforcement, to create a reduced mesh of 42 elements (Figure 17).
From this configuration, we perform a parameter sweep to reliably compute the HOIST solution at \(N=125\) uniformly spaced samples in \(\mathcal{D}\) (Figure 18). The HOIST solver parameters used for all \(\Upsilon\) evaluations are \(\lambda=10^{-1}\), \(\kappa_{0}=10^{-6}\), and \((\zeta,\upsilon)=(2,0.5)\). To handle the parameter-dependent final time \(T(\mu)\), we allow the HOIST solver to determine the location of the top boundary (\(t=T\)). Figure 18 shows four selected parameter configurations. The upper temporal boundary adjusts to ensure the waves travel roughly the same spatial distance. At \(\mu=0\), the speed of the shock is approximately \(1.75\) and the density jump magnitude is about \(2\), whereas at \(\mu=1\), the shock speed is \(5.3\) and the density jump is \(5.7\). Finally, it can be seen that for larger values of \(\mu\), the speeds of the shock and contact are close, which makes the constant-state wedge between the shock and contact slender, with a sharp angle. We have observed that thin regions like this can be difficult to directly track from a shock-agnostic mesh, often requiring a much finer mesh so elements can fit to the region without being collapsed. The proposed many-query setting alleviates this burden by initializing these difficult situations from nearly aligned grids (from a HOIST solve at a different parameter configuration).
Finally, we verify the HOIST solution at the extremal parameters (\(\mu=0,1\)). At \(\mu=0\) (Sod) we compare to the exact solution at the final time \(T(\mu=0)\) and at \(\mu=1\) we compare to a highly refined second-order finite volume simulation at the final time \(T(\mu=1)=0.039\). At both parameters, the HOIST solution matches the reference solution well with the shock, contact, and head and tail of the rarefaction tracked (Figure 19). This also shows the dramatic variation the solution undergoes as the parameter varies throughout \(\mathcal{D}\).
Figure 17: Initial guess (density) on a shock-agnostic space-time mesh for the Riemann problem at \(\mu=0\) (_top_), the shock-aligned mesh and corresponding solution obtained using HOIST (_middle_), and the corresponding solution and mesh extracted from the region to the left of the rarefaction head and right of the shock (_bottom_).
Figure 18: Space-time density distribution at \(\mu=0,0.6,0.8,1.0\) (_top to bottom_).
## 6 Conclusion
We introduce a specialized version of the high-order implicit shock tracking framework, originally developed in [42; 23], for problems with parametrized lead shocks. The approach applies implicit shock tracking (e.g., HOIST [42; 23] or MDG-ICE [14]) on a shock-agnostic mesh at one parameter configuration of interest to generate a shock-aligned mesh and the corresponding flow field. All elements upstream of the lead shock are removed to produce a reduced mesh that will be used for all subsequent parameter configurations, with the farfield boundary condition directly applied on the shock boundary. In addition to the significant reduction in elements (up to a factor of three in this work), further reduction in the mesh DoFs is possible when the lead shock is the only non-smooth feature in the domain because only nodes on the shock boundary are optimized; the remainder are determined from boundary constraints and PDE-based smoothing. As a result, the shock boundary deforms as the parameters change and the remaining nodes are positioned to improve element quality. In addition to reducing the overall degrees of freedom of the implicit shock tracking discretization, this also improves robustness and accelerates convergence because the overall shock-fitting problem is easier and a high-quality initial guess is provided from the solution at previous parameter configurations. The proposed framework can be used for most _many-query_ applications involving parametrized lead shocks such as optimization, uncertainty quantification, parameter sweeps, "what-if" scenarios, or parameter-based continuation.
In this work, we use the abstract many-query setting for Mach number continuation in steady inviscid flows and an initial condition sweep for one-dimensional Riemann problems. For continuation, we leverage partially converged solves at intermediate stages to improve the efficiency of the approach. A set of numerical experiments, which include two- and three-dimensional supersonic and hypersonic flows, demonstrate the robustness and flexibility of the proposed framework; in particular, the same HOIST solver parameters can be used throughout the continuation process despite substantial variations in Mach number (from \(M_{\infty}=2\) to \(M_{\infty}=10\)) and initial state. The framework was also shown to facilitate shock tracking for complex flow features downstream of the lead shock, as demonstrated by the flow over the double wedge where the shock interactions and supersonic jet were tracked and well-resolved by the approach.
## Acknowledgments
This work is supported by AFOSR award numbers FA9550-20-1-0236, FA9550-22-1-0002, FA9550-22-1-0004, and ONR award number N00014-22-1-2299. The content of this publication does not necessarily reflect the position or policy of any of these supporters, and no official endorsement should be inferred.
Figure 19: Density slices at final time for \(\mu=0\) (_left_) and \(\mu=1\) (_right_) for the HOIST solution and the reference solution. |
2303.04933 | The B & V Light Curves for Recurrent Nova T CrB From 1842--2022, the
Unique Pre- and Post-Eruption High-States, the Complex Period Changes, and
the Upcoming Eruption in 2025.5$\pm$1.3 | T CrB is one of the most-famous and brightest novae known, and is a recurrent
nova with prior eruptions in 1866 and 1946 that peak at $V$=2.0. I have
constructed light curves spanning 1842--2022 with 213,730 magnitudes, where the
$B$ and $V$ magnitudes are fully corrected to the Johnson system. These light
curves first reveal a unique complex high-state (with 20$\times$ higher
accretion rate than the normal low-state) stretching from -10 to +9 years after
eruption, punctuated with a deep pre-eruption dip (apparently from dust
formation in a slow mass ejection) and a unique enigmatic secondary eruption
(with 10 per cent of the energy of the primary eruption), with the light curves
identical for the 1866 and 1946 eruptions. Starting in 2015, T CrB entered the
high-state, like in 1936, so a third eruption in upcoming years has been widely
anticipated. With the pre-1946 light curve as a template, I predict a date of
2025.5$\pm$1.3 for the upcoming eruption, with the primary uncertainty arising
from a possible lengthening of the pre-eruption high-state. I use the
large-amplitude ellipsoidal modulation to track the orbital phase of the binary
from 1867--2022. I measure that the orbital period increased abruptly by
$+$0.185$\pm$0.056 days across the 1946 eruption, the 1947--2022 years had a
steady period decrease of ($-$8.9$\pm$1.6)$\times$10$^{-6}$ days-per-day, and
the 1867--1946 years had a steady period change consistent with zero, at
($+$1.75$\pm$4.5)$\times$10$^{-6}$ days-per-day. These large period changes
cannot be explained by any published mechanism. | Bradley E. Schaefer | 2023-03-08T23:07:15Z | http://arxiv.org/abs/2303.04933v1 | The \(B\) & \(V\) Light Curves for Recurrent Nova T CrB From 1842-2022, the Unique Pre- and Post-Eruption High-States, the Complex Period Changes, and the Upcoming Eruption in 2025.5\(\pm\)1.3
###### Abstract
T CrB is one of the most-famous and brightest novae known, and is a recurrent nova with prior eruptions in 1866 and 1946 that peak at \(V\)=2.0. I have constructed light curves spanning 1842-2022 with 213,730 magnitudes, where the \(B\) and \(V\) magnitudes are fully corrected to the Johnson system. These light curves first reveal a unique complex high-state (with 20\(\times\) higher accretion rate than the normal low-state) stretching from -10 to +9 years after eruption, punctuated with a deep pre-eruption dip (apparently from dust formation in a slow mass ejection) and a unique enigmatic secondary eruption (with 10 per cent of the energy of the primary eruption), with the light curves identical for the 1866 and 1946 eruptions. Starting in 2015, T CrB entered the high-state, like in 1936, so a third eruption in upcoming years has been widely anticipated. With the pre-1946 light curve as a template, I predict a date of 2025.5\(\pm\)1.3 for the upcoming eruption, with the primary uncertainty arising from a possible lengthening of the pre-eruption high-state. I use the large-amplitude ellipsoidal modulation to track the orbital phase of the binary from 1867-2022. I measure that the orbital period increased abruptly by +0.185\(\pm\)0.056 days across the 1946 eruption, the 1947-2022 years had a steady period decrease of (\(-\)8.9\(\pm\)1.6)\(\times\)10\({}^{-6}\) days-per-day, and the 1867-1946 years had a steady period change consistent with zero, at (+1.75\(\pm\)4.5)\(\times\)10\({}^{-6}\) days-per-day. These large period changes cannot be explained by any published mechanism.
keywords: stars: evolution - stars: variables - stars: novae, cataclysmic variables - stars: individual: T CrB
## 1 Introduction
T Coronae Borealis (T CrB) is a famous recurrent nova (RN), with very fast classical nova eruptions in 1866 and 1946 (Payne-Gaposchkin, 1964). T CrB in 1866 was the first well-observed nova and the first with spectroscopy (Huggins, 1866). T CrB peaks at 2.0 mag, making its 1946 eruption the brightest nova event from 1943-2022. In quiescence, T CrB is by far the brightest of all known novae, with an average quiescent magnitude of 9.8 mag, and this allows for effectively continuous coverage of its light curves from 1866 to present, with an average coverage of once-every-6-hours (even through its yearly solar conjunction) ever since 1946. The companion star to the white dwarf in T CrB is a red giant, M4 III, which dominates the optical and infrared spectrum, all with an orbital period of 227.5687\(\pm\)0.0099 days (Kenyon and Garcia, 1986; Leibowitz, Ofek, & Mattei, 1997; Fekel et al., 2000). The year-to-year light curve is dominated by ellipsoidal modulations at half the orbital period with a typical full-amplitude of 0.3 mag, although rapid flickering at the 0.3 mag level is ubiquitous.
T CrB is also unique for having two separate eruptions, spaced half-a-year apart with an intervening interval of 80 days stably at the pre-eruption brightness level. The main eruption has a fast rise, a peak at \(V\)=2.0 mag, and a duration of 6 days within 3 mags of the peak, while the secondary event has a fast rise, a peak at \(V\)=8.0 mag, and a FWHM duration of 90 days. The second-eruptions contain a substantial fraction of the radiative energy of the primary-eruptions. This double-event was seen identically in 1866 and 1946. These unprecedented second-eruptions have had little recognition in the literature, for which I am aware of no plausible explanation. So here we have a highly-energetic new mode of nova eruptions that provides a mystery, as a challenge to theorists.
T CrB also has a unique and complex set of high-states, lasting roughly from \(T_{\rm eruption}\)-10 to \(T_{\rm eruption}\)+9 years, with year-long transitions between the normal quiescent low-state and the high-state (Schaefer, 2014). The high-state is prominent in blue light, with an amplitude of 1.4 mag (Schaefer, 2014), while the spectrum is dominated by the addition of high-ionization emission lines (Payne-Gaposchkin, 1964). This high-state has a radiative energy equal to that of the primary-eruption. The post-eruption high-state light curve is identical between the 1866 and 1946 eruptions. The recognition of the pre-eruption dip (the fast and deep dimming of T CrB in the months before the main eruption) was by L. Peltier in 1945, and he correctly interpreted this as a sign of an imminent eruption (Peltier, 1945). The existence and details of the high-state were first recognized by Schaefer (2014), with this only being made possible by my construction of light curves in \(B\) and \(V\) with 102,000 magnitudes from 1855 to 2013, where no one had previously constructed an adequately long and well-calibrated data set. I know of no attempts to explain this complex high-state.
I am struck by the difficulty of explaining how the rise-and-fall of the light curve in the pre-eruption high-state can _anticipate_ the upcoming classical nova event. So we have another T CrB mystery, one that dominates the energetics, as a challenge for theorists.
RNe are classical novae with recurrence time-scales (\(\tau_{\rm rec}\)) shorter than 100 years. Only 10 systems are now known in our Milky Way Galaxy with multiple discovered eruptions separated by less than 100 years (Schaefer, 2010), while one other system has a recurrence time-scale of 40-50 years (Schaefer et al., 2022). The last two eruptions of T CrB were separated by 80 years, so the simplistic idea is that the next eruption will be around the year 1946+80, or 2026. This schematic prediction has been common knowledge at least since my undergraduate days in the 1970s. The expected accuracy of this calculation is poor, since the observed ratio of longest-to-shortest recurrence time-scales is a factor of 2.1\(\times\) for U Sco, 3.7\(\times\) for T Pyx, and 2.9\(\times\) for RS Oph. With the discovery of the anticipatory pre-eruption high-state and dip, a new possibility opened up for a means to get an accurate prediction of the upcoming eruption (Schaefer, 2014). Then, in the year 2015, the American Association of Variable Star Observers (AAVSO) \(B\) light curve had a sharp transition to a high-state, with the morphology of the \(B\) and \(V\) light curves being closely similar to that around 1938. The 2015 transition to a high-state was first recognized photometrically and spectroscopically by Munari, Dallaporta, & Cherini (2016), who called attention to this as being similar to the transition in 1936. Assuming that the light curve of the pre-eruption high-state is similar from eruption-to-eruption, Schaefer (2019) predicted the next eruption for 2023.6\(\pm\)1.0. In a follow-up with a subset of my data, Luna et al. (2020) predict that the eruption will be in 2023-2026. Currently, there is a widespread anticipation that T CrB will erupt soon.
## 2 Light Curve From 1842-2022
A primary purpose of this paper is to construct a complete light curve, in the modern Johnson \(B\) and \(V\) systems, from 1842-2022. Table 1 contains a listing of the observers and their details. Individual magnitudes are explicitly listed in Table 2 for the visual observations in Section 2.1, in Table 3 for the photographic magnitudes from archival plates, in Table 4 for the photoelectric and CCD observations from the literature in Section 2.3, and in Table 5 for the collected observations from amateur observers worldwide in Section 2.4. Figure 1 shows the overall plot of the 1842-2022 \(B\) and \(V\) light curves.
### 2.1 Visual Observations
Visual magnitude measures are the traditional method by which an observer directly compares the brightness of the target star to that of nearby comparison stars. With a sequence of comparison stars, the observer places the target's brightness as some fraction between the nearest comparison stars that are just-fainter and just-brighter. With a knowledge of the adopted magnitudes for the sequence stars, the target's magnitude is then just the corresponding fractional interpolation between these comparison star magnitudes. With moderate practice plus good nearby comparison stars, a 1-sigma photometric accuracy of 0.20 mag is easily obtained (Stanton, 1999). The most experienced observers can consistently get 0.10 or even 0.07 mag accuracy.
Visual magnitudes constitute 89 per cent of the magnitudes in Fig. 1. Most of the science in this paper is possible only because of the vast numbers and complete time coverage of the visual measures. I have collected 116,844 visual magnitudes for T CrB, as itemized in Table 1. The bulk of the visual magnitudes are from the AAVSO International Database (AID), all from 1939 and later, and are considered separately in Section 2.4. Here, I will only look at the 2621 visual magnitudes not in the AID (see Table 2). The sources are mainly in widely scattered published papers (mostly with amateur observers), and in manuscripts now stored in various archives in Germany, England, and the US. The entire \(V\) light curve from 1842-1939 and the primary measures of both eruption light curves are derived from these visual observations.
All of the visual magnitudes should be converted to \(V\). Visual magnitudes have a spectral sensitivity similar to that of Johnson \(V\) magnitudes, by intentional construction. This means that visual and \(V\) magnitudes are similar, yet with significant systematic offsets. This conversion is based on the massive study of Stanton (1999), involving 63 observers with a wide range of ages, experiences, and equipment, including three colour-blind people. The conversion is simply
\[V=m_{\rm vis}-0.210(B-V), \tag{1}\]
where \(m_{\rm vis}\) is the visual magnitude as reported by a visual observer, the target colour index is the usual \(B-V\), and \(V\) is the Johnson \(V\) measure. For the case of T CrB, the average colour in quiescence is \(B-V\)=1.22 mag (Bruch & Engel, 1994) with only small variations. During eruption, the \(B-V\) can be found from the light curves in Schaefer (2010). We have no direct measure of the colour at peak light, but this is \((B-V)_{\rm peak}\)=+0.11 mag for apparently all classical novae (Schaefer, 2022c), while T CrB has near-zero extinction due to its high Galactic latitude and small distance. So we now have a method for converting from the observed \(m_{\rm vis}\) to \(V\).
The observers almost always report their magnitudes where they have already worked out the visual magnitude in comparison with their sequence. A ubiquitous trouble is that the observers' adopted magnitudes for their comparison sequences are skewed from the Johnson \(V\) magnitudes, often by simple rounding of the sequence star magnitudes to the nearest tenth, and often by systematic errors that can get up to a magnitude in size. Another ubiquitous trouble is that the adopted magnitudes and the catalogued magnitudes are never in the visual magnitude system. The general solution is presented in Johnson et al. (2014). The first step is to look up the Johnson \(B\) and \(V\) magnitudes for all the sequence stars, and use Eq. 1 to calculate \(m_{\rm vis}\) for each star. The second step is to calculate the fraction by which the reported magnitude is between the adopted magnitudes for the two stars just-brighter and just-fainter than the target, with this backtracking the observer's calculation when they reported the magnitude. The third step is to take this fraction to be the observed fraction that the target is between the visual magnitudes of the two comparison stars. The fourth step is to equate the two equations for the fraction, and solve for the visual magnitude of the target. The fifth step is to convert the target visual magnitude to its \(V\) magnitude. The resulting equation is
\[V=m_{b}+[(m_{f}-m_{b})(\mu-\mu_{b})/(\mu_{f}-\mu_{b})]-0.21\times(B-V). \tag{2}\]
Here, \(m_{b}\) and \(m_{f}\) are the visual magnitudes of the sequence stars just brighter-than and just fainter-than the target, with these being derived from the catalogue \(B\) and \(V\) for the sequence stars, along with an application of Eq. 1. The sequence has adopted the magnitudes \(\mu_{b}\) and \(\mu_{f}\) for these stars. Based on this sequence, the observer makes a judgment on the relative brightness of the target and reports a magnitude \(\mu\). The \(B-V\) is the colour of the target star. Fortunately, essentially all published reports of visual observations explicitly identify their comparison stars (either by a chart or by coordinates), and they explicitly state their adopted magnitudes for each sequence star.
\begin{table}
\begin{tabular}{l l l l l l}
\hline \hline
Observer & Years & Start (JD) & Band & Count & Source \\
\hline
J. F. W. Herschel & 1842 & 2393996 & _Vis.\(\rightarrow\)V_ & 1 & Schaefer (2013) \\
F. Argelander & 1855--1856 & 2398721 & _Vis.\(\rightarrow\)V_ & 2 & _Bonner Durchmusterung_, c.f. Schoenfeld (1875) \\
J. F. J. Schmidt & 1866--1879 & 2402734 & _Vis.\(\rightarrow\)V_ & 936 & Schmidt (1877; 1879) \\
J. Birmingham & 1866 & 2402734 & _Vis.\(\rightarrow\)V_ & 1 & Birmingham (1866) \\
Courtebaisse & 1866 & 2402734 & _Vis.\(\rightarrow\)V_ & 1 & Stone (1866) \\
S. C. Chandler & 1866 & 2402736 & _Vis.\(\rightarrow\)V_ & 17 & Gould (1866a; 1866b) \\
C. H. Davis & 1866 & 2402736 & _Vis.\(\rightarrow\)V_ & 15 & Davis (1866) \\
J. Baxendell & 1866--1869 & 2402737 & _Vis.\(\rightarrow\)V_ & 101 & Baxendell (1866--1869) \\
J. Carpenter & 1866 & 2402739 & _Vis.\(\rightarrow\)V_ & 2 & Stone (1866) \\
F. Bird & 1866 & 2402740 & _Vis.\(\rightarrow\)V_ & 18 & Bird (1866) \\
W. R. Dawes & 1866 & 2402740 & _Vis.\(\rightarrow\)V_ & 5 & Dawes (1866a; 1866b) \\
E. J. Stone & 1866 & 2402740 & _Vis.\(\rightarrow\)V_ & 7 & Stone (1866) \\
E. Schoenfeld & 1866--1875 & 2402744 & _Vis.\(\rightarrow\)V_ & 468 & Campbell (1920); Valentiner (1900) \\
C. Behrmann & 1866 & 2402746 & _Vis.\(\rightarrow\)V_ & 5 & Behrmann (1866) \\
T. W. Backhouse & 1866--1916 & 2402751 & _Vis.\(\rightarrow\)V_ & 429 & Backhouse (1905; 1916) \\
A. Krueger & 1866--1867 & 240286 & _Vis.\(\rightarrow\)V_ & 12 & Heis \& Krueger (1903) \\
H. M. Parkhurst & 1884, 1892 & 2403975 & _Vis.\(\rightarrow\)V_ & 3 & Parkhurst (1890) \\
Harvard & 1890--1962 & 2411572 & \(B\) & 896 & Harvard plates (this paper) \\
T. E. Espin & 1893--1899 & 2412563 & _Vis.\(\rightarrow\)V_ & 2 & Espin (1893; 1900) \\
J. Holetschek & 1896--1909 & 2413869 & _Vis.\(\rightarrow\)V_ & 14 & Holetschek (1907, 1912) \\
E. E. Barnard & 1906 & 2417451 & _Vis.\(\rightarrow\)V_ & 2 & Barnard (1907) \\
E. Zinner & 1913--1935 & 2419975 & _Vis.\(\rightarrow\)V_ & 40 & Ferrari (1935) \\
W. H. Steavenson & 1925--1947 & 2424360 & _Vis.\(\rightarrow\)V_ & 82 & Steavenson (1926--1948) \\
K. Ferrari & 1929--1935 & 2425687 & \(B\) & 82 & Bamberg plates (Ferrari 1935) \\
Bamberg & 1932--1939 & 2426868 & \(B\) & 16 & Bamberg plates (this paper) \\
S. Bohme & 1935--1938 & 2427932 & \(B\) & 57 & Bamberg plates (Bohme 1938) \\
K. Himpel & 1936--1938 & 2428248 & _Vis.\(\rightarrow\)V_ & 153 & Himpel (1938a; 1938b) \\
Sonneberg & 1936--2000 & 2428422 & \(B\) & 692 & Sonneberg plates (this paper) \\
1515 AID observers & 1939--2022 & 2429382 & _Vis.\(\rightarrow\)V_ & 114203 & AAVSO AID\({}^{a}\) \\
A. Deutsch/W. W. Morgan & 1946 & 2431860 & _Vis.\(\rightarrow\)V_ & 17 & Morgan \& Deutsch (1947) \\
N. F. H. Knight & 1946 & 2431860 & _Vis.\(\rightarrow\)V_ & 2 & Knight (1946) \\
E. Pettit & 1946--1950 & 2431861 & _Vis.\(\rightarrow\)V_ & 175 & Pettit (1946; 1950) \\
J. Ashbrook & 1946 & 2431861 & _Vis.\(\rightarrow\)V_ & 54 & Ashbrook (1946) \\
K. C. Gordon \& G. E. Kron & 1946 & 2431861 & \(B\) & 14 & Gordon \& Kron (1979) \\
K. C. Gordon \& G. E. Kron & 1946 & 2431862 & \(V\) & 10 & Gordon \& Kron (1979) \\
M. Ch. Bertaud & 1946--1947 & 2431865 & _Vis.\(\rightarrow\)V_ & 58 & Bertaud (1947) \\
R. Weber & 1946--1961 & 2431934 & \(B\) & 124 & Weber (1961) \\
113 AID observers & 1973--2022 & 2441832 & \(V\) & 7368 & AAVSO AID\({}^{a}\) \\
H. C. Lines et al. & 1981--1983 & 2444715 & \(V\) & 83 & Lines et al. (1988) \\
H. C. Lines et al. & 1982--1983 & 2445141 & \(B\) & 57 & Lines et al. (1988) \\
D. Raikova \& A. Antov & 1985 & 2446200 & \(B\) & 16 & Raikova \& Antov (1986) \\
D. Raikova \& A. Antov & 1985 & 2446200 & \(V\) & 16 & Raikova \& Antov (1986) \\
L. Hric et al. & 1990--1997 & 2447969 & \(B\) & 88 & Hric et al. (1998) \\
L. Hric et al. & 1990--1997 & 2447969 & \(V\) & 87 & Hric et al. (1998) \\
R. Zamanov et al. & 1991--1999 & 2448321 & \(B\) & 95 & Zamanov \& Zamanov (1997); Zamanov et al. (2004) \\
R. Zamanov et al. & 1991--1999 & 2448321 & \(V\) & 95 & Zamanov \& Zamanov (1997)\({}^{b}\) \\
ASAS & 2003--2009 & 2452689 & \(V\) & 236 & Pojmanski (1997)\({}^{b}\) \\
52 AID observers & 2004--2022 & 2453075 & \(B\) & 3937 & AAVSO AID\({}^{a}\) \\
U. Munari et al. & 2006--2015 & 2453867 & \(B\) & 204 & Munari et al. (2016) \\
U. Munari et al. & 2006--2015 & 2453867 & \(V\) & 205 & Munari et al. (2016) \\
APASS & 2012 & 2455989 & \(B\) & 10 & AAVSO APASS\({}^{c}\) \\
_TESS_ Sector 24 & 2020 & 2458955 & TESS & 16119 & _TESS_ SPOC from MAST\({}^{d}\) \\
_TESS_ Sector 25 & 2020 & 2458983 & TESS & \\
\end{tabular}
\end{table}
The end result is a confident conversion from visual magnitudes to the Johnson \(V\) magnitude system.
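A minimal sketch of this conversion (Eqs. 1-2); the comparison-star values in the usage example are illustrative placeholders, not a real T CrB sequence:

```python
# Eq. (1) inverted: visual magnitude of a star with catalogue Johnson B, V.
def vis_from_catalogue(V_cat, BV_cat):
    return V_cat + 0.210 * BV_cat

# Eq. (2): Johnson V of the target from a reported visual magnitude mu,
# the observer's adopted sequence magnitudes (mu_b, mu_f), and the
# catalogue-derived visual magnitudes (m_b, m_f) of the same two stars.
def visual_to_V(mu, mu_b, mu_f, m_b, m_f, BV_target=1.22):
    frac = (mu - mu_b) / (mu_f - mu_b)    # observer's implied fraction
    m_vis = m_b + frac * (m_f - m_b)      # target's visual magnitude
    return m_vis - 0.210 * BV_target      # back to Johnson V via Eq. (1)

# Illustrative placeholders: catalogue stars with V = 9.5, B-V = 0.6 and
# V = 10.2, B-V = 1.0, charted by the observer as 9.6 and 10.1; the
# observer reported 9.8 for T CrB (B-V = 1.22 in quiescence).
m_b = vis_from_catalogue(9.5, 0.6)
m_f = vis_from_catalogue(10.2, 1.0)
print(round(visual_to_V(9.8, 9.6, 10.1, m_b, m_f), 2))
```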
The first observation in my light curve is that of Sir John Herschel on 1842 June 9, as part of his visual survey of the entire sky. He noted a star near the position of T CrB, and when the nova erupted in 1866, he claimed that T CrB was near 6 mag in 1842 (Herschel 1866). If correct, then T CrB had an eruption in 1842. This identification has been questioned by McLaughlin (1939) due to the published chart showing the star at the position of a nearby star (HD 144287) with a stable \(V\)=7.06. But such a star could not have been seen with Herschel's nominal naked-eye charting. In this ambiguous situation, I have made an exhaustive examination of Herschel's correspondence, notebooks, diary, and papers (Schaefer 2013). I found a chart made by Herschel with star positions guaranteed by pinpricking his original chart, sent in a letter to W. Huggins (dated 1866 May 19), and see that the position of the star is that of HD 144287 (and inconsistent with T CrB). Further, I found Herschel's notes telling of his use of an opera glass so as to get to 7 mag. So there was no 1842 eruption, and Herschel's observation is actually a limit on T CrB of \(V\)\(>\)7.06.
The great Leslie Peltier started regularly monitoring T CrB in 1920, before the existence of RNe was known, on a general idea that it would have another nova event. For composite light curve purposes, I cannot use any of his 236 step estimates from 1920-1933 (which I have only found in letters saved in the AAVSO files), nor his 249 magnitude measures from 1933-1946.20. One basis for this exclusion is that 180 out of 249 of his magnitudes are reported as exactly 9.8 mag (233 are 9.7-9.9 mag), with T CrB not behaving as such. Peltier's magnitudes disagree by up to a magnitude with everyone else in the world. And Peltier's comparison star sequence (given in a private letter to Leon Campbell dated 3 June 1923) has differences from the modern \(V\) that range from -0.02 to -1.68 mag for stars over the narrow magnitude range of 9.2-9.76. Seventeen days after the end of this series, Peltier discovered the pre-eruption dip, realized that it meant an eruption was imminent, and announced it to the world (Peltier, 1945). Peltier's long vigil on T CrB ('We had been friends for many years; on thousands of nights I had watched over it as it slept') just barely missed the discovery of the 1946 eruption, and then 'There is no warmth between us anymore' (Peltier, 1965).
In the end, I have 2621 visual magnitude measures that have been converted to the Johnson \(V\) system. As the original sources for the magnitudes did not provide error estimates, \(\pm\)0.2 mag is adopted.
Figure 1: Light curve for T CrB. All \(B\)- and \(V\)-band observations are shown by the blue and green circles respectively. We see the two eruptions in 1866 and 1946, reaching up to \(V\)=2.0. The scatter seen in any year is entirely from the usual flickering and the 113-day ellipsoidal modulation. (The measurement error bars are comparable in size to the dots or smaller.) We see the \(B-V\) colour varying, with a typical value around 1.2 mag. Most importantly, this graph, or its equivalent, is the only way to see the complex high-states from 1866–1875, 1936–1955, and 2015–present. These high-states are the same from eruption-to-eruption. These high-states consist mostly of blue light, with them prominent in \(B\) and only noticeable in \(V\). The pre-eruption high-state starts nearly 10 years before the eruption, so the high-state starting around 2015 implies an upcoming third eruption around the year 2025. The pre-eruption dip will provide an immediate notice of a few months before the upcoming eruption. The secondary eruptions in 1866 and 1946 are lost within the primary eruptions at this compressed horizontal scale. The high-states have the same total energy as the classical nova eruptions, and the T CrB high-state is unique.
### 2.2 Photographic Magnitudes
Before the advent of modern electronic detectors, the only way to get a light curve in some colour other than the visual band was to use photography. In practice, a number of observatories collected large numbers of sky photographs, where the emulsion was on one side of a clear glass plate, so the stars appeared as negative images on these plates. The brightness of a star is effectively always measured from the sharply-defined image radius, where the magnitude scale is always calibrated by the image radii of nearby stars of known brightness. Before the 1970s, the spectral sensitivity of the emulsions was nearly always indistinguishable from that of the modern Johnson \(B\) magnitude system, and did not vary significantly over time (Laycock et al., 2010). Therefore, when the magnitudes for the calibrating comparison stars are in the Johnson \(B\) magnitude system, the resulting magnitude will be accurately in the modern Johnson \(B\) system.
T CrB is bright, and so is readily recorded on most archival plates. The largest collection of archival astronomical plates is at the Harvard College Observatory (HCO), where I have measured 896 magnitudes from 1890-1962. The Harvard plates are always the only source of \(B\) magnitudes before the 1930s. The second largest collection of archival astronomical plates is at Sonneberg Observatory, in Germany, where I have measured 692 plates from 1936-2000. The Sonneberg plates are valuable as usually being the only source of \(B\) magnitudes starting in 1954 (with the notorious Menzel Gap at HCO) up until electronic detectors became common in the 1990s. Bamberg Observatory, in Germany, also has a good collection of plates that covers the 1930s. Bohme (1938) and Ferrari (1935) have already published 139 \(B\) magnitudes for T CrB, although I have had to convert their old magnitudes to modern \(B\) magnitudes by detailed analysis of their stated comparison stars and their adopted sequences of magnitudes. Further, I have found one previously-unused box of plates in the Bamberg archives, and this provides 16 more \(B\) magnitudes from 1932-1939. A further set of archival plates is presented in Weber (1961), with 124 magnitudes from 1946-1961, all from his private observatory. Weber was using 103a-O emulsion (with \(B\) spectral sensitivity) and I have had to convert to modern \(B\) magnitudes by using his stated comparison stars and sequence. Weber's light curve is valuable as having the best time coverage in \(B\) for the secondary eruption in 1946.
My standard technique for measuring the magnitudes from plates is described in detail in Schaefer (2016a, 2016b). Importantly, I have used comparison stars with \(B\) magnitudes from APASS, with these chosen to have red colours similar to T CrB itself. The \(B\) magnitudes of T CrB for many individual plates have been measured 6 times by myself and others. I have made trips to HCO in 2008, 2010, and 2013, making measures from 896 plates with many duplicate measures. A. Pagnotta (College of Charleston) made 19 measures of plates, designed to test the reproducibility and accuracy of the magnitudes. Shapley (1933) reports on 342 measures (not counting limits) made by the highly experienced H. Leavitt (Harvard), which I have had to convert to modern \(B\) magnitudes as derived from their comparison star sequence. Finally, the large-scale programme Digital Access to a Sky Century @ Harvard (DASCH), with J. Grindlay (Harvard) as Principal Investigator, has scanned and digitized the majority of Harvard plates, and used those scans to run a sophisticated program to calculate the \(B\) magnitudes of all stars on the plates (Tang et al., 2013). DASCH reports 354 magnitudes.
This large-scale study of T CrB provides yet another study of the measurement accuracy of the photographic photometry. In all, these multiple measures show that the independent magnitudes from the four sources all agree with each other closely. For the three non-DASCH sources, the average difference in magnitudes is 0.009 mag, which shows that no one source is systematically making measures bright or dim. The comparisons between all the non-DASCH sources show the same RMS of the differences, so the errors for each individual source are similar and near \(\pm\)0.15 mag. That is, these sources are accurate with an unbiased real measurement error of 0.15 mag for the T CrB case. The comparisons of the differences between the DASCH magnitudes and those from the three other sources show a consistently larger RMS, which corresponds to a real measurement error for DASCH of \(\pm\)0.35 mag for this one case of T CrB. For the final combined magnitudes for each HCO plate, I used an average of all the individual measures, and I'll adopt an error bar of \(\pm\)0.15 mag.
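The error propagation behind these numbers is simple; a minimal sketch, assuming independent measurement errors so that the RMS of pairwise differences satisfies \(\mathrm{RMS}^{2}=\sigma_{1}^{2}+\sigma_{2}^{2}\) (the 0.38 mag pairwise RMS against DASCH below is back-computed from the quoted 0.15 and 0.35 mag values, not a number from the text):

```python
import numpy as np

# Per-source errors from pairwise differences of magnitudes measured on the
# same plates, assuming independent errors: RMS(m1 - m2)**2 = s1**2 + s2**2.
def paired_rms(m1, m2):
    d = np.asarray(m1) - np.asarray(m2)
    return np.sqrt(np.mean(d**2))

# Equal pairwise RMS r among the non-DASCH sources implies each has
# sigma = r / sqrt(2); r = 0.21 mag gives ~0.15 mag per source.
sigma_nd = 0.21 / np.sqrt(2)
# A larger RMS against DASCH (0.38 mag here, illustrative) then gives
# sigma_DASCH = sqrt(RMS**2 - sigma_nd**2) ~ 0.35 mag.
sigma_dasch = np.sqrt(0.38**2 - sigma_nd**2)
print(round(sigma_nd, 2), round(sigma_dasch, 2))  # 0.15 0.35
```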
In the end, I have 1867 \(B\) magnitudes, mostly measured by myself from the plates in front of me. These measures are important because they provide the only measure of the blue light from 1890-1982, including all the complex high-state and eruption phenomena from 1938-1955. These magnitudes are in Table 3 and plotted in Fig. 1.
### 2.3 Photoelectric Photometers and CCDs
Electronic detectors have been available since the 1946 eruption for single-channel photoelectric photometers and since roughly the year 2000 for CCD photometry. For photoelectric photometers, five publications have reported 561 \(B\) or \(V\) magnitudes from 1946-1999 (see Table 1), with these being useful to check the light curves from other sources. For T CrB magnitudes with CCDs, I have only found three sources from professional astronomers, with 665 magnitudes from 2003-2015. Further, the AID records 11,305 \(B\) and \(V\) measures from 1973-2022 from photoelectric photometers and CCDs.
\begin{table}
\begin{tabular}{l l l l l} \hline Julian Date & Year & Band & Magnitude & Source \\ \hline
2393996.5 & 1842.441 & _Vis.\(\rightarrow\)V_ & \(>\)7.09 \(\pm\) 0.20 & J. Herschel \\
2398721.5 & 1855.379 & _Vis.\(\rightarrow\)V_ & 10.44 \(\pm\) 0.20 & Argelander \\
2399039.6 & 1856.248 & _Vis.\(\rightarrow\)V_ & \(>\) 6.02 & Schmidt \\
2402734.37 & 1866.364 & _Vis.\(\rightarrow\)V_ & \(>\) 5 \(\pm\) 0.20 & Birmingham \\
2402734.486 & 1866.363 & _Vis.\(\rightarrow\)V_ & 2.00 \(\pm\) 0.20 & Birmingham \\
... & & & & \\
2433011.001 & 1949.257 & _Vis.\(\rightarrow\)V_ & 9.67 \(\pm\) 0.20 & Pettit \\
2433061.705 & 1949.395 & _Vis.\(\rightarrow\)V_ & 9.71 \(\pm\) 0.20 & Pettit \\
2433119.698 & 1949.554 & _Vis.\(\rightarrow\)V_ & 9.76 \(\pm\) 0.20 & Pettit \\
2433186.649 & 1949.738 & _Vis.\(\rightarrow\)V_ & 9.73 \(\pm\) 0.20 & Pettit \\
243313.040 & 1950,084 & _Vis.\(\rightarrow\)V_ & 9.67 \(\pm\) 0.20 & Pettit \\ \hline \end{tabular}
\end{table}
Table 2: T CrB visual photometry (full table with 2621 lines available on-line as Supplementary Material)
\begin{table}
\begin{tabular}{l l l l l} \hline Julian Date & Year & Band & Magnitude & Source \\ \hline
2411572.599 & 1890.560 & \(B\) & 11.68 \(\pm\) 0.15 & HCO (11520) \\
2411878.689 & 1891.398 & \(B\) & 11.30 \(\pm\) 0.15 & HCO (15634) \\
2412250.715 & 1892,417 & \(B\) & 11.70 \(\pm\) 0.15 & HCO (16373) \\
2412262.672 & 1892.449 & \(B\) & 11.71 \(\pm\) 0.15 & HCO (16455) \\
2412287 & 1892.516 & \(B\) & 11.76 \(\pm\) 0.15 & HCO (1) \\... & & & & \\
2451055.482 & 1998.660 & \(B\) & 12.00 \(\pm\) 0.21 & Sonneberg \\
2451427.476 & 1999.678 & \(B\) & 11.80 \(\pm\) 0.21 & Sonneberg \\
2451428.478 & 1999.681 & \(B\) & 11.80 \(\pm\) 0.21 & Sonneberg \\
2451430.475 & 1999.686 & \(B\) & 11.60 \(\pm\) 0.21 & Sonneberg \\
2451661.563 & 2000.319 & \(B\) & 11.40 \(\pm\) 0.21 & Sonneberg \\ \hline \end{tabular}
\end{table}
Table 3: T CrB \(B\) light curve from photographic photometry (full table with 1867 lines available on-line as Supplementary Material)
As with all photometry where magnitudes from many different observers are being combined, we have to be careful that the magnitude systems are close to the Johnson systems for each observer. The electronic detectors are linear in flux, so we need not worry about scale changes from bright to faint stars. For the selected T CrB measures, the detectors have always used filters that produce a spectral sensitivity close to those of the Johnson \(B\) and \(V\) systems, so colour terms will be negligible. The adopted \(B\) and \(V\) magnitudes for their primary comparison stars are always within the usual 0.03 mag of the values now listed in the SIMBAD database. Indeed, the APASS magnitudes and calibrations are now serving as the standard system. In all, there are no corrections applied to the photoelectric and CCD magnitudes, their systematic errors are negligibly small, and their measurement errors are typically \(\sim\)0.03 mag or smaller. The good photometric precision from Poisson noise for the electronic detectors has no utility for most science questions concerning T CrB, because the star has intrinsic random flickering at the 0.3 mag level, so the sampling error always dominates over the measurement error.
The 1226 photoelectric and CCD magnitudes are in Table 4.
### 2.4 AAVSO International Database
Amateur astronomers have been keeping long vigils on T CrB since 1866. Most of the visual observations reported in Section 2.1 are by amateurs, all with a photometric accuracy that is the same as for professional observers. Mostly starting with the 1946 eruption, the worldwide coverage by top-quality amateur observers has made T CrB one of the best observed of all variable stars. From 1946 to 2022, the average is 4 visual measures per night, every night. The amateurs started making well-calibrated photoelectric observations in 1973, and well-calibrated CCD observations around 2004.
This massive coverage makes the 'amateur' contributions the most important of all the sources for the T CrB light curve. Little of this has been published in the professional literature. The only way to access most of this critical data is through dusty notebooks in archives, and in the repositories of various variable star organizations around the globe. Prominent archives are the variable star sections of the British Astronomical Association (BAA), the French Variable Star Observers Association (AFOEV), the Royal Astronomical Society of Canada, and even the Royal Astronomical Society of New Zealand. Close to half of all observers and all magnitudes are from people based in the United States with an affiliation with the American Association of Variable Star Observers (AAVSO), with headquarters in Cambridge, Massachusetts. The AAVSO performs the invaluable service of collecting all magnitudes from every available source worldwide, and placing them uniformly in one publicly-available database, the AAVSO International Database (AID).
I have downloaded all the magnitudes in the AID up until October 2022. I concentrate only on the \(B\), \(V\), and visual magnitude estimates. I do not use any measures that are only limits on the magnitude, or for which the stated uncertainty is \(>\)0.30 mag. Nor do I use any unfiltered CCD measures calibrated with \(V\) comparison stars, the so-called \(CV\) magnitudes, as these have colour corrections that are unknown in any particular case and can be at the 0.10 mag level or larger. I am left with 52 observers making 3937 \(B\)-band measures with CCDs from 2004 to present, 113 observers making 7368 \(V\)-band measures mostly with CCDs from 1973 to present, and 1515 observers making 114,203 visual observations from 1939 to present.
The photometric accuracy of the amateurs is always the same as for the professional observers, and both are more than accurate enough to answer the various questions raised in this paper. For the CCD and photoelectric measures, the typical uncertainty is \(\sim\)0.03 mag, even though the Poisson error might be substantially smaller. For the visual observations, the typical uncertainty is close to 0.20 mag (Stanton 1999). When binned together, say in 0.01-year bins as I do, with an average of near 16 observations per bin, the formal photometric measurement uncertainty is around 0.05 mag. Nevertheless, the total uncertainty is not from the measurement errors, but rather arises from the intrinsic random variability of T CrB itself. That is, ordinary flickering of T CrB causes fast variations up to the 0.5-mag level, and this dwarfs the measurement errors from all sources, professional or amateur. When T CrB is up-and-down by half-a-magnitude on all time-scales, it matters little whether there is some additional measurement error at the 0.05 mag level. This means that the formal measurement errors have little meaning and no utility. Rather, the uncertainty of the magnitude is dominated by the sampling errors. So the utility of an eyeball estimate by an amateur with an uncertainty of \(\pm\)0.20 mag is essentially the same as the utility of a well-calibrated CCD measure by a professional with an uncertainty of 0.01 mag. However, the utility of averaging 16 visual measures on 16 nights is substantially better (4x in this case) than any single measure with a photometric accuracy of 0.001 mag, with the reason being that the many-night-combinations have averaged over the ubiquitous flickering on T CrB.
The visual magnitudes need to be accurately converted to \(V\) magnitudes. The procedure is described in Section 2.1. Critical for this procedure is to know the comparison stars used by the individual observers. Fortunately, observers worldwide largely used the same charts and sequences, and these sequences had only insignificant changes over time after 1939. For this, I have found charts and sequences as used by observers in the archives of the AAVSO, BAA, and elsewhere. On this basis, I have converted all of the visual magnitudes to the modern Johnson \(V\) magnitude system.
The important variability, for the purposes of this paper, is all on time-scales of weeks to months to years. As such, I have binned the AID \(B\) and \(V\) light curves into 0.01-year bins. The exception is during the months of the 1866 and 1946 eruptions, where the fast variations are better represented with much smaller bin sizes. The result is a binned light curve with 819 \(B\) measures, 1137 \(V\) measures, and 7127 visual measures converted to \(V\) (see Table 5 and Fig. 1). In the source column, the parentheses state either the AAVSO observer ID or the number of magnitudes averaged together.
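For concreteness, here is a minimal sketch of such a binning step, assuming the data are plain arrays of decimal year and magnitude; the function and array names are illustrative, not the actual reduction pipeline:

```python
# A minimal sketch (not the actual pipeline) of binning a light curve into
# 0.01-year bins; array names are hypothetical.
import numpy as np

def bin_light_curve(year, mag, width=0.01):
    """Average magnitudes in fixed-width bins of decimal year.

    Returns bin centres, mean magnitudes, and the standard error of the mean,
    which for ~0.20 mag visual estimates shrinks as 1/sqrt(N) per bin.
    """
    idx = np.floor(year / width).astype(int)
    centres, means, errs = [], [], []
    for i in np.unique(idx):
        m = mag[idx == i]
        centres.append((i + 0.5) * width)
        means.append(m.mean())
        errs.append(m.std(ddof=1) / np.sqrt(len(m)) if len(m) > 1 else 0.20)
    return np.array(centres), np.array(means), np.array(errs)

# With ~16 visual estimates per bin, the formal error is 0.20/sqrt(16) = 0.05 mag,
# and the bin average also smooths over the ubiquitous flickering.
```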
### \(B\) and \(V\) Light Curve 1866-2022
In all, I have 6288 \(B\) magnitudes 1890-2022 and 124,935 \(V\) magnitudes 1845-2022.
\begin{table}
\begin{tabular}{l l l l l} \hline Julian Date & Year & Band & Magnitude & Source \\ \hline
2431861.961 & 1946.111 & \(B\) & 3.68 \(\pm\) 0.03 & Gordon \& Kron \\
2431862.021 & 1946.111 & \(B\) & 3.66 \(\pm\) 0.03 & Gordon \& Kron \\
2431862.063 & 1946.111 & \(V\) & 3.56 \(\pm\) 0.03 & Gordon \& Kron \\
2431863.925 & 1946.116 & \(V\) & 4.58 \(\pm\) 0.03 & Gordon \& Kron \\
2431863.951 & 1946.116 & \(B\) & 4.85 \(\pm\) 0.03 & Gordon \& Kron \\
... & & & & \\
2457349.216 & 2015.891 & \(V\) & 9.809 \(\pm\) 0.015 & Munari et al. \\
2457369.676 & 2015.947 & \(B\) & 10.924 \(\pm\) 0.008 & Munari et al. \\
2457369.676 & 2015.947 & \(V\) & 9.821 \(\pm\) 0.012 & Munari et al. \\
2457376.666 & 2015.966 & \(B\) & 10.944 \(\pm\) 0.009 & Munari et al. \\
2457376.666 & 2015.966 & \(V\) & 9.797 \(\pm\) 0.015 & Munari et al. \\ \hline \end{tabular}
\end{table}
Table 4: T CrB light curve from photoelectric and CCD photometry (full table with 1226 lines available on-line as Supplementary Material)
Only two of these magnitudes are limits: Herschel's non-detection in 1842 and Schmidt's deep pre-eruption limit just 2.8 hours before Birmingham's discovery of the 1866 eruption. These observations are described in Table 1, explicitly listed in Tables 2-5, and plotted in Fig. 1. In this Section, I will present various blow-up and superposed plots to better show what was going on.
Fig. 2 shows a close-up of the two eruption light curves, where the horizontal axis is the number of days since the time of peak light. For this, I take the times of peak light to be JD 2402734.5 (1866.363) and JD 2431860.0 (1946.105). The rise to peak and the peak magnitude is sharply constrained by the confident non-detection (with T CrB being fainter than 5th or 6th mag) by the great observer J. F. J. Schmidt (National Observatory of Athens) just 2.8 hours before the discovery at \(V\)=2.0 by Birmingham. The eruption light curves in \(V\) are identical between the 1866 and 1946 events. The peak magnitude is close to \(V\)=2.0, as seen by Birmingham in 1866, and as missed by half-a-day in 1946. The \(B-V\) colour near peak is close to 0.0 mag. The colour has increased to +0.5 by day 20, due to the very red contribution of the secondary star. The times for the light curve to drop by 2, 3, and 6 magnitudes from peak are \(t_{2}\)=3.0, \(t_{3}\)=5.0, and \(t_{6}\)=12.0 days. T CrB is one of the all-time fastest novae (c.f. Schaefer, 2010; 2022c).
Fig. 3 shows a close-up of the time soon after the primary eruption, when the flux level goes flat at the prior level of the pre-eruption high-state. The basic nova eruption is over and done. Then, after an interval of over 80 days in quiescence, a secondary eruption starts. T CrB experiences a second eruption, this with a fast rise, dominated by blue light. The secondary event has a total duration of near 100 days, while its peak is roughly 160 days after the primary peak. Importantly, the timing and light curve of the secondary eruption are identical between the 1866 and 1946 eruptions. This secondary eruption is unique amongst novae (and cataclysmic variables, CVs, in general), even though such events would have been detected in the case of \(\sim\)200 other novae, including \(\sim\)20 other novae with red giant companions (Strope, Schaefer, & Henden, 2010; Schaefer, 2022b).
Fig. 4 shows the pre-eruption dip leading up to the 1946 eruption. This is what L. Peltier discovered, and it made him think that a second eruption was imminent (Peltier, 1945). When we see T CrB start to fade towards a dip sometime in the upcoming years, we will have advance notice of the date of the eruption. When the dip first becomes noticeable, we will get roughly 1 year advance warning and can make a prediction of the date accurate to a month or two. In the \(B\) band, the pre-eruption dip appears as a steady drop all the way to the day of the eruption. In the \(V\) band, the last year of the pre-eruption dip appears to have T CrB _brightening_ steadily to the eruption from a minimum roughly one year in advance.
Fig. 5 shows T CrB in normal quiescence (2006-2010), where the \(V\)-band light curve displays the ellipsoidal sinewave at half the orbital period, with the \(V\) light being dominated by the red giant. Note, there is no apparent odd-even difference in the minima, which implies that any irradiation effects are relatively small. The \(B\)-band light curve still shows the ellipsoidal sinewave at a similar level. The modulation in \(B\) is less well defined, likely because the chaotic addition of flickering and variations (from the accretion disc) are more prominent in blue light, plus there are many fewer \(B\) magnitudes (as compared to the large number of \(V\) magnitudes) to beat down the flicker variations.
T CrB shows long-term variations in quiescence, as shown in Fig. 6. For this plot, the light curves have been binned at half the orbital period (113.7843 days), so that the ellipsoidal modulations are always averaged to zero. This also serves to minimize the variations due to flickering and short-term variations. The changes in the level between eruptions are presumably dominated by the changes in the accretion rate. We see that T CrB has two levels away from eruptions, which schematically are like a quantization into either a low-state or a high-state, with fairly sharp transitions. The ordinary low level is from roughly 1875-1935 (60 years), 1955-2015 (60 years), plus at least 1855-1856. The transitions to the high-state take roughly two years. T CrB shows a distinct high-state, dominated by blue light, from at least 1866-1875, 1936-1954, and 2015-present.
### _Tess_ Light Curves
_Transiting Exoplanet Survey Satellite (TESS)_ is a mission designed to provide superb light curves with 20-1800 second time resolution nearly continuously for many \(\sim\)26 day intervals for most stars in the sky down to 19th mag and fainter (Ricker et al., 2015). _TESS_ observed T CrB during pairs of orbits labelled as Sectors 24, 25, and 51 (see Table 1). The time resolution was 120-s during the first two Sectors. During Sector 51, T CrB data were returned with a time resolution of 20-s. I have used the mission standard production of the light curves, labelled as SPOC, with these being publicly available from MAST. The fluxes were derived with the standard 'simple aperture photometry'. T CrB is a bright star for _TESS_, so the flux levels (in units of electrons per second) are high and the fractional Poisson uncertainties are low. For Sector 51, the average flux level is near 123,000 with an average Poisson level of 89. The _TESS_ detectors are CCDs with no filters, so the spectral sensitivity runs from 6000-10000 Å.
Fig. 7 shows the 120-s resolution light curve for the 53.5 day interval in 2020 of Sectors 24 and 25. We see variations on all time-scales. The variations are at the 4 per cent level, corresponding to an amplitude of 0.04 mag. The flickering amplitude in red light is substantially smaller than for the \(V\) or \(B\) bands, as expected.
Fig. 8 shows a one-day close-up for the Sector 51 light curve with 20-s time resolution. Further, two insets show expanded intervals, each 0.1 days in duration. The Poisson error bars are \(\pm\)86 electrons per second. In the main figure, the Poisson errors are close to the size of each dot, so all the point-to-point variability is significant and intrinsic to T CrB. In the insets, where each minor tick mark on the vertical axis is 200, the dots are again the same size as the Poisson errors, so the variability on the one-minute time-scale is real.
## 3 The upcoming eruption in 2025.5\(\pm\)1.3
Many workers have recognized that the rise to a high-state, starting in 2015, is the harbinger of an eruption sometime close to 80 years after the prior eruption. Now, with a full and definitive light curve for the years 1930-1946, plus the latest AAVSO light curve up to October 2022, we can make the best prediction as to the date of the eruption. To this end, in Fig. 9, I have plotted a close-up of the \(B\) and \(V\) light curves from Fig. 1. Further, I have used the 1930-1955 \(B\) light curve to create a template of the behaviour of the eruption and high-state. The idea is to slide the template left-right in Fig. 9, to obtain the best match in the pre-eruption high-state, then read off the year of the predicted eruption. This presumes that the pre-eruption high-state before the upcoming eruption is the same as before the 1946 eruption. We have seen in detail that the primary eruption light curves, the secondary eruption light curves, and the post-eruption high-state light curves are identical to within the measurement uncertainties (see Figs 2, 3, and 6), so it is reasonable to presume that the pre-eruption high-states will also be identical from eruption-to-eruption. However, a comparison of the template versus the high-state from 2015-2022 shows (see Fig. 9) that the two are somewhat different in amplitude. Given this relatively small difference, the upcoming eruption may show some correspondingly small difference in timing. The duration from 2015 to the upcoming eruption is unlikely to be greatly longer than displayed in the template, because we have the precedent from Argelander in 1856 that the duration of the pre-eruption high-state was less than ten years.
The best positioning of the template, for sliding it to earlier and later dates for the eruption year in Fig. 9, will be most constrained by the times of fastest variation. The initial rise to the high-state is clearly defined around 1936 and around 2015. Sliding the template right and left, I find that the initial rise is best matched for an eruption date of 2023.8, with the rise to the high-state certainly mismatched for dates before 2023.4 and after 2024.2. The other time when the pre-eruption high-state has a fast change is when the brightness suddenly starts fading to the pre-eruption dip. This has not started as of February 2023. To slide the c. 1946 \(B\) template such that the dip starts being apparent _after_ 2023.2, the eruption date must be after 2024.2.
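The template match can be cast as a one-parameter chi-square minimization; the sketch below, with hypothetical array names, illustrates the idea rather than the fitting code actually used:

```python
# A schematic of the template-sliding match: shift the 1930-1955 B-band
# template in time and find the shift that minimizes chi-square against the
# 2015-2022 high-state light curve. Array names are hypothetical.
import numpy as np

def best_shift(t_obs, m_obs, e_obs, t_tmpl, m_tmpl, shifts):
    chi2 = np.array([np.sum(((m_obs - np.interp(t_obs, t_tmpl + s, m_tmpl))
                             / e_obs) ** 2) for s in shifts])
    return shifts[np.argmin(chi2)], chi2

# e.g. shifts = np.arange(76.0, 82.0, 0.05), trial offsets in years from the
# 1946 eruption; the best shift then dates the predicted eruption peak.
```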
Figure 4: Pre-eruption dip in the year before the 1946 eruption. This light curve shows all of the un-binned \(B\) magnitudes (blue circles) and \(V\) magnitudes (green circles) from 1941–1947. The primary eruption runs off the top of the plot on the right side, being followed by the secondary eruption on the right edge of the plot. In 1941, the nova had already risen from its long-term quiescent level of \(B\)=11.7 to its pre-eruption high-state, which has a slow fade from 10.3 in 1941 to 10.7 in 1944. Around the start of 1945, the nova faded down to \(B\)=11.7 just before the start of the 1946 eruption. This pre-eruption dip was discovered by the great observer L. Peltier, and this phenomenon will provide a one-year warning of the exact date of the upcoming eruption. I have not even heard of any speculation as to the physical mechanism for the pre-eruption dip.
These constraints have assumed that the upcoming eruption has the same timing as the 1946-template. But the current high-state is already different from the template by being half a magnitude fainter in \(B\), so the eruption date could be later than that seen in the pre-1946 template. For example, maybe the timing of the eruption depends on the mass of material accreted during the pre-eruption high-state, so then the relatively small accretion over the last decade will make for an eruption date after 2023.8. We do not know the physics of the situation, so the delay can be some unknown number of years later. Given Argelander's measure and the close similarity of the light curves within years of the eruptions, the delay after 2023.8 cannot be longer than something like two or three years. Given these various imperfect constraints, a final prediction would be from 2024.2-2026.8, which can be expressed as 2025.5\(\pm\)1.3.
After the upcoming start of the pre-eruption dip, this prediction can be substantially improved in accuracy. On the assumption that the pre-eruption dip is similar to that of 1945-1946, the eruption will be within a few months of 1.0 year after the start of the pre-eruption dip. On the same assumption, the eruption will be roughly three months after T CrB reaches the bottom of the dip, with an uncertainty of perhaps a month. So, hopefully, the advance notice for the T CrB eruption might get the accuracy down to a few weeks.
Figure 5: T CrB in normal quiescence, 2006-2010, with ellipsoidal modulation. The red giant star is necessarily somewhat elongated due to the usual Roche lobe gravity effects from the nearby white dwarf, so as the star goes round in its orbit, it alternately presents its broad side (and appears brightest) at orbital elongations, and its small side (and appears faintest) at conjunctions. This ellipsoidal effect makes the light curve have a sinusoidal modulation at half the orbital period. The resultant sinewave is easily seen in the \(V\)-band light curve (green dots), while the same modulations are visible in the \(B\)-band light curve (blue dots). This sinewave defines the position of the companion star in its orbit, and so becomes a measure of the orbital period and its variations.
Figure 6: T CrB in normal quiescence, 1855–2022. The \(B\) magnitudes (blue circles) and the \(V\) magnitudes (green circles) have been binned over a time of 113.7843 days so as to average out ellipsoidal modulations and short-term variations like flickering. (Around the times of eruption, smaller bin sizes were used, as appropriate to show the underlying variations.) The point of this figure is to show the high- and low-states of T CrB between eruptions. The structure of the high-state is complex, with a pre-eruption dip, the primary classical nova eruption, and the unique secondary eruption interspersed in the middle of the nearly-two-decade high-states. Importantly, the high-state after the 1866 eruption appears identical to that after the 1946 eruption, and the high-state starting in 2015 appears similar to that starting in 1936. With the further result that the two primary eruption light curves and the two secondary eruption light curves are identical, we have a strong case that T CrB is closely repeating itself in detail.
## 4 Folded light curves
The orbital period and the phasing of the conjunctions are known with high accuracy from the 1946-1999 radial velocity curve compiled and fitted by Fekel et al. (2000), with a period of 227.5687\(\pm\)0.0099 days and maximum velocity at JD 2447918.62\(\pm\)0.27. The measured lines are from the atmosphere of the red giant (with no confusing lines from the accretion disc), so the radial velocity curve is a faithful measure of the actual geometric position of the star in its orbit. The fitted orbital eccentricity is zero, with an uncertainty likely smaller than 0.012 (c.f. Kenyon & Garcia 1986). I have used this ephemeris to phase up the \(B\) and \(V\) light curves. The phase 0.00 is for maximum velocity when the companion star is at an elongation in its orbit, on the side moving away from Earth. At phase 0.00, the broad-side of the red giant star will be directly towards Earth, so this is the phase of maximum for the ellipsoidal variations. At phase 0.25, the companion star is at conjunction, on the far side of the white dwarf, where we have the fullest view of the irradiated hemisphere. At phase 0.50, the companion star is at the other elongation, where the red giant is moving towards Earth, with the Roche geometry presenting the largest cross sectional view of the companion, so this must be the phase of maximum ellipsoidal effect. At phase 0.75, the red giant is at inferior conjunction, with the cross sectional view of the companion being minimal, and the irradiated hemisphere largely invisible.
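For reference, the phasing used throughout is just the Fekel et al. ephemeris applied modulo the period; a minimal sketch:

```python
# Phasing magnitudes with the Fekel et al. (2000) ephemeris: P = 227.5687 d,
# maximum radial velocity (phase 0.00, an elongation) at JD 2447918.62.
import numpy as np

P_ORB = 227.5687     # days
T0 = 2447918.62      # JD of maximum radial velocity

def orbital_phase(jd):
    return np.mod((jd - T0) / P_ORB, 1.0)

def half_period_phase(jd):   # convenient for ellipsoidal modulations
    return np.mod((jd - T0) / (P_ORB / 2.0), 1.0)
```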
Fig. 10 shows the visual light curve binned into phases, with the top panel for the years 1955-2015 of the low-state in quiescence, and with the bottom panel for the years 2016-2022 of the ongoing high-state. Both panels show a folded light curve that is nearly sinusoidal with a full-amplitude of 0.17 mag (as depicted by the black sinewave). In the low-state, the minimum at phase 0.25 (with the fullest view of the irradiated hemisphere) is not significantly brighter than the minimum
Figure 8: _TESS_ light curve close-ups with 20-second resolution. During Sector 51, _TESS_ returned 20-second time resolution. A representative sample 1.0 day interval is shown here. The two insets show yet-finer close-ups of two 0.1-day intervals. The Poisson error bars are 86, just somewhat larger than the plotted points. This means that much of the variability on the one-minute and faster time-scale is significant and intrinsic to T CrB.
Figure 7: _TESS_ light curve with 120-second resolution. During Sectors 24 and 25, _TESS_ returned 33235 fluxes over a 53.5 day interval. The red colour of the dots is to remind us that the _TESS_ fluxes are red sensitive, covering roughly 6000-10000 Å. The small gaps are caused by spacecraft operations once each perigee for data downlinks. The Poisson error for these fluxes is \(\pm\)43. With this, all of the variations in the plot are significant and intrinsic to T CrB. We see flickering on the fastest time-scales, with a continuum of variations up to at least time-scales of several days.
at phase 0.75 (with the irradiated hemisphere largely hidden), which says that the irradiation effects are negligibly small in the \(V\) band in the low-state. The maxima occur at phases 0.11 and 0.50. For a closely circular orbit, the maxima and minima of the ellipsoidal effects must be exactly 180\({}^{\circ}\) apart. With the T CrB maxima deviating from the ellipsoidal requirement by 36\({}^{\circ}\), there must be some other effect acting to make the maximal light at an orbital phase 36\({}^{\circ}\) after the elongation at phase 0.00. Perhaps the hotspot (where the accretion stream hits the outer edge of the accretion disc) has a beaming pattern that is brightest around phase 0.20 such that the added effects of the hotspot and the ellipsoidal shape make for a maximum light at phase 0.10. However, for this idea, the two maxima are at the same brightness, so an additional light source phased with a maximum at phase 0.50 would have to be closely equal to the light from the hotspot. Further, my understanding is that it will be difficult for the hotspot to produce a beaming pattern that peaks at phase 0.20 or so. With these difficulties and required coincidences and yet more required light sources, a model involving the hotspot seems poor. In all, I have no useful idea as to why the maxima are at phases 0.10 and 0.50 in the folded light curve.
The bottom panel of Fig. 10 shows the folded visual light curve for the high-state. The high-state light curve shows maxima at phases 0.11 and 0.50. The added light from the high-state raised the average brightness from \(\langle V\rangle\)=10.16 to \(\langle V\rangle\)=9.79. The amplitude of the high-state light curve is close to that of the low-state, so the extra light is not strongly modulated by the orbit. However, the individual maxima and minima are somewhat unequal in the high-state. The apparent deviations from the average amplitude for individual extrema are typical of those seen in the low-state, where the folded light curves for 6-year intervals always vary up and down somewhat due to the well-known ordinary fluctuations of T CrB (see Section 7). That is, the shape, amplitude, and phasing of the \(V\)-band light curve are the same from the low-state to the high-state.
Fig. 11 shows the phase-folded \(B\) light curve for the low-state (top panel) and the high-state (bottom panel). The black sinewave is the schematic ellipsoidal model with an amplitude of 0.42 mag in both panels. The low-state appears similar in \(B\) and \(V\), although the amplitude is larger in \(B\). The maxima and minima are at closely similar levels, so the irradiation effects are negligible in the \(B\) band in the low-state. The maxima are at phases 0.04 and 0.45, which implies that some additional effect is being superposed on the ellipsoidal and irradiation effects. The extra light in the high-state raised \(\langle B\rangle\)=11.63 to \(\langle B\rangle\)=10.81. The extra light still has the half-period signal that shows ellipsoidal modulation, but the light curve shape has substantial deviations from a sinewave. I note that these deviations from the sinewave are typical of those seen over other six year intervals. That is, the ubiquitous flickering and flaring on all time-scales makes a light curve built from relatively few cycles always appear to have deviations from the average, due to the happenstance of durations and timings of the fluctuations with a time-scale of a month or so. Thus, the deviations from a sinewave in the high-state are consistent with the ordinary and expected T CrB variability.
Figure 10: Phase-folded visual light curves for the low-state (top panel) and the high-state (bottom panel). The black sinewave is the schematic effect of ellipsoidal modulations, where the maxima must be at phases 0.00 (or 1.00 in this doubled plot) and 0.50 (or 1.50). Mysteriously, both panels show maxima near phases 0.11 and 0.50, implying some light source that is not perfectly symmetrical with orbital phase. The top panel has averaged over 60 years (near 96 orbits), which smooths out the aperiodic fluctuations so prominent for T CrB. The bottom panel has averaged over only six years, so small variations are seen that arise from the ordinary sampling on a source with substantial and chaotic long-term variations.
Figure 9: The upcoming eruption of T CrB is predicted for the year 2025.5\(\pm\)1.3. The light curve is a close-up of that in Fig. 1, depicting the \(V\) magnitudes in green and the \(B\) magnitudes in blue, with data up to 2022.8. The \(B\)-band template from years surrounding the 1946 eruption is shown as a purple curve, although plotted here with a shift in years so that the primary eruption peak is in the year 2024.0. We already know that the primary eruption, the secondary eruption, and the post-eruption high-state light curves are all identical in shape and relative timing from the 1866 eruption to the 1946 eruption, so it is reasonable to expect that the shape and timing of the pre-eruption high-state are also constant from eruption-to-eruption. Despite this expectation, this figure shows that the current high-state has a smaller amplitude than that before the 1946 eruption, and this modest difference in shape allows for possible modest differences in timing. The figure shows the template shifted so that the rises around 1936 and 2015 overlap, which implies an eruption date of 2023.8. But the pre-eruption dip has not started, as of February 2023, so the eruption should be after 2024.2. That the current high-state is half a magnitude lower than the template suggests the possibility that the duration of the current pre-eruption high-state might be somewhat longer than in the template, perhaps by 2–3 years. With this, the eruption year would be between 2024.2 and 2026.8, or 2025.5\(\pm\)1.3.
## 5 The complex changes in the orbital period
Near half of my motivation for the long work of compiling an exhaustive T CrB light curve with correct modern photometry is so that I can measure the orbital period changes all the way back to the 1866 eruption. The idea is that the T CrB ellipsoidal modulations closely define the orbital position of the companion star (with the maxima pointing to the elongations), so monitoring the timing of the maxima will give the true orbital period as a function of time. T CrB is the only known CV for which this is possible, because only T CrB has a large-amplitude ellipsoidal modulation and is bright enough to have a very long photometric record. So the task is to measure times of maxima throughout the \(B\) and \(V\) light curves from 1866-2022, place these on a traditional \(O-C\) diagram, and calculate the long-term changes in the orbital period of T CrB.
### \(O-C\) Curve for 1866-2022 and Broken-Parabola Fits
For this task, I have extracted from my primary light curve (see Tables 2-5) all the magnitudes in each colour for time intervals with durations 2-10 years. The time intervals were chosen so as to include enough magnitudes to keep the error bars usefully small, and to avoid crossing state changes. (The eruption years of 1866 and 1946 were not included. Time intervals with scant data and uselessly-large error bars are not included.) Each set of selected magnitudes was then fit to a sinewave with a period of half of 227.5687 days with a standard chi-square analysis. The time of peak is from the model fit with the smallest chi-square, selecting the peak close to zero phase in the radial velocity ephemeris that is closest to the average date of the input magnitudes. The reduced chi-square values for these fits are always close to unity, which is to say that my quoted photometric error bars are reasonable, and this allows accurate estimates of the uncertainties in the times of maxima as being the range over which the chi-square is within unity of its lowest value. The result is 31 times of maximum light, with their Julian dates tabulated in Table 6.
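In outline, each time of maximum comes from a sinewave fit with the period held fixed at half the orbital value; a minimal sketch of the chi-square being minimized (the scan strategy in the comments is illustrative):

```python
# A sketch of the per-interval fit: a sinewave with the period fixed at half
# the orbital period, and with free amplitude, mean level, and time of
# maximum light (minimum magnitude). The 1-sigma range on t_max is where the
# chi-square rises by 1.0 above its minimum.
import numpy as np

P_HALF = 227.5687 / 2.0   # days

def chi2_of_fit(jd, mag, err, t_max, amp, mean):
    model = mean - amp * np.cos(2.0 * np.pi * (jd - t_max) / P_HALF)
    return np.sum(((mag - model) / err) ** 2)

# Scan t_max over one half-period (optimizing amp and mean at each trial,
# e.g. by linear least squares) and keep the minimum-chi-square t_max.
```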
The \(O-C\) curve is a listing and plot of the deviations of the observed times from some linear model times as a function of the year. For this, some arbitrary linear ephemeris must be adopted, and I have adopted the ephemeris of Fekel et al. (2000), with \(P\)=227.5687 days and an epoch for maximum light of JD 2447918.62. With this fiducial ephemeris, each of my observed times of maximum light can be assigned an integer, \(N\), that counts the elapsed orbits forwards or backwards from the epoch. With \(N\), the predicted time of maximum can be calculated for each observed time of maximum, with the difference being \(O-C\). The \(O-C\) values have units of days, and have the same uncertainties as the JD for the times of maxima. These values are tabulated in Table 6, and plotted in Fig. 12.
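The cycle count and \(O-C\) value for each fitted time of maximum then follow mechanically from the fiducial ephemeris; a minimal sketch:

```python
# The O-C bookkeeping with the adopted fiducial ephemeris.
P_FID = 227.5687   # days (Fekel et al. 2000)
T0 = 2447918.62    # JD epoch of maximum light

def o_minus_c(jd_max):
    n = round((jd_max - T0) / P_FID)       # cycle count, +/- from the epoch
    return n, jd_max - (T0 + n * P_FID)    # O-C in days
```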
In the \(O-C\) diagram, we can only expect two types of period changes. (Details of the construction, equations, fitting, and interpretation of the \(O-C\) diagram are extensively discussed and analyzed for both observational and theoretical effects in Schaefer, 2011; 2020a.) The first type is the sudden period change that happens at the time of the eruption, where \(\Delta P\)=\(P_{\rm after}\)\(-P_{\rm before}\), for the orbital periods just before and after the eruption. Various competing effects are operating, including mass-loss and frictional angular momentum loss, while there also certainly must be some additional now-unknown effect that dominates over the other known effects.
\begin{table}
\begin{tabular}{l c c c c c} \hline Year range & Band & JD maximum & (Year) & \(N\) & \(O-C\) \\ \hline
... & ... & ... & ... & ... & ... \\
... & ... & 2459297.6 \(\pm\) 1.3 & 2021.225 & 50 & 0.5 \\ \hline \end{tabular}
\end{table}
Table 6: Times of maximum light and \(O-C\) measures 1867.0–2022.8
Figure 11: Phase-folded \(B\) light curves for the low-state (top panel) and the high-state (bottom panel). The black sinewaves in both panels show the schematic effects of ellipsoidal modulations, with 0.42 mag full-amplitude. In the low-state, we see ellipsoidal modulations, with no apparent irradiation effects, although the maxima appear at unexpected phases of 0.04 and 0.45. In the high-state, the two minima are of unequal depths (like the effect from irradiation). In the bottom panel, the deviations from the sinewave are consistent with the sampling over a relatively short number of years with chaotic fluctuations superposed.
\(\Delta P\) appears in the \(O-C\) curve as a sharp kink, where a turn upwards means that the period suddenly increased at the time of the eruption. The second type of period change visible in the \(O-C\) curve is the slow and steady period increase (or decrease) operating throughout the entire time between eruptions. This period change is denoted as \(\dot{P}\), which is the change in \(P\) over a given time period, so the quantity is in units of days-per-day, or dimensionless. As seen in many other novae, \(\dot{P}\) is apparently constant throughout the inter-eruption interval, and this agrees with the expectation that the condition of the quiescent nova binary is stable between eruptions, so the period-change mechanisms should be nearly constant. A constant \(\dot{P}\) appears in the \(O-C\) curve as a simple parabola extending between the times of eruptions, with a concave-down parabola showing that the period is steadily _decreasing_. Surprisingly, the well-observed case of the RN U Sco proves that the constant \(\dot{P}\) between eruptions can change by an order-of-magnitude across the time of an eruption, so the T CrB \(\dot{P}\) could easily be different for the 1866-1946 and 1946-present intervals. In all, the T CrB 1866-2022 \(O-C\) curve must appear close to a broken-parabola, with the break in the year 1946, while the parabolic curvature before 1946 need not be the same as the curvature after 1946.
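In the usual conventions of the references above, this broken-parabola model can be written schematically as a quadratic in the cycle count \(N\), with separate coefficients on either side of the 1946 eruption:

\[(O-C)(N)=\begin{cases}c_{0}+(P_{\rm before}-P_{\rm fid})\,N+\tfrac{1}{2}\,\dot{P}_{\rm before}\,P\,N^{2},&N<N_{1946},\\ c_{0}'+(P_{\rm after}-P_{\rm fid})\,N+\tfrac{1}{2}\,\dot{P}_{\rm after}\,P\,N^{2},&N\geq N_{1946},\end{cases}\]

where \(P_{\rm fid}\) is the adopted fiducial period, the two branches join at the eruption cycle \(N_{1946}\), and the kink there has an amplitude set by \(\Delta P=P_{\rm after}-P_{\rm before}\).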
I have fitted the T CrB \(O-C\) curve to a broken parabola. This best-fitting model is shown as the black curve in Fig. 12. The reduced chi-square is close to unity, so the quoted error bars are good. My fitted \(P_{\rm before}\) is 227.495\(\pm\)0.053 days, while \(P_{\rm after}\) is 227.680\(\pm\)0.019 days. This makes \(\Delta P\)=0.185\(\pm\)0.056 days. This sudden period change across the 1946 eruption is seen in Fig. 12 as the sharp upward kink. Importantly, the period is getting _longer_ across the eruption. After 1946, the \(O-C\) shows substantial curvature with a concave-down parabola, for which the best-fitting \(\dot{P}\) is (\(-\)8.9\(\pm\)1.6)\(\times\)10\({}^{-6}\) days-per-day. Before 1946, the curvature is greatly different, appearing with at most only small curvature consistent with zero, where the uncertainty is moderately large due to the relatively large error bars on the pre-1946 \(O-C\) input. The pre-1946 \(\dot{P}\) is (+1.75\(\pm\)4.5)\(\times\)10\({}^{-6}\) days-per-day.
The null-hypothesis is no period change, i.e., a straight line model in Fig. 12. The confidence in adding a parabolic term can be estimated from the change in chi-square (Lampton, Margon, & Bowyer, 1976). For this, the chi-square increases by 17.1 over its minimum, which is to say that the null-hypothesis is rejected at \(>\)4-sigma confidence level. The null hypothesis for \(\Delta P\) is a value of zero, which is a single parabola model. For this, the chi-square is 16.6 larger than the minimum, which is to say that this null hypothesis is rejected at \(>\)4-sigma. The null hypothesis for \(\dot{P}\) is a value of zero (making a model as a broken line), which has a chi-square that is 13.3 larger than the overall minimum, so the possibility of no-curvature is rejected at the 3.6-sigma confidence level. For the curvature, we can look at the post-1946 \(O-C\) points, with this time interval having the values with the smallest error bars and no complications from \(\Delta P\) or changing \(\dot{P}\). For this interval, the fitted \(\dot{P}\) is (-7.8\(\pm\)2.5)\(\times\)10\({}^{-6}\) in dimensionless units of days per day, while the fits with no curvature are worse by 11.2, which indicates the curvature is significant at the 3.3-sigma confidence level. In all, the existence of the sharp period-change in 1946 is significant, and the existence of the steady period change after 1946 is significant.
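As a cross-check, the quoted sigma values are consistent with the usual one-interesting-parameter mapping \(\sigma=\sqrt{\Delta\chi^{2}}\) (in the sense of Lampton, Margon, & Bowyer):

```python
# Verifying the quoted sigma values from the chi-square increases in the text,
# using sigma = sqrt(delta chi-square) for one interesting parameter.
import math
for dchi2 in (17.1, 16.6, 13.3, 11.2, 15.3):
    print(dchi2, round(math.sqrt(dchi2), 1))   # 4.1, 4.1, 3.6, 3.3, 3.9 sigma
```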
The measured \(\Delta P\) and \(\dot{P}\) values are greatly different from zero, so much so that theory has a difficult time explaining the sizes of both (see next Section). In such a case, it is prudent to consider whether the basic result can be impeached in some way. But the data are simple, good quality (to within the stated error bars), and multiply redundant, so I see no chance to impeach the input. The analysis is simple and standard, while the best-fitting models have reduced chi-squares close to unity. The existence of the period changes is measured to have a confidence level of \(>\)4-sigma. So the case for large non-zero \(\Delta P\) and \(\dot{P}\) is convincing. I can think of only one evidenced argument to impeach anything, and that is against just one \(O-C\) value. The \(B\)-band light during the high-state (see bottom panel of Fig. 11) has a different shape than during the low-state. In this case, it is possible (but I have no other evidence for such) that the added light in the high-state is not symmetric in phase, thus introducing a systematic offset in the fitted phase of maximum light that does not reflect the position of the companion star in its orbit. This line of speculation is applicable only to the \(B\) light curve and only during the high-state, so only one \(O-C\) value (the 2015-2023 \(B\) value in Table 6) might be impeached. With this one point tossed out, the best-fitting broken parabola and the best-fitting straight line are closely the same as in the previous paragraph, with a chi-square difference of 15.3, for a significance of the existence of the two types of period changes at the 3.9-sigma level. The speculation about the symmetry of phasing of \(B\)-light in the high-state cannot be transferred to the \(V\)-light in the high-state, because the extra \(V\)-light in the high-state is small (see Fig. 6) and because the folded light curve is the same shape as during the low-state, to within the normal variations imposed by the flickering (see Fig. 10). So there are no more \(O-C\) curve points that can be suspected even by speculation. In all, the existence of the large \(\Delta P\) and \(\dot{P}\) values is significant at the 4-sigma confidence level, and the result cannot be impeached.
Figure 12: T CrB \(O-C\) curve 1867–2022. The \(O-C\) measures from Table 6 are for the \(B\) light curve (blue points) and the \(V\) light curve (green points) as calculated by a chi-square fit to a sinewave, with the fiducial ephemeris for an adopted period of 227.5687 days and epoch of JD 2447918.62. The size of the points depends on the size of the error bars, with large dots for values with small error bars, with this adopted so that the visual picture is not dominated by the points with the largest uncertainty. We see the expected \(O-C\) curve as a broken parabola (thick black curve), with the break at the 1946 eruption, and the curvatures during quiescence being different between the two eruption intervals of 1866–1946 and 1946–2022. We see good agreement between \(B\) and \(V\), and the calculated error bars produce a reduced chi-square satisfactorily close to unity (1.26 for 26 degrees of freedom). The thick grey curves show the broken parabola models for extreme ranges of values that are still within the one-sigma region. The significant period change across the 1946 eruption (seen as the kink in 1946) has \(\Delta P\)=0.185\(\pm\)0.056 days, which means that the orbital period _increased_ due to the eruption. The steady period change between eruptions is significantly different from 1866–1946 (with a \(\dot{P}\) consistent with zero) to 1946–2022 with \(\dot{P}\)=(-8.9\(\pm\)1.6)\(\times\)10\({}^{-6}\) days-per-day. These measured period changes are difficult to understand with published models.
### Broad Implications For These Measured Period Changes
T CrB has a large \(\Delta P\), a very large \(\Delta P\). The value for T CrB is \(>\)100\(\times\) that of any other nova system, for which I have measures for 12 nova eruptions (e.g., Schaefer 2020b; 2022a).
A measure of this huge period change is \(P/\Delta P\)=1230 eruption cycles, as a schematic doubling time-scale for the orbital period. With \(\tau_{rec}\) of 80 years, the doubling time-scale is 98,000 years from \(\Delta P\) alone. This is greatly faster than all evolutionary time-scales for other known CVs. The effective long-term period change is \(\Delta P\)/\(\tau_{rec}\)=6.3\(\times\)10\({}^{-6}\), with this being _opposite_ the effects of \(\dot{P}\) and dominating over the effects of \(\dot{P}\) (with \(\dot{P}\) during a single eruption cycle averaging to \(-\)3.6\(\times\)10\({}^{-6}\)). So the unknown mechanism that generates \(\Delta P\) is dominating the evolution of T CrB.
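These scalings follow from simple arithmetic on the fitted \(\Delta P\); a quick check:

```python
# Checking the schematic time-scales quoted above.
P, dP, tau_rec = 227.5687, 0.185, 80.0        # days, days, years
print(P / dP)                                  # ~1230 eruption cycles
print((P / dP) * tau_rec)                      # ~98,000 yr doubling time-scale
print(dP / (tau_rec * 365.25))                 # ~6.3e-6, the effective long-term Pdot
```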
With the abrupt increase in \(P\) by 4.44 hours, the orbital semi-major axis will suddenly get larger by 0.106 R\({}_{\odot}\), and the companion star's Roche lobe will expand by 24,800 km. This can be compared to the red giant's atmospheric scale height of 468,000 km. With this, we understand that the changing Roche lobe size will not make any substantial change in the quiescent accretion rate from one inter-eruption interval to the next.
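These geometric numbers can be approximately reproduced from Kepler's third law plus the Eggleton (1983) Roche-lobe approximation. The sketch below uses the masses adopted elsewhere in this paper, so its outputs match the quoted values only to within a few per cent of rounding:

```python
# Reproducing the orbital expansion from Kepler's third law plus the
# Eggleton (1983) Roche-lobe approximation; masses as adopted in the text.
import numpy as np

M_wd, M_comp = 1.35, 0.81            # solar masses
P0, dP = 227.5687, 0.185             # days
AU_RSUN = 215.032                    # solar radii per au

def semimajor_rsun(P_days, M_total):
    a_au = (M_total * (P_days / 365.25) ** 2) ** (1.0 / 3.0)
    return a_au * AU_RSUN

def eggleton_fraction(q):            # R_L/a for mass ratio q = M_comp/M_wd
    return 0.49 * q**(2/3) / (0.6 * q**(2/3) + np.log(1.0 + q**(1/3)))

da = semimajor_rsun(P0 + dP, M_wd + M_comp) - semimajor_rsun(P0, M_wd + M_comp)
print(da)                                        # ~0.11 R_sun (text: 0.106)
print(da * eggleton_fraction(0.60) * 6.957e5)    # ~2.5e4 km (text: ~24,800 km)
```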
This expansion of the Roche lobe by 24,800 km can also be compared to the expected expansion of a red giant star around the time when its radius is 66 \(R_{\odot}\). The exact value for the red giant expansion is uncertain, because there must certainly have been some mass transfer in the original binary, because the star that is now the companion has a mass near 0.81 M\({}_{\odot}\) (for a 1.35 M\({}_{\odot}\) white dwarf with a mass ratio of 0.60, Belczynski & Mikolajewska 1998), yet must have had a somewhat larger main sequence mass so that the core can now start with the red giant expansion. For any reasonable evolution track in the HR diagram, the red giant in T CrB will expand at a rate of near 0.5 km per year. The evolutionary expansion of T CrB is negligibly small for time-scales of faster than a million years or so.
The observed \(\Delta P\) can be compared to values predicted for all known mechanisms (Schaefer 2020a). The first mechanism is mass loss from the nova ejecta, which will necessarily increase \(P\) by approximately \(2PM_{\rm ejecta}/(M_{\rm WD}+M_{\rm comp})\), which is 0.000021 days for an ejecta mass of M\({}_{\rm ejecta}\)=10\({}^{-7}\) M\({}_{\odot}\) (Selvelli et al. 1992). This mechanism cannot account for the observed \(\Delta P\), falling short by a factor of 10\({}^{4}\)\(\times\). The second mechanism is termed 'frictional angular momentum loss', where the velocity of the companion star moving within the expanding nova shell is slowed down by dynamical friction with the shell's mass, hence lowering the \(P\). Detailed calculation gives the period change as \(-\)1.3\(\times\)10\({}^{-8}\) days. In any case, this negligibly small effect is always _negative_ for a decreasing period, so this second effect cannot account for the observed \(\Delta P\). The third mechanism is essentially magnetic braking of the companion star inside the expanding nova ejecta shell. This effect will necessarily be small due to the relatively high-speed and low-mass of the ejecta, so the resultant period change is negligibly small even compared to the mass-loss effect. In any case, this third effect is always negative for the case in hand, so it cannot account for the observed \(\Delta P\). The fourth mechanism to change \(P\) across a nova eruption is to invoke asymmetric ejecta, where the ejection will produce a reaction force back on to the white dwarf. All nova shells show substantial deviations from spherical symmetry, such that if the white dwarf in T CrB happened to have ejected an excess of mass in the backward direction of its orbital motion, then the orbital velocity will speed up and the period will suddenly increase. For expected conditions for T CrB (say, with Selvelli's \(M_{\rm ejecta}\) and an asymmetry factor of 0.5), the calculated period change is 0.0014 days, which is greatly smaller than observed. But if the M\({}_{\rm ejecta}\) is raised to near 10\({}^{-5}\) M\({}_{\odot}\), then the observed \(\Delta P\) can be reproduced. Such a high M\({}_{\rm ejecta}\) is not outlandish, as my estimated mass accreted between eruptions is 1.2\(\times\)10\({}^{-6}\) M\({}_{\odot}\), and additional mass from the surface of the white dwarf might be dredged up and ejected by the nova event. Further, extreme asymmetries in the ejecta can increase the period change by a factor of 4\(\times\). In all, the only known mechanism with any possibility of producing the large value of \(\Delta P\) is the asymmetric ejection of the nova shell, and only then if pushed to unlikely extremes.
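The mass-loss term is easy to verify from the numbers given:

```python
# The mass-loss term quoted above: dP ~ 2 * P * M_ejecta / (M_WD + M_comp).
P, M_ej, M_tot = 227.5687, 1.0e-7, 1.35 + 0.81   # days, M_sun, M_sun
print(2 * P * M_ej / M_tot)   # ~2.1e-5 days, ~10^4 x below the observed 0.185 d
```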
T CrB has a large \(\dot{P}\) after the 1946 eruption, a very large \(\dot{P}\). The value for T CrB is \(>\)200\(\times\) larger than for any other known nova, for which I have measures for 14 novae (e.g., Schaefer 2020b; 2022a).
The large \(\dot{P}\) can be compared to the theoretical values predicted by various physical mechanisms. Schaefer (2020a) summarizes all known mechanisms to produce \(P\) changes between eruptions, and finds that only three mechanisms can make a steady \(\dot{P}\) on long time-scales. The first mechanism is that ordinary mass transfer from the companion star to the white dwarf will change the period. The resultant \(\dot{P}\) equals \(3P(1+q)\dot{M}/M_{\rm Comp}\), with \(q\) being the mass ratio (M\({}_{\rm Comp}\)/M\({}_{\rm WD}\)) and \(\dot{M}\) being the accretion rate. For the high-state accretion, \(\dot{P}\) is 5.9\(\times\)10\({}^{-8}\), which is 150\(\times\) smaller than the value from the 1946-2022 interval. In any case, this mass-transfer mechanism will always _increase_ the period, so this cannot explain the observed \(\dot{P}\). The second \(\dot{P}\) mechanism is labelled 'magnetic braking', where the stellar wind from the companion is entrained to the companion's magnetic field and forced to co-rotate, so rotational angular momentum goes to the wind atoms which are then lost from the system, with tidal effects forcing synchronized rotation which then takes away angular momentum from the orbit, providing a steady negative \(\dot{P}\). For the case of T CrB, the likely tidal synchronization of the companion star's rotation forces it to have a rotation period of 227 days, and this long rotation period means that any dynamo effect will produce only a very small magnetic field, while the slow spinning of the companion star means that any stellar wind gas will carry little angular momentum (Privitera et al. 2016). Between these two effects, there is no real chance that magnetic braking can account for the very large \(\dot{P}\) after 1946. The third \(\dot{P}\) mechanism is the gravitational radiation emitted by all binary stars, with the associated angular momentum loss slowly grinding down the orbital period. For the case of T CrB, the orbital period is so large that the gravitational radiation effect is infinitesimal. In all, the only known mechanisms for producing long-term steady period changes between nova eruptions are all greatly too small to explain the post-1946 \(\dot{P}\).
Despite having no theoretical explanation for the period changes in T CrB, we do have an empirical measure of those changes. The \(\Delta P\) dominates, with the period increasing by 4.44 hours across the 1946 eruption, so the binary is _separating_ and the companion's Roche lobe radius is enlarging by 24,800 km each eruption. This creates two related problems, one with the past of the T CrB system, and the other with the future of the T CrB system. With the Roche lobe now expanding fast, we are faced with the T-CrB-progenitor problem. That is, how can the progenitor have arrived at a situation with a substantially smaller Roche lobe in the past, only to now be expanding? Further, in the past, the \(\dot{M}\) would have been much higher, pushing the accretion at least into the 'steady hydrogen burning' state, and no such system with a red giant companion is known, despite being luminous and prominent in our Galaxy. I have no useable answer for this T-CrB-progenitor dilemma. The future of T CrB is apparently to separate, with the Roche lobe expanding, the accretion decreasing, and ultimately leading to a disconnected binary if continued. However, the situation for changes in the current rates and the effects of an expanding Roche lobe are complex and unknown, so such an extrapolation over many eruption cycles has little pretense to accuracy.
The _changes_ in \(P\) control the evolution of CVs in general. For T CrB, all the known mechanisms to explain \(\Delta P\) and to explain \(\dot{P}\) fail by orders of magnitude. With this, we have no useful theory for understanding the evolution of T CrB in the past or future, or even for what currently powers the high \(\dot{M}\) that makes the system into an RN.
## 6 The Fourier Transforms
The AID and the _TESS_ light curves have large numbers of well-sampled brightness measures, with these being good for searching for periodicities over a very wide range. The ellipsoidal modulations, with a period of 113.7843 days, are already well-known. A signal at the orbital period is also possible, for example, due to irradiation effects on the companion star. Another possible signal arises from various effects tied to the white dwarf rotation period.
The optimal way to search for strictly periodic signals is the Fourier transform. For my period search, I use the discrete Fourier transform program VSTAR, which also produces a fitted semi-amplitude for each trial frequency. I have applied the Fourier transform to the unbinned AID visual light curve after 1948 (112560 magnitudes), to the _TESS_ Sector 24 and 25 light curve of 53.5 days with 120-second resolution (33235 fluxes), and to the _TESS_ Sector 51 light curve of 19.3 days with 20-second resolution (49272 fluxes). The amplitude of a signal can be derived from the peak power, and the limit on any amplitude over a range of frequency can be derived from the power in the envelope of the noise in the power spectrum.
The half-orbital period produced a highly significant peak in the Fourier transform for the AID data. This peak was at a period of 113.77 days, with a full-amplitude of 0.149 mag. The full orbital period also appears with a peak at 227.27 days, corresponding to a full-amplitude of 0.042 mag.
I have not found any other significant non-artefact periodicities. The transform of the AID light curve places a limit on the presence of any strict periodicity from 10-4000 days to have a full-amplitude of under 0.03 mag. All three Fourier transforms limit the presence of coherent periodicities from 0.01-10 days to have a full-amplitude of under 0.0051 mag. For periods from 40 seconds to 0.01 days, the transform of Sector 51 places a limit on the amplitude of any possible periodicity to be under 0.00072 mag.
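For readers reproducing the search with standard tools, here is a minimal sketch using astropy's Lomb-Scargle periodogram (rather than the VSTAR program actually used here); the array names are hypothetical:

```python
# A sketch of an equivalent period search using astropy's Lomb-Scargle
# periodogram; jd and mag are hypothetical arrays of Julian Dates and
# magnitudes.
import numpy as np
from astropy.timeseries import LombScargle

def search(jd, mag, min_period=10.0, max_period=4000.0, n_freq=200000):
    freq = np.linspace(1.0 / max_period, 1.0 / min_period, n_freq)
    power = LombScargle(jd, mag).power(freq)
    return 1.0 / freq[np.argmax(power)], power.max()

# The ellipsoidal signal should dominate near 113.78 d (half the orbital
# period), with a weaker peak near the full 227.57-d orbital period.
```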
## 7 The Power Spectral Density
The power spectral density (PSD) is the Fourier power (the amplitude squared) as a function of the frequency. This is usually displayed on a log-log plot with logarithmic spacing of frequency bins. This shows the relative strength of fast variations versus slow variations. In CVs, the PSD appears as a noisy power-law, quantified by a power-law index, \(\gamma\), where the average Fourier power scales with frequency (\(f\) in units of cycles per day) as \(f^{-\gamma}\). Typical values for CVs are 0.5\(<\gamma<\)1.5 with substantial variability (e.g., Bruch 2022). Power-laws are always cut off at some low- and high-frequency due to some limitation of the physical mechanism intrinsic to the star. For CVs, the high-frequency cutoff to a steep slope is close to log[\(f\)]\(\approx\)1.9 (Scaringi et al. 2015). Further, the observed power-laws will suffer breaks at high-frequency due to measurement noise (like from Poisson fluctuations) and will suffer from large noise at low-frequency due to the happenstance of the particular random realization of the few measures of the slowest variations. The variations on the faster time-scales (say, faster than a few hours) are commonly termed 'flickering'. The underlying physical mechanism is still unknown, but it undoubtedly arises from fluctuations of the accretion process associated with the disc.
T CrB flickering (c.f., Figs 7-8) has been extensively measured, e.g., Walker (1977), Zamanov & Bruch (1998), and Ilkiewicz et al. (2016). Flickering on T CrB is indistinguishable from flickering on the RNe (Schaefer 2010) and on other CVs (Zamanov & Bruch 1998), despite the large range of disc sizes. Zamanov et al. (2004) have well-measured the T CrB PSD with 27 nights of long photometry runs in the \(U\)-band, concluding that the power scales as \(f^{-1.46\pm 0.17}\) for 1.85\(<\log[f]<\)3.16. I have reported the PSD of T CrB for \(-\)4.3\(<\)log[\(f\)]\(<\)\(-\)1.58 (Schaefer 2010, fig. 66) based on the 1947-2009 AAVSO visual light curve, showing a power-law of \(f^{-0.9}\) with much scatter.
I have constructed a new PSD for T CrB from my complete \(V\)-band light curve in magnitudes for the years 1868-1945 and 1948-2022 (avoiding times near the eruptions), from the 53.5 day interval for _TESS_ Sectors 24 and 25 with 120-s time resolution flux light curve, and from the 24.6 day interval for _TESS_ Sector 51 with 20-second time resolution flux light curve. The time intervals near the eruptions are excluded because the eruption light curves are not a measure of the accretion processes highlighted by the PSD. From the \(V\)-band PSD, I have removed the frequencies near one-month and one-year due to the presence of the usual artefacts. Poisson noise turns the PSD flat for log[\(f\)]\(>\)2.33, so this is my limit. I have added in a power-law to represent the observed \(U\)-band magnitude light curve PSD of Zamanov et al. (2004). The four PSDs all have separate normalizations that are largely unknowable, but fortunately three have large overlap in frequencies, while the \(V\)-band PSD can be matched to the _TESS_ PSDs at log[\(f\)] around \(-\)1.1. Each PSD is normalized so that the deviations in the overlap region are minimized.
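Schematically, the PSD assembly reduces to log-binning each periodogram and fitting a straight line in log-log space; a minimal sketch, with the cross-normalization step only described in the comments:

```python
# A sketch of the PSD assembly: take periodogram power, bin it in
# log-frequency, and fit the power-law index gamma in log-log space.
import numpy as np

def logbin(log_f, log_power, width=0.1):
    idx = np.floor(log_f / width).astype(int)
    return (np.array([log_f[idx == i].mean() for i in np.unique(idx)]),
            np.array([log_power[idx == i].mean() for i in np.unique(idx)]))

def psd_slope(log_f, log_power):
    """Fit log(power) = a - gamma * log(f); returns gamma."""
    slope, _ = np.polyfit(log_f, log_power, 1)
    return -slope   # ~1.22 for the combined T CrB PSD

# Each of the four input PSDs carries an arbitrary normalization, so in
# practice each is offset vertically to minimize deviations where the
# frequency ranges overlap, before binning and fitting.
```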
Fig. 13 shows my combined PSD covering \(-\)4.69\(<\)log[\(f\)]\(<\)3.16. This is 7.85 orders of magnitude in frequency. The usual high-frequency break near log[\(f\)]\(\approx\)+1.9 cycles per day is not seen. It is striking that the entire PSD is close to a single power-law, with \(f^{-1.22\pm 0.08}\). The existence of this PSD so close to a single power-law is consistent with the possibility that a single physical mechanism produces the variations on time-scales from 1.0 minutes to 134 years.
Figure 13: Power spectral density over 7.85 orders-of-magnitude in frequency. This PSD is a composite of four normalized PSDs, derived from my \(V\)-band light curve 1848–1944 and 1948–2022 (green diamonds), 53.5 days of the _TESS_ Sectors 24 and 25 (red diamonds), the 24.6 day light curve from _TESS_ Sector 51 with 20-s time resolution (orange diamonds), and the 27-night \(U\)-band light curve from Zamanov et al. (green line sticking out at the lower right). The single high point is from the ellipsoidal modulations. The PSD is close to a single power-law with \(f^{-1.22\pm 0.08}\).
## 8 The spectral energy distribution
The spectral energy distribution (SED) is the monochromatic spectral flux density (F\({}_{\nu}\) with units of Jansky), or alternatively the flux (\(\nu\)F\({}_{\nu}\)), as a function of the photon frequency (\(\nu\) in units of Hertz), and is often displayed on a log-log plot. The SED shows where the energy comes out, and viewers can readily see the shapes (like blackbodies and power-laws) with the physical interpretations. For T CrB, I have collected measured fluxes in Table 7 from many sources, with these being ordered by year of observation. Most sources are from the years 1976-2012, while T CrB was in its quiescent low-state (top panel of Fig. 14). The last two sources (AID and _Swift_) are from 2019-2022, with T CrB in its high-state (bottom panel of Fig. 14). The X-ray data are not displayed, as the points are isolated far off the right side and far off the bottom of the plot.
The SED plot shows a good blackbody shape for the red giant companion star, plus a flattening in the ultraviolet and \(U\) bands. I have fitted a blackbody plus an \(\alpha\)-disc model (Frank, King, & Raine, 2002), with the fit from disc-plus-blackbody shown as a thick purple curve. The blackbody alone is shown as a narrow grey line that merges with the total curve redward of the \(B\) band (because the red giant dominates greatly over disc light). The standard \(\alpha\)-disc model gives the flux across frequencies (Frank et al., 2002), where I have adopted a distance of 914 pc (Schaefer, 2022c), \(E(B-V)\)=0.065 from _Galex_, a white dwarf mass of 1.35 M\({}_{\odot}\) (Shara et al., 2018; Hachisu & Kato, 2019), a mass ratio of 0.60 (Belczynski & Mikolajewska, 1998), a disc size of 22 per cent of the white dwarf Roche lobe size (Eq. 4.20 Frank et al., 2002), an orbital period of 227 days, and an orbital inclination of 60\({}^{\circ}\) (Belczynski & Mikolajewska, 1998). This accretion disc is fully specified with confidently measured parameters, except that the accretion rate is a free parameter adjusted to fit the ultraviolet brightness. The X-ray flux measures are not included in the fit, as they come from a different physical mechanism (boundary layer emission) that is not described by the \(\alpha\)-disc model.
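As an illustration of how the disc component can be computed, here is a compact numerical sketch of a standard \(\alpha\)-disc spectrum (a multi-temperature blackbody integrated over annuli, following Frank, King & Raine 2002). The inner and outer radii below are illustrative placeholders rather than fitted values, and the accretion rate is the one free parameter:

```python
# A sketch of a standard alpha-disc spectrum (Frank, King & Raine 2002):
# a multi-temperature blackbody integrated over disc annuli. The distance,
# mass, and inclination follow the text; R_in and R_out are illustrative
# placeholders, and Mdot is the one free parameter.
import numpy as np

G, h, k, c, sigma_sb = 6.674e-8, 6.626e-27, 1.381e-16, 2.998e10, 5.670e-5  # cgs
M_SUN, R_SUN, PC = 1.989e33, 6.957e10, 3.086e18

def disc_flux(nu, M=1.35 * M_SUN, Mdot=4.0e18, R_in=5.0e8,
              R_out=19 * R_SUN, D=914 * PC, inc_deg=60.0):
    """Observed F_nu (erg/s/cm^2/Hz; divide by 1e-23 for Jy) of a flat disc."""
    R = np.logspace(np.log10(R_in * 1.001), np.log10(R_out), 2000)
    T = (3 * G * M * Mdot / (8 * np.pi * sigma_sb * R**3)
         * (1 - np.sqrt(R_in / R))) ** 0.25
    x = np.clip(h * nu / (k * T), 1e-6, 300.0)
    f = R / np.expm1(x)
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(R))   # trapezoid rule
    return 4 * np.pi * h * np.cos(np.radians(inc_deg)) * nu**3 / (c**2 * D**2) * integral

# Mdot = 4.0e18 g/s corresponds to ~6.4e-8 M_sun/yr, the high-state rate;
# e.g. disc_flux(10**15.13) gives the model F_nu near the Swift UVM2 band.
```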
My fitted temperature for the red giant is 2870\(\pm\)40 K, with this being applicable both for the low-state and the high-state. The fitted disc model (the light-blue curve) is essentially determined to fit the ultraviolet fluxes. The vertical scatter in the SED plot is due to the intrinsic variability of T CrB, with this variability being large in the ultraviolet. In the high-state, the ultraviolet flux is larger than in the low-state by typically a factor of 20\(\times\), and this points to the high-state having a greatly increased accretion rate.
## 9 Accretion rate and accretion mass
The activity and energetics and eruptions of T CrB are all driven by the accretion via Roche lobe overflow from the red giant to a disc around the white dwarf. The critical parameters are the accretion rate (\(\dot{M}\)) and the mass accumulated on the surface of the white dwarf (\(M_{\rm acc}\)). The accretion light is not readily separable from the red giant light in the optical bands, while the accretion light dominates in the ultraviolet. So \(\dot{M}\) can only be measured from the ultraviolet flux. Selvelli et al. (1992) measured the accretion at 2.32\(\times\)10\({}^{-8}\) M\({}_{\odot}\) yr\({}^{-1}\), based on many measures with the _International Ultraviolet Explorer (IUE)_. In 25 observations from 1979-1990, they found the ultraviolet flux to be highly variable, with the RMS being 60 per cent of the average, and with an extreme range of a factor of 20\(\times\). With this variability, we can only derive an average accretion rate that is hopefully representative of the entire time interval. The tool is the SED fit to the ultraviolet flux, as in the previous section, where the \(\alpha\)-disc model is completely determined with good accuracy, except for the accretion rate, which is determined as a fit parameter.
\begin{table}
\begin{tabular}{l c c c c c} \hline Source & Year & Band & Flux & Log[\(\nu\)] & Log[\(F_{\nu}\)] \\ \hline Walker & 1976 & \(U\) & 12.85 mag & 14.92 & -1.76 \\ Walker & 1976 & \(B\) & 11.6 mag & 14.83 & -0.91 \\ Walker & 1976 & \(V\) & 10.04 mag & 14.74 & -0.37 \\ Walker & 1976 & \(R\) & 9.04 mag & 14.67 & -0.07 \\ Walker & 1976 & \(I\) & 7.46 mag & 14.58 & 0.46 \\... & & & & \\ AID & 2016-22 & \(I\) & 7.4 mag & 14.58 & 0.49 \\ _Swift_ XRT & 2018 & 0.2–10 keV & 5.6\(\times\)10\({}^{-6}\) Jy & 17.38 & -5.25 \\ _Swift_ UVOT & 2019-22 & \(UVW1\) & 12.88 mag & 15.08 & -2.09 \\ _Swift_ UVOT & 2019–23 & \(UVM2\) & 10.72 mag & 15.13 & -1.20 \\ _Swift_ UVOT & 2019-24 & \(UVW2\) & 10.98 mag & 15.18 & -1.32 \\ \hline \end{tabular}
\end{table}
Table 7: T CrB Spectral Energy Distribution (the full table of 55 fluxes, plus a column for references, appears in the on-line Supplementary Material).
Figure 14: Spectral energy distribution plot for the quiescent low-state (top panel) and the recent high-state (bottom panel). The fitted blackbody from the red giant companion star is shown by the thin gray line, and this is unchanged from low-state to high-state. The best-fitting temperature is 2870\(\pm\)40 K. The fitted \(\alpha\)-disc is shown by the light-blue curve, with its position and shape determined only from the ultraviolet fluxes. Redward of the \(B\)-band, the red giant light dominates over the disc light by orders-of-magnitude. The scatter about the best smooth curve shows the ordinary temporal flickering of T CrB. Top panel: With many flux measures from 1976–2012, the SED is well defined, with a good fit to the red giant’s blackbody, while the highly-variable disc contribution is only visible in the ultraviolet. Bottom panel: I can find few usable measures of the flux after the high-state turn-on around 2015. These show that the \(R\) and \(I\) magnitudes are unchanged from low- to high-state, which is to say that the red giant is not the source of the extra luminosity in the high-state. The extra light in the high-state is very blue and ultraviolet rich, and this light varies substantially day-to-day.
For the low-state, we have 25 ultraviolet fluxes from _IUE_ in 1979-1990, 2 fluxes from _Galex_ in 2005 and 2007, and 15 fluxes in three bands from _Swift_ UVOT in 2008 and 2011. For the high-state, we have 8 fluxes in three bands from _Swift_ UVOT in 2019-2022, plus 4 \(U\) magnitudes reported by AID for 2021 and 2022. The fitted \(\alpha\)-disc models are shown in Fig. 14, with the derived \(\dot{M}\) representing the average over the time intervals. The average low-state accretion rate is 3.2\(\times\)10\({}^{-9}\) M\({}_{\odot}\) yr\({}^{-1}\), while it is 6.4\(\times\)10\({}^{-8}\) M\({}_{\odot}\) yr\({}^{-1}\) in the high-state.
T CrB has gone through many states since the first positive detection in 1855. In Table 8, I have listed the states, with a descriptive name and the observed range of years. To each of these states, I have assigned either the low-state or the high-state accretion rate, with the eruptive states presumably having no accretion as the overflow mass is blown away. The mass accreted in each interval, \(M_{\rm acc}\) in solar masses, is then calculated and listed. With this, the total mass accreted between eruptions, \(\Sigma M_{\rm acc}\), can be added up. The 1866-1946 interval, one complete eruption cycle, had \(\Sigma M_{\rm acc}\)=1.4\(\times\)10\({}^{-6}\) M\({}_{\odot}\). The 1946-2022.8 interval has an accreted mass of 1.2\(\times\)10\({}^{-6}\) M\({}_{\odot}\), but this cycle is not yet complete.
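The bookkeeping behind Table 8 is simple enough to sketch in a few lines of Python; the state list below is an abbreviated, illustrative subset of the table rather than the full record.

```python
# Each state: (label, start year, end year, accretion rate in M_sun/yr).
# Eruptive states carry a zero rate, since the overflow mass is blown away.
states = [
    ("Low-state",               1954.5, 2015.0, 3.2e-9),
    ("Pre-eruption high-state", 2015.0, 2022.8, 6.4e-8),
]

M_acc = {name: rate * (t1 - t0) for name, t0, t1, rate in states}
for name, m in M_acc.items():
    print(f"{name:25s} M_acc = {m:.1e} M_sun")
print(f"Sigma M_acc = {sum(M_acc.values()):.1e} M_sun")
```

Running the two example states reproduces the 1.9\(\times\)10\({}^{-7}\) and 5.0\(\times\)10\({}^{-7}\) M\({}_{\odot}\) entries of Table 8.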
My value for log[\(\Sigma M_{\rm acc}\)] for one complete eruption cycle is \(-\)5.9, and this can be compared to theoretical predictions. The mass accreted between eruptions required to trigger the nova eruption is variously labelled as \(M_{\rm ign}\) by Shen & Bildsten (2009), \(m_{\rm acc}\) by Yaron et al. (2005), or \(M_{\rm trigger}\) by Schaefer et al. (2022a). These calculations are for a steady-state accretion, yet for T CrB it is unclear whether the low-state or high-state accretion is applicable. In their fig. 7, Shen & Bildsten give the trigger mass for a 1.35 M\({}_{\odot}\) white dwarf being fed material with solar abundances, with the log-mass ranging from \(-\)5.9 to \(-\)5.2 for log[\(\dot{M}\)] varying from \(-\)7 to \(-\)9. Similarly, for accretion rates varying over the same range, Yaron et al. have ranges of log-mass from \(-\)7.1 to \(-\)6.4 for relatively cool white dwarfs, and from \(-\)7.1 to \(-\)6.7 for relatively hot white dwarfs. I can only conclude that theory has not yet made a confident and accurate prediction, with the log of the trigger mass possibly being anywhere from \(-\)5.2 to \(-\)7.1. With this, my measure fits in well with theoretical expectations for the trigger mass.
## 10 Energetics of eruptions and high-states
The total energy radiated by T CrB in its various states can tell us about the physics of the states. This energy can be calculated from integrals under the long-term light curves, to produce the energy in the \(B\) and \(V\) bands, \(E_{\rm B}\) and \(E_{\rm V}\).
The first step is to pull out binned \(B\) and \(V\) light curves. For this, I use the light curve shown in Fig. 6, where the ellipsoidal variations are averaged out by the use of 113.7843-day bins during quiescence.
The second step is to correct for extinction. With an adopted \(E(B-V)\)=0.065 from _Galex_, the \(B\)-band and \(V\)-band extinctions are 0.27 and 0.21 mag.
The third step is to convert all the magnitudes to flux densities with units of Jansky. For this, a zero-magnitude star has flux densities of 4260 and 3640 Jy in the \(B\) and \(V\) bands (Bessell 1979).
The fourth step is to subtract out the constant light of the red giant, so we are left with only flux from the accretion processes. The subtracted red giant fluxes in \(B\) and \(V\) are taken from the SED fits.
The fifth step is to convert these flux densities in Janskys to the luminosity in the \(B\) and \(V\) bands, with units of erg s\({}^{-1}\). Three conversion factors are needed. First, the flux density in Janskys is to be multiplied by a factor of 10\({}^{-23}\) to get units of erg s\({}^{-1}\) cm\({}^{-2}\) Hz\({}^{-1}\). Second, the factor of 4\(\pi D^{2}\) is used to get units of erg s\({}^{-1}\) Hz\({}^{-1}\). Here, I adopt the distance to T CrB of 914 pc (Schaefer 2022c). Third, the converted flux must be multiplied by the bandwidth, in Hertz, to get the luminosity in that band. For bandwidths of 980 Å and 890 Å in the \(B\) and \(V\), the correction factors are 1.54\(\times\)10\({}^{14}\) and 8.88\(\times\)10\({}^{13}\) Hz.
The sixth step is to integrate the luminosity over time for various intervals so as to get the total radiated energy \(E_{\rm B}\) and \(E_{\rm V}\) (Table 8). With the pre-eruption dip apparently caused by dust extinction from the circumstellar medium, the observed brightness does not represent the emitted energy, so I have calculated the radiated energy assuming the high-state luminosity.
The seventh step is to correct from \(E_{\rm B}\) and \(E_{\rm V}\) to the bolometric energy \(E_{\rm bolo}\). To make this correction for the low- and high-states, I use my SED fits for the \(\alpha\)-disc models. For the primary and secondary eruptions, I adopted an SED for a 10,000 K blackbody, as appropriate for the photosphere of nova shells. My bolometric range extends from 0.1-30 microns, which includes effectively all the energy, except for the unknown flux that comes out in the far-ultraviolet. The X-ray flux is always greatly too small to contribute any significant fraction of energy. Both the \(B\) and \(V\) light curves should produce the same \(E_{\rm bolo}\), to within measurement errors. The differences in the two measures of log[\(E_{\rm bolo}\)] have an average near zero and an RMS of 0.2, which I take to be the real measurement uncertainty. The log \(E_{\rm B}\), log \(E_{\rm V}\), and log[\(E_{\rm bolo}\)] values are in Table 8.
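Steps two through six condense into a short numerical sketch, given here as a minimal illustration that assumes a binned magnitude series; the example inputs and the red-giant flux level are placeholders, not the actual fitted values.

```python
import numpy as np

D_CM = 914 * 3.086e18    # adopted distance of 914 pc, in cm
YR_S = 3.156e7           # seconds per year
# band: (extinction A [mag], zero-mag flux density [Jy], bandwidth [Hz])
BANDS = {"B": (0.27, 4260.0, 1.54e14), "V": (0.21, 3640.0, 8.88e13)}

def band_luminosity(mag, band, F_rg_Jy):
    """Steps 2-5: de-extinct, convert to Janskys, subtract the red giant,
    and convert to a band luminosity in erg/s."""
    A, F0, bw = BANDS[band]
    F_Jy = F0 * 10.0**(-0.4 * (np.asarray(mag, dtype=float) - A))
    F_acc = np.clip(F_Jy - F_rg_Jy, 0.0, None)       # accretion light only
    return F_acc * 1e-23 * 4.0 * np.pi * D_CM**2 * bw

def band_energy(t_yr, mag, band, F_rg_Jy):
    """Step 6: integrate the band luminosity over a state's time interval."""
    L = band_luminosity(mag, band, F_rg_Jy)
    return np.trapz(L, np.asarray(t_yr, dtype=float) * YR_S)

# Illustrative call: three V-band bins spanning part of a high-state,
# with a placeholder red-giant level of 0.42 Jy in V.
E_V = band_energy([2016.0, 2019.0, 2022.0], [9.6, 9.5, 9.7], "V", 0.42)
```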
## 11 The unique states around eruptions
### The High-State
The unique high-state starts close to ten years before the 1946 eruption (and \(<\)10.1 years before the 1866 eruption), continuing until nine years after both eruptions as a nearly flat plateau, for a duration of 19 years. (Superposed on this high-state plateau are the pre-eruption dip, the primary eruption, and the secondary eruption.) No precedent is known. Back in the late 1930s, it was recognized that the extra light was very blue in colour, and was powering the new bright emission lines of high-ionization species (Payne-Gaposchkin & Wright, 1946). So the high-state was recognized to arise from some mysterious source that was very hot.
The appearance of the hot source is apparently associated with mass ejection at some level. During World War II, spectroscopists started recording P Cygni line profiles indicating associated outflows at \(\sim\)300 km s\({}^{-1}\) (Swings, Elvey, & Struve 1940).
As for the mechanism and cause of this hot-source, I am aware of no suggestions in the literature, largely because the existence of the high-state was unknown until the long-term fully-calibrated light curve (as in Fig. 1 and Fig. 6) became available (Schaefer 2014). Nevertheless, there are only two possibilities, either the accretion rate suddenly increased by roughly 20\(\times\) making for a bright accretion disc, or some type of nuclear burning ignited on the surface of the white dwarf powering some sort of a hot photosphere. The nuclear burning possibility is dubious because there is no situation where burning would simmer steadily for 19 years, much less punctuated by at least one real thermonuclear runaway on the white dwarf. Further, there is no real chance that a hot photosphere around the white dwarf would drive mass ejection with a velocity as slow as 300 km s\({}^{-1}\).
So the high-state is an accretion event, where the red giant suddenly starts (and stops 19 years later) to pour mass through the Roche lobe with the \(\dot{M}\) value 20\(\times\) higher. I know of no physical mechanism that can drive this accretion high-state. Further, I know of no timing mechanism that can start the high-state at a time 69\(\pm\)1 year after the prior eruption, that starts the mass ejection several years before the eruption, and stops the high-state 9\(\pm\)1 years after the eruption.
The existence of the pre-eruption high-state presents a mystery
as to why the system brightens _before_ the eruption. The physical mechanism that makes the high-state is presumably associated with the accretion rate, which is controlled by the Roche lobe overflow in the atmosphere of the red giant near the inner Lagrangian point of the orbit. The physical mechanism that determines the timing of the start of the classical nova eruption is controlled by the pressure and temperature at the base of the accreted material on the surface of the white dwarf. But how can the atmosphere of the red giant star know about the far-future conditions deep in the white dwarf? What mechanism connects the two locations? One suggestion is that the red giant atmosphere independently kicks into a high-state, and the enlarged accretion rate quickly drives the white dwarf accretion layer to the nova trigger threshold. But this is a description of the problem (that the high-state starts 10 years before the nova event), not an explanation for the timing of the high-state and the eruption. So we are left with no explanation for the enigma as to why the high-state anticipates the eruption by 10 years, just as we have no explanation for the cause or physical mechanism of the high-state existence.
Intriguingly, I know of three other novae that have conspicuous pre-eruption rises that might be related to the pre-eruption high-state of T CrB: **(1)** V533 Her had a distinct exponential rise by 1.5 mag in the 1.5 years leading up to its 1963 nova eruption (Collazzi et al., 2009). **(2)** V1500 Cyg had an exponential rise in brightness by \(>\)8 mags in the 23 days before its eruption (Collazzi et al., 2009). **(3)** The 2011 eruption of the RN T Pyx had a unique flare that peaked at 1.0 mag above the normal quiescent level at a time 9 days before the fast rise of the nova event (Schaefer et al., 2013).
Further, 8 novae are known to have a prolonged high-state lasting for many decades after the eruption has completely stopped (Schaefer & Collazzi, 2010). These are labelled as 'V1500 Cyg stars', named after their prototype. All of these classical novae are more than 2.5 mag brighter than their pre-eruption levels for times more than thirty years after the nova light curve has gone essentially flat. The only plausible explanation is that the white dwarf has continued nuclear burning on its surface, and this is driving a high accretion rate. The cause and mechanism for this continued nuclear burning is unknown. These post-eruption high-states might serve as precedents or exemplars for the T CrB post-eruption high-state.
We are left with the T CrB pre- and post-eruption high-state as being completely unique, yet there are at least 11 nova systems with pre- and post-eruption high-states that might share the same physical mechanism. Further, three other nova systems have re-brightenings that can be labelled as secondary eruptions (see Section 11.3). All of these novae have energetic transient high-states and eruptions that are mixed together with a wide array of morphologies and time-scales. The physical mechanisms are not proven in any of these other cases, but it appears that all share a greatly increased accretion rate, perhaps powered by continued nuclear burning on the white dwarf that starts and stops away from the regular nova event. With these loose precedents, I expect that the high-state of T CrB is the result of a greatly increased accretion rate, possibly triggered by steady thermonuclear burning on the white dwarf.
### Pre-Eruption Dip
There is no precedent in any system for a pre-eruption dip. The primary clue as to the cause of the dip is that the \(V\)-band light falls roughly 1.5 mag fainter than the normal quiescent level of the red giant alone. I think that the easiest mechanism to dim the red giant light, to far fainter than its normal level for most of a year, is for the dimming to be external to the star. With this, the fainter brightness of the red giant is likely due to dimming by circumstellar dust. This fits nicely with the observed P Cygni profiles spotted only in the several years before the eruption (Payne-Gaposchkin and Wright, 1946). That is, just before the start of the pre-eruption dip, T CrB was ejecting gas with velocities \(\sim\)300 km s\({}^{-1}\), while ejecta from novae are famous for occasionally producing shells of freshly-made dust that dims the star, often by many magnitudes. So we have a simple explanation for the pre-eruption dip.
Details still need to be worked out. For example, models of dust formation can test whether the timing between the visibility of the P Cygni profiles and the dimming light curve is reasonable, and ejection rates should be calculated. Maybe most important is to understand the relative dimming between \(B\) and \(V\), for which the larger dimming in the \(V\) band appears to violate the usual extinction laws. Fortunately, in the next year or so, T CrB might go through another pre-eruption dip, and this can be observed with the full suite of modern instruments.
### The Secondary Eruption
The secondary eruption behaved identically after the 1866 and 1946 eruptions (see Fig. 3). In both cases, the nova light had completely stopped 30 days after the peak, returning to the high-state level, only to have the secondary eruption start around day 110, have a nearly-flat maximum peaking around day 160, and then suddenly drop to the high-state level around day 210.
The secondary eruption displayed a continuum spectrum, indicating some sort of an optically-thick emission region. The \(B-V\) is
\begin{table}
\begin{tabular}{l l l l l l l} \hline State & Years & \(\dot{M}\) & \(M_{\rm acc}\) & Log[\(E_{\rm B}\)] & Log[\(E_{\rm V}\)] & Log[\(E_{\rm bolo}\)] \\ \hline Primary eruption & 1866.361 – 1866.44 & 0 & 0 &... & 43.14 & 44.20 \\ Inter-eruption high-state & 1866.44 – 1866.66 & 6.4\(\times 10^{-8}\) & 1.4\(\times 10^{-8}\) &... & 41.20 & 42.76 \\ Secondary eruption & 1866.66 – 1866.95 & 0 & 0 &... & 42.17 & 43.23 \\ Post-eruption high-state & 1866.95 – 1874.8 & 6.4\(\times 10^{-8}\) & 5.0\(\times 10^{-7}\) &... & 42.63 & 44.19 \\ Low-state & 1874.8 – 1935.5 & 3.2\(\times 10^{-9}\) & 1.9\(\times 10^{-7}\) & 42.29 & 42.28 & 43.58 \\ Pre-eruption high-state & 1935.5 – 1942.5 & 6.4\(\times 10^{-8}\) & 4.5\(\times 10^{-7}\) & 42.78 & 41.90 & 43.46 \\ Pre-eruption dip & 1942.5 – 1946.103 & 6.4\(\times 10^{-8}\) & 2.3\(\times 10^{-7}\) & 42.49 & 41.41 & 42.97 \\ Primary eruption & 1946.103 – 1946.19 & 0 & 0 & 43.72 & 43.11 & 44.17 \\ Inter-eruption high-state & 1946.19 – 1946.38 & 6.4\(\times 10^{-8}\) & 1.2\(\times 10^{-8}\) & 41.28 & 41.02 & 42.59 \\ Secondary eruption & 1946.38 – 1946.68 & 0 & 0 & 42.37 & 42.09 & 43.15 \\ Post-eruption high-state & 1946.68 – 1954.5 & 6.4\(\times 10^{-8}\) & 5.0\(\times 10^{-7}\) & 42.80 & 42.46 & 44.03 \\ Low-state & 1954.5 – 2015.0 & 3.2\(\times 10^{-9}\) & 1.9\(\times 10^{-7}\) & 42.29 & 42.28 & 43.58 \\ Pre-eruption high-state & 2015.0 – 2022.8 & 6.4\(\times 10^{-8}\) & 5.0\(\times 10^{-7}\) & 42.76 & 42.59 & 44.16 \\ \hline \end{tabular}
\end{table}
Table 8: T CrB Accretion Rates and Energetics
the same as for the colour at the primary peak, suggesting that the optically-thick region is some sort of a photosphere with a 10,000 K temperature, as is universal for nova eruptions. The total bolometric energy of the secondary eruption is 10\(\times\) less than that of the primary eruption. This total energy (10\({}^{43.2}\) ergs) is large, similar to the energy from thermonuclear runaways of nova events, and greatly larger than anything possible from accretion energy.
I know of three attempted theoretical explanations: **(1)** The first attempt appeared in the highly influential review on RNe by Webbink et al. (1987), with the secondary eruption caused by a concentrated ring of gas in the disc suddenly accreting on to the surface of the main-sequence star. This model requires that the white dwarf be replaced by a main-sequence star, with this idea being strongly refuted (Selvelli et al., 1992). **(2)** Webbink et al. (1987) mention a second possible explanation for the secondary maximum, which has the inner hemisphere of the red giant being irradiated by the nova light, with this reprocessed energy becoming visible as the hot/bright side of the companion star orbiting into and then out of view. This possibility is ruled out by the start of the secondary eruption being 80 days after the primary eruption light had completely faded away. Further, Webbink et al. (1987) point out that the timing and peak colours are all wrong for the irradiation explanation. **(3)** Hachisu & Kato (1999) proposed an extension to the irradiated companion idea, where a severely tilted disc is irradiated, with the combined irradiation resulting in the secondary event. This idea fails for all the same reasons as the previous explanation. Further, the irradiation of the companion star and of the accretion disc cannot produce a spectrum that shows a continuum with a photospheric temperature of around 10,000 K. The effects of the disc tilting are much too small to matter, as the disc flux scales as the cosine of the viewing angle, with the postulated tilt going from 70\({}^{\circ}\) to 35\({}^{\circ}\) giving a 2.4\(\times\) increase in the disc light, which is only a few per cent of the optical light. In the end, the three attempted explanations all fail.
Let me propose a new explanation. The idea is simply that the secondary eruption is a separate nova eruption, i.e., a thermonuclear runaway involving accreted mass on the surface of the white dwarf. The nuclear burning is of mass accreted at the high-\(\dot{M}\) rate between the end of the primary nova eruption and the start of the secondary nova eruption. The trigger mass is greatly smaller for the secondary eruption because the surface of the white dwarf is greatly hotter due to the primary eruption. This idea provides a simple explanation for the continuum from a 10,000 K photosphere as being the usual consequence of a thermonuclear nova event. This idea provides a known energy source that is sufficiently large to explain the observed radiative energy of 10\({}^{43.2}\) ergs. The fast rise and the duration are characteristic of nova eruptions, although the light curve shape is more like the uncommon low-energy F-class novae (Strope et al., 2010). The relative delay from the primary to secondary eruptions is determined by the high-state accretion rate, which appears constant between the 1866 and 1946 eruptions. However, before this idea can be taken seriously, detailed model calculations of nova eruptions are needed for the specific conditions of T CrB. Critical questions would be whether the different conditions at the start of the secondary eruption could lead to the observed light curve shape, and whether the accreted mass before the secondary eruption is adequate to produce the observed energy.
Intriguingly, some rare classical novae have low-amplitude eruption events that are soon after their primary nova: **(1)** V1047 Cen had a pre-eruption magnitude \(V\)=18.7, came to a peak of \(V\)=8.5 in mid-2005, and slowly faded to 16th mag from 2006-2019. In mid-2019, the system brightened suddenly by \(>\)4 mags, staying with a flat-topped peak around 14.0 mag for 10 months, and then faded back to its pre-outburst level. **(2)** V5856 Sgr had a pre-eruption brightness of \(I\)\(>\)22, a peak magnitude of \(V\)=5.9 in 2016, and a slow decline from 11.0-13.3 mag over the interval 2017.0-2021.6. Then, the old nova suddenly brightened by over 1 mag for a flat-topped re-brightening lasting 260 days, followed by three 2-month-long flares that are continuing to the present. **(3)** V1280 Sco has a pre-eruption level of \(V\)\(>\)20.0, reached a peak at \(V\)=3.78 in 2007, had the usual D-class jitters, went into a deep dip fading below 16.0 mag (likely due to the usual dust formation in the expanding nova shell), and the light curve recovered to 11.5 mag in early 2008. In all other cases, D-class novae will immediately start fading, but V1280 Sco remained constant at around 11.5 mag from 2008-present. This long luminous plateau might be a high-state, or it might be a second eruption starting just 1 year after the primary eruption with a low flat-topped light curve.
So the secondary eruption of T CrB might have partial precedent with these three other novae. The morphologies and time-scales of all four secondary eruptions vary widely, but they all share the properties of a large total energy output (comparable or larger than for the primary eruption), starting out with the nova brightness (and hence accretion) far above its pre-eruption level, having the start of the secondary eruption sufficiently close in time to the primary eruption (from 0.3-14 years) such that some causal connection seems forced, with all the secondary eruptions having fast rises of under a month followed by a flat-topped plateau. These morphological similarities are not enough to prove that the same mechanism is powering all four novae, but it is enough to make a reasonable case that we are seeing four versions of one eruption mechanism. The four novae are a fair cross-section of the nova population, and I do not see any pattern to the four novae with secondary eruptions.
## 12 Acknowledgements
The real heroes of this massive data-mining program are the roughly 2000 observers from 1842-2022, who spent many nights in vigils over T CrB. Further, this heartfelt thanks must extend to the roughly 150 workers involved with archiving and maintaining the plates, letters, data books, light curves, and papers throughout the last 180 years. Ron Webbink (University of Illinois, Champaign-Urbana) provided the valuable service of creating an exhaustive bibliography on T CrB for papers, notebooks, letters, and archival manuscripts, with these resources dominating the pre-1975 visual and photographic light curves and preventing many of the old magnitudes from being lost. Further, many instrument builders, observers, and data-handlers provided light curves from APASS, DASCH, TESS, and ASAS. For my study, I made heavy use of many utilities and products of the AAVSO, including comparison star calibrations (APASS), finder chart construction (VSP), light curve plotting (LCG), Fourier transforms (VStar), the archiving of over one-eighth of a million T CrB magnitudes from observers worldwide (AID), plus the old charts, manuscripts, and letters now only in the AAVSO archives and files. I have been a heavy user of the Harvard plates, with the courtesy of Josh Grindlay and Alison Doane. I thank Peter Kroll for hospitality during two long visits to Sonneberg Observatory, and Ulrich Heber for hospitality during a visit to Bamberg Observatory. The referee was helpful with the _TESS_ analysis.
The DASCH project at Harvard is grateful for partial support from NSF grants AST-0407380, AST-0909073, and AST-1313370.
## 13 Data Availability
All of the photometry data are explicitly given in Tables 2-6, or are publicly available from the references and links in Table 1.
|
2305.09141 | Deep Ensembling for Perceptual Image Quality Assessment | Blind image quality assessment is a challenging task particularly due to the
unavailability of reference information. Training a deep neural network
requires a large amount of training data which is not readily available for
image quality. Transfer learning is usually opted to overcome this limitation
and different deep architectures are used for this purpose as they learn
features differently. After extensive experiments, we have designed a deep
architecture containing two CNN architectures as its sub-units. Moreover, a
self-collected image database BIQ2021 is proposed with 12,000 images having
natural distortions. The self-collected database is subjectively scored and is
used for model training and validation. It is demonstrated that synthetic
distortion databases cannot provide generalization beyond the distortion types
used in the database and they are not ideal candidates for general-purpose
image quality assessment. Moreover, a large-scale database of 18.75 million
images with synthetic distortions is used to pretrain the model and then
retrain it on benchmark databases for evaluation. Experiments are conducted on
six benchmark databases three of which are synthetic distortion databases
(LIVE, CSIQ and TID2013) and three are natural distortion databases (LIVE
Challenge Database, CID2013 and KonIQ-10k). The proposed approach has provided
a Pearson correlation coefficient of 0.8992, 0.8472 and 0.9452, respectively, and a
Spearman correlation coefficient of 0.8863, 0.8408 and 0.9421. Moreover, the
performance is demonstrated using perceptually weighted rank correlation to
indicate the perceptual superiority of the proposed approach. Multiple
experiments are conducted to validate the generalization performance of the
proposed model by training on different subsets of the databases and validating
on the test subset of BIQ2021 database. | Nisar Ahmed, H. M. Shahzad Asif, Abdul Rauf Bhatti, Atif Khan | 2023-05-16T03:45:02Z | http://arxiv.org/abs/2305.09141v1 | # Deep Ensembling for Perceptual Image Quality Assessment
###### Abstract
Blind image quality assessment is a challenging task, particularly due to the unavailability of reference information. Training a deep neural network requires a large amount of training data, which is not readily available for image quality. Transfer learning is usually adopted to overcome this limitation, and different deep architectures are used for this purpose as they learn features differently. After extensive experiments, we have designed a deep architecture containing two CNN architectures as its sub-units. Moreover, a self-collected image database BIQ2021 is proposed with 12,000 images having natural distortions. The self-collected database is subjectively scored and is used for model training and validation. It is demonstrated that synthetic distortion databases cannot provide generalization beyond the distortion types used in the database, and they are not ideal candidates for general-purpose image quality assessment. Moreover, a large-scale database of 18.75 million images with synthetic distortions is used to pre-train the model and then retrain it on benchmark databases for evaluation. Experiments are conducted on six benchmark databases, three of which are synthetic distortion databases (LIVE, CSIQ & TID2013) and three are natural distortion databases (LIVE Challenge Database, CID2013 & KonIQ-10k). The proposed approach has provided a Pearson correlation coefficient of 0.8992, 0.8472 and 0.9452, respectively, and a Spearman correlation coefficient of 0.8863, 0.8408 and 0.9421. Moreover, the performance is demonstrated using Perceptually Weighted Rank Correlation (PWRC) to indicate the perceptual superiority of the proposed approach. Multiple experiments are conducted to validate the generalization performance of the proposed model by training on different subsets of the databases and validating on the test subset of the BIQ2021 database.
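As a rough illustration of the two-backbone design described above, the following PyTorch sketch fuses features from two architecturally different CNNs into a single quality-score regressor. The backbone choices (ResNet-50 and DenseNet-121), the fusion head, and all layer sizes are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

class TwoBranchIQA(nn.Module):
    """Two ImageNet-pretrained CNN sub-units whose pooled features are
    concatenated and regressed to one perceptual quality score."""
    def __init__(self):
        super().__init__()
        resnet = models.resnet50(weights="IMAGENET1K_V2")
        self.branch_a = nn.Sequential(*list(resnet.children())[:-1])  # (N, 2048, 1, 1)
        densenet = models.densenet121(weights="IMAGENET1K_V1")
        self.branch_b = nn.Sequential(densenet.features,
                                      nn.ReLU(inplace=True),
                                      nn.AdaptiveAvgPool2d(1))        # (N, 1024, 1, 1)
        self.head = nn.Sequential(nn.Linear(2048 + 1024, 256),
                                  nn.ReLU(inplace=True),
                                  nn.Dropout(0.25),
                                  nn.Linear(256, 1))

    def forward(self, x):
        fa = torch.flatten(self.branch_a(x), 1)
        fb = torch.flatten(self.branch_b(x), 1)
        return self.head(torch.cat([fa, fb], dim=1)).squeeze(1)

model = TwoBranchIQA().eval()
with torch.no_grad():
    scores = model(torch.randn(2, 3, 224, 224))  # two dummy RGB crops
```

Training such a sketch would typically minimize an L1 or L2 loss against the subjective scores, with the Pearson and Spearman correlations quoted above used for evaluation.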
Keywords:image quality assessment, perceptual quality assessment, blind image quality assessment, no-reference image quality assessment, deep learning, deep ensemble, ensemble learning, convolutional neural networks, natural distortion image database. +
Footnote †: journal:
Peer Reviewed Copy Available at: _Soft Computing_**volume 26**, pages 7601-7622 (2022) |
2304.05452 | Fractional matter coupled to the emergent gauge field in a quantum spin
ice | Electronic spins can form long-range entangled phases of condensed matter
named quantum spin liquids. Their existence is conceptualized in models of two-
or three-dimensional frustrated magnets that evade symmetry-breaking order down
to zero temperature. Quantum spin ice (QSI) is a theoretically well-established
example described by an emergent quantum electrodynamics, with excitations
behaving like photon and matter quasiparticles. The latter are fractionally
charged and equivalent to the `spinons' emerging from coherent phases of
singlets in one dimension, where clear experimental proofs of fractionalization
exist. However, in frustrated magnets it remains difficult to establish
consensual evidence for quantum spin liquid ground states and their fractional
excitations. Here, we use backscattering neutron spectroscopy to achieve
extremely high resolution of the time-dependent magnetic response of the
candidate QSI material Ce$_2$Sn$_2$O$_7$. We find a gapped spectrum featuring a
threshold and peaks that match theories for pair production and propagation of
fractional matter excitations (spinons) strongly coupled to a background gauge
field. The multiple peaks are a specific signature of the $\pi$-flux phase of
QSI, providing spectroscopic evidence for fractionalization in a
three-dimensional quantum spin liquid. | Victor Porée, Han Yan, Félix Desrochers, Sylvain Petit, Elsa Lhotel, Markus Appel, Jacques Ollivier, Yong Baek Kim, Andriy H. Nevidomskyy, Romain Sibille | 2023-04-11T18:52:02Z | http://arxiv.org/abs/2304.05452v3 | # Fractional matter coupled to the emergent gauge field in a quantum spin ice
###### Abstract
Electronic spins can form long-range entangled phases of condensed matter named quantum spin liquids [1, 2, 3, 4]. Their existence is conceptualized in models of two- or three-dimensional frustrated magnets that evade symmetry-breaking order down to zero temperature. Quantum spin ice (QSI) is a theoretically well-established example described by an emergent quantum electrodynamics, with excitations behaving like photon and matter quasiparticles [5, 6, 7]. The latter are fractionally charged and equivalent to the'spinons' emerging from coherent phases of singlets in one dimension, where clear experimental proofs of fractionalization exist [8, 9, 7]. However, in frustrated magnets it remains difficult to establish consensual evidence for quantum spin liquid ground states and their fractional excitations. Here, we use backscattering neutron spectroscopy [10] to achieve extremely high resolution of the time-dependent magnetic response of the candidate QSI material Ce\({}_{2}\)Sn\({}_{2}\)O\({}_{7}\) (refs. [11, 12]). We find a gapped spectrum featuring a threshold and peaks that match theories [13, 14, 15, 16, 17] for pair production and propagation of fractional matter excitations (spinons) strongly coupled to a background gauge field. The observed peaks provide evidence for a QSI through spectroscopic signatures of space-time symmetry fractionalization [16, 17, 18], while the threshold behavior corroborates the regime of strong light-matter interaction predicted for the emergent universe in a QSI [19].
The idea that certain phases of condensed matter have "quantum orders" alludes to the description of their electronic correlations with an effective low-energy gauge theory, without spontaneous symmetry breaking[20, 21]. A celebrated example is the fractional quantum Hall effect, where the excitations of a two-dimensional electron gas are quasiparticles carrying a
fractional elementary charge[22]. Similar states termed quantum spin liquids (QSL) are predicted to emerge in models of two- and three-dimensional frustrated magnets[1, 2, 3, 4]. Their effective low-energy description is a deconfined gauge theory where quasiparticles that carry spin 1/2 and no charge, known as spinons, can propagate coherently with the background gauge field. However, because the fractional spin excitations interact strongly with the background gauge field under which they are charged, their dynamics is highly non-trivial. The symmetries of the underlying crystal structure can additionally enrich topological phases: spinons can carry fractional crystal momentum, leading to enhanced periodicity of the excitation spectrum in momentum[23, 24, 18, 25].
A prototypical model of a three-dimensional frustrated magnet is the spin ice[25], whose magnetic degrees of freedom reside on a lattice of corner-sharing tetrahedra, where each tetrahedron obeys a local '2-in-2-out' constraint due to nearest-neighbor ferromagnetic interactions \(J_{//}\). The classical limit of this model is called classical spin ice (CSI) and consists of a macroscopically degenerate manifold of ground states obeying this local rule, reminiscent of the arrangement of hydrogens in water ice[26] (Fig. 1**a**). Such physics is realized in rare-earth pyrochlore materials with large uniaxial magnetic moments, where thermally-driven spin flips create pairs of emergent fractional quasiparticles called "magnetic monopoles" (Fig. 1**b-c**)[27]. These quasiparticles interact through an effective Coulomb potential, which, in materials like Ho\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\) (ref. [28]), arises from classical dipole-dipole forces. It is theoretically well established that a true QSL can be stabilized in rare-earth pyrochlores with an isolated ground-state doublet (an effective spin-1/2) by nearest-neighbor transverse interactions \(J_{\pm}\) acting perturbatively on CSI states[5, 6, 29, 30, 31]. The dominant tunneling process of this quantum spin ice (QSI) is a ring exchange term (\(J_{ring}\sim J_{\pm}^{3}/J_{//}^{2}\)) that corresponds to flipping loops of head-to-tail spins on a hexagonal plaquette[5] (Fig. 1**d**). The ring exchange terms have local symmetry properties - a U(1) invariance - making their effective lattice gauge theory analogous to quantum electrodynamics (QED). The sign of the transverse interaction translates into distinct QSI phases where the hexagonal plaquettes are threaded by static 0 (\(J_{\pm}>0\)) and \(\pi\) (\(J_{\pm}<0\)) fluxes of the emergent gauge field[32, 33, 18, 34]. At temperatures \(T\approx J_{ring}\), the QSI ground state is characterized by gapless, linearly dispersing excitations, which are transverse fluctuations of the gauge field and correspond to the photons of the emergent QED. At higher temperatures \(J_{ring}\ll T\ll J_{//}\), however, thermal fluctuations destroy part of the quantum coherence and gradually restore a CSI[29, 30, 31]. The exotic nature of QSI also stands out through its gapped, fractional excitations - spinons, which are characterized by a larger energy scale set by \(J_{//}\). They correspond to "magnetic monopoles" (electric charges in QED language) executing coherent quantum motion[5, 6, 7].
Neutrons can create spin-flip excitations leading to integer changes of the total spin, which in a QSL is expected to generate pairs of spinons that separate and execute quantum motion under the constraints of the emergent background gauge field. Here, we present neutron spectroscopy data of a candidate QSI material - Ce\({}_{2}\)Sn\({}_{2}\)O\({}_{7}\) (refs. [11, 12, 13, 14, 15, 16, 17]) - providing a wavevector-integrated spectrum of its magnetic response with \(\mu\)eV resolution. From a technical perspective, our findings demonstrate an energy resolution improved by more than an order of magnitude compared to other studies, allowing quantitative comparisons with theories for spinon dynamics in QSI.
We first present inelastic neutron scattering (INS) data acquired using a time-of-flight (TOF) spectrometer (Fig. 2**a**), at different temperatures across the dominant energy scale in Ce\({}_{2}\)Sn\({}_{2}\)O\({}_{7}\) (\(J_{//}\approx 50\)\(\mu\)eV \(\approx 0.6\) K) [16]. The magnetic response is essentially inelastic, as shown by the lack of temperature dependence of the elastic line (Fig. 2**c**). We use the highest temperature spectrum measured at 5 K, which is well above the correlated regime in this material, in order to evaluate the magnetic scattering \(S(E)\) at lower temperatures by difference. Fig. 2**b** shows the imaginary part of the dynamic spin susceptibility calculated as \(\chi^{\prime\prime}(E)=S(E)\times[1-\exp(-E/k_{B}T)]\), where \(E\) is the neutron energy transfer and \(k_{B}\) is the Boltzmann constant. These data show the typical magnetic response of cerium pyrochlores [16, 17, 18, 19]: a continuum of spin excitations, as expected from spinons [16, 17, 18, 19], peaked at the energy of the dominant exchange interaction \(J_{//}\). We fit these spectra using a phenomenological Lorentzian peak shape in order to capture their temperature evolution (Fig. 2**d-f**). The center of the band is temperature independent within the resolution of the measurement, and the intensity of the continuum increases while its width reduces upon cooling. This evolution occurs mostly below 1 K, which is consistent with changes previously reported in bulk magnetic susceptibility and diffuse magnetic scattering [11, 12, 13, 14]. At the lowest measured temperature \(T\sim 0.2\) K, the data suggest a gapped spectrum with a non-trivial density of states (DOS), as shown in the inset of Fig. 2**b**. The TOF energy resolution of about 11 \(\mu\)eV, however, does not allow the DOS to be characterized in sufficient detail.
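The detailed-balance conversion above is a one-liner; the sketch below (with hypothetical input arrays, and \(k_{B}\) expressed in \(\mu\)eV per kelvin to match the experimental energy units) shows the operation explicitly.

```python
import numpy as np

K_B_UEV = 86.17  # Boltzmann constant in micro-eV per kelvin

def dynamic_susceptibility(E_ueV, S_of_E, T_K):
    """chi''(E) = S(E) * [1 - exp(-E / k_B T)], with E in micro-eV and T in K."""
    E = np.asarray(E_ueV, dtype=float)
    return np.asarray(S_of_E, dtype=float) * (1.0 - np.exp(-E / (K_B_UEV * T_K)))

# Hypothetical example: a flat S(E) measured at T = 0.2 K
E = np.linspace(5.0, 200.0, 40)
chi_pp = dynamic_susceptibility(E, np.ones_like(E), 0.2)
```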
While in TOF data the energy resolution is largely determined by the value and spread of the incident neutron wavelength, in backscattering spectroscopy it is mainly limited by the properties of the crystal analyzers (Fig. 3**a**) [20]. Fig. 3**b** presents data acquired using a typical backscattering geometry where the incident energy is varied by the Doppler effect, covering a window of \(\pm 30\)\(\mu\)eV around the elastic line with a high resolution (HR) of 0.7 \(\mu\)eV. Remarkably, this makes it possible to observe the gap expected for spinons in a QSI (Fig. 3**b** inset). We also performed
another experiment on the same instrument but using a recently developed option of 'backscattering and time-of-flight spectrometer' (BATS)[36]. The latter provides a larger energy window of \(\pm\)250 \(\mu\)eV, covering the entire bandwidth of the continuum in Ce\({}_{2}\)Sn\({}_{2}\)O\({}_{7}\). The increase in energy range comes at the cost of a coarser resolution of 3.3 \(\mu\)eV in our data, which still provides a more than threefold improvement in resolution over the TOF data in Fig. 2, and allows to capture fine details throughout the entire continuum.
In Fig. 3**c** we show the dynamic spin susceptibility \(\chi^{\prime\prime}(E)\) obtained from combined HR and BATS data. At the base temperature of these experiments (\(T\approx\) 0.17 K), \(\chi^{\prime\prime}(E)\) can be well fitted using three Gaussian peaks of unconstrained widths. The maximum of the spectrum is reproduced by the main Gaussian peak located around 50 \(\mu\)eV, above which a gradual intensity decrease is observed, well accounted for by the two weaker Gaussian peaks around 100 \(\mu\)eV and 150 \(\mu\)eV, resulting in an overall asymmetric spectrum. Comparing the total fitted curve with an extrapolation of the experimental data points shows that the latter deviates slightly from the former at the lowest energy transfers, suggesting a threshold behavior at the bottom edge of the gapped continuum. The threshold eventually leads to a slight shoulder around 25 \(\mu\)eV energy transfers, identified in both the residual of the fit and the derivative of the extrapolated data (see Fig. 3**c**). At a higher temperature close to the uncorrelated regime (\(T\approx\) 0.8 K), \(\chi^{\prime\prime}(E)\) shows much weaker inelastic scattering, in excellent agreement with the TOF data.
A continuous spectrum of excitations is usually taken as a hallmark of QSL states, but there can be alternative explanations for their existence, such as disorder[37]. These continua are therefore less definitive evidence of fractional quasiparticles than, for instance, jumps in the electric conductance of a two-dimensional electron gas[22]. However, using combinations of analytical and numerical methods applied to the case of QSI[14, 15, 16, 17, 38], theory has recently focused on studying the DOS in the two-monopole sector (spinons). Importantly, these predictions provide more specific features than just a continuum, highlighting the structure of the underlying gauge field theory.
We first consider analytical results for the quantum dynamics of spinons hopping on a CSI background[14], which is relevant at finite temperatures \(J_{ring}\ll T\ll J_{//}\). The spinon hopping is constrained by the flippable spins in the CSI background - a condition that makes its propagation deviate significantly from a free hopping model and results in the unique threshold and asymmetry in the wavevector-integrated DOS[14]. Remarkably, this model captures the gross features observed in the excitation spectrum of Ce\({}_{2}\)Sn\({}_{2}\)O\({}_{7}\) (solid blue line in Fig. 4**a**). The threshold and asymmetry of the continuum are important experimental observations, since they reflect the effect of the
background gauge field on the dynamics of the fractional quasiparticles [14]. These two features, as well as the spinon band being centered on the energy of the dominant exchange, were also observed in numerical calculations using exact diagonalization [14] as well as in quantum Monte Carlo simulations [38]. The fitted exchange parameters based on the analytical hopping model, \(J_{//}\) = 48 \(\mu\)eV and \(J_{\pm}=-5.2\)\(\mu\)eV, are in good agreement with previous estimates based on fits of bulk thermodynamic properties at the mean-field level [12]. In the context of Ce\({}_{2}\)Sn\({}_{2}\)O\({}_{7}\), \(J_{//}\) refers to the coupling between octupolar components of the pseudo-spins [39] - a possibility that was predicted by theory [13, 40] and later observed experimentally [12]. The asymmetry observed in the data indicates \(J_{\pm}<0\), as confirmed by the fit, because flipping the sign of \(J_{\pm}\) in the spinon hopping model would otherwise invert the shape of the spectrum along the energy axis [14]. The data thus confirms [12] that Ce\({}_{2}\)Sn\({}_{2}\)O\({}_{7}\) stabilizes the \(\pi\)-flux phase of QSI - the symmetry enriched state occupying a large portion of the QSI phase space [41], as also argued in Ce\({}_{2}\)Zr\({}_{2}\)O\({}_{7}\) (refs. 42-43). In the \(\pi\)-flux phase, translational symmetry fractionalizes [32, 33, 34, 18], so that the spinons acquire a finite Aharonov-Bohm phase after transporting them around any hexagonal plaquette, leading to an enhanced periodicity of the two-spinon density of states in the Brillouin zone [16, 17, 18].
A widely used theoretical framework to study QSI is gauge mean-field theory - a parton construction where bosonic spinons hop on the parent diamond lattice (c.f. Fig. 1) while interacting with the emergent U(1) gauge field [44, 32]. A recent extension of this theory allows for the classification of symmetry fractionalization [16], predicting clear spectroscopic signatures for the \(\pi\)-flux phase [17]. The spinon dispersion is expected to be composed of two bands that are mostly flat, leading to three peaks in the two-spinon density of states, with energy separations proportional to \(J_{\pm}/J_{//}\) and intensity ratios reproducing an overall asymmetric spectrum. We use these results [16, 17, 18] to fit \(\chi^{\prime\prime}(E)\) including a phenomenological line broadening accounting for finite spinon lifetime and thermal fluctuations (black curve in Fig. 4**b**), giving \(J_{//}\) = 69 \(\mu\)eV and \(J_{\pm}\) = \(-17\)\(\mu\)eV. Remarkably, this model provides an explanation for the scattering observed at \(E>80\)\(\mu\)eV that is not accounted for by the spinon hopping model where gauge fluxes are thermally activated and incoherent (\(J_{ring}\ll T\ll J_{//}\)). Although the intensity of the second peak is overestimated by the gauge mean-field theory at zero temperature, the level of agreement is remarkable given the nature of this model, i.e. the qualitative observation of peaks in the data and reproducing their positions in energy is significant. At finite temperatures \(T\approx J_{ring}\), when gauge fluxes just start to freeze and become coherent, we expect that thermal fluctuations renormalize the relative intensities of the three peaks. The exchange constants translate into
a ring exchange of \(J_{ring}\approx\) 12.4 \(\mu\)eV, which indicates that our experiments at \(T\approx 0.17\) K \(\approx 15\)\(\mu\)eV were indeed performed in the intermediate temperature regime where quantum coherence is not completely destroyed by thermal fluctuations. The physical existence of a second peak in the two-spinon density of states, at approximately the same energy as in the gauge mean-field theory, is also confirmed by recent numerical results using exact diagonalization[45] (grey curve in Fig. 4**b**).
We next consider the theoretical DOS taking into account the QED effects of spinons (electric charges) propagating on a coherent QSI background (photons)[15]. In this case, the most significant consequence of the Coulomb interaction is an abrupt step-function onset of spinon production at small momenta, which is known as the Sommerfeld enhancement[15]. Additionally, a crucial feature of the non-relativistic emergent QED is the hierarchy of exchange parameters, leading to spinons propagating much faster than the photons. This effectively leads to a broadening of the threshold at larger momenta, because spinons start to emit diffuse Cerenkov radiation[15]. The corresponding analytical model applies in the long-wavelength limit and thus can only be compared with our data at the low-energy onset of the spinon band. Therefore, we fit the analytical QED model to our HR backscattering data, as shown with the dashed red lines in Fig. 4. The exchange parameters obtained from the spinon hopping model[14] (Fig. 4**a**) and gauge mean field theory[16, 17, 18] (Fig. 4**b**) were converted to predefined parameters in the QED model - namely the ring exchange, the spinon mass and the speed of light. The fine-structure constant of the emergent QED - a dimensionless value characterizing how strongly light and matter couple - was fixed to \(\alpha=0.08\) based on numerical estimates for QSI[19]. After integrating the analytical model over the experimental window of momentum transfers, the calculated DOS matches the experiment remarkably well with a spinon gap \(\Delta\approx 17\)\(\mu\)eV.
Finely resolving the spectrum of continuous excitations in a candidate QSI material opens the door to benchmarking important theory predictions on this unique quantum mechanical ground state. The agreement with the DOS expected for QSI is significant for several reasons. The characteristic features observed in the data - threshold, main peak and asymmetry, are signatures of the strong interaction of fractional spinons with the emergent gauge field. The asymmetry of the spectrum results from discernable peaks in the data, implying that the experiment probes a model-specific signature of fractionalization, at a temperature where quantum coherence has developed. This is especially remarkable given the notorious difficulties of experimentally assessing defining characteristics of QSLs. Moreover, we extract the exchange parameters of a QSL using a microscopic probe, directly from the spin liquid ground state excitation spectrum. This contrasts with the method of inferring exchange parameters from spin
waves of a related field-induced ordered phase. Finally, the comparison of the edge structure at the threshold of the spinon continuum appears to agree with predictions for the effects of photons on the pair production and propagation of electric charges. Recent numerical results have established how the emergent QED compares with that of our universe through estimates of its fine-structure constant[19]. It is predicted that the alternative vacuum of this condensed matter system is drastically different, with phenomena arising from strong light-matter interactions[15, 19]. Although our data cannot be used to directly determine the fine-structure constant, they corroborate these theoretical predictions. Momentum-resolved experiments on single-crystal samples would certainly further our understanding; however, directly fitting QED parameters from such data may require resolutions in both energy and momentum space that are far beyond current spectroscopic techniques. We note that a recent report possibly indicates the selection of a different ground state in samples of Ce\({}_{2}\)Sn\({}_{2}\)O\({}_{7}\) prepared hydrothermally and at much lower temperatures[46]. Together with the fact that different ratios of exchange interactions were found in the three known cerium pyrochlore materials[42, 43, 47], this may suggest a high degree of tunability of the emergent QED, perhaps opening the door to its experimental control.
**Figure 1 | Correlations and excitations in quantum spin ices.** The '2-in-2-out' ice configurations found in spin ices (**a**), as well as the creation (**b**) and propagation (**c**) of 'magnetic monopole' quasiparticles. The ellipses represent uniaxial magnetic moments, with blue and red poles, defining magnetic flux variables that live on a diamond lattice (blue lines). In Ce\({}_{2}\)Sn\({}_{2}\)O\({}_{7}\) the ice rule applies similarly to objects of a more complex magnetization density (magnetic octupoles) [12, 13, 40]. In classical spin ice, thermal fluctuations create spin flips leading to fractional magnetic charges propagating through the sample (blue and red spheres) [27]. In a quantum spin ice (QSI) [5, 6, 7, 8], the corresponding fractional gapped excitations (spinons) execute quantum coherent motion. The dominant tunneling process in QSI occurs on hexagonal plaquettes, as highlighted by the blue loop in panel **d**. This quantum dynamics is encoded by the fluctuation of electric flux variables living on a second diamond lattice (drawn in green) interpenetrating the first one. In this emergent quantum electrodynamics, transverse fluctuations of the dual gauge field are gapless 'magnetic photon' excitations [5, 6, 7].
**Figure 2 | Temperature evolution of spin excitations in Ce\({}_{2}\)Sn\({}_{2}\)O\({}_{7}\).** **a**, Inelastic neutron scattering data measured at the time-of-flight spectrometer IN5 using an incident wavelength of 10 Å, providing an energy resolution of 11 \(\mu\)eV. The spectra were collected at various temperatures indicated in the plot, integrated on a range of momentum transfers \(|\mathbf{Q}|\) from 0.3 to 1.1 Å\({}^{-1}\) and corrected for instrumental background, resulting in the experimental data points with error bars corresponding to \(\pm\)1 standard error. **b**, Imaginary part of the dynamic spin susceptibility \(\chi^{\prime\prime}(E)\) (data points with error bars corresponding to \(\pm\)1 standard error) extracted from the data shown in panel **a**, as described in the main text. The red lines represent phenomenological Lorentzian fits of the data, allowing us to numerically track the temperature dependence of the spin excitations. The fit function is defined as \(\chi^{\prime\prime}(E)=\frac{S_{f}\,\gamma E}{(E-\delta)^{2}+\gamma^{2}}\), with \(S_{f}\) a global scale factor, \(\gamma\) the Lorentzian width and \(\delta\) its center. Panel **c** shows the temperature evolution of the scattering at the elastic line, with error bars corresponding to \(\pm\)1 standard error, indicating that the magnetic scattering in the accessible \(|\mathbf{Q}|\) range is essentially inelastic. Panels **d**, **e** and **f** present the temperature dependence of the center, intensity and width of the Lorentzian fit to the \(\chi^{\prime\prime}(E)\) data, respectively, with error bars corresponding to \(\pm\)1 standard error.
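For completeness, the phenomenological Lorentzian fit quoted in the Fig. 2 caption can be sketched with scipy; the synthetic points below merely stand in for the measured \(\chi^{\prime\prime}(E)\) data, and the numerator \(\gamma E\) follows the form given above.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(E, S_f, gamma, delta):
    """chi''(E) = S_f * gamma * E / ((E - delta)^2 + gamma^2)."""
    return S_f * gamma * E / ((E - delta)**2 + gamma**2)

rng = np.random.default_rng(0)
E = np.linspace(5.0, 200.0, 80)                          # micro-eV
y = lorentzian(E, 1.0, 25.0, 50.0) + 0.02 * rng.normal(size=E.size)

popt, pcov = curve_fit(lorentzian, E, y, p0=[0.5, 20.0, 40.0])
S_f, gamma, delta = popt                                 # centre near 50 micro-eV
```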
**Figure 3 | High-resolution neutron spectroscopy of fractional excitations in Ce\({}_{2}\)Sn\({}_{2}\)O\({}_{7}\).** **a**, Sketch of the neutron backscattering technique. Neutrons are first scattered by the sample towards crystal analyzers - a component that discriminates their energy with a very high resolution - and then backscattered towards a detector[10]. **b**, Comparison of the Ce\({}_{2}\)Sn\({}_{2}\)O\({}_{7}\) spectra collected at 0.17 K and 5 K using the IN16B instrument (Institut Laue-Langevin, Grenoble) in 'backscattering and time-of-flight spectrometer' (BATS)[36] and 'high-resolution' (HR)[50] modes. The spectra (data points with error bars corresponding to \(\pm\)1 standard error) were integrated on an identical range of momentum transfers \(|\mathbf{Q}|\) (0.4 to 1.7 Å\({}^{-1}\)) and rescaled on the basis of their respective elastic line intensities, effectively correcting any discrepancies between the two configurations. The inset shows a zoom into the HR data, focusing on the threshold part of the spectra and showing the clear rise of the continuum on top of the remaining paramagnetic quasi-elastic signal. The latter is attributed to fluctuations of the dipole components of the pseudo-spin at finite temperatures. **c**, Superposition of the imaginary part of the dynamical spin susceptibility, \(\chi^{\prime\prime}(E)\) (data points with error bars corresponding to \(\pm\)1 standard error), extracted from the combination of HR (dark colored symbols) and BATS (light colored symbols) experiments at 0.17 K (blue shades) and 0.8 K (green shades). The continuous red line is a fit using a constant background and three Gaussian peaks individually shown as dashed black curves. The centers, intensities and widths are unconstrained, confirming the significance of each contribution to the spectrum. The continuous blue line and the grey points are the residual of the fit and the derivative of the experimental data, respectively, both shifted by -1.5 arbitrary units for clarity. The two insets in panel **c** show the energy resolution provided by each instrument configuration, on the same energy scale as the main panel.
**Figure 4 | Comparison of the dynamical spin susceptibility with models of spinon dynamics for the \(\pi\)-flux phase of quantum spin ice.** In **a**, the continuous blue line is a fit of the combined HR and BATS data on their full energy window at 0.17 K, using the analytical model of Udagawa & Moessner for the quantum dynamics of spinons hopping on a lattice and considering a classical spin ice background[14]. Similarly, in **b**, we show the best fit using the gauge mean-field theory of quantum spin ice revised by Desrochers, Chern & Kim[16, 17, 18]. In both these fits, the adjustment variables are exchange parameters \(J_{\parallel}\) and \(J_{\pm}\) whose fitted values are indicated in the respective panels, corresponding to \(J_{\pm}/J_{\parallel}=-0.1083\) (**a**) and \(J_{\pm}/J_{\parallel}=-0.2464\) (**b**). The fit using the gauge mean-field theory incorporates a peak broadening (standard deviation \(\sigma=11.2\)\(\mu\)eV). In panel **b**, we also compare the fit with available results of numerical calculations for \(J_{\pm}/J_{\parallel}=-0.1875\) using exact diagonalization on 32 sites (Hosoi, Zhang, Patri & Kim[45]). The corresponding curve (solid grey line) is the average of results at the \(\Gamma\) and X points of the Brillouin zone after setting the energy scale of \(J_{\parallel}\) to the value determined from the fit of the gauge mean-field theory. The red dashed lines in panels **a** and **b** use the analytical model of Morampudi, Wilczek & Laumann considering a QSI background, i.e., including photons, which effectively broadens the threshold for our experimental \(|\textbf{Q}|\) window due to the emission of Cherenkov radiation[15]. These QED effects are neglected in the other models, while the model of Morampudi _et al._ neglects the lattice and therefore can only be used to compare with data at the lower edge of the continuum. We used a numerical estimate for the fine-structure constant, \(\alpha=0.08\) (ref. [19]), and other QED parameters obtained from the conversion of the exchange parameters \(J_{\parallel}\) and \(J_{\pm}\) deduced from the fits to either the spinon hopping model (blue line on panel **a**) or gauge mean-field theory (black line on panel **b**) – see Methods. The red dashed line in panel **a** is a fit whose only free parameter is the spinon gap \(\Delta=18\)\(\mu\)eV, while in panel **b** we impose the spinon gap \(\Delta=16.6\)\(\mu\)eV, which is the value predicted by the gauge mean-field theory for the exchange parameters obtained from the corresponding fit.
## References
* [1] Balents, L. Spin liquids in frustrated magnets. _Nature_**464**, 199 (2010).
* [2] Savary, L. & Balents, L. Quantum spin liquids: a review. _Rep. Prog. Phys._**80**, 016502 (2016).
* [3] Knolle, J. & Moessner, R. A Field Guide to Spin Liquids. _Annu. Rev. Condens. Matter Phys._**10**, 451-472 (2019).
* [4] Broholm, C. _et al._ Quantum spin liquids. _Science_**367**, eaay0668 (2020).
* [5] Hermele, M., Fisher, M. P. A. & Balents, L. Pyrochlore photons: The _U_(1) spin liquid in a \(S=1/2\) three-dimensional frustrated magnet. _Phys. Rev. B_**69**, 064404 (2004).
* [6] Gingras, M. J. P. & McClarty, P. A. Quantum spin ice: a search for gapless quantum spin liquids in pyrochlore magnets. _Rep. Prog. Phys._**77**, 056501 (2014).
* [7] Tennant, D. A., Perring, T. G., Cowley, R. A. & Nagler, S. E. Unbound spinons in the spin-1/2 antiferromagnetic chain KCuF\({}_{3}\). _Phys. Rev. Lett._**70**, 4003-4006 (1993).
* [8] Lake, B., Tennant, D., Frost, C. & Nagler, S. E. Quantum criticality and universal scaling of a quantum antiferromagnet. _Nature Mater_**4**, 329-334 (2005).
* [9] Mourigal, M., Enderle, M., Klöpperpieper, A. _et al._ Fractional spinon excitations in the quantum Heisenberg antiferromagnetic chain. _Nature Phys._**9**, 435-441 (2013).
* [10] Gardner, J. S., Ehlers, G., Faraone, A. & Sakai, V. G. High-resolution neutron spectroscopy using backscattering and neutron spin-echo spectrometers in soft and hard condensed matter. _Nature Rev. Phys._**2**, 103-116 (2020).
* [11] Sibille, R. _et al._ Candidate Quantum Spin Liquid in the Ce\({}^{3+}\) Pyrochlore Stannate Ce\({}_{2}\)Sn\({}_{2}\)O\({}_{7}\). _Phys. Rev. Lett._**115**, 097202 (2015).
* [12] Sibille, R. _et al._ A quantum liquid of magnetic octupoles on the pyrochlore lattice. _Nature Phys._**16**, 546-552 (2020).
* [13] Li, Y.-D. & Chen, G. Symmetry enriched U(1) topological orders for dipole-octupole doublets on a pyrochlore lattice. _Phys. Rev. B_**95**, 041106 (2017).
* [14] Udagawa, M. & Moessner, R. Spectrum of itinerant fractional excitations in quantum spin ice. _Phys. Rev. Lett._**122**, 117201 (2019).
* [15] Morampudi, S. D., Wilczek, F. & Laumann, C. R. Spectroscopy of spinons in Coulomb quantum spin liquids. _Phys. Rev. Lett._**124**, 097204 (2020).
* [16] Desrochers, F., Chern, L. E. & Kim, Y. B. Symmetry fractionalization in the gauge mean-field theory of quantum spin ice. _Phys. Rev. B_**107**, 064404 (2023).
* [17] Desrochers, F. & Kim, Y. B. Spectroscopic signatures of fractionalization in octupolar quantum spin ice. _arXiv:2301.05240_
* [18] Yao, X.-P., Li, Y.-D. & Chen, G. Pyrochlore U(1) spin liquid of mixed-symmetry enrichments in magnetic fields. _Phys. Rev. Research_**2**, 013334 (2020).
* [19] Pace, S. D., Morampudi, S. C., Moessner, R. & Laumann, C. R. Emergent Fine Structure Constant of Quantum Spin Ice Is Large. _Phys. Rev. Lett._**127**, 117205 (2021).
* [20] Wen, X-G. Topological Order: From Long-Range Entangled Quantum Matter to a Unified Origin of Light and Electrons. _ISRN Condensed Matter Physics_**2013**, 198710 (2013).
* [21] Levin, M. & Wen, X.-G. Colloquium: Photons and electrons as emergent phenomena. _Rev. Mod. Phys._**77**, 871 (2005).
* [22] Eisenstein, J. P. & Stormer, H. L. The Fractional Quantum Hall Effect. _Science_**248**, 1510-1516 (1990).
* [23] Essin, A. M. & Hermele, M. Classifying fractionalization: Symmetry classification of gapped Z\({}_{2}\) spin liquids in two dimensions. _Phys. Rev. B_**87**, 104406 (2013).
* [24] Essin, A. M. & Hermele, M. Spectroscopic signatures of crystal momentum fractionalization. _Phys. Rev. B_**90**, 121102 (2014).
* [25] Castelnovo, C., Moessner, R. & Sondhi, S. L. Spin Ice, Fractionalization, and Topological Order. _Annu. Rev. Condens. Matter Phys._**3**, 35-55 (2012).
* [26] Ramirez, A. P., Hayashi, A., Cava, R. J., Siddharthan, R. & Shastry, B. S. Zero-point entropy in 'spin ice'. _Nature_**399**, 333-335 (1999).
* [27] Castelnovo, C., Moessner, R. & Sondhi, S. L. Magnetic monopoles in spin ice. _Nature_**451**, 42-45 (2008).
* [28] Fennell, T. _et al._ Magnetic Coulomb Phase in the Spin Ice Ho\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\). _Science_**326**, 415 (2009).
* [29] Shannon, N., Sikora, O., Pollmann, F., Penc, K. & Fulde, P. Quantum Ice: A Quantum Monte Carlo Study. _Phys. Rev. Lett._**108**, 067204 (2012).
* [30] Benton, O., Sikora, O. & Shannon, N. Seeing the light: Experimental signatures of emergent electromagnetism in a quantum spin ice. _Phys. Rev. B_**86**, 075154 (2012).
* [31] Kato, S. & Onoda, S. Numerical Evidence of Quantum Melting of Spin Ice: Quantum-to-Classical crossover. _Phys. Rev. Lett._**115**, 077202 (2015).
* [32] Lee, S. B., Onoda, S. & Balents, L. Generic quantum spin ice. _Phys. Rev. B_**86**, 104412 (2012).
* [33] Chen, G. Spectral periodicity of the spinon continuum in quantum spin ice. _Phys. Rev. B_**96**, 085136 (2017).
* [34] Gaudet, J. _et al._ Quantum spin ice dynamics in the dipole-octupole pyrochlore magnet Ce\({}_{2}\)Zr\({}_{2}\)O\({}_{7}\). _Phys. Rev. Lett._**122**, 187201 (2019).
* [35] Gao, B. _et al._ Experimental signatures of a three-dimensional quantum spin liquid in effective spin-1/2 Ce\({}_{2}\)Zr\({}_{2}\)O\({}_{7}\) pyrochlore. _Nature Phys._**15**, 1052-1057 (2019).
* [36] Appel, M., Frick, B. & Magerl, A. A flexible high speed pulse chopper system for an inverted neutron time-of-flight option on backscattering spectrometers. _Sci. Rep._**8**, 13580 (2018).
* [37] Paddison, J. A. M. _et al._ Continuous excitations of the triangular-lattice quantum spin liquid YbMgGaO\({}_{4}\). _Nature Phys._**13**, 117-122 (2017).
* [38] Huang, C.-J., Deng, Y., Wan, Y. & Meng, Z.-Y. Dynamics of topological excitations in a model quantum spin ice. _Phys. Rev. Lett._**120**, 167202 (2018).
* [39] Rau, J. G. & Gingras, M. J. P. Frustrated Quantum Rare-Earth Pyrochlores. _Annu. Rev. Condens. Matter Phys._**10**, 357-386 (2019).
* [40] Huang, Y.-P., Chen, G. & Hermele, M. Quantum Spin Ices and Topological Phases from Dipolar-Octupolar Doublets on the Pyrochlore Lattice. _Phys. Rev. Lett._**112**, 167203 (2014).
* [41] Benton, O., Jaubert, L. D. C., Singh, R. R. P., Oitmaa, J. & Shannon, N. Quantum Spin Ice with Frustrated Transverse Exchange: From a \(\pi\)-Flux Phase to a Nematic Quantum Spin Liquid. _Phys. Rev. Lett._**121**, 067201 (2018).
* [42] Smith, E. M. _et al._ Case for a U(1)\({}_{\pi}\) Quantum Spin Liquid Ground State in the Dipole-Octupole Pyrochlore Ce\({}_{2}\)Zr\({}_{2}\)O\({}_{7}\). _Phys. Rev. X_**12**, 021015 (2022).
* [43] Bhardwaj, A., Zhang, S., Yan, H. _et al._ Sleuthing out exotic quantum spin liquidity in the pyrochlore magnet Ce\({}_{2}\)Zr\({}_{2}\)O\({}_{7}\). _npj Quantum Mater._**7**, 51 (2022).
* [44] Savary, L. & Balents, L. Coulombic Quantum Liquids in Spin-1/2 Pyrochlores. _Phys. Rev. Lett._**108**, 037202 (2012).
* [45] Hosoi, M., Zhang, E. Z., Patri, A. S. & Kim, Y. B. Uncovering Footprints of Dipolar-Octupolar Quantum Spin Ice from Neutron Scattering Signatures. _Phys. Rev. Lett._**129**, 097202 (2022).
* [46] Yahne, D. R. _et al._ Dipolar spin ice regime proximate to an all-in-all-out Néel ground state in the dipolar-octupolar pyrochlore Ce\({}_{2}\)Sn\({}_{2}\)O\({}_{7}\). _arXiv:2211.15140_
* [47] Porée, V. _et al._ Dipolar-octupolar correlations and hierarchy of exchange interactions in Ce\({}_{2}\)Hf\({}_{2}\)O\({}_{7}\). _arXiv:2305.08261_
## Acknowledgements
This work is based on experiments performed at the Institut Laue-Langevin, France. We thank Xavier Tonon and the whole team for Advanced Neutron Environments for their dedicated work running the dilution refrigerators at the Institut Laue-Langevin. Tom Fennell is warmly acknowledged for his continuous support throughout this project and for a careful reading of the manuscript. We thank Nic Shannon for fruitful discussions. We acknowledge funding from the Swiss National Science Foundation (R.S. and V.P., Grant No. 200021_179150), the U.S. National Science Foundation Division of Materials Research under the award DMR-1917511 (H.Y. and A.H.N.), and the Natural Sciences and Engineering Research Council of Canada (F.D. and Y.B.K.).
## Author contributions
Project and experiments were designed by R.S. Sample preparation and characterization were performed by R.S. and V.P. Neutron scattering experiments were carried out by V.P., E.L., S.P. and R.S. with O.J. and M.A. as local contacts. Experimental data were analysed by V.P., E.L., S.P. and R.S. Calculations were performed by H.Y., F.D., Y.B.K. and A.H.N. The paper was written by R.S. with feedback from all authors.
## Competing financial interests
The authors declare no competing financial interests.
## Methods
### Sample preparation
The sample used during this work is a large polycrystalline sample of Ce\({}_{2}\)Sn\({}_{2}\)O\({}_{7}\), which was also used in previous studies[11, 12]. Time-of-flight neutron spectroscopy was performed on the IN5 spectrometer at the Institut Laue-Langevin, using an incident wavelength of 10 Å and providing an energy resolution of 11 \(\mu\)eV (see Fig. 2). Calibration measurements (vanadium standard and empty copper can) were used to properly reduce the data using Mantid[49] routines, resulting in six pre-processed datafiles containing the \(|\mathbf{Q}|\)-integrated spectra (0.3 Å\({}^{-1}\) < \(|\mathbf{Q}|\) < 1.1 Å\({}^{-1}\)).
Neutron backscattering spectroscopy was performed on IN16B at the Institut Laue-Langevin[36, 50]. The sample and sample preparation were identical to the IN5 experiment described above. In a first step, we used the BATS mode available at IN16B (ref. [36]) in order to cover the full bandwidth of excitations in Ce\({}_{2}\)Sn\({}_{2}\)O\({}_{7}\). Two instrument configurations, denoted as Ir4 and Ir6, were used; they correspond to low repetition rates with 8\({}^{\circ}\) and 11\({}^{\circ}\) slits in the pulse choppers, providing a resolution of 4 \(\mu\)eV and 6 \(\mu\)eV, respectively. The Ir6 configuration allowed spectra at intermediate temperatures to be measured efficiently, benefiting from a more intense beam at the expense of a slightly coarser resolution with respect to Ir4. The thermalization of the sample was monitored as described for the IN5 experiment. Spectra were recorded at 0.17 K, 0.8 K and 5 K using both Ir4 and Ir6, with additional measurements at 0.4 K and 1.2 K with the Ir6 set-up. Data for a vanadium standard, an empty copper can and an empty dilution refrigerator were also recorded and used in the reduction routines using Mantid[49]. The spectra were integrated over the same \(|\mathbf{Q}|\) window ranging from 0.4 Å\({}^{-1}\) to 1.7 Å\({}^{-1}\). The resulting data can be seen in Fig. 3**b** and Fig. S1**a**, for Ir4 and Ir6 respectively. The imaginary part of the dynamical spin susceptibility was computed following the same method as described above and the results are shown in Fig. 3**c** (as well as Fig. 4**a** and 4**b**) and Fig. S1**b**, for Ir4 and Ir6, respectively.
A second experiment was carried out on IN16B in order to better investigate the lower part of the energy spectrum. The High-Resolution (HR) mode of the instrument was used, which has a lower flux compared to the previously mentioned BATS mode and a resolution at the elastic line of about 0.7 \(\mu\)eV. We used a specialized high signal-to-noise ratio setup of the IN16B spectrometer previously reported[50]. The same powder sample was again used, but this time it was loaded in a copper can with annular geometry (outer diameter 15 mm, inner diameter 10 mm). The reason for this choice was the reduction of neutron absorption by the sample, which plays a more important role in this configuration. The sample was cooled down to an estimated base temperature of approximately 0.17 K. Data were recorded at three different temperatures, 0.17 K, 0.8 K and 5 K, with similar statistics, allowing us to track the signal's behavior and enabling a direct comparison with the previous experiments. The data were reduced via Mantid[49] routines, using carefully measured calibration scans (vanadium sheets, empty annular copper can and empty dilution refrigerator). The resulting spectra were then integrated over a \(|\mathbf{Q}|\) window ranging from 0.4 Å\({}^{-1}\) to 1.7 Å\({}^{-1}\). The final spectra can be seen in Fig. 3**b**. The imaginary part of the dynamical spin susceptibility was computed following the same method as described above and is plotted in Fig. 3**c** as well as in Fig. 4**a** and 4**b**. In order to obtain a meaningful comparison of the BATS and HR data, the HR spectra were subject to a minor rescaling based on the relative intensities at the elastic line, thus compensating for any discrepancies between the two instrument modes.
### Fitting of the experimental data to model calculations
We consider a Hamiltonian where the transverse exchange parameter \(J_{\pm}\) introduces quantum fluctuations to a classical spin ice manifold obtained from a dominant nearest-neighbor ferromagnetic interaction \(J_{\parallel}\):
\[\mathcal{H}_{QSI}=\mathcal{H}_{CS}+\mathcal{H}_{transverse}=\sum_{\langle i,j\rangle}\left[J_{\parallel}S_{i}^{y}S_{j}^{y}-J_{\pm}(S_{i}^{+}S_{j}^{-}+S_{i}^{-}S_{j}^{+})\right].\]
Here \(S^{y}\) corresponds to the octupolar component of the 'dipole-octupole' pseudo-spin[40], stabilizing an octupole ice manifold in Ce\({}_{2}\)Sn\({}_{2}\)O\({}_{7}\) (ref. [12]).
We first used the results of Udagawa and Moessner[14] to compare with our data. They found that the two-spinon density of states (DOS) can be well approximated by the exact result \(\rho_{\text{HC}}^{(2)}(\omega)=\int\mathrm{d}\epsilon\,\rho_{\text{HC}}^{(1)}(\omega-\epsilon)\times\rho_{\text{HC}}^{(1)}(\epsilon)\), where \(\rho_{\text{HC}}^{(1)}(\epsilon)=\frac{3}{2\pi}\sqrt{\frac{5-\epsilon}{3+\epsilon}}\) is the single-spinon DOS on its band \(-3<\epsilon<5\), with \(\epsilon=(\omega-J_{\parallel})\) in arbitrary units. We compare \(\rho_{\text{HC}}^{(2)}(\omega)\) directly with our experimental data. We vary an overall scale factor and the parameters \(J_{\parallel}\) and \(J_{\pm}\) (after converting them into meV units), to minimize the least-mean-square difference between the theory and all the experimental data (BATS Ir4, BATS Ir6 and HR): \(C_{1}=\sum_{\omega\text{ in exp.}}\ \left(I_{\text{exp}}(\omega)-a\times\rho_{\text{HC}}^{(2)}(\omega)\right)^{2}\). Here, the parameters \(J_{\parallel}\), \(J_{\pm}\) are inside the definition of \(\rho_{\text{HC}}^{(2)}(\omega)\) but we did not write them out explicitly to lighten the notation. \(J_{\parallel}\) is used in defining \(\epsilon=\omega-J_{\parallel}\) in the single-spinon DOS, and \(J_{\pm}\) is determined when converting the unit of \(\epsilon\) to meV. Here, the parameter \(a\) is the overall scaling factor that we also fit. We found that the square sum is minimized by the following parameters: \(J_{\parallel}=48\ \mu\text{eV}\), \(J_{\pm}=-5.2\ \mu\text{eV}\) and \(J_{\text{ring}}\equiv\frac{12J_{\pm}^{3}}{J_{\parallel}^{2}}=0.73\ \mu\text{eV}\).
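A minimal numerical sketch of this fitting step is given below, assuming the single-spinon DOS as reconstructed above; the grid spacing, the unit mapping and the parametrization of `cost` are illustrative choices, not the original fit code.

```python
import numpy as np

def rho1(eps):
    """Single-spinon DOS (form quoted above), nonzero on -3 < eps < 5."""
    out = np.zeros_like(eps)
    band = (eps > -3.0) & (eps < 5.0)
    out[band] = (3.0 / (2.0 * np.pi)) * np.sqrt((5.0 - eps[band]) / (3.0 + eps[band]))
    return out

de = 0.005
eps = np.arange(-3.0 + de, 5.0, de)      # open interval avoids the band-edge divergence
r1 = rho1(eps)
rho2 = np.convolve(r1, r1) * de          # rho2(w) = int rho1(w - e) rho1(e) de
omega = 2.0 * eps[0] + de * np.arange(rho2.size)   # convolution grid, starting near -6

def cost(J_par, J_scale, amp, E_exp, I_exp):
    """Least-squares distance C1; E = J_par + J_scale * omega maps the
    arbitrary-unit grid to physical energies (an illustrative convention)."""
    theory = amp * np.interp((E_exp - J_par) / J_scale, omega, rho2)
    return np.sum((I_exp - theory) ** 2)
```

Minimizing `cost` over the three parameters on a grid (or with any standard optimizer) reproduces the logic of the \(C_{1}\) minimization described in the text.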
Second, we fit the experimental \(\chi^{\prime\prime}(E)\) data to the gauge mean-field theory results of Desrochers, Chern and Kim[16, 17]. We used the positions identified from the data as starting values for the centers of the three peaks expected for the \(\pi\)-flux phase of quantum spin ice. The goodness of fit measures are defined as
\[\chi_{Direct}^{2}=\sum_{n}\frac{(I^{Exp.}(E_{n})-I^{Theo.}(E_{n}))^{2}}{(\Delta I ^{Exp.}(E_{n}))^{2}}\]
and
\[\chi_{Peak}^{2}=\sqrt{\sum_{i=1}^{3}(E_{i}^{Theo.}-E_{i}^{Exp.})^{2}/E_{i}^{ Exp.}}.\]
We normalize both \(\chi_{Direct}^{2}\) and \(\chi_{Peak}^{2}\) before taking the weighted sum. The final goodness of fit is \(\chi^{2}=\alpha\ \chi_{Direct}^{2}+(1-\alpha)\ \chi_{Peak}^{2}\) with \(\alpha=0.6\). We found that \(\chi^{2}\) is minimized using \(J_{\parallel}=69\ \mu\text{eV}\), \(J_{\pm}=-17\ \mu\text{eV}\) and \(J_{\text{ring}}\equiv\frac{12J_{\pm}^{3}}{J_{\parallel}^{2}}=12.4\ \mu\text{eV}\).
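A short sketch of how the two measures can be combined is given below; the inputs are placeholders, and the normalization of each measure (for example over the searched parameter grid) is left to the caller, as in the text.

```python
import numpy as np

def chi2_direct(I_exp, I_theo, dI_exp):
    """Pointwise goodness of fit against the measured intensities."""
    return np.sum((I_exp - I_theo) ** 2 / dI_exp ** 2)

def chi2_peak(E_theo, E_exp):
    """Distance between the three predicted and observed peak centers."""
    return np.sqrt(np.sum((E_theo - E_exp) ** 2 / E_exp))

def chi2_total(chi_direct_norm, chi_peak_norm, alpha=0.6):
    """Weighted sum of the two (already normalized) measures, with alpha = 0.6."""
    return alpha * chi_direct_norm + (1.0 - alpha) * chi_peak_norm
```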
Finally, we used the above extracted exchange parameters from both the analytical model of Udagawa and Moessner[14] and the extended gauge mean-field theory of Desrochers, Chern and Kim[16, 17, 18], and applied these to the analytical model of Morampudi, Wilczek & Laumann[15], which gives the neutron scattering intensity as
\[S(q,\omega,\Delta)=\frac{m^{3/2}\sqrt{2\pi R}}{1-\exp\left(-\frac{2\pi R}{\sqrt{\omega-2\Delta-q^{2}/4m}}\right)}\theta(\omega-2\Delta-q^{2}/4m),\text{ where }R=\frac{1}{4}mc^{2}a_{0}^{2}\left(1-\frac{q^{2}}{4m^{2}c^{2}}\right)^{2}.\]
Most parameters in this model are determined by the spin exchange parameters: the loop-flipping term coefficient \(g=12\frac{J_{\pm}^{3}}{J_{\parallel}^{2}}\), the spinon mass \(m=\frac{1}{4J_{\pm}a_{0}^{2}}\) and the speed of light \(c=\xi ga_{0}\). In addition, there are three constants independent of the values of \(J_{\parallel}\) and \(J_{\pm}\), which are either known experimentally – the lattice constant \(a_{0}=10.6\times 10^{-10}\) m – or taken from numerical estimates[19] – the emergent fine-structure constant \(\alpha=0.08\) and the O(1) constant \(\xi=0.51\). Therefore, a fit to experimental data using this QED model has only two free parameters. The first free parameter is the overall scale of the DOS, while the second one is the spinon gap \(\Delta\). Although Morampudi _et al._[15] take the gap to be \(\Delta\sim J_{\parallel}/2-12J_{\pm}\), this value turns out to be negative from our fitting
result. We hence take \(\Delta\) to be a free parameter when fitting the experimental neutron intensity to the theory. Since the work of Morampudi _et al._[15] is applicable in the long wavelength limit and does not consider the short wavelength effects of the pyrochlore lattice, it can only be used to compare with the low-energy end of the neutron scattering. In order to compare the model with the experimental data, we integrated over \(q\) to obtain the (local) density of states distribution \(\tilde{S}(\omega,\Delta)=\int\mathrm{d}q\,S(q,\omega,\Delta)\) and minimized the following quantity: \(C_{2}=\sum_{\omega\text{ in exp.}}\left(I_{\mathrm{exp}}(\omega)-a\times\tilde{S}(\omega,\Delta)\right)^{2}\). Here, \(a\) is the overall scaling factor and we use the low-energy HR dataset, which covers energy transfers up to \(26.5\ \mu\text{eV}\). The best fit we found results in \(\Delta=18\ \mu\text{eV}\) using the exchange parameters deduced from the analytical model of Udagawa and Moessner[14]. In order to calculate the low-energy response using the exchange parameters deduced from the extended gauge mean-field theory of Desrochers, Chern and Kim[16, 17, 18], we have used the spinon gap value predicted by the same theory, \(\Delta=16.6\ \mu\text{eV}\).
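The following sketch illustrates this last step, assuming the intensity formula quoted above; the \(q\) grid, truncation and units are illustrative, and a simple trapezoid rule stands in for whatever integration scheme was actually used.

```python
import numpy as np

def S_qw(q, w, Delta, m, c, a0):
    """Sketch of the Morampudi-Wilczek-Laumann intensity (formula above)."""
    E_rel = w - 2.0 * Delta - q ** 2 / (4.0 * m)
    if E_rel <= 0.0:
        return 0.0                       # theta-function threshold
    R = 0.25 * m * c ** 2 * a0 ** 2 * (1.0 - q ** 2 / (4.0 * m ** 2 * c ** 2)) ** 2
    return m ** 1.5 * np.sqrt(2.0 * np.pi * R) / \
           (1.0 - np.exp(-2.0 * np.pi * R / np.sqrt(E_rel)))

def S_local(w, Delta, m, c, a0, q_grid):
    """(Local) DOS: S~(w, Delta) = int dq S(q, w, Delta) on a finite q grid."""
    vals = np.array([S_qw(q, w, Delta, m, c, a0) for q in q_grid])
    return np.trapz(vals, q_grid)

def C2(Delta, amp, E_exp, I_exp, m, c, a0, q_grid):
    """Least-squares objective over the low-energy HR points."""
    theo = np.array([S_local(E, Delta, m, c, a0, q_grid) for E in E_exp])
    return np.sum((I_exp - amp * theo) ** 2)
```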
## Data availability
The data that support the plots within this paper and other findings of this study are available from the corresponding authors upon reasonable request. The datasets for the time-of-flight neutron spectroscopy experiments on IN5 and for the backscattering neutron spectroscopy experiments on IN16B are available from the Institut Laue-Langevin data portal[51, 52, 53].
|
2306.04135 | Semiparametric Discrete Choice Models for Bundles | We propose two approaches to estimate semiparametric discrete choice models
for bundles. Our first approach is a kernel-weighted rank estimator based on a
matching-based identification strategy. We establish its complete asymptotic
properties and prove the validity of the nonparametric bootstrap for inference.
We then introduce a new multi-index least absolute deviations (LAD) estimator
as an alternative, of which the main advantage is its capacity to estimate
preference parameters on both alternative- and agent-specific regressors. Both
methods can account for arbitrary correlation in disturbances across choices,
with the former also allowing for interpersonal heteroskedasticity. We also
demonstrate that the identification strategy underlying these procedures can be
extended naturally to panel data settings, producing an analogous localized
maximum score estimator and a LAD estimator for estimating bundle choice models
with fixed effects. We derive the limiting distribution of the former and
verify the validity of the numerical bootstrap as an inference tool. All our
proposed methods can be applied to general multi-index models. Monte Carlo
experiments show that they perform well in finite samples. | Fu Ouyang, Thomas T. Yang | 2023-06-07T04:12:02Z | http://arxiv.org/abs/2306.04135v3 | # Semiparametric Discrete Choice Models for Bundles
###### Abstract
We propose methods of estimation and inference for use in semiparametric discrete choice models for bundles in both cross-sectional and panel data settings. Our matching-based identification approach permits certain forms of heteroskedasticity and arbitrary correlation in the disturbances across choices. For the cross-sectional model, we propose a kernel-weighted rank procedure and show the validity of the nonparametric bootstrap for the inference. For the panel data model, we propose localized maximum score estimators and show that the numerical bootstrap is a valid inference method. Monte Carlo experiments demonstrate that our proposed estimation and inference procedures perform adequately in finite samples.
_JEL classification_: C13, C14, C35.
_Keywords_: Bundle choices; Rank estimation; Panel data; Bootstrap.
## 1 Introduction
In many circumstances, consumers purchase bundles of goods (e.g., both chips and salsa) or services (e.g., a combination of a mobile phone, home internet and cable TV plans), instead of a single good or service. The literature on bundle choice is less well developed than that on ordinary discrete choice models. An important but less extensively studied empirical question in industrial organization and marketing research relates to complementary or substitutive effects that may explain the bundle choice behavior of consumers. We refer readers to Berry et al. (2014) for a review of this literature. In an empirical study, Gentzkow (2007) estimated a parametric bundle choice model
analyzing demand for print and online newspapers. Similarly, using aggregate data, Fan (2013) examined ownership consolidation in the newspaper market, where households on the demand side may purchase two newspapers as a bundle. A substantial literature focuses on bundle choice behavior, analysis of product demand, pricing strategy, customer subscriptions, and brand collaboration, among other topics. Examples include Manski and Sherman (1980), Train, McFadden, and Ben-Akiva (1987), Hendel (1999), Chung and Rao (2003), Dube (2004), Nevo, Rubinfeld, and McCabe (2005), Augereau, Greenstein, and Rysman (2006), Song and Chintagunta (2006), Foubert and Gijsbrechts (2007), Liu, Chintagunta, and Zhu (2010), Gentzkow, Shapiro, and Sinkinson (2014), Kim, Misra, and Shapiro (2020), and Lewbel and Nesheim (2019). The models in most of these applications are parametric.1
Footnote 1: See Gentzkow (2007) pp. 722–723 for a brief review of pervasive parametric methods in the literature.
Nonparametric identification has been considered in cross-sectional settings. Sher and Kim (2014) explored the identification of bundle choice models without stochastic components in the utility from the perspective of microeconomic theory, assuming a finite population of consumers. Dunker, Hoderlein, and Kaido (2022) studied nonparametric identification of market-level demand models in a general framework in which the demand for bundles is nested. Fox and Lazzati (2017) provided nonparametric identification results for both single-agent discrete choice bundle models and binary games of complete information, and established the mathematical equivalence between these models. Allen and Rehbeck (2019) studied nonparametric identification of a class of latent utility models with additively separable unobserved heterogeneity by exploiting asymmetries in cross-partial derivatives of conditional average demands. Ouyang, Yang, and Zhang (2020) used an "identification at infinity" argument for bundle choice models in cross-sectional settings. Allen and Rehbeck (2022) established partial identification results for complementarity (substitutability) in a latent utility model with binary quantities of two goods.
Our paper differs from these studies in many respects. To our knowledge, this is the first work to study semiparametric point identification of preference coefficients in bundle choice models using individual-level data. Our methods are semiparametric in that no parametric restriction is imposed on unobserved error terms. The robustness afforded by the distribution-free specification makes our approach a competitive alternative to existing parametric methods. For the cross-sectional model, we adopt a stochastic utility maximization framework most similar to Fox and Lazzati (2017). Our identification results, however, rely on exogeneity conditions similar to Allen and Rehbeck (2019, 2022), instead of either the exclusion restrictions ("special regressors") in Fox and Lazzati (2017) or aggregate data and valid instrumental variables required by Dunker et al. (2022). Our approach builds on an identification strategy different from those proposed in the papers mentioned above, using a matching insight. This work also contributes to the literature as the first to consider bundle choice models in a panel data setting with fixed effects.
This paper relies on the assumption of independence between the error term and covariates. However, our approach allows for arbitrary correlation in the unobserved errors across choices and
certain forms of heteroskedasticity. We use this assumption to identify the parameters of interest, following the maximum rank correlation approach proposed by Han (1987) and the matching approach of Honore and Kyriazidou (2000). A recent study by Khan, Ouyang, and Tamer (2021) used a similar insight to study multinomial choice models. In the cross-sectional setting, we achieve identification via moment inequalities obtained by matching cross-section units in a specific way. In the panel setting, we relax the independence assumption by permitting correlation between the error term and covariates through unobserved fixed effects. The moment inequalities used for identification are obtained from variation in the covariates over time for each agent. We propose a maximum-score-type estimation procedure in the spirit of Manski (1987) for the panel data model. Other works for similar models include Lee (1995), Lewbel (2000), Altonji and Matzkin (2005), Fox (2007), Berry and Haile (2009), Pakes and Porter (2016), Ahn, Ichimura, Powell, and Ruud (2018), Shi, Shum, and Song (2018), Yan and Yoo (2019), Gao and Li (2020), and Chernozhukov, Fernandez-Val, and Newey (2019). These approaches, however, do not directly extend to bundle choice models because of the non-mutually exclusive choice set and distinctive random utilities associated with bundles.
We establish the limiting distribution of our estimators. For inference, we justify the validity of the standard bootstrap for the estimator in the cross-sectional setting. Unfortunately, the standard bootstrap does not work for the estimator in the panel setting. Instead, we employ the state-of-the-art numerical bootstrap, and we show its validity. All proposed estimation and inference procedures are easy to implement. In addition, we propose a test for the existence of interaction effects of choices on utilities, and we show how to achieve identification for cases with more complicated choice sets.
The remainder of this paper proceeds as follows. Section 2 presents the cross-sectional model and provides conditions for both observed covariates and unobserved errors sufficient to ensure point identification. We then propose a two-step, localized maximum rank correlation (MRC) procedure motivated by the identification strategy, and derive its \(\sqrt{N}\)-consistency and asymptotic normality. We show that the inference can be conducted via the standard bootstrap. We test the existence of interaction effects and show the identification for cases with more choices. Section 3 extends to a panel data model with unobserved agent-, alternative-, and bundle-specific fixed effects. We propose localized maximum score (MS) estimators for this model, develop its asymptotic properties, and justify the use of the numerical bootstrap for the inference. Again, for this setting, we test the existence of interaction effects and show the identification for cases with more choices. Section 4 investigates the finite sample performance of our proposed procedures using Monte Carlo experiments. Finally, Section 5 concludes the paper. All main proofs and tables are collected in Appendixes A-C. We provide some additional results in Appendix D. Some discussions and the intuition of the \(\sqrt{N}\) convergence rate of the cross-sectional estimator and the \(N^{1/3}\) convergence rate of the panel data estimator can be found in Appendix E. The proofs of technical lemmas are relegated to Appendix F.
For ease of reference, the notations maintained throughout this paper are listed here.
**Notation.** All vectors are column vectors. \(\mathbb{R}^{p}\) is a \(p\)-dimensional Euclidean space equipped with the Euclidean norm \(\|\cdot\|\), and \(\mathbb{R}_{+}\equiv\{x\in\mathbb{R}|x\geq 0\}\). \(\|\cdot\|_{F}\) denotes the Frobenius norm; that is, for any matrix \(\mathbf{A}\), \(\|\mathbf{A}\|_{F}=\sqrt{\operatorname{trace}\left(\mathbf{A}\mathbf{A}^{ \prime}\right)}\). We reserve letters \(i\) and \(m\) for indexing agents, \(j\) and \(l\) for indexing alternatives, and \(s\) and \(t\) for indexing time periods. The first element of a vector \(v\) is denoted by \(v^{(1)}\) and the sub-vector comprising its remaining elements is denoted by \(\tilde{v}\). We use \(P(\cdot)\) and \(\mathbb{E}[\cdot]\) to denote probability and expectation, respectively. \(1[\cdot]\) is an indicator function that equals \(1\) when the event in the brackets occurs, and \(0\) otherwise. For two random vectors \(U\) and \(V\), the notation \(U\stackrel{{ d}}{{=}}V|\cdot\) means that \(U\) and \(V\) have identical distribution conditional on \(\cdot\), and \(U\perp V|\cdot\) means that \(U\) and \(V\) are independent conditional on \(\cdot\). Symbols \(\setminus\), \({}^{\prime}\), \(\Leftrightarrow\), \(\propto\), \(\stackrel{{ d}}{{\rightarrow}}\), \(\stackrel{{ P}}{{\rightarrow}}\), and \(\rightsquigarrow\) represent set difference, matrix transposition, if and only if, proportionality, convergence in distribution, convergence in probability, and weak convergence in the sense of van der Vaart and Wellner (1996), respectively.
## 2 Cross-Sectional Model
The plan of this section is as follows. Section 2.1 shows the identification. Section 2.2 presents the estimator and its limiting distribution. The inference procedure is proposed in Section 2.3. Testing the interactive effects of choices, identification with three choices and their bundles, and some issues with regard to common regressors are presented in Sections 2.4, 2.5, and 2.6, respectively.
### Model and Identification
Throughout this paper, we focus on a choice model in which the choice set \(\mathcal{J}\) consists of three mutually exclusive alternatives (numbered 0-2) and a bundle of alternatives 1 and 2; that is, \(\mathcal{J}=\{0,1,2,(1,2)\}\). This simple model is sufficient to illustrate the main intuition that runs through both cross-sectional and panel data models.
For ease of exposition, we re-number alternatives in \(\mathcal{J}\) with 2-dimensional vectors of binary indicators \(d=(d_{1},d_{2})\in\{0,1\}\times\{0,1\}\), where \(d_{1}\) and \(d_{2}\) indicate if alternative 1 and 2 are chosen, respectively. In this way the choice set \(\mathcal{J}\) can be one-to-one mapped to the set \(\mathcal{D}=\{(0,0),(1,0),(0,1),(1,1)\}\). An agent chooses the alternative in \(\mathcal{D}\) to maximize the latent utility
\[U_{d}=\sum_{j=1}^{2}F_{j}(X_{j}^{\prime}\beta,\epsilon_{j})\cdot d_{j}+F_{b}( \eta\cdot(W^{\prime}\gamma))\cdot d_{1}\cdot d_{2}, \tag{2.1}\]
where \(F_{j}(\cdot,\cdot)\)'s and \(F_{b}(\cdot)\) are unknown (to the econometrician) functions, \(\mathbb{R}^{2}\mapsto\mathbb{R}\) and \(\mathbb{R}\mapsto\mathbb{R}\), respectively, each strictly monotonic in its arguments, \(X_{j}\in\mathbb{R}^{k_{1}}\) collects covariates affecting
the utility associated with stand-alone alternative \(j\), \(W\in\mathbb{R}^{k_{2}}\) is a vector of explanatory variables characterizing the interaction effects of the bundle (e.g., bundle discount), \((\epsilon_{1},\epsilon_{2},\eta)\in\mathbb{R}^{2}\times\mathbb{R}_{+}\) captures unobserved (to the econometrician) heterogeneous effects, and \((\beta^{\prime},\gamma^{\prime})^{\prime}\in\mathbb{R}^{k_{1}+k_{2}}\) are unknown preference parameters to estimate.
We assume that \(F_{b}(0)=0\) so that the sign of \(\eta\cdot(W^{\prime}\gamma)\) has some economic meaning (see below for details). For instance, \(F_{b}\left(x\right)\) can be \(x\) or \(x^{3}\). The specification of model (2.1) takes the linear forms in Fox and Lazzati (2017) as a special case. We note that Fox and Lazzati (2017) rely on additive separability, while our specification does not. The utility of choosing \((0,0)\) is normalized to \(0\). The utility of choosing \((1,0)\) or \((0,1)\) is \(F_{1}\left(X_{1}^{\prime}\beta,\epsilon_{1}\right)\) or \(F_{2}(X_{2}^{\prime}\beta,\epsilon_{2})\), respectively. The utility of choosing the bundle is the sum of the two stand-alone utilities plus the interaction term \(F_{b}(\eta\cdot(W^{\prime}\gamma))\), which captures either complementary (\(\eta\cdot(W^{\prime}\gamma)\geq 0\)) or substitution (\(\eta\cdot(W^{\prime}\gamma)\leq 0\)) effects. \((\epsilon_{1},\epsilon_{2})\) are idiosyncratic shocks associated with each stand-alone alternative, as in common multinomial choice models, and \(\eta\) reflects unobserved heterogeneity for the bundle. In what follows, we assume \(\eta>0\). Thus, \(W^{\prime}\gamma>0\) indicates that the interaction effect is complementary, and \(W^{\prime}\gamma<0\) that it is substitutive. We note that assuming \(\eta>0\) is innocuous: as will be clear below, the estimates of \(\eta\cdot(W^{\prime}\gamma)\) remain the same whether we assume \(\eta>0\) or \(\eta<0\).
Given the latent utility model (2.1), the observed dependent variable \(Y_{d}\) takes the form
\[Y_{d}=1[U_{d}>U_{d^{\prime}},\forall d^{\prime}\in\mathcal{D}\setminus d]. \tag{2.2}\]
Let \(Z\equiv(X_{1}^{\prime},X_{2}^{\prime},W^{\prime})^{\prime}\). Then the probabilities of choosing alternatives \(d\in\mathcal{D}\) can be expressed as:
\[P(Y_{(0,0)}=1|Z)= P(\max\{F_{1}\left(X_{1}^{\prime}\beta,\epsilon_{1}\right),F_{2 }\left(X_{2}^{\prime}\beta,\epsilon_{2}\right),F_{1}\left(X_{1}^{\prime}\beta,\epsilon_{1}\right)+F_{2}\left(X_{2}^{\prime}\beta,\epsilon_{2}\right)+F_{b} (\eta\cdot(W^{\prime}\gamma))\}<0|Z),\] \[P(Y_{(1,0)}=1|Z)= P(\max\{0,F_{2}\left(X_{2}^{\prime}\beta,\epsilon_{2} \right),F_{1}\left(X_{1}^{\prime}\beta,\epsilon_{1}\right)+F_{2}\left(X_{2}^{ \prime}\beta,\epsilon_{2}\right)+F_{b}(\eta\cdot(W^{\prime}\gamma))\}<F_{1} \left(X_{1}^{\prime}\beta,\epsilon_{1}\right)|Z),\] \[P(Y_{(0,1)}=1|Z)= P(\max\{0,F_{1}\left(X_{1}^{\prime}\beta,\epsilon_{1} \right),F_{1}\left(X_{1}^{\prime}\beta,\epsilon_{1}\right)+F_{2}\left(X_{2}^{ \prime}\beta,\epsilon_{2}\right)+F_{b}(\eta\cdot(W^{\prime}\gamma))\}<F_{2} \left(X_{2}^{\prime}\beta,\epsilon_{2}\right)|Z),\] \[P(Y_{(1,1)}=1|Z)= P(\max\{0,F_{1}\left(X_{1}^{\prime}\beta,\epsilon_{1} \right),F_{2}\left(X_{2}^{\prime}\beta,\epsilon_{2}\right)\}<F_{1}\left(X_{1} ^{\prime}\beta,\epsilon_{1}\right)+F_{2}\left(X_{2}^{\prime}\beta,\epsilon_{2 }\right)+F_{b}(\eta\cdot(W^{\prime}\gamma))|Z). \tag{2.3}\]
When \((\epsilon_{1},\epsilon_{2},\eta)\perp Z\), expression (2.3) implies that \(P(Y_{(1,0)}=1|X_{1}=x_{1},X_{2}=x_{2},W=w)\) and \(P(Y_{(1,1)}=1|X_{1}=x_{1},X_{2}=x_{2},W=w)\) are both increasing in \(x_{1}^{\prime}\beta\), holding the vectors \(x_{2}\) and \(w\) fixed. That is,
\[x_{1}^{\prime}\beta\geq\tilde{x}_{1}^{\prime}\beta\] \[\Leftrightarrow P(Y_{(1,d_{2})}=1|X_{1}=x_{1},X_{2}=x_{2},W=w)\geq P(Y_{(1,d_{2})}=1|X_{1}= \tilde{x}_{1},X_{2}=x_{2},W=w). \tag{2.4}\]
Similarly, for cases with \(d_{1}=0\) and any \(d_{2}\), \(x_{1}^{\prime}\beta\geq\tilde{x}_{1}^{\prime}\beta\) is the if-and-only-if condition for the second inequality in (2.4) but with "\(\leq\)" instead.
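To illustrate the monotonic relations in (2.4) by simulation, the following minimal sketch uses linear specifications \(F_{j}(u,e)=u+e\) and \(F_{b}(x)=x\) with illustrative error distributions; these functional forms and parameter values are our own choices for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def choice_freqs(v1, v2=0.2, wg=0.3, n=200_000):
    """Simulate (2.1)-(2.2) with F_j(u, e) = u + e and F_b(x) = x."""
    e1, e2 = rng.normal(size=n), rng.normal(size=n)
    eta = rng.exponential(size=n)                        # eta > 0 a.s.
    U = np.column_stack([np.zeros(n),                    # d = (0,0)
                         v1 + e1,                        # d = (1,0)
                         v2 + e2,                        # d = (0,1)
                         v1 + e1 + v2 + e2 + eta * wg])  # d = (1,1)
    return np.bincount(U.argmax(axis=1), minlength=4) / n

# P(Y_(1,0)=1) (entry 1) and P(Y_(1,1)=1) (entry 3) both rise with v1 = x1'beta:
print(choice_freqs(v1=-0.5))
print(choice_freqs(v1=0.5))
```

Holding \((x_{2},w)\) fixed while varying \(v_{1}\) reproduces exactly the comparative statics underlying (2.4).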
Once \(\beta\) is identified, we can move on to identify \(\gamma\) using the following moment inequalities for \(w^{\prime}\gamma\) constructed by fixing \((X_{1}^{\prime}\beta,X_{2}^{\prime}\beta)\) at some constant vector \((v_{1},v_{2})\). For \(d=(1,1)\),
\[w^{\prime}\gamma\geq\tilde{w}^{\prime}\gamma\] \[\Leftrightarrow P(Y_{(1,1)}=1|X_{1}^{\prime}\beta=v_{1},X_{2}^{\prime}\beta=v_{2},W=w) \geq P(Y_{(1,1)}=1|X_{1}^{\prime}\beta=v_{1},X_{2}^{\prime}\beta=v_{2},W= \tilde{w}). \tag{2.5}\]
For all \(d\in\mathcal{D}\setminus(1,1)\), \(w^{\prime}\gamma\geq\tilde{w}^{\prime}\gamma\) is the if-and-only-if condition for the second inequality in (2.5) but with "\(\leq\)" instead.
To implement the above idea, we assume i.i.d. sampling in Assumption C1 below. Thus, we are able to take two independent copies of \((Y,Z)\), e.g., \((Y_{i},Z_{i})\) and \((Y_{m},Z_{m})\), from the observed sample. For (2.4), we can match \(X_{2}\) and \(W\) across these two observations, and compare the values of \(X_{1}^{\prime}b\) and \(Y_{(1,d_{2})}\) for the two observations. This is the idea behind the MRC estimator. We refer readers to the next section for the details of the implementation.
Note that \(\gamma\) can be alternatively identified by matching \(X\equiv(X_{1}^{\prime},X_{2}^{\prime})^{\prime}\) across agents. We refer readers to Appendix E for details.
To establish the identification of \(\beta\) and \(\gamma\) based on (2.4)-(2.5), the following conditions are sufficient:
* **C1.** (i) \(\{(Y_{i}^{\prime},Z_{i}^{\prime})^{\prime}\}_{i=1}^{N}\) are i.i.d. across \(i\), (ii) \((\epsilon_{1},\epsilon_{2},\eta)\perp Z\), and (iii) the joint distribution of \((\epsilon_{1},\epsilon_{2},\eta)\) is absolutely continuous on \(\mathbb{R}^{2}\times\mathbb{R}_{+}\).
* **C2.** For any pair of \((i,m)\) and \(j=1,2\), denote \(X_{imj}=X_{ij}-X_{mj}\) and \(W_{im}=W_{i}-W_{m}\). Then, (i) \(X_{im1}^{(1)}\) (\(X_{im2}^{(1)}\)) has almost everywhere (a.e.) positive Lebesgue density on \(\mathbb{R}\) conditional on \(\tilde{X}_{im1}\) (\(\tilde{X}_{im2}\)) and conditional on \((X_{im2},W_{im})\) (\((X_{im1},W_{im})\)) in a neighborhood of zero, (ii) the elements of \(X_{im1}\) (\(X_{im2}\)), conditional on \((X_{im2},W_{im})\) (\((X_{im1},W_{im})\)) in a neighborhood of zero, are linearly independent, (iii) \(W_{im}^{(1)}\) has a.e. positive Lebesgue density on \(\mathbb{R}\) conditional on \(\tilde{W}_{im}\) and conditional on \((X_{im1}^{\prime}\beta,X_{im2}^{\prime}\beta)\) in a neighborhood of zero, and (iv) the elements of \(W_{im}\), conditional on \((X_{im1}^{\prime}\beta,X_{im2}^{\prime}\beta)\) in a neighborhood of zero, are linearly independent.
* **C3.** \((\beta^{\prime},\gamma^{\prime})^{\prime}\in\mathcal{B}\times\mathcal{R}\), where \(\mathcal{B}=\{b\in\mathbb{R}^{k_{1}}|\left\|b\right\|=1,b^{(1)}\neq 0\}\) and \(\mathcal{R}=\{r\in\mathbb{R}^{k_{2}}|\left\|r\right\|=1,r^{(1)}\neq 0\}\).
* **C4.** \(F_{1}(\cdot,\cdot)\), \(F_{2}(\cdot,\cdot)\), and \(F_{b}(\cdot)\) are strictly increasing functions mapping \(\mathbb{R}^{2}\mapsto\mathbb{R}\), \(\mathbb{R}^{2}\mapsto\mathbb{R}\), and \(\mathbb{R}\mapsto\mathbb{R}\), respectively, with \(F_{b}(0)=0\).
Assumption C1 is the key to establishing the moment inequalities (2.4)-(2.5); it allows arbitrary correlation among \((\epsilon_{1},\epsilon_{2},\eta)\). Conditions C2(i) and C2(iii) are standard requirements for MRC and MS types of estimators. Conditions C2(ii) and C2(iv) are the regular rank conditions required for identification. In C3, we normalize the coefficients to have unit norm, a typical practice for models with binary dependent variables; in addition, we assume the coefficients on the first variables are nonzero, which, together with C2(i) and C2(iii), ensures identification for MRC and MS types of estimators. Assumption C4 restricts the functions \(F_{1}\), \(F_{2}\), and \(F_{b}\) to be strictly increasing.
Our identification result for model (2.1)-(2.2) is stated below, and proved in Appendix A.
**Theorem 2.1**.: _If Assumptions C1-C4 hold, \(\beta\) and \(\gamma\) are identified._
### Localized MRC Estimator
The local monotonic relations established in (2.4)-(2.5) naturally motivate a two-step localized MRC estimation procedure. Note that the probability of obtaining perfectly matched observations is zero for continuous regressors. Following the literature, we propose to use kernel weights as an approximation to the matching. These steps are described in turn below.
In the first step, we consider the localized MRC estimator \(\hat{\beta}\) of \(\beta\), analogous to the MRC estimator proposed in Han (1987). Specifically, \(\hat{\beta}=\arg\max_{b\in\mathcal{B}}\mathcal{L}_{N,\beta}^{K}(b)\), where \(\mathcal{L}_{N,\beta}^{K}(b)\) is defined as:
\[\mathcal{L}_{N,\beta}^{K}(b)=\sum_{i=1}^{N-1}\sum_{m>i}\sum_{d \in\mathcal{D}}\{\mathcal{K}_{h_{N}}(X_{im2},W_{im})(Y_{md}-Y_{id})\text{sgn}( X_{im1}^{\prime}b)\cdot(-1)^{d_{1}}\] \[+\mathcal{K}_{h_{N}}(X_{im1},W_{im})(Y_{md}-Y_{id})\text{sgn}( X_{im2}^{\prime}b)\cdot(-1)^{d_{2}}\} \tag{2.6}\]
with \(\mathcal{K}_{h_{N}}(X_{imj},W_{im})\equiv h_{N}^{-(k_{1}+k_{2})}\prod_{\iota=1}^{k_{1}}K(X_{imj,\iota}/h_{N})\prod_{\iota=1}^{k_{2}}K(W_{im,\iota}/h_{N})\), where \(X_{imj,\iota}\) (\(W_{im,\iota}\)) is the \(\iota\)-th element of vector \(X_{imj}\) (\(W_{im}\)), \(K\) (\(\cdot\)) is a standard kernel density function, and \(h_{N}\) is a bandwidth sequence that converges to 0 as \(N\to\infty\).2 Obviously, \(\mathcal{K}_{h_{N}}(X_{imj},W_{im})\overset{P}{\to}1\left[X_{ij}=X_{mj},W_{i}=W_{m}\right]\) for \(j=1,2\), as \(h_{N}\to 0\).
Footnote 2: In practice, kernel functions and bandwidths can be different for each univariate matching variable. Here we assume they are the same for all covariates just for notational convenience.
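A minimal Python sketch of objective (2.6) is given below; it is a direct transcription of the formula with a Gaussian product kernel, not optimized code, and the layout of the data (choices stored as one-hot indicators over \(\mathcal{D}\) in the order \((0,0),(1,0),(0,1),(1,1)\)) is an illustrative convention of ours.

```python
import numpy as np
from itertools import combinations

def kern(u, h):
    """Product kernel K_h: prod_j K(u_j / h) / h^k with standard normal K."""
    return np.prod(np.exp(-0.5 * (u / h) ** 2) / np.sqrt(2 * np.pi)) / h ** u.size

def L_beta(b, Y, X1, X2, W, h):
    """First-step localized MRC objective (2.6), to be maximized over ||b|| = 1.
    Y is an (n, 4) array of one-hot choice indicators over D."""
    d1 = np.array([0, 1, 0, 1])            # d_1 for each element of D
    d2 = np.array([0, 0, 1, 1])            # d_2 for each element of D
    total = 0.0
    for i, m in combinations(range(len(Y)), 2):
        dY = Y[m] - Y[i]                   # (Y_md - Y_id) for all d
        s1 = np.sign((X1[i] - X1[m]) @ b)  # sgn(X_im1' b)
        s2 = np.sign((X2[i] - X2[m]) @ b)
        total += kern(np.concatenate([X2[i] - X2[m], W[i] - W[m]]), h) * \
                 np.sum(dY * s1 * (-1.0) ** d1)
        total += kern(np.concatenate([X1[i] - X1[m], W[i] - W[m]]), h) * \
                 np.sum(dY * s2 * (-1.0) ** d2)
    return total
```

In practice \(\hat{\beta}\) can be obtained by evaluating this objective over a fine grid on the unit sphere, or by a derivative-free optimizer with the normalization \(\|b\|=1\) imposed.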
In the second step, we obtain \(\hat{\gamma}=\arg\max_{r\in\mathcal{R}}\mathcal{L}_{N,\gamma}^{K}(r;\hat{\beta})\) with
\[\mathcal{L}_{N,\gamma}^{K}(r;\hat{\beta})=\sum_{i=1}^{N-1}\sum_{m>i}\mathcal{ K}_{\sigma_{N}}(V_{im}(\hat{\beta}))(Y_{i(1,1)}-Y_{m(1,1)})\text{sgn}(W_{im}^{ \prime}r), \tag{2.7}\]
where \(V_{im}(b)\equiv(X_{im1}^{\prime}b,X_{im2}^{\prime}b)^{\prime}\) for all \(b\in\mathbb{R}^{k_{1}}\), \(\mathcal{K}_{\sigma_{N}}(V_{im}(\hat{\beta}))=\sigma_{N}^{-2}K(X_{im1}^{\prime }\hat{\beta}/\sigma_{N})K(X_{im2}^{\prime}\hat{\beta}/\sigma_{N})\), and \(\sigma_{N}\) is a bandwidth sequence that converges to 0 as \(N\to\infty\). Again, \(\mathcal{K}_{\sigma_{N}}(V_{im}(\hat{\beta}))\overset{P}{\to}1\left[X_{i1}^{ \prime}\hat{\beta}=X_{m1}^{\prime}\hat{\beta},X_{i2}^{\prime}\hat{\beta}=X_{m2 }^{\prime}\hat{\beta}\right]\), as \(\sigma_{N}\to 0\).
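The second-step objective (2.7) can be sketched in the same way, reusing `kern` and `combinations` from the previous snippet; the one-hot convention for the choice indicators is as before, so column 3 corresponds to the bundle \((1,1)\).

```python
def L_gamma(r, b_hat, Y, X1, X2, W, sigma):
    """Second-step objective (2.7), to be maximized over ||r|| = 1; it
    localizes on pairs with X_im1'b_hat and X_im2'b_hat near zero."""
    total = 0.0
    for i, m in combinations(range(len(Y)), 2):
        v_im = np.array([(X1[i] - X1[m]) @ b_hat, (X2[i] - X2[m]) @ b_hat])
        total += kern(v_im, sigma) * (Y[i, 3] - Y[m, 3]) * \
                 np.sign((W[i] - W[m]) @ r)
    return total
```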
The rest of this section establishes the asymptotic properties of the estimators computed from objective functions (2.6) and (2.7). To streamline the exposition, we assume that all regressors are continuous. Before presenting additional regularity conditions and the main results, we introduce some new notation: For all \(d\in\mathcal{D}\) and \(j=1,2\),
* \(V\left(\beta\right)\equiv(X_{1}^{\prime}\beta,X_{2}^{\prime}\beta)^{\prime}\).
* Let \(X_{ij}\) and \(X_{mj}\) be two independent copies of \(X_{j}\) and \(X_{imj}\equiv X_{ij}-X_{mj}\). Similarly, define \(W_{i},W_{m},W_{im},Y_{i},Y_{m},Y_{im},V_{i}\left(\beta\right),V_{m}\left(\beta\right)\), and \(V_{im}\left(\beta\right)\).
* \(f_{X_{imj},W_{im}}(\cdot)\) and \(f_{V_{im}(\beta)}(\cdot)\) denote the PDF of the random vectors \((X_{imj}^{\prime},W_{im}^{\prime})^{\prime}\) and \(V_{im}(\beta)\), respectively. \(f_{X_{2},W|X_{1}}(\cdot)\) (\(f_{X_{1},W|X_{2}}(\cdot)\)) denotes the PDF of \((X_{2}^{\prime},W^{\prime})^{\prime}\) (\((X_{1}^{\prime},W^{\prime})^{\prime}\)) conditional on \(X_{1}\) (\(X_{2}\)). \(f_{V(\beta)}(\cdot)\) (\(f_{V(\beta)|W}(\cdot)\)) denotes the PDF of \(V(\beta)\) (conditional on \(W\)).
* For any function \(g,\ \nabla g(v)\) and \(\nabla^{2}g(v)\) denote the gradient and Hessian matrix for \(g(\cdot)\) evaluated at \(v\), respectively.
We impose the following regularity conditions:
* **C5.** (i) \(f_{V_{im}(\beta)}(\cdot)\) and \(f_{X_{imj},W_{im}}(\cdot)\) for \(j=1,2\) are bounded from above on their supports, strictly positive in a neighborhood of zero, and twice continuously differentiable with bounded second derivatives, (ii) For all \(d\in\mathcal{D}\), \(b\in\mathcal{B}\), and \(r\in\mathcal{R}\), \(\mathbb{E}[Y_{imd}[\mathrm{sgn}(X_{im1}^{\prime}b)-\mathrm{sgn}(X_{im1}^{\prime}\beta)]|X_{im2}=\cdot,W_{im}=\cdot]\), \(\mathbb{E}[Y_{imd}[\mathrm{sgn}(X_{im2}^{\prime}b)-\mathrm{sgn}(X_{im2}^{\prime}\beta)]|X_{im1}=\cdot,W_{im}=\cdot]\), and \(\mathbb{E}[Y_{im(1,1)}[\mathrm{sgn}(W_{im}^{\prime}r)-\mathrm{sgn}(W_{im}^{\prime}\gamma)]|V_{im}\left(\beta\right)=\cdot]\) are continuously differentiable with bounded first derivatives, (iii) \(\mathbb{E}[Y_{imd}|Z_{i}=\cdot,Z_{m}=\cdot]\) is \(\kappa_{1\beta}^{\mathrm{th}}\) continuously differentiable with bounded \(\kappa_{1\beta}^{\mathrm{th}}\) derivatives. \(f_{X_{2},W|X_{1}}(\cdot)\) (\(f_{X_{1},W|X_{2}}(\cdot)\)) is \(\kappa_{2\beta}^{\mathrm{th}}\) continuously differentiable with bounded \(\kappa_{2\beta}^{\mathrm{th}}\) derivatives. Denote \(\kappa_{\beta}=\kappa_{1\beta}+\kappa_{2\beta}\). \(\kappa_{\beta}\) is an even integer greater than \(k_{1}+k_{2}\), (iv) \(\mathbb{E}[Y_{im(1,1)}|V_{i}=\cdot,V_{m}=\cdot,W_{i}=\cdot,W_{m}=\cdot]\) is \(\kappa_{1\gamma}^{\mathrm{th}}\) continuously differentiable with bounded \(\kappa_{1\gamma}^{\mathrm{th}}\) derivatives, and \(f_{V(\beta)|W}(\cdot)\) is \(\kappa_{2\gamma}^{\mathrm{th}}\) continuously differentiable with bounded \(\kappa_{2\gamma}^{\mathrm{th}}\) derivatives. Denote \(\kappa_{\gamma}=\kappa_{1\gamma}+\kappa_{2\gamma}\). \(\kappa_{\gamma}\) is an even integer greater than \(2\), and (v) \(\mathbb{E}[\|X_{imj}\|^{2}]<\infty\) for all \(j=1,2\), and \(\mathbb{E}[\|W_{im}\|^{2}]<\infty\). All derivatives in this assumption are with respect to \(\cdot\).
* **C6.** \(K(\cdot)\) is continuously differentiable and assumed to satisfy: (i) \(\sup_{v}|K(v)|<\infty\), (ii) \(\int K(v)\mathrm{d}v=1\), (iii) for any positive integer \(\iota<\max\left\{\kappa_{\beta},\kappa_{\gamma}\right\},\) \[\int\upsilon^{\iota}K(\upsilon)\mathrm{d}\upsilon=0,\text{ and }\int\left|\upsilon^{\max\left\{\kappa_{\beta},\kappa_{\gamma}\right\}}\right|K(\upsilon)\mathrm{d}\upsilon<\infty.\]
* **C7.** \(h_{N}\) and \(\sigma_{N}\) are sequences of positive numbers such that as \(N\rightarrow\infty\): (i) \(h_{N}\to 0\), \(\sigma_{N}\to 0\), (ii) \(\sqrt{N}h_{N}^{k_{1}+k_{2}}\rightarrow\infty\), \(\sqrt{N}\sigma_{N}^{2}\rightarrow\infty\), and (iii) \(\sqrt{N}h_{N}^{\kappa_{\beta}}\to 0\), \(\sqrt{N}\sigma_{N}^{\kappa_{\gamma}}\to 0\).
Assumption C5 is standard in the literature; for details, see, e.g., Sherman (1993). Assumptions C6 and C7 ensure consistency and that the bias term is asymptotically negligible. In particular, C7 ensures that the orders of \(\mathcal{K}_{h_{N}}\left(\cdot,\cdot\right)\) and \(\mathcal{K}_{\sigma_{N}}(\cdot)\) are at least \(\max\left\{\kappa_{\beta},\kappa_{\gamma}\right\}\). These conditions are all standard. Following the proof in Sherman (1993), we establish the asymptotic normality of our estimators in the following theorem. The proof is deferred to Appendix A.
**Theorem 2.2**.: _If Assumptions C1-C7 hold, then we have:_
1. \(\sqrt{N}(\hat{\beta}-\beta)\overset{d}{\rightarrow}N(0,4\left\{\mathbb{E}[\nabla^{2}\varrho_{i}(\beta)]\right\}^{-1}\mathbb{E}[\nabla\varrho_{i}(\beta)\nabla\varrho_{i}(\beta)^{\prime}]\left\{\mathbb{E}[\nabla^{2}\varrho_{i}(\beta)]\right\}^{-1})\) _or, alternatively,_ \(\hat{\beta}-\beta\) _has the linear representation:_ \[\hat{\beta}-\beta=-\frac{2}{N}\left\{\mathbb{E}[\nabla^{2}\varrho_{i}(\beta)]\right\}^{-1}\sum_{i=1}^{N}\nabla\varrho_{i}(\beta)+o_{P}(N^{-1/2}),\] _with_ \(\varrho_{i}\left(\cdot\right)\) _defined in (_A.1_)._
2. \(\sqrt{N}(\hat{\gamma}-\gamma)\overset{d}{\rightarrow}N\left(0,\mathbb{E}[\nabla^{2}\tau_{i}(\gamma)]^{-1}\mathbb{E}\left[\Delta_{i}\Delta_{i}^{\prime}\right]\mathbb{E}[\nabla^{2}\tau_{i}(\gamma)]^{-1}\right)\)_, where_ \[\Delta_{i}\equiv 2\nabla\tau_{i}(\gamma)+2\mathbb{E}\left[\nabla_{13}^{2}\mu\left(V_{i}(\beta),V_{i}(\beta),\gamma\right)\right]\left\{\mathbb{E}[\nabla^{2}\varrho_{i}(\beta)]\right\}^{-1}\nabla\varrho_{i}(\beta),\] _with_ \(\tau_{i}\left(\cdot\right)\) _and_ \(\mu\left(\cdot,\cdot,\cdot\right)\) _defined in (_A.2_), and_ \(\nabla_{13}^{2}\) _denotes the second order derivative w.r.t. the first and third arguments._
We provide some intuition on the \(\sqrt{N}\) convergence rate in Appendix E.
### Inference
Theorem 2.2 indicates that our localized MRC estimators are asymptotically normal and have asymptotic variances with the usual sandwich structure. The expressions for the asymptotic variances consist of first and second derivatives of the limit of the expectation of the maximands in (2.6) and (2.7). To use these results for statistical inference, Sherman (1993) proposed applying the numerical derivative method of Pakes and Pollard (1989). Hong, Mahajan, and Nekipelov (2015) investigated the application of the numerical derivative method in extremum estimators, including second-order U-statistics. As an alternative, Cavanagh and Sherman (1998) suggested nonparametrically estimating these quantities. These procedures, however, require selecting additional tuning parameters.
To avoid this complexity, we propose conducting inference using the classic nonparametric bootstrap for ease of implementation. Subbotin (2007) proved the consistency of the nonparametric bootstrap for Han's (1987) MRC estimator.3 The structure of the objective functions of
our estimators is similar to that of the standard MRC estimator, but they do differ in two ways. First, our estimators require matching and thus contain kernel functions. Second, the objective function for estimating \(\hat{\gamma}\) contains a first step \(\hat{\beta}\) to approximate the true value of \(\beta\). These two differences usually do not make the bootstrap inconsistent. For example, Horowitz (2001) demonstrated the consistency of the bootstrap for a range of estimators involved with kernel functions; Chen, Linton, and van Keilegom (2003) demonstrated the consistency of the bootstrap for estimators with a first-step estimation component and a well-behaved objective function.
The bootstrap estimators \(\hat{\beta}^{*}\) and \(\hat{\gamma}^{*}\) are obtained as follows. Draw \(\{(Y_{i}^{*\prime},Z_{i}^{*\prime})^{\prime}\}_{i=1}^{N}\) independently from the collection of the sample values \(\{(Y_{i}^{\prime},Z_{i}^{\prime})^{\prime}\}_{i=1}^{N}\) with replacement. \(\hat{\beta}^{*}\) is obtained from
\[\hat{\beta}^{*} =\arg\max_{b\in\mathcal{B}}\mathcal{L}_{N,\beta}^{K*}\left(b\right)\equiv\arg\max_{b\in\mathcal{B}}\sum_{i=1}^{N-1}\sum_{m>i}\sum_{d\in\mathcal{D}}\{\mathcal{K}_{h_{N}}\left(X_{im2}^{*},W_{im}^{*}\right)(Y_{md}^{*}-Y_{id}^{*})\mathrm{sgn}\left(X_{im1}^{*\prime}b\right)\cdot\left(-1\right)^{d_{1}} \tag{2.8}\] \[+\mathcal{K}_{h_{N}}\left(X_{im1}^{*},W_{im}^{*}\right)(Y_{md}^{*}-Y_{id}^{*})\mathrm{sgn}\left(X_{im2}^{*\prime}b\right)\cdot\left(-1\right)^{d_{2}}\}.\]
Similarly, \(\hat{\gamma}^{*}\) is obtained from
\[\hat{\gamma}^{*}=\arg\max_{r\in\mathcal{R}}\mathcal{L}_{N,\gamma}^{K*}(r,\hat {\beta}^{*})\equiv\arg\max_{r\in\mathcal{R}}\sum_{i=1}^{N-1}\sum_{m>i}\mathcal{ K}_{\sigma_{N}}(V_{im}^{*}(\hat{\beta}^{*}))Y_{im(1,1)}^{*}\mathrm{sgn} \left(W_{im}^{\star\prime}r\right). \tag{2.9}\]
We claim that
\[\sqrt{N}(\hat{\beta}^{*}-\hat{\beta})\overset{d}{\to}N(0,4\left\{\mathbb{E}[ \nabla^{2}\varrho_{i}(\beta)]\right\}^{-1}\mathbb{E}[\nabla\varrho_{i}(\beta) \nabla\varrho_{i}(\beta)^{\prime}]\left\{\mathbb{E}[\nabla^{2}\varrho_{i}( \beta)]\right\}^{-1})\text{ conditional on the sample},\]
and
\[\sqrt{N}\left(\hat{\gamma}^{*}-\hat{\gamma}\right)\overset{d}{\to}N\left(0, \mathbb{E}[\nabla^{2}\tau_{i}(\gamma)]^{-1}\mathbb{E}\left[\Delta_{i}\Delta_{ i}^{\prime}\right]\mathbb{E}[\nabla^{2}\tau_{i}(\gamma)]^{-1}\right)\text{ conditional on the sample}.\]
A complete proof of the consistency of this bootstrap procedure is lengthy and tedious. Since the arguments are relatively standard in the literature, we present only an outline of the proof in Appendix D.1.
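For readers who want to see the resampling mechanics, the following Python sketch illustrates the nonparametric bootstrap for the first-step estimator. It is a minimal sketch, not the paper's implementation: the inner sum over \(d\) is collapsed into a single net pairwise outcome `Y[i, m]`, a standard Gaussian kernel stands in for the higher-order kernel required by Assumption C8, and the unit-norm parameter space with \(k_{1}=2\) is searched by brute force over the unit circle. All function names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kernel(u):
    # Standard Gaussian kernel; illustrative stand-in for the
    # higher-order kernel required by Assumption C8.
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def prod_kernel(diff, h):
    # Product kernel K_h(x) = prod_l h^{-1} K(x_l / h) over matched covariates.
    return np.prod(gaussian_kernel(diff / h) / h, axis=-1)

def mrc_objective_beta(b, X1, X2, W, Y, h):
    # Toy version of the first-step objective: rank pairs (i, m) on X1'b
    # while kernel-matching on (X2, W).  Y[i, m] is a single net pairwise
    # outcome, a simplification of the sum over d in (2.6) and (2.8).
    N = X1.shape[0]
    total = 0.0
    for i in range(N - 1):
        for m in range(i + 1, N):
            w = prod_kernel(np.concatenate([X2[i] - X2[m], W[i] - W[m]]), h)
            total += w * Y[i, m] * np.sign((X1[i] - X1[m]) @ b)
    return total

def maximize_on_circle(obj, n_grid=200):
    # With k1 = 2 and ||b|| = 1 (Assumption C4), search the unit circle.
    angles = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    cand = np.column_stack([np.cos(angles), np.sin(angles)])
    return cand[int(np.argmax([obj(b) for b in cand]))]

def bootstrap_beta(X1, X2, W, Y, h, B=200):
    # Nonparametric bootstrap: resample agents with replacement and
    # re-maximize the same objective on each bootstrap sample.
    N = X1.shape[0]
    draws = []
    for _ in range(B):
        idx = rng.integers(0, N, size=N)
        Yb = Y[np.ix_(idx, idx)]
        draws.append(maximize_on_circle(
            lambda b: mrc_objective_beta(b, X1[idx], X2[idx], W[idx], Yb, h)))
    return np.array(draws)
```

Percentile intervals computed from the returned draws then give the bootstrap confidence intervals whose asymptotic validity is claimed above.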
### Testing \(\eta\)
Our identification requires that one of the regressors among \(W\) possesses a nonzero coefficient. A consequence is that we cannot test whether the two goods interact in the utility through the significance of \(\|\gamma\|\). This observation confirms the role of \(\eta:\) it determines the magnitude of the interaction effect of the two goods. If \(\eta\) degenerates to \(0\), the interaction effect is zero. In this section, we propose a testing procedure for whether \(\eta\) degenerates to \(0.\) We formulate the null hypothesis as
\[\mathbb{H}_{0}:\eta>0\text{ almost surely and }E\left(\eta\right)>0,\]
and the alternative hypothesis as
\[\mathbb{H}_{1}:\eta=0\text{ almost surely.}\]
Before the formal result, we explain the idea of the test to facilitate reading.
We define \(\hat{\mathcal{L}}_{N}^{K}\left(\hat{\gamma}\right)\) as
\[\hat{\mathcal{L}}_{N}^{K}\left(\hat{\gamma}\right)=\frac{1}{\sigma_{N}^{2}N \left(N-1\right)}\sum_{i\neq m}K_{\sigma_{N},\gamma}\left(V_{im}\left(\hat{ \beta}\right)\right)Y_{im\left(1,1\right)}\text{sgn}\left(W_{im}^{\prime}\hat{ \gamma}\right).\]
Suppose we know the value of \(\beta\). Define
\[\mathcal{L}_{N}^{K}\left(r\right)=\frac{1}{\sigma_{N}^{2}N\left(N-1\right)} \sum_{i\neq m}K_{\sigma_{N},\gamma}\left(V_{im}\left(\beta\right)\right)Y_{im \left(1,1\right)}\text{sgn}\left(W_{im}^{\prime}r\right).\]
Clearly, \(\mathcal{L}_{N}^{K}\left(r\right)\) is a second-order U-statistic, and
\[\mathcal{L}_{N}^{K}\left(r\right)\overset{P}{\rightarrow}\bar{\mathcal{L}} \left(r\right)\equiv f_{V_{im}\left(\beta\right)}\left(0\right)\mathbb{E} \left[Y_{im\left(1,1\right)}\text{sgn}\left(W_{im}^{\prime}r\right)|V_{im} \left(\beta\right)=0\right].\]
When \(\eta=0,\)
\[\bar{\mathcal{L}}\left(r\right)=0\text{ for any }r,\]
because \(Y_{im\left(1,1\right)}\) is independent of \(W_{im}\) conditional on \(V_{im}\left(\beta\right)=0,\) and \(\mathbb{E}\left(Y_{im\left(1,1\right)}|V_{im}\left(\beta\right)=0\right)=0\). On the contrary, if \(\eta>0\) almost surely and \(E\left(\eta\right)>0,\) then the sign of \(W_{im}^{\prime}\gamma\) agrees with the sign of \(P\left(Y_{i\left(1,1\right)}=1|V_{i}\left(\beta\right)=v,W_{i}\right)-P\left(Y_{ m\left(1,1\right)}=1|V_{m}\left(\beta\right)=v,W_{m}\right)\) on a set of nonzero probability measure. As a result,
\[\bar{\mathcal{L}}\left(\gamma\right)>0.\]
In view of the above result, testing \(\mathbb{H}_{0}\) is equivalent to testing \(\bar{\mathcal{L}}\left(\gamma\right)>0.\)
By some standard analysis for U-statistics, the limiting distribution of \(\mathcal{L}_{N}^{K}\left(\gamma\right)\) is
\[\sqrt{N}\left(\mathcal{L}_{N}^{K}\left(\gamma\right)-\bar{\mathcal{L}}\left( \gamma\right)\right)\overset{d}{\rightarrow}N\left(0,\Delta\right),\]
with
\[\Delta=\text{var}\left\{\mathbb{E}\left[Y_{im\left(1,1\right)}\text{sgn} \left(W_{im}^{\prime}\gamma\right)|Z_{i},V_{m}\left(\beta\right)=V_{i}\left( \beta\right)\right]\right\}.\]
We can then test whether \(\bar{\mathcal{L}}\left(\gamma\right)>0\) using the above limiting distribution.
The analysis for our case is complicated by the fact that we need to estimate \(\beta\) and \(\gamma.\) For the theorem below, we show that the plugged-in \(\hat{\beta}\) and \(\hat{\gamma}\) affect \(\hat{\mathcal{L}}_{N}^{K}\left(\hat{\gamma}\right)\) at the rates of \(N^{-1}\sigma_{N}^{-2}\) and \(N^{-1},\)
respectively, around the true values \(\left(\beta,\gamma\right)\). Those rates are much faster than \(N^{-1/2}\). Therefore, the asymptotics of \(\hat{\mathcal{L}}_{N}^{K}\left(\hat{\gamma}\right)\) is the same as that of \(\mathcal{L}_{N}^{K}\left(\gamma\right).\) The limiting distribution of \(\hat{\mathcal{L}}_{N}^{K}\left(\hat{\gamma}\right)\) under the null is presented in the following theorem. The proof is deferred to Appendix A.
**Theorem 2.3**.: _Suppose Assumptions C1-C7 hold. Then, under \(\mathbb{H}_{0},\)_
\[\sqrt{N}\left(\hat{\mathcal{L}}_{N}^{K}\left(\hat{\gamma}\right)-\bar{ \mathcal{L}}\left(\gamma\right)\right)\overset{d}{\to}N\left(0,\Delta\right).\]
To implement the test, we propose to employ the standard bootstrap procedure. Specifically, we draw i.i.d. samples \(\left(Z_{i}^{\ast},Y_{i}^{\ast}\right)\) with replacement \(B\) times (e.g., \(B=299\)), and we calculate
\[\hat{\mathcal{L}}_{N}^{K\ast}\left(\hat{\gamma}\right)=\frac{1}{\sigma_{N}^{2 }N\left(N-1\right)}\sum_{i\neq m}K_{\sigma_{N},\gamma}\left(V_{im}^{\ast} \left(\hat{\beta}\right)\right)Y_{im\left(1,1\right)}^{\ast}\mathrm{sgn} \left(W_{im}^{\ast\prime}\hat{\gamma}\right).\]
Note we do not need to re-compute \(\hat{\beta}\) and \(\hat{\gamma}\) on the bootstrap samples. We construct the \(95\%\) confidence interval for \(\bar{\mathcal{L}}\left(\gamma\right)\) as the one-sided \(\left[Q_{0.05}\left(\hat{\mathcal{L}}_{N}^{K\ast}\left(\hat{\gamma}\right) \right),+\infty\right),\) where \(Q_{0.05}\left(\cdot\right)\) denotes the \(0.05\) quantile of \(\hat{\mathcal{L}}_{N}^{K\ast}\left(\hat{\gamma}\right)\). Again, for the same reason as above, the plugged-in \(\hat{\beta}\) and \(\hat{\gamma}\) have no impact on the asymptotics of \(\hat{\mathcal{L}}_{N}^{K\ast}\left(\hat{\gamma}\right).\) The validity of the bootstrap for U-statistics has been established in the literature; see, for example, Arcones and Gine (1992). Since the analysis is standard, we omit it to save space.
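As an illustration of this testing procedure, here is a minimal Python sketch. It assumes that \(V_{i}(\hat{\beta})\) stacks the two fitted indices \((X_{i1}^{\prime}\hat{\beta},X_{i2}^{\prime}\hat{\beta})\) and that \(Y_{im(1,1)}\) is the pairwise difference \(Y_{i(1,1)}-Y_{m(1,1)}\); these readings, the Gaussian kernel, and all function names are our assumptions for concreteness.

```python
import numpy as np

rng = np.random.default_rng(1)

def test_stat(V, Y11, Wg, sigma):
    # V: (N, 2) fitted indices (X_i1'beta_hat, X_i2'beta_hat);
    # Y11: indicators Y_{i(1,1)}; Wg: fitted indices W_i'gamma_hat.
    N = V.shape[0]
    Vd = (V[:, None, :] - V[None, :, :]) / sigma           # V_im / sigma
    K = np.prod(np.exp(-0.5 * Vd**2) / np.sqrt(2.0 * np.pi), axis=-1)
    Yd = Y11[:, None] - Y11[None, :]                       # Y_{im(1,1)}
    S = np.sign(Wg[:, None] - Wg[None, :])                 # sgn(W_im'gamma_hat)
    M = K * Yd * S
    np.fill_diagonal(M, 0.0)                               # sum over i != m
    return M.sum() / (sigma**2 * N * (N - 1))

def one_sided_ci_lower(V, Y11, Wg, sigma, B=299):
    # Standard bootstrap: resample agents; beta_hat and gamma_hat are kept
    # fixed across draws (their effect is asymptotically negligible).
    N = V.shape[0]
    stats = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, N, size=N)
        stats[b] = test_stat(V[idx], Y11[idx], Wg[idx], sigma)
    return np.quantile(stats, 0.05)  # 95% one-sided CI is [lower, +inf)
```

An interval \([Q_{0.05},+\infty)\) that lies strictly above \(0\) is then evidence against the degenerate case \(\eta=0\).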
### Identification of the Case with 3 Alternatives
We discuss the identification of the case with 3 alternatives. The case with more than 3 alternatives can be handled similarly. The regularity conditions required for identification are straightforward extensions of those provided for the 2-alternatives case; we omit them for conciseness and present only the identification strategy.
The choice set now becomes \(\mathcal{J}=\left\{0,1,2,3,(1,2),(1,3),(2,3),(1,2,3)\right\}\) or, equivalently, \(\mathcal{D}=\left\{d=(d_{1},d_{2},d_{3})\in\left\{0,1\right\}^{3}\right\}\), the set of all possible combinations of the three alternatives.
An agent chooses \(d\) that maximizes the latent utility
\[U_{d}= \sum_{j=1}^{3}F_{j}(X_{j}^{\prime}\beta,\epsilon_{j})\cdot d_{j}+ F_{110}\left(\eta_{110}\cdot(W_{1}^{\prime}\gamma_{1})\right)\cdot d_{1} \cdot d_{2}+F_{101}\left(\eta_{101}\cdot(W_{2}^{\prime}\gamma_{2})\right) \cdot d_{1}\cdot d_{3}\] \[+F_{011}\left(\eta_{011}\cdot(W_{3}^{\prime}\gamma_{3})\right)\cdot d_{2 }\cdot d_{3}+F_{111}\left(\eta_{111}\cdot(W_{4}^{\prime}\gamma_{4}) \right)\cdot d_{1}\cdot d_{2}\cdot d_{3}\]
where \(X_{1},X_{2},X_{3}\in\mathbb{R}^{k_{1}},\ W_{1}\in\mathbb{R}^{k_{2}},W_{2}\in \mathbb{R}^{k_{3}},W_{3}\in\mathbb{R}^{k_{4}},W_{4}\in\mathbb{R}^{k_{5}}\), \(\eta\equiv(\eta_{110},\eta_{101},\eta_{011},\eta_{111})^{\prime}\in\mathbb{R }_{+}^{4}\) are the bundle-specific unobserved heterogeneities, and all \(F\)s are strictly increasing in all arguments. For simplicity, we assume that \(X_{1},X_{2},X_{3},W_{1},W_{2},W_{3}\), and \(W_{4}\) have no common covariates. Let \(Z\equiv(X_{1}^{\prime},X_{2}^{\prime},X_{3}^{\prime},W_{1}^{\prime},W_{2}^{\prime},W_{3}^{\prime},W_{4}^{\prime})^{\prime}\). We note that the key assumption for identification is \((\epsilon_{1},\epsilon_{2},\epsilon_{3},\eta)\perp Z,\) and we impose it throughout this section.
The observed dependent variable \(Y_{d}\) takes the form
\[Y_{d}=1[U_{d}>U_{d^{\prime}},\forall d^{\prime}\in\mathcal{D}\setminus d].\]
Similar to the 2-alternatives case, by \((\epsilon_{1},\epsilon_{2},\epsilon_{3},\eta)\perp Z,\)
\[P(Y_{(1,d_{2},d_{3})}=1|Z,X_{2}=x_{2},X_{3}=x_{3},W_{1}=w_{1},W_{2}=w_{2},W_{3 }=w_{3},W_{4}=w_{4})\]
is increasing in \(X_{1}^{\prime}\beta\) for any constant vectors \(\left(x_{2}^{\prime},x_{3}^{\prime},w_{1}^{\prime},w_{2}^{\prime},w_{3}^{\prime },w_{4}^{\prime}\right)^{\prime}\) and any \(\left(d_{2},d_{3}\right)^{\prime}\). Let \((Y_{i},Z_{i})\) and \((Y_{m},Z_{m})\) be two independent copies of \((Y,Z)\) with
\[Y\equiv(Y_{(0,0,0)},Y_{(1,0,0)},Y_{(0,1,0)},Y_{(0,0,1)},Y_{(1,1,0)},Y_{(1,0,1)},Y_{(0,1,1)},Y_{(1,1,1)})^{\prime}.\]
Then we have the following (conditional) moment inequalities for all \(d_{2}\) and \(d_{3}\):
\[\left\{X_{i1}^{\prime}\beta\geq X_{m1}^{\prime}\beta\right\} \tag{2.10}\] \[\Leftrightarrow\left\{P(Y_{i(1,d_{2},d_{3})}=1|Z_{i},X_{i2}=x_{2},X_{i3}=x_{3},W_{i1}=w_{1},W_{i2}=w_{2},W_{i3}=w_{3},W_{i4}=w_{4})\right.\] \[\geq\left.P(Y_{m(1,d_{2},d_{3})}=1|Z_{m},X_{m2}=x_{2},X_{m3}=x_{3},W_{m1}=w_{1},W_{m2}=w_{2},W_{m3}=w_{3},W_{m4}=w_{4})\right\}.\]
Similar moment inequalities can be obtained for \(X_{ij}^{\prime}\beta\geq X_{mj}^{\prime}\beta\) by fixing \(\left\{X_{1},X_{2},X_{3}\right\}\setminus\left\{X_{j}\right\}\) and \(W_{1},W_{2},W_{3},W_{4}\). Collectively, these moment inequalities establish the identification of \(\beta\), with some standard technical conditions as for the 2-alternatives case.
We can identify the \(\gamma\)s by fixing \(X_{j}^{\prime}\beta,\)\(j=1,2,3,\) once \(\beta\) is identified, or simply by fixing \(X_{1},X_{2},X_{3}\). We present the former strategy. For \(\gamma_{1},\) we fix \((X_{1}^{\prime}\beta,X_{2}^{\prime}\beta,X_{3}^{\prime}\beta)\) at some constant vector \((v_{1},v_{2},v_{3})^{\prime},\) and fix \(W_{2},W_{3},W_{4}.\) Then by \((\epsilon_{1},\epsilon_{2},\epsilon_{3},\eta)\perp Z,\)
\[P(Y_{(1,1,d_{3})}=1|Z,X_{1}^{\prime}\beta=v_{1},X_{2}^{\prime}\beta=v_{2},X_{3 }^{\prime}\beta=v_{3},W_{2}=w_{2},W_{3}=w_{3},W_{4}=w_{4})\]
is increasing in \(W_{1}^{\prime}\gamma_{1}\) for any \(d_{3}.\) As a result, for any \(d_{3},\)\((v_{1},v_{2},v_{3})^{\prime},\)\(w_{2},w_{3},\) and \(w_{4},\)
\[\left\{W_{i1}^{\prime}\gamma_{1}\geq W_{m1}^{\prime}\gamma_{1}\right\} \tag{2.11}\] \[\Leftrightarrow\left\{P(Y_{i(1,1,d_{3})}=1|Z_{i},X_{i1}^{\prime} \beta=v_{1},X_{i2}^{\prime}\beta=v_{2},X_{i3}^{\prime}\beta=v_{3},W_{i2}=w_{2 },W_{i3}=w_{3},W_{i4}=w_{4})\right.\] \[\geq\left.P(Y_{m(1,1,d_{3})}=1|Z_{m},X_{m1}^{\prime}\beta=v_{1},X_{ m2}^{\prime}\beta=v_{2},X_{m3}^{\prime}\beta=v_{3},W_{m2}=w_{2},W_{m3}=w_{3},W_{m4}=w_ {4})\right\}.\]
We can similarly identify \(\gamma_{2}\) and \(\gamma_{3}\).
To identify \(\gamma_{4}\), we can fix either \(X_{1},X_{2},X_{3},W_{1},W_{2},W_{3}\) or \((X_{1}^{\prime}\beta,X_{2}^{\prime}\beta,X_{3}^{\prime}\beta,W_{1}^{\prime} \gamma_{1},W_{2}^{\prime}\gamma_{2},W_{3}^{\prime}\gamma_{3})^{\prime}\)
once \(\beta\), \(\gamma_{1},\gamma_{2}\), and \(\gamma_{3}\) are identified. Suppose we choose the latter and we fix \((X^{\prime}_{1}\beta,X^{\prime}_{2}\beta,X^{\prime}_{3}\beta,W^{\prime}_{1} \gamma_{1},W^{\prime}_{2}\gamma_{2},W^{\prime}_{3}\gamma_{3})^{\prime}\) at \((v_{1},v_{2},v_{3},v_{4},v_{5},v_{6})^{\prime}\). Then \((\epsilon_{1},\epsilon_{2},\epsilon_{3},\eta)\perp Z\) implies that
\[\Pr\left(Y_{(1,1,1)}=1|Z,X^{\prime}_{1}\beta=v_{1},X^{\prime}_{2}\beta=v_{2},X ^{\prime}_{3}\beta=v_{3},W^{\prime}_{1}\gamma_{1}=v_{4},W^{\prime}_{2}\gamma_ {2}=v_{5},W^{\prime}_{3}\gamma_{3}=v_{6}\right)\]
is increasing in \(W^{\prime}_{4}\gamma_{4}\) for any fixed \((v_{1},v_{2},v_{3},v_{4},v_{5},v_{6})^{\prime}\). Thus, the following inequality holds for any \((v_{1},v_{2},v_{3},v_{4},v_{5},v_{6})^{\prime}\),
\[\left\{W^{\prime}_{i4}\gamma_{4}\geq W^{\prime}_{m4}\gamma_{4}\right\} \tag{2.12}\] \[\Leftrightarrow\left\{P(Y_{i(1,1,1)}=1|Z_{i},X^{\prime}_{i1}\beta =v_{1},X^{\prime}_{i2}\beta=v_{2},X^{\prime}_{i3}\beta=v_{3},W^{\prime}_{i1} \gamma_{1}=v_{4},W^{\prime}_{i2}\gamma_{2}=v_{5},W^{\prime}_{i3}\gamma_{3}=v_ {6})\right.\] \[\geq\left.P(Y_{m(1,1,1)}=1|Z_{m},X^{\prime}_{m1}\beta=v_{1},X^{ \prime}_{m2}\beta=v_{2},X^{\prime}_{m3}\beta=v_{3},W^{\prime}_{m1}\gamma_{1}=v _{4},W^{\prime}_{m2}\gamma_{2}=v_{5},W^{\prime}_{m3}\gamma_{3}=v_{6})\right\}.\]
The inequalities established in equations (2.10) - (2.12) can be used to construct estimators similarly as before.
### Identification with Common Regressors
To identify the parameters for one alternative, we need to fix the covariates of the other alternatives at certain values; see, e.g., equation (2.4). A drawback of this strategy is that we do not allow common regressors across alternatives. We believe this is the price to pay for not imposing any distributional assumptions on the error terms and allowing arbitrary correlations among them. Due to this limitation, we do not have a general identification result in the presence of common regressors. Below we provide some (partial) identification results.
We first discuss the case (denoted as "Case 1") when there exist some common regressors between \(X_{1}\) and \(X_{2}\), but no common regressors exist between \(X\equiv(X^{\prime}_{1},X^{\prime}_{2})^{\prime}\) and \(W\).
**Case 1 model setup:** Suppose the model contains both alternative-specific and agent-specific regressors; for example, we rewrite model (2.1) as
\[U_{d}=\sum_{j=1}^{2}F_{j}(X^{\prime}_{j}\beta+S^{\prime}\theta_{j},\epsilon_{j })\cdot d_{j}+F_{b}\left(\eta\cdot(W^{\prime}\gamma)\right)\cdot d_{1}\cdot d _{2}\]
with \(S\) collecting all agent-specific regressors (e.g., gender).
**Case 1 result 1:** Identification of \(\beta\) and \(\theta\), assuming \(\theta=\theta_{1}=\theta_{2}\).
We can identify \(\beta\) via
\[x_{1}^{\prime}\beta\geq\tilde{x}_{1}^{\prime}\beta\Leftrightarrow\] \[P(Y_{(1,d_{2})}=1|X_{1}=x_{1},X_{2}=x_{2},S=s,W=w)\geq P(Y_{(1,d _{2})}=1|X_{1}=\tilde{x}_{1},X_{2}=x_{2},S=s,W=w),\]
for any \(d_{2},x_{1},x_{2},s,\) and \(w.\) In the special case where \(\theta_{1}=\theta_{2}=\theta,\)\(\theta\) can be point identified in an additional step after \(\beta\) is identified. This can be achieved by
\[s^{\prime}\theta \geq \tilde{s}^{\prime}\theta\Leftrightarrow\] \[P(Y_{(1,1)} = 1|Z,X_{1}^{\prime}\beta=v_{1},X_{2}^{\prime}\beta=v_{2},S=s,W=w )\geq P(Y_{(1,1)}=1|Z,X_{1}^{\prime}\beta=v_{1},X_{2}^{\prime}\beta=v_{2},S= \tilde{s},W=w)\]
for any \(v_{1},v_{2},\) and \(w.\)
**Case 1 result 2:** Identification of \(\gamma.\)
We can identify \(\gamma\) without complications. We establish the following moment inequality for \(\gamma\) by matching \(X\) and \(S\):
\[w^{\prime}\gamma \geq \tilde{w}^{\prime}\gamma\Leftrightarrow\] \[P(Y_{(1,1)} = 1|X_{1}=x_{1},X_{2}=x_{2},S=s,W=w)\geq P(Y_{(1,1)}=1|X_{1}=x_{1},X_{2}=x_{2},S=s,W=\tilde{w}),\]
for any \(x_{1},x_{2},\) and \(s.\)
**Case 1 result 3:** Partial identification of the coefficients \(\theta_{1}\) and \(\theta_{2}.\)
We continue the discussion in "Result 1", now assuming that \(\theta_{1}\neq\theta_{2}\). Point identification of \(\theta_{1}\) and \(\theta_{2}\) is generally not possible without further restrictions. "Result 1" implies
\[\{s^{\prime}\theta_{1} \geq \tilde{s}^{\prime}\theta_{1},s^{\prime}\theta_{2}\geq\tilde{s}^{ \prime}\theta_{2}\}\Rightarrow\] \[P(Y_{(1,1)} = 1|Z,X_{1}^{\prime}\beta=v_{1},X_{2}^{\prime}\beta=v_{2},S=s,W=w) \geq P(Y_{(1,1)}=1|Z,X_{1}^{\prime}\beta=v_{1},X_{2}^{\prime}\beta=v_{2},S= \tilde{s},W=w),\]
for any \(v_{1},v_{2},\) and \(w.\) Using this type of relationship, Gao and Li (2020) studied the set identification in multinomial response models. We may follow their lead and construct identification sets for \(\theta_{1}\) and \(\theta_{2}.\)
The case where some regressors are common between \(X_{1}\) and \(W,\) but none between \((X_{1}^{\prime},W^{\prime})^{\prime}\) and \(X_{2},\) lacks a clear economic interpretation, so we do not discuss it. We turn to the case (denoted as "Case 2") where agent-specific regressors also enter the equation for the bundle.
**Case 2 model setup:** Suppose the utility is written as
\[U_{d}=F_{1}(X_{1}^{\prime}\beta+S^{\prime}\theta_{1},\epsilon_{1})\cdot d_{1}+F_{ 2}(X_{2}^{\prime}\beta+S^{\prime}\theta_{2},\epsilon_{2})\cdot d_{2}+F_{b}\left( \eta\cdot(W^{\prime}\gamma+S^{\prime}\theta_{b})\right)\cdot d_{1}\cdot d_{2}.\]
**Case 2 result 1:** We can identify \(\beta\) and \(\gamma\) following the same logic in "Case 1 results 1 and 2".
**Case 2 result 2:** We can identify \(\theta\) in the special case of \(\theta_{1}=\theta_{2}=\theta_{b}=\theta\) using the same reason as in "Case 1 result 1".
**Case 2 result 3:** For the general situation when \(\theta_{1}\neq\theta_{2}\neq\theta_{b}\), we cannot point identify them without more conditions. Similar to "Case 1 result 3", we can partially identify them by using
\[\{s^{\prime}\theta_{1} \geq \tilde{s}^{\prime}\theta_{1},s^{\prime}\theta_{2}\geq\tilde{s}^{ \prime}\theta_{2},s^{\prime}\theta_{b}\geq\tilde{s}^{\prime}\theta_{b}\}\Rightarrow\] \[P(Y_{(1,1)} = 1|Z,X_{1}^{\prime}\beta=v_{1},X_{2}^{\prime}\beta=v_{2},S=s,W=w )\geq P(Y_{(1,1)}=1|Z,X_{1}^{\prime}\beta=v_{1},X_{2}^{\prime}\beta=v_{2},S= \tilde{s},W=w),\]
for any \(v_{1},v_{2}\), and \(w\).
## 3 Panel Data Model
The structure of this section is similar to that of Section 2. Section 3.1 shows the identification and the limiting distribution of the estimator. Section 3.2 provides the details and shows the validity of the numerical bootstrap inference procedure. Testing the interaction effects of choices and the identification of the case with three choices are presented in Sections 3.3 and 3.4, respectively.
### Identification and Localized MS Estimation
The increasing availability of panel data sets provides new opportunities for the econometrician to control for unobserved heterogeneity across agents. It also allows us to relax the strict exogeneity restriction imposed in the cross-sectional model. In this section we apply the matching-based identification strategy presented in Section 2.1 to a panel data bundle choice model. The latent utilities and observed choices are expressed as follows:
\[U_{dt}=\sum_{j=1}^{2}F_{j}(X_{jt}^{\prime}\beta,\alpha_{j},\epsilon_{jt})\cdot d _{j}+F_{b}\left(\eta_{t}\cdot\left(W_{t}^{\prime}\gamma+\alpha_{b}\right) \right)\cdot d_{1}\cdot d_{2}, \tag{3.1}\]
and
\[Y_{dt}=1[U_{dt}>U_{d^{\prime}t},\forall d^{\prime}\in\mathcal{D}\setminus d], \tag{3.2}\]
where we use \(t=1,...,T\) to denote time periods and suppress the agent subscript \(i\) to simplify notation. Again, we assume that all \(F\)s are strictly increasing in their arguments. To our knowledge, this is the first work studying bundle choice models in panel data settings. The specification considered here is a natural extension of the cross-sectional model discussed in Section 2. Similar extensions are common in various discrete choice models; see, e.g., Manski (1987) and Shi et al. (2018). Note that the random utility specified by expression (3.1) includes a set of unobserved (to the econometrician) agent-specific fixed effects \(\alpha\equiv(\alpha_{1},\alpha_{2},\alpha_{b})^{\prime}\) associated with the two stand-alone alternatives and the bundle, respectively. Denote \(Z_{t}=(X^{\prime}_{1t},X^{\prime}_{2t},W^{\prime}_{t})^{\prime}\) and \(\xi_{t}=(\epsilon_{1t},\epsilon_{2t},\eta_{t})^{\prime}\). In line with the literature on fixed effects methods, we place no restrictions on the distribution of \(\alpha\) conditional on \(Z^{T}\equiv(Z^{\prime}_{1},...,Z^{\prime}_{T})^{\prime}\) and \(\xi^{T}\equiv(\xi^{\prime}_{1},...,\xi^{\prime}_{T})^{\prime}\).
Here we consider the identification and estimation of model (3.1)-(3.2) with \(T<\infty\) and \(N\rightarrow\infty\) (i.e., short panel). For any \(T\geq 2\), our identification strategy relies on a similar group homogeneity condition as that adopted by Manski (1987), Pakes and Porter (2016), and Shi et al. (2018) for binary and multinomial choice models. Specifically, we assume \(\xi_{s}\stackrel{{ d}}{{=}}\xi_{t}|(\alpha,Z_{s},Z_{t})\) for any two time periods \(s\) and \(t\).
This restriction is much weaker than the strict exogeneity condition needed for the cross-sectional model, yet in a panel data setting it suffices to establish the following moment inequalities in the presence of the fixed effects \(\alpha\): for all \(d\) with \(d_{1}=1\) and any fixed \((x^{\prime}_{2},w^{\prime},c)^{\prime}\),
\[x^{\prime}_{1}\beta\geq\tilde{x}^{\prime}_{1}\beta\Leftrightarrow \tag{3.3}\] \[P(Y_{(1,d_{2})}=1|X_{1}=x_{1},X_{2}=x_{2},W=w,\alpha=c)\geq P(Y_ {(1,d_{2})}=1|X_{1}=\tilde{x}_{1},X_{2}=x_{2},W=w,\alpha=c).\]
For a similar reason, for all \(d\) with \(d_{1}=0\) and any fixed \((x^{\prime}_{2},w^{\prime},c)^{\prime}\), \(x^{\prime}_{1}\beta\geq\tilde{x}^{\prime}_{1}\beta\) is equivalent to the second line of (3.3) with "\(\geq\)" replaced by "\(\leq\)".
For \(d=(1,1)\) and any fixed \((x^{\prime}_{1},x^{\prime}_{2},c)^{\prime}\),
\[w^{\prime}\gamma\geq\tilde{w}^{\prime}\gamma\Leftrightarrow \tag{3.4}\] \[P(Y_{(1,1)}=1|X_{1}=x_{1},X_{2}=x_{2},W=w,\alpha=c)\geq P(Y_{(1,1 )}=1|X_{1}=x_{1},X_{2}=x_{2},W=\tilde{w},\alpha=c).\]
For a similar reason, for all \(d\neq(1,1)\) and any fixed \((x^{\prime}_{1},x^{\prime}_{2},c)^{\prime}\), \(w^{\prime}\gamma\geq\tilde{w}^{\prime}\gamma\) is equivalent to the second line of (3.4) with "\(\geq\)" replaced by "\(\leq\)".
The moment inequalities (3.3)-(3.4)4 are derived from the monotonicity of agents' choices and are analogous to (2.4)-(2.5) for the cross-sectional model. The main difference is that, in the presence of the fixed effects, we match and make our comparisons within agents over time,
as opposed to pairs of agents. These moment inequalities are the foundation of our identification result for model (3.1)-(3.2).
We impose the following conditions for identification. For notational convenience, we let \(X_{jts}\equiv X_{jt}-X_{js}\) and \(W_{ts}\equiv W_{t}-W_{s}\) for all \(j=1,2\) and \((s,t)\).
* (i) \(\{(Y_{i}^{T},Z_{i}^{T})\}_{i=1}^{N}\) are i.i.d, where \(Y_{i}^{T}\equiv\{(Y_{i(0,0)t},Y_{i(1,0)t},Y_{i(0,1)t},Y_{i(1,1)t})\}\),(ii) for almost all \((\alpha,Z^{T})\), \(\xi_{t}\overset{d}{=}\xi_{s}|\left(\alpha,Z_{t},Z_{s}\right)\), and (iii) the distribution of \(\xi_{t}\) conditional on \(\alpha\) is absolutely continuous w.r.t. the Lebesgue measure on \(\mathbb{R}^{2}\times\mathbb{R}_{+}\) for all \(t\).
* (\(\beta^{\prime},\gamma^{\prime})^{\prime}\in\mathcal{B}\times\mathcal{R}\), where \(\mathcal{B}=\{b\in\mathbb{R}^{k_{1}}|\left\|b\right\|=1,b^{(1)}\neq 0\}\) and \(\mathcal{R}=\{r\in\mathbb{R}^{k_{2}}|\left\|r\right\|=1,r^{(1)}\neq 0\}\).
* (i) \(X_{1ts}^{(1)}\) (\(X_{2ts}^{(1)}\)) has a.e. positive Lebesgue density on \(\mathbb{R}\) conditional on \(\tilde{X}_{1ts}\) (\(\tilde{X}_{2ts}\)) and conditional on \((X_{2ts}^{\prime},W_{ts}^{\prime})^{\prime}\) (\((X_{1ts}^{\prime},W_{ts}^{\prime})^{\prime}\)) in a neighborhood of zero, (ii) the support of \(X_{1ts}\) (\(X_{2ts}\)), conditional on \((X_{2ts}^{\prime},W_{ts}^{\prime})^{\prime}\) (\((X_{1ts}^{\prime},W_{ts}^{\prime})^{\prime}\)) in a neighborhood of zero, is not contained in any proper linear subspace of \(\mathbb{R}^{k_{1}}\), (iii) \(W_{ts}^{(1)}\) has a.e. positive Lebesgue density on \(\mathbb{R}\) conditional on \(\tilde{W}_{ts}\) and conditional on \((X_{1ts},X_{2ts})\) in a neighborhood of zero, and (iv) the support of \(W_{ts}\), conditional on \((X_{1ts},X_{2ts})\) in a neighborhood of zero, is not contained in any proper linear subspace of \(\mathbb{R}^{k_{2}}\).
* \(F_{j}\left(\cdot,\cdot,\cdot\right),\,j=1,2\), and \(F_{b}\left(\cdot\right)\) are strictly increasing in their arguments.
Assumptions P1-P3 are analogous to the identification conditions used by Manski (1987). Note that Assumption P1 allows for arbitrary correlation between the fixed effects and the observed covariates, provided that the correlation is time stationary. This assumption substantially relaxes the strict exogeneity condition in the cross-sectional case, but the price to pay is a slower convergence rate, as shown in Theorem 3.2 below. A technical explanation for the slower convergence rate can be found in Appendix E.
We summarize our identification results in the following theorem. The proof is omitted for brevity because it is very similar to the one for the cross-sectional model.
**Theorem 3.1**.: _Suppose Assumptions P1-P4 hold. Then \(\beta\) and \(\gamma\) are identified._
The monotonic relations (3.3)-(3.4) motivate the following two-step localized MS estimation procedure. Given a random sample of \(N\) agents \(i=1,...,N\), we obtain the estimator \(\hat{\beta}\) of \(\beta\) with the objective function
\[\mathcal{L}_{N,\beta}^{P,K}(b)=\sum_{i=1}^{N}\sum_{t>s}\sum_{d\in \mathcal{D}} \{\mathcal{K}_{h_{N}}(X_{i2ts},W_{its})(Y_{ids}-Y_{idt})\text{sgn}(X _{i1ts}^{\prime}b)\cdot(-1)^{d_{1}}\] \[+\mathcal{K}_{h_{N}}(X_{i1ts},W_{its})(Y_{ids}-Y_{idt})\text{sgn}( X_{i2ts}^{\prime}b)\cdot(-1)^{d_{2}}\}, \tag{3.5}\]
where \(\mathcal{K}_{h_{N}}(X_{ijts},W_{its})\equiv\prod_{l=1}^{k_{1}}h_{N}^{-1}K\left(X_{ ijts,l}/\,h_{N}\right)\prod_{l=1}^{k_{2}}h_{N}^{-1}K\left(W_{its,l}/\,h_{N}\right)\) for \(j=1,2\), \(K\left(\cdot\right)\) is a kernel density function, and \(h_{N}\) is a bandwidth sequence that converges to \(0\) as \(N\rightarrow\infty\). Similarly, we compute the estimator \(\hat{\gamma}\) of \(\gamma\) using the objective function
\[\mathcal{L}_{N,\gamma}^{P,K}(r)=\sum_{i=1}^{N}\sum_{t>s}\mathcal{K}_{\sigma_{N }}(X_{i1ts},X_{i2ts})(Y_{i(1,1)t}-Y_{i(1,1)s})\text{sgn}(W_{its}^{\prime}r), \tag{3.6}\]
where \(\mathcal{K}_{\sigma_{N}}(X_{i1ts},X_{i2ts})\equiv\prod_{l=1}^{k_{1}}\sigma_{N }^{-1}K\left(X_{i1ts,l}/\,\sigma_{N}\right)\prod_{l=1}^{k_{1}}\sigma_{N}^{-1}K \left(X_{i2ts,l}/\,\sigma_{N}\right)\), and \(\sigma_{N}\) is a bandwidth sequence that converges to \(0\) as \(N\rightarrow\infty\). Note that objective functions (3.5) and (3.6) take value \(0\) for observations whose choice is time invariant; that is, our approach uses only data on "switchers", in a way similar to the estimator in Manski (1987).
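To fix ideas, the following Python sketch evaluates the second-step objective (3.6) for \(T=2\). The array layout, the Epanechnikov kernel (the compactly supported kernel we also use in Section 4), and the function names are our illustrative choices.

```python
import numpy as np

def epanechnikov(u):
    # Compactly supported kernel of bounded variation (cf. Assumption P7).
    return 0.75 * (1.0 - u**2) * (np.abs(u) <= 1.0)

def panel_gamma_objective(r, X1, X2, W, Y11, sigma):
    # Sketch of (3.6) with T = 2.  Shapes: X1, X2 are (N, 2, k1);
    # W is (N, 2, k2); Y11 is (N, 2) holding Y_{i(1,1)t}.
    dX1 = X1[:, 1] - X1[:, 0]        # X_{i1ts}
    dX2 = X2[:, 1] - X2[:, 0]        # X_{i2ts}
    dW = W[:, 1] - W[:, 0]           # W_{its}
    dY = Y11[:, 1] - Y11[:, 0]       # Y_{i(1,1)t} - Y_{i(1,1)s}
    k = np.prod(epanechnikov(dX1 / sigma) / sigma, axis=1) \
      * np.prod(epanechnikov(dX2 / sigma) / sigma, axis=1)
    # Non-switchers have dY = 0 and contribute nothing, as noted above.
    return np.sum(k * dY * np.sign(dW @ r))
```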
Unlike the cross-section case, here we choose not to estimate \(\gamma\) by matching \(X_{jt}^{\prime}\hat{\beta}\) and \(X_{js}^{\prime}\hat{\beta}\), \(j=1,2\). The reason can be found in Appendix E.
To establish the asymptotic properties of \(\hat{\beta}\) and \(\hat{\gamma}\), we need the following conditions in addition to Assumptions P1-P4:
* Let \(f_{X_{jt},W_{t}}(\cdot)\) denote the density of the random vector \((X_{jt}^{\prime},W_{t}^{\prime})^{\prime}\) for \(j=1,2\), and \(f_{X_{1ts},X_{2ts}}(\cdot)\) denote the density of the random vector \((X_{1ts},X_{2ts})\). The \(f_{X_{jt},W_{t}}(\cdot)\)'s and \(f_{X_{1ts},X_{2ts}}(\cdot)\) are absolutely continuous, bounded from above on their supports, strictly positive in a neighborhood of zero, and twice continuously differentiable a.e. with bounded derivatives.
* For all \(d\in\mathcal{D}\), \(\mathbb{E}[Y_{dts}\text{sgn}\left(X_{1ts}^{\prime}b\right)|X_{2ts},W_{ts}]\) and \(\mathbb{E}[Y_{dts}\text{sgn}\left(X_{2ts}^{\prime}b\right)|X_{1ts},W_{ts}]\) are twice continuously differentiable w.r.t. \(b\) a.e. with bounded derivatives, and \(\mathbb{E}[Y_{dts}\text{sgn}\left(W_{ts}^{\prime}r\right)|X_{1ts},X_{2ts}]\) is twice continuously differentiable w.r.t. \(r\) a.e. with bounded derivatives.
* \(K(\cdot)\) is a function of bounded variation and has a compact support. It satisfies: (i) \(K(v)\geq 0\) and \(\sup_{v}|K(v)|<\infty\), (ii) \(\int K(v)\text{d}v=1\), (iii) \(\int vK(v)\text{d}v=0\), (iv) \(\int v^{2}K(v)\text{d}v<\infty\) and (v) \(K(\cdot)\) is twice continuously differentiable with bounded first and second derivatives.
* \((\hat{\beta},\hat{\gamma})\) satisfies \[N^{-1}\mathcal{L}_{N,\beta}^{P,K}(\hat{\beta})\geq\max_{b\in\mathcal{B}}N^{-1 }\mathcal{L}_{N,\beta}^{P,K}(b)-o_{P}((Nh_{N}^{k_{1}+k_{2}})^{-2/3})\] and \[N^{-1}\mathcal{L}_{N,\gamma}^{P,K}(\hat{\gamma})\geq\max_{r\in\mathcal{R}}N^{- 1}\mathcal{L}_{N,\gamma}^{P,K}(r)-o_{P}((N\sigma_{N}^{2k_{1}})^{-2/3}).\]
* \(h_{N}\) and \(\sigma_{N}\) are sequences of positive numbers such that as \(N\rightarrow\infty\): (i) \(h_{N}\to 0\) and \(\sigma_{N}\to 0\), (ii) \(Nh_{N}^{k_{1}+k_{2}}\rightarrow\infty\) and \(N\sigma_{N}^{2k_{1}}\rightarrow\infty\), and (iii) \((Nh_{N}^{k_{1}+k_{2}})^{2/3}h_{N}^{2}\to 0\) and \((N\sigma_{N}^{2k_{1}})^{2/3}\sigma_{N}^{2}\to 0\).
The boundedness and smoothness restrictions in Assumptions P5-P7 are regularity conditions needed for proving the uniform convergence of the objective functions to their population analogues. Assumption P8 is a standard technical condition. Assumption P9 is also standard; P9(iii) ensures that the bias term from the kernel estimation is asymptotically negligible. Note that Assumption P6 implicitly assumes that the second moments of the \(X_{jts}\)'s and \(W_{ts}\) exist.
For the asymptotic distribution, we focus on the case where all regressors are continuous, and introduce the following notation to ease the exposition. Let
\[\phi_{Ni}\left(b\right) \equiv\sum_{t>s}\sum_{d\in\mathcal{D}}\{\mathcal{K}_{h_{N}}(X_{ i2ts},W_{its})Y_{idst}\left(-1\right)^{d_{1}}\left(1[X_{i1ts}^{\prime}b>0]-1[X_{ i1ts}^{\prime}\beta>0]\right)\] \[+\mathcal{K}_{h_{N}}(X_{i1ts},W_{its})Y_{idst}\left(-1\right)^{d_{ 2}}\left(1[X_{i2ts}^{\prime}b>0]-1[X_{i2ts}^{\prime}\beta>0]\right)\}\]
and
\[\varphi_{Ni}\left(r\right)\equiv\sum_{t>s}\mathcal{K}_{\sigma_{N}}(X_{i1ts},X _{i2ts})Y_{i(1,1)ts}(1[W_{its}^{\prime}r>0]-1[W_{its}^{\prime}\gamma>0]).\]
Note that \(\hat{\beta}\) and \(\hat{\gamma}\) can be equivalently obtained from
\[\hat{\beta}=\arg\max_{b\in\mathcal{B}}N^{-1}\sum_{i=1}^{N}\phi_{Ni}\left(b \right)\text{ and }\hat{\gamma}=\arg\max_{r\in\mathcal{R}}N^{-1}\sum_{i=1}^{N}\varphi_{Ni} \left(r\right),\]
because \(\operatorname{sgn}\left(v\right)=2\cdot 1\left[v>0\right]-1\) and adding terms unrelated to \(b\) or \(r\) has no effect on the optimization.
**Theorem 3.2**.: _Suppose Assumptions P1-P9 hold. Then,_
\[(Nh_{N}^{k_{1}+k_{2}})^{1/3}(\hat{\beta}-\beta)\overset{d}{\to}\arg\max_{\rho \in\mathbb{R}^{k_{1}}}\mathcal{Z}_{1}\left(\rho\right),\]
_where \(\mathcal{Z}_{1}\) is a Gaussian process taking values in \(\ell^{\infty}\left(\mathbb{R}^{k_{1}}\right),\) with \(\mathbb{E}\left(\mathcal{Z}_{1}\left(\rho\right)\right)=\frac{1}{2}\rho^{ \prime}\mathbb{V}\rho,\) covariance kernel \(\mathbb{H}_{1}\left(\rho_{1},\rho_{2}\right),\) and \(\rho,\rho_{1},\rho_{2}\in\mathbb{R}^{k_{1}},\) and_
\[(N\sigma_{N}^{2k_{1}})^{1/3}\left(\hat{\gamma}-\gamma\right)\overset{d}{\to} \arg\max_{\delta\in\mathbb{R}^{k_{2}}}\mathcal{Z}_{2}\left(\delta\right),\]
_where \(\mathcal{Z}_{2}\) is a Gaussian process taking values in \(\ell^{\infty}\left(\mathbb{R}^{k_{2}}\right),\) with \(\mathbb{E}\left(\mathcal{Z}_{2}\left(\delta\right)\right)=\frac{1}{2}\delta^{ \prime}\mathbb{W}\delta\), covariance kernel \(\mathbb{H}_{2}\left(\delta_{1},\delta_{2}\right),\) and \(\delta,\delta_{1},\delta_{2}\in\mathbb{R}^{k_{2}}\). \(\mathbb{V}\), \(\mathbb{W}\), \(\mathbb{H}_{1}\), and \(\mathbb{H}_{2}\) are defined, respectively, by equations (B.1), (B.2), (B.3), and (B.4) in Appendix B._
When the number of alternatives, \(J,\) is greater than \(2,\) the convergence rates of \(\hat{\beta}\) and \(\hat{\gamma}\) become even slower because we need to match more covariates. We provide the details in Section 3.4. In line with previous results for MS estimators (e.g., Kim and Pollard (1990) and Seo and Otsu (2018)), the limiting distributions of \(\hat{\beta}\) and \(\hat{\gamma}\) are non-Gaussian and their rates of convergence are slower than \(N^{-1/3}\). We provide some intuition on the convergence rates of both estimators for the cross-sectional and panel data cases in Appendix E. Inference using the limiting distribution directly is rather difficult, so we recommend a bootstrap-based procedure in the next section. One alternative is to adopt a smoothed MS approach (e.g., Horowitz (1992)), which may yield a faster and asymptotically normal estimator. We leave this topic for future research.
### Inference
Abrevaya and Huang (2005) proved the inconsistency of the classic bootstrap for the ordinary MS estimator. Our panel data estimators are of the MS type, so we expect the classic bootstrap to fail for them as well; applying the arguments of Abrevaya and Huang (2005) with slight modifications proves this result.
Valid inference can be made by the \(m\)-out-of-\(n\) bootstrap, according to Lee and Pun (2006). Recently, Hong and Li (2020) proposed the numerical bootstrap, and showed the superior performance of this procedure over the \(m\)-out-of-\(n\) bootstrap. For our panel data estimators, another advantage of the numerical bootstrap is that we do not need to choose another set of bandwidths (\(h_{N}\) and \(\sigma_{N}\)) for estimation using the bootstrap series as with the \(m\)-out-of-\(n\) bootstrap. Based on these considerations, we propose to conduct the inference using the numerical bootstrap.
The numerical bootstrap estimators \(\hat{\beta}^{*}\) and \(\hat{\gamma}^{*}\) are obtained as follows. Draw \(\left\{Y_{i}^{T\ast\prime},Z_{i}^{T\ast\prime}\right\}_{i=1}^{N}\) independently from the collection of the sample values \(\left\{Y_{i}^{T\prime},Z_{i}^{T\prime}\right\}_{i=1}^{N}\) with replacement. Then, obtain \(\hat{\beta}^{*}\) from
\[\hat{\beta}^{*}=\arg\max_{b\in\mathcal{B}}N^{-1}\sum_{i=1}^{N}\phi_{Ni}\left(b \right)+\left(N\varepsilon_{N1}\right)^{1/2}\cdot[N^{-1}\sum_{i=1}^{N}\phi_{ Ni}^{*}\left(b\right)-N^{-1}\sum_{i=1}^{N}\phi_{Ni}\left(b\right)], \tag{3.7}\]
where \(\varepsilon_{N1}\) is a tuning parameter to be discussed later, and \(\phi_{Ni}^{*}\left(b\right)\) is the same as \(\phi_{Ni}\left(b\right)\) except it uses the sampling series \(\left\{Y_{i}^{T\ast\prime},Z_{i}^{T\ast\prime}\right\}\) as inputs. Similarly, compute \(\hat{\gamma}^{*}\) from
\[\hat{\gamma}^{*}=\arg\max_{r\in\mathcal{R}}N^{-1}\sum_{i=1}^{N}\varphi_{Ni} \left(r\right)+\left(N\varepsilon_{N2}\right)^{1/2}\cdot[N^{-1}\sum_{i=1}^{N} \varphi_{Ni}^{*}\left(r\right)-N^{-1}\sum_{i=1}^{N}\varphi_{Ni}\left(r \right)], \tag{3.8}\]
where \(\varepsilon_{N2}\) is a tuning parameter, and \(\varphi_{Ni}^{*}\left(r\right)\) is similarly defined using bootstrap series.
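Schematically, each numerical-bootstrap draw maximizes the original sample objective plus a damped bootstrap perturbation. The Python sketch below shows this for \(\hat{\gamma}^{*}\) in (3.8); `obj_full`, `obj_boot_factory`, and the grid search are hypothetical scaffolding, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def numerical_bootstrap_gamma(obj_full, obj_boot_factory, grid, N, eps, B=200):
    # obj_full(r) evaluates N^{-1} sum_i varphi_Ni(r) on the original data;
    # obj_boot_factory(idx) returns the same map computed on the resampled
    # agents idx; grid is an array of unit-norm candidate vectors r.
    q_full = np.array([obj_full(r) for r in grid])
    draws = []
    for _ in range(B):
        idx = rng.integers(0, N, size=N)
        obj_boot = obj_boot_factory(idx)
        q_boot = np.array([obj_boot(r) for r in grid])
        # Perturbed criterion from (3.8): Q_N + (N * eps)^{1/2} (Q*_N - Q_N).
        perturbed = q_full + np.sqrt(N * eps) * (q_boot - q_full)
        draws.append(grid[int(np.argmax(perturbed))])
    return np.array(draws)  # empirical distribution of gamma_hat*
```

Setting `eps = 1 / N` recovers the classic bootstrap, which is exactly the case the procedure must avoid.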
When \(\varepsilon_{N1}^{-1}=\varepsilon_{N2}^{-1}=N\), the numerical bootstrap reduces to the classic nonparametric bootstrap. The numerical bootstrap excludes this case and requires \(N\varepsilon_{N1}\rightarrow\infty\) and \(N\varepsilon_{N2}\rightarrow\infty\) as \(N\rightarrow\infty\). We note that \(\varepsilon_{N1}^{-1}\) and \(\varepsilon_{N2}^{-1}\) play a similar role to the \(m\) in the \(m\)-out-of-\(n\) bootstrap (use only \(m\) observations for the estimation).5 The following conditions for the \(\varepsilon_{N}\)'s are required for the validity of the numerical bootstrap:
Footnote 5: Note that our \(\varepsilon_{N}\) was written as \(\varepsilon_{N}^{1/2}\) in Hong and Li (2020).
**P10**: \(\varepsilon_{N1}\) and \(\varepsilon_{N2}\) are sequences of positive numbers such that as \(N\rightarrow\infty\): (i) \(\varepsilon_{N1}\to 0\) and \(\varepsilon_{N2}\to 0\), (ii) \(N\varepsilon_{N1}\rightarrow\infty\) and \(N\varepsilon_{N2}\rightarrow\infty\), (iii) \(\varepsilon_{N1}^{-1}h_{N}^{k_{1}+k_{2}}\rightarrow\infty\) and \(\varepsilon_{N2}^{-1}\sigma_{N}^{2k_{1}}\rightarrow\infty\), and (iv)
\[(\varepsilon_{N1}^{-1}h_{N}^{k_{1}+k_{2}})^{2/3}h_{N}^{2}\to 0\text{ and }( \varepsilon_{N2}^{-1}\sigma_{N}^{2k_{1}})^{2/3}\sigma_{N}^{2}\to 0.\]
We show the validity of the numerical bootstrap in the following theorem. We note that our estimators do not directly satisfy all conditions required by Hong and Li (2020); for example, condition (vi) in Theorem 4.1 of Hong and Li (2020) does not hold here. The proof is deferred to Appendix B.
**Theorem 3.3**.: _Suppose Assumptions P1-P10 hold. Then_
\[(\varepsilon_{N1}^{-1}h_{N}^{k_{1}+k_{2}})^{1/3}(\hat{\beta}^{*}-\hat{\beta}) \overset{d}{\rightarrow}\arg\max_{\rho\in\mathbb{R}^{k_{1}}}\mathcal{Z}_{1}^ {*}\left(\rho\right)\text{ conditional on the sample,}\]
_and_
\[(\varepsilon_{N2}^{-1}\sigma_{N}^{2k_{1}})^{1/3}\left(\hat{\gamma}^{*}-\hat{ \gamma}\right)\overset{d}{\rightarrow}\arg\max_{\delta\in\mathbb{R}^{k_{2}}} \mathcal{Z}_{2}^{*}\left(\delta\right)\text{ conditional on the sample,}\]
_where \(\mathcal{Z}_{1}^{*}\left(\rho\right)\) and \(\mathcal{Z}_{2}^{*}\left(\delta\right)\) are independent copies of \(\mathcal{Z}_{1}\left(\rho\right)\) and \(\mathcal{Z}_{2}\left(\delta\right)\) defined in Theorem 3.2, respectively._
We now discuss the choice of the tuning parameters. Recall that \(\hat{\beta}\) is of rate \((Nh_{N}^{k_{1}+k_{2}})^{-1/3}\), and additionally we need \((Nh_{N}^{k_{1}+k_{2}})^{2/3}h_{N}^{2}\to 0\) (to handle the bias). To attain a fast rate of convergence, we tend to set \(h_{N}\) as large as possible. For example, we may set \((Nh_{N}^{k_{1}+k_{2}})^{2/3}h_{N}^{2}\propto\log\left(N\right)^{-1}\). For the same reason, we may set \((N\sigma_{N}^{2k_{1}})^{2/3}\sigma_{N}^{2}\propto\log\left(N\right)^{-1}\). These lead to \(h_{N}\propto\log\left(N\right)^{-\frac{3}{2k_{1}+2k_{2}+6}}N^{-\frac{1}{k_{1}+ k_{2}+3}}\) and \(\sigma_{N}\propto\log\left(N\right)^{-\frac{3}{4k_{1}+6}}N^{-\frac{1}{2k_{1}+3}}.\) As recommended by Hong and Li (2020), we can choose \(\varepsilon_{N1}\) and \(\varepsilon_{N2}\) such that \(\varepsilon_{N1}^{-1}h_{N}^{k_{1}+k_{2}}\propto(Nh_{N}^{k_{1}+k_{2}})^{2/3}\) and \(\varepsilon_{N2}^{-1}\sigma_{N}^{2k_{1}}\propto(N\sigma_{N}^{2k_{1}})^{2/3}\), which then implies that
\[\varepsilon_{N1}\propto N^{-\frac{k_{1}+k_{2}+2}{k_{1}+k_{2}+3}}\log\left(N \right)^{-\frac{k_{1}+k_{2}}{2k_{1}+2k_{2}+6}}\text{ and }\varepsilon_{N2}\propto N^{-\frac{2k_{1}+2}{2k_{1}+3}}\log\left(N \right)^{-\frac{k_{1}}{2k_{1}+3}}. \tag{3.9}\]
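In code, these rate-based choices read as follows (a sketch with all proportionality constants set to \(1\); in practice they are scaled, cf. Section 4):

```python
import numpy as np

def panel_tuning(N, k1, k2):
    # Bandwidths and numerical-bootstrap step sizes from Section 3.2.
    h = np.log(N)**(-3.0 / (2*k1 + 2*k2 + 6)) * N**(-1.0 / (k1 + k2 + 3))
    sigma = np.log(N)**(-3.0 / (4*k1 + 6)) * N**(-1.0 / (2*k1 + 3))
    eps1 = (N**(-(k1 + k2 + 2.0) / (k1 + k2 + 3))
            * np.log(N)**(-(k1 + k2) / (2.0*k1 + 2*k2 + 6)))   # cf. (3.9)
    eps2 = N**(-(2.0*k1 + 2) / (2*k1 + 3)) * np.log(N)**(-k1 / (2.0*k1 + 3))
    return h, sigma, eps1, eps2
```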
### Testing \(\eta\)
We follow the lead of Section 2.4 and propose a test of whether \(\eta\) degenerates to \(0\). We formulate the hypotheses in the same way as before:
\[\mathbb{H}_{0}:\eta>0\text{ almost surely and }E\left(\eta\right)>0,\]
and
\[\mathbb{H}_{1}:\eta=0\text{ almost surely.}\]
Since we adopt the same idea as in Section 2.4, we only briefly describe how we construct the
test. The object used to conduct the test is:
\[N^{-1}\mathcal{L}_{N,\gamma}^{P,K}(r)=N^{-1}\sum_{i=1}^{N}\sum_{t>s}\mathcal{K}_{ \sigma_{N}}(X_{i1ts},X_{i2ts})(Y_{i(1,1)t}-Y_{i(1,1)s})\text{sgn}(W_{its}^{ \prime}r).\]
Define
\[\bar{\mathcal{L}}^{P}\left(r\right)=\sum_{t>s}f_{X_{1ts},X_{2ts}}\left(0,0 \right)\mathbb{E}\left[\left.(Y_{i(1,1)t}-Y_{i(1,1)s})\text{sgn}(W_{its}^{ \prime}r)\right|X_{1ts}=0,X_{2ts}=0\right].\]
Then
\[N^{-1}\mathcal{L}_{N,\gamma}^{P,K}(r)\overset{P}{\rightarrow}\bar{\mathcal{ L}}^{P}\left(r\right)\]
uniformly in a small neighborhood of \(\gamma.\) By the same reasoning as before, under \(\mathbb{H}_{0},\)\(\bar{\mathcal{L}}^{P}\left(\gamma\right)>0,\) and under \(\mathbb{H}_{1},\)\(\bar{\mathcal{L}}^{P}\left(\gamma\right)=0.\) We derive the limiting distribution of \(N^{-1}\mathcal{L}_{N,\gamma}^{P,K}(\hat{\gamma})\) in the following theorem. The proof is deferred to Appendix B.
**Theorem 3.4**.: _Suppose Assumptions P1-P9 hold. Then, under \(\mathbb{H}_{0},\)_
\[\sqrt{N\sigma_{N}^{2k_{1}}}\left(N^{-1}\mathcal{L}_{N,\gamma}^{P,K}(\hat{ \gamma})-\bar{\mathcal{L}}^{P}\left(\gamma\right)\right)\overset{d}{ \rightarrow}N\left(0,\Delta^{P}\right),\]
_with_
\[\Delta^{P}=\lim_{N\rightarrow\infty}\text{var}\left(\sigma_{N}^{k_{1}}\sum_{ t>s}\mathcal{K}_{\sigma_{N}}(X_{i1ts},X_{i2ts})(Y_{i(1,1)t}-Y_{i(1,1)s})\text{sgn}(W_{its}^{ \prime}\gamma)\right).\]
\(\Delta^{P}\) is well defined after some standard calculations. The plugged-in \(\hat{\gamma}\) has no effect on the asymptotics of \(N^{-1}\mathcal{L}_{N,\gamma}^{P,K}(\hat{\gamma}).\) To see why, note from the proof that
\[N^{-1}\mathcal{L}_{N,\gamma}^{P,K}(\hat{\gamma})-N^{-1}\mathcal{L}_{N,\gamma} ^{P,K}(\gamma)=O_{P}\left(\left\|\hat{\gamma}-\gamma\right\|^{2}\right)+O_{P} \left(\sigma_{N}^{2}\right)+O_{P}\left(\left(N\sigma_{N}^{2k_{1}}\right)^{-2/3 }\right).\]
We showed \(\hat{\gamma}-\gamma=O_{P}\left(\left(N\sigma_{N}^{2k_{1}}\right)^{-1/3}\right)\) and assumed \(\left(N\sigma_{N}^{2k_{1}}\right)^{2/3}\sigma_{N}^{2}\to 0,\) so
\[\sqrt{N\sigma_{N}^{2k_{1}}}\left(N^{-1}\mathcal{L}_{N,\gamma}^{P,K}(\hat{ \gamma})-N^{-1}\mathcal{L}_{N,\gamma}^{P,K}(\gamma)\right)=o_{P}\left(1\right).\]
For the implementation, we can employ the standard bootstrap with no need to re-estimate \(\hat{\gamma}\). The validity can be justified as in Section 2.4.
### Identification of the Case with 3 Alternatives
We discuss the identification of the bundle choice model with 3 alternatives in the panel setting. The panel data model is as follows
\[U_{dt}= \sum_{j=1}^{3}F_{j}(X_{jt}^{\prime}\beta,\epsilon_{jt},\alpha_{j}) \cdot d_{j}+F_{110}\left(\eta_{110t}\cdot(W_{1t}^{\prime}\gamma_{1}+\alpha_{b1} )\right)\cdot d_{1}\cdot d_{2}+F_{101}\left(\eta_{101t}\cdot(W_{2t}^{\prime} \gamma_{2}+\alpha_{b2})\right)\cdot d_{1}\cdot d_{3}\] \[+F_{011}\left(\eta_{011t}\cdot(W_{3t}^{\prime}\gamma_{3}+\alpha_{ b3})\right)\cdot d_{2}\cdot d_{3}+F_{111}\left(\eta_{111t}\cdot(W_{4t}^{\prime} \gamma_{4}+\alpha_{b4})\right)\cdot d_{1}\cdot d_{2}\cdot d_{3},\]
where \(X_{1},X_{2},X_{3}\in\mathbb{R}^{k_{1}},\)\(W_{1}\in\mathbb{R}^{k_{2}},W_{2}\in\mathbb{R}^{k_{3}},W_{3}\in\mathbb{R}^{k_{4}},\)\(W_{4}\in\mathbb{R}^{k_{5}},\) and \(\eta_{t}\equiv(\eta_{110t},\eta_{101t},\eta_{011t},\eta_{111t})^{\prime}\in \mathbb{R}^{4}_{+}.\) All \(F\)s are assumed to be strictly increasing in their arguments. An agent chooses the \(d\) that maximizes the latent utility, so the observed choice indicator is
\[Y_{dt}=1[U_{dt}>U_{d^{\prime}t},\forall d^{\prime}\in\mathcal{D}\setminus d].\]
Denote
\[Z_{t} =(X_{1t}^{\prime},X_{2t}^{\prime},X_{3t}^{\prime},W_{1t}^{\prime}, W_{2t}^{\prime},W_{3t}^{\prime},W_{4t}^{\prime})^{\prime},\xi_{t}=(\epsilon_{1t}, \epsilon_{2t},\epsilon_{3t},\eta_{110t},\eta_{101t},\eta_{011t},\eta_{111t})^ {\prime},\] \[\text{and }\alpha =(\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{b1},\alpha_{b2},\alpha _{b3},\alpha_{b4})^{\prime}\,.\]
The key condition is that \(\xi_{s}\overset{d}{=}\xi_{t}|(\alpha,Z_{s},Z_{t})\) holds for any two time periods \(s\) and \(t.\)
We present only the results identifying \(\beta\) and \(\gamma_{1}\); the identification of \(\gamma_{2},\gamma_{3},\) and \(\gamma_{4}\) can be obtained similarly.
We construct moment inequalities over the same individual using different observations across time. For any \(d_{2},d_{3},x_{1},x_{2},x_{3},w_{1},w_{2},w_{3},w_{4}\) and \(c,\) the following holds:
\[\left\{x_{1}^{\prime}\beta\geq\tilde{x}_{1}^{\prime}\beta\right\} \tag{3.10}\] \[\Leftrightarrow\{P(Y_{(1,d_{2},d_{3})}=1|X_{1}=x_{1},X_{2}=x_{2},X _{3}=x_{3},W_{1}=w_{1},W_{2}=w_{2},W_{3}=w_{3},W_{4}=w_{4},\alpha=c)\] \[\geq P(Y_{(1,d_{2},d_{3})}=1|X_{1}=\tilde{x}_{1},X_{2}=x_{2},X_{3} =x_{3},W_{1}=w_{1},W_{2}=w_{2},W_{3}=w_{3},W_{4}=w_{4},\alpha=c)\},\]
and
\[w_{1}^{\prime}\gamma_{1}\geq\tilde{w}_{1}^{\prime}\gamma_{1} \tag{3.11}\] \[\Leftrightarrow P(Y_{(1,1,d_{3})}=1|X_{1}=x_{1},X_{2}=x_{2},X_{3}=x_{3},W_{1}=w_{1},W _{2}=w_{2},W_{3}=w_{3},W_{4}=w_{4},\alpha=c)\] \[\geq P(Y_{(1,1,d_{3})}=1|X_{1}=x_{1},X_{2}=x_{2},X_{3}=x_{3},W_{1}= \tilde{w}_{1},W_{2}=w_{2},W_{3}=w_{3},W_{4}=w_{4},\alpha=c).\]
Under some regularity conditions, we are able to identify all parameters.
We match more covariates to identify these parameters than in the case with \(2\) alternatives. Using similar analysis, we can show that
\[\hat{\beta}-\beta=O_{P}\left((Nh_{N}^{2k_{1}+k_{2}+k_{3}+k_{4}+k_{5}})^{-1/3} \right)\text{ and }\hat{\gamma}_{1}-\gamma_{1}=O_{P}\left((N\sigma_{N}^{3k_{1}+k_{3}+k_{4}+k_{ 5}})^{-1/3}\right),\]
where we assume we use the same type of estimator as in Section 3.1 and the same bandwidth (\(h_{N}\) or \(\sigma_{N}\)) for all covariates. The convergence rates of \(\hat{\gamma}_{2},\hat{\gamma}_{3},\) and \(\hat{\gamma}_{4}\) can be obtained similarly. The case with \(J>3\) can be handled similarly, at the cost of more tedious notation.
## 4 Monte Carlo Experiments
In this section, we investigate the finite sample performance of our proposed estimation and inference procedures, in both cross-sectional and panel data models, by means of Monte Carlo experiments. We set sample sizes \(N=250,500,1000\) for cross-sectional designs, and \(N=1000,2500,5000\) for panel data designs, considering the slower convergence rate. All simulation results are obtained from \(1000\) independent replications. We report these results in tables collected in Appendix C.
First, we consider the two-step MRC procedure for the cross-sectional model. We start from a benchmark design (Design 1) specified as follows:
\[U_{id}=\sum_{j=1}^{2}(\beta_{1}X_{ij,1}+\beta_{2}X_{ij,2}+\epsilon _{ij})\cdot d_{j}+\eta_{i}\cdot(\gamma_{1}W_{i,1}+\gamma_{2}W_{i,2})\cdot d_{1 }\cdot d_{2},\] \[Y_{id}=1[U_{id}>U_{id^{\prime}},\forall d^{\prime}\neq d],\]
where \(\beta_{1}=\beta_{2}=1\), \(\gamma_{1}=\gamma_{2}=1\), and \(d=(d_{1},d_{2})\in\{(0,0),(1,0),(0,1),(1,1)\}\). In this design, we let \(\{\{X_{ij,\iota}\}_{j=1,2;\iota=1,2},\{W_{i,\iota}\}_{\iota=1,2},\{\epsilon_{ij}\}_{j=1,2 },\eta_{i}\}_{i=1,\ldots,N}\) be independent from each other and across \(i\), \(j\), and \(\iota\). The \(X_{ij,1}\)'s, \(W_{i,1}\), and the \(\epsilon_{ij}\)'s are \(N(0,1)\), the \(X_{ij,2}\)'s and \(W_{i,2}\) are uniform on \([-1,1]\), and \(\eta_{i}\sim\text{Beta}(2,2)\). Imposing the scale normalization (Assumption C4), we treat \(\beta_{2}\) and \(\gamma_{2}\) as free parameters to estimate. After the normalization, the true values of \(\beta_{2}\) and \(\gamma_{2}\) become \(\sqrt{2}/2\).
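For concreteness, a minimal Python sketch of this data-generating process follows; the utility of the outside option \((0,0)\) is zero because every term of \(U_{id}\) carries a \(d_{j}\) factor.

```python
import numpy as np

def simulate_design1(N, rng):
    # DGP of Design 1: beta = gamma = (1, 1)', eta_i ~ Beta(2, 2).
    X1 = np.column_stack([rng.standard_normal(N), rng.uniform(-1, 1, N)])
    X2 = np.column_stack([rng.standard_normal(N), rng.uniform(-1, 1, N)])
    W = np.column_stack([rng.standard_normal(N), rng.uniform(-1, 1, N)])
    eps = rng.standard_normal((N, 2))
    eta = rng.beta(2.0, 2.0, N)
    v1 = X1.sum(axis=1) + eps[:, 0]     # stand-alone utility of good 1
    v2 = X2.sum(axis=1) + eps[:, 1]     # stand-alone utility of good 2
    # Utilities of the options (0,0), (1,0), (0,1), (1,1), in that order.
    U = np.column_stack([np.zeros(N), v1, v2, v1 + v2 + eta * W.sum(axis=1)])
    choice = U.argmax(axis=1)           # Y_{id} = 1 iff d maximizes U_{id}
    return X1, X2, W, choice

X1, X2, W, choice = simulate_design1(1000, np.random.default_rng(0))
```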
Note that \(k_{1}=k_{2}=2\), and a larger bandwidth means a smaller variance. In light of this observation and the requirements of Assumptions C8 and C9, we employ the sixth-order Gaussian kernel (\(\kappa_{\beta}=6\)) and bandwidth \(h_{N}=c_{1}\hat{\sigma}\cdot N^{-1/8}\log(N)^{1/6}\) for the first-step estimation, where \(\hat{\sigma}\) represents the standard deviation of the corresponding matching variable. Under this choice, \(\sqrt{N}h_{N}^{4}\propto\log(N)^{2/3}\rightarrow\infty\) and \(\sqrt{N}h_{N}^{6}\to 0\). For similar reasons, we adopt the fourth-order Gaussian kernel (\(\kappa_{\gamma}=4\)) and bandwidth \(\sigma_{N}=c_{2}\hat{\sigma}\cdot N^{-1/4}\log(N)^{1/4}\) for the second step, so that \(\sqrt{N}\sigma_{N}^{2}\propto\log(N)^{1/2}\rightarrow\infty\) and \(\sqrt{N}\sigma_{N}^{4}\to 0\). To test the sensitivity of our methods to the choice of bandwidths, we experiment with several values of \(c_{1}\in\{0.6,1,1.4,1.8\}\) and \(c_{2}\in\{1.6,2,2.4,2.8\}\). It turns out that the results are not sensitive to the choice of \((c_{1},c_{2})\). Thus,
to save space, we only report the results of \((c_{1},c_{2})=(1,2)\), for which the root mean squared errors (RMSE) are the smallest among all choices.
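The text pins down only the kernel orders and bandwidth rates. One standard construction of the fourth- and sixth-order Gaussian-based kernels (via Hermite polynomials), together with the stated bandwidth rules, is sketched below; treat it as an assumption about the implementation, not a transcription of it.

```python
import numpy as np

def phi(u):
    # Standard normal density.
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def gaussian_kernel_4(u):
    # Fourth-order Gaussian-based kernel: integrates to 1, zero 2nd moment.
    return 0.5 * (3.0 - u**2) * phi(u)

def gaussian_kernel_6(u):
    # Sixth-order Gaussian-based kernel: zero 2nd and 4th moments.
    return (15.0 - 10.0 * u**2 + u**4) * phi(u) / 8.0

def design1_bandwidths(x_sd, N, c1=1.0, c2=2.0):
    # h_N and sigma_N as in the text, scaled by the matching variable's
    # sample standard deviation x_sd; (c1, c2) = (1, 2) is the reported choice.
    h = c1 * x_sd * N**(-1.0 / 8) * np.log(N)**(1.0 / 6)
    sigma = c2 * x_sd * N**(-1.0 / 4) * np.log(N)**(1.0 / 4)
    return h, sigma
```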
Results for Design 1 are reported in tables numbered "1"; tables for the other designs are numbered analogously. We report the performance of the MRC estimators and the nonparametric bootstrap in tables labeled "A" and "B", respectively. For example, Table 1A summarizes the performance of the estimators for Design 1, in which we report the mean bias (MBIAS) and RMSE of the estimator. Since these statistics are sensitive to outliers, we also present the median bias (MED) and the median absolute deviation (MAD). Table 1B reports the empirical coverage frequencies (COVERAGE) and lengths (LENGTH) of the 95% confidence intervals (CI) constructed using the standard bootstrap with 200 independent draws and estimations.
The results for Design 1 are in line with the asymptotic theory. The RMSEs of \(\hat{\beta}_{2}\) and \(\hat{\gamma}_{2}\) shrink at the parametric rate as the sample size increases, with the RMSE of \(\hat{\gamma}_{2}\) greater than that of \(\hat{\beta}_{2}\). This reflects the fact that for this particular design, the number of moment inequalities that can be used to identify \(\beta_{2}\) is twice that of \(\gamma_{2}\). The standard bootstrap also performs well for our MRC estimators, yielding shrinking CIs with coverage rates approaching 95% as the sample size grows.
The design of Monte Carlo experiments can have a large effect on the results. For example, fully independent regressors and errors often make estimators perform better than in cases where either the regressors or the errors are correlated. To check the performance of our estimators with correlated regressors and errors, we modify Design 1 by setting
\[(X_{i1,1},X_{i2,1})\sim N\left(\left(\begin{array}{c}0\\ 0\end{array}\right),\left(\begin{array}{cc}1&0.5\\ 0.5&1\end{array}\right)\right),\ (\epsilon_{i1},\epsilon_{i2})\sim N\left( \left(\begin{array}{c}0\\ 0\end{array}\right),\left(\begin{array}{cc}1&0.5\\ 0.5&1\end{array}\right)\right),\]
and \((X_{i1,2},X_{i2,2})=(\zeta_{i1}+\zeta_{i3},\zeta_{i2}+\zeta_{i3})\), where \(\zeta_{i1}\), \(\zeta_{i2}\), and \(\zeta_{i3}\) are uniform in \([-1/2,1/2]\) and mutually independent. All other aspects remain the same. We refer to this modified version of the benchmark design as Design 2. We adopt the same kernels and bandwidths as in Design 1, and report its simulation results in Tables 2A and 2B. The results are very similar to those for Design 1, though as expected the performance slightly deteriorates. This confirms our theoretical result that our approach allows for arbitrary correlation in errors.
We then examine the finite sample performance of the MS estimation and the numerical bootstrap procedure for panel data bundle choice models. We consider two designs (Designs 3 and 4) with the same choice set as the cross-sectional designs (i.e., \(\{(0,0),(1,0),(0,1),(1,1)\}\)) and a panel of two time periods. For each panel data design, we consider sample sizes of 1000, 2500, and 5000. For inference, we construct 95% CI based on 200 independent draws and estimations. The
first panel data design (Design 3) is specified as follows:
\[U_{idt} =\sum_{j=1}^{2}(\beta_{1}X_{ijt,1}+\beta_{2}X_{ijt,2}+\alpha_{ij}+ \epsilon_{ijt})\cdot d_{j}+\eta_{it}\cdot(\gamma_{1}W_{it,1}+\gamma_{2}W_{it,2} +\alpha_{ib})\cdot d_{1}\cdot d_{2},\] \[Y_{idt} =1[U_{idt}>U_{id^{\prime}t},\forall d^{\prime}\neq d],\]
where \(\beta_{1}=\beta_{2}=1\) and \(\gamma_{1}=\gamma_{2}=1\). \(\left\{\{X_{ijt,\iota}\}_{j=1,2;\iota=1,2},\{W_{it,\iota}\}_{\iota=1,2},\{\epsilon _{ijt}\}_{j=1,2},\eta_{it}\right\}_{i=1,\dots,N;t=1,2}\) are mutually independent random variables, where the \(X_{ijt,1}\)'s, \(W_{it,1}\)'s, and \(\epsilon_{ijt}\)'s are \(N(0,1)\), the \(X_{ijt,2}\)'s and \(W_{it,2}\)'s are uniform on \([-1,1]\), and \(\eta_{it}\sim\text{Beta}(2,2)\). \(\alpha_{ij}=(X_{ij1,2}+X_{ij2,2})/4\) for \(j=1,2\) and \(\alpha_{ib}=(W_{i1,2}+W_{i2,2})/4\). Because of the scale normalization (Assumption P3), we set \(\beta_{2}\) and \(\gamma_{2}\) as free parameters to estimate; their true values are therefore normalized to \(\sqrt{2}/2\). Clearly, in this design the (unobserved) fixed effects are correlated with the regressors, which invalidates the cross-sectional matching method here.
The second panel data design (Design 4) has the same random utility model and coefficients as Design 3 but, analogous to Design 2 for the cross-sectional setting, we set
\[(X_{i1t,1},X_{i2t,1})\sim N\left(\left(\begin{array}{c}0\\ 0\end{array}\right),\left(\begin{array}{cc}1&0.5\\ 0.5&1\end{array}\right)\right),\text{ for }t=1,2,\]
and
\[(\epsilon_{i11},\epsilon_{i21},\epsilon_{i12},\epsilon_{i22})\sim N\left( \left(\begin{array}{c}0\\ 0\\ 0\end{array}\right),\left(\begin{array}{cccc}1&0.5&0.5&0.5\\ 0.5&1&0.5&0.5\\ 0.5&0.5&1&0.5\\ 0.5&0.5&0.5&1\end{array}\right)\right),\]
and \((X_{i1t,2},X_{i2t,2})=(\zeta_{i1t}+\zeta_{i3t},\zeta_{i2t}+\zeta_{i3t})\) for \(t=1,2\), where \(\zeta_{i1t}\), \(\zeta_{i2t}\), and \(\zeta_{i3t}\) are uniform in \([-1/2,1/2]\) and mutually independent. All other aspects remain the same.
To implement the MS estimation for these designs, we use the Epanechnikov kernel and \(h_{N}=\sigma_{N}=c_{3}\hat{\sigma}\cdot N^{-1/7}\log(N)^{-1/14}\). For the implementation of the numerical bootstrap, we have two more tuning parameters: \(\varepsilon_{N1}\) and \(\varepsilon_{N2}\). We set \(\varepsilon_{N1}=\varepsilon_{N2}=c_{4}\cdot N^{-5/7}\log(N)^{-5/14}\).6 This choice of \((h_{N},\sigma_{N},\varepsilon_{N1},\varepsilon_{N2})\) meets Assumptions P9 and P10. We experiment with \(c_{3}\in\{1.6,2,2.4,2.8\}\) and \(c_{4}\in\{0.6,1.0,1.4,1.8\}\). It turns out that the results are not sensitive to the choice of \((c_{3},c_{4})\). Hence, to save space, we report only the results with \((c_{3},c_{4})=(2,1)\).
Footnote 6: Though Hong and Li (2020) recommended \(\varepsilon_{N1},\varepsilon_{N2}\propto N^{-6/7}\log(N)^{-2/7}\) (see (3.9) in our Section 3.2), some tentative Monte Carlo simulations suggest that a slight modification like this leads to a better finite sample performance.
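In code, these Monte Carlo choices amount to the following sketch (`x_sd` denotes the matching variable's sample standard deviation):

```python
import numpy as np

def mc_panel_tuning(x_sd, N, c3=2.0, c4=1.0):
    # Bandwidths and numerical-bootstrap step sizes used in Designs 3-4;
    # (c3, c4) = (2, 1) is the reported choice.
    h = c3 * x_sd * N**(-1.0 / 7) * np.log(N)**(-1.0 / 14)   # h_N = sigma_N
    eps = c4 * N**(-5.0 / 7) * np.log(N)**(-5.0 / 14)        # eps_N1 = eps_N2
    return h, eps
```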
Simulation results for Designs 3 and 4 are reported, respectively, in Tables 3A-3B and Tables 4A-4B. As the results indicate, our MS estimators perform reasonably well and give similar results for these two designs. The MBIAS and RMSE of \(\hat{\beta}_{2}\) and \(\hat{\gamma}_{2}\) decline as the sample size increases. \(\hat{\gamma}_{2}\) has overall larger bias and RMSE than \(\hat{\beta}_{2}\), reflecting the fact that its estimation uses fewer
moment inequalities. These results indicate the consistency of our estimators, though the rates of convergence are clearly slower than \(\sqrt{N}\).7 The inference results are also in line with the theory. When the sample size is relatively small, the CI of \(\beta_{2}\) obtained by the numerical bootstrap tends to be too wide, while that of \(\gamma_{2}\) tends to be too narrow. We conjecture that this is mainly because we implement the estimation and bootstrap inference procedures for \(\beta_{2}\) and \(\gamma_{2}\) with the same set of tuning parameters, but different numbers of moment inequalities. As the sample size grows, we observe that the CI of both \(\beta_{2}\) and \(\gamma_{2}\) shrink with their coverage rates approaching 95% (from different directions).
Footnote 7: It is a bit surprising that our MS procedure performs slightly better in Design 4 than in Design 3 in terms of RMSE. One possible explanation is that when the serial dependence in the errors is high, intertemporal variation in the values of observed covariates has a stronger effect on the changes of the values of \(Y_{idt}\) over time.
## 5 Conclusions
In this paper, we propose new estimation and inference procedures for semiparametric discrete choice models for bundles. For the cross-sectional model, we propose a two-step kernel-weighted rank procedure and establish its \(\sqrt{N}\)-consistency and asymptotic normality. Based on these results, we further show the validity of using the nonparametric bootstrap to carry out inference. Our matching-based identification strategy for the cross-sectional setting is extended to the panel data model, enabling a consistent two-step MS estimation procedure of a model with agent-, alternative-, and bundle-specific fixed effects. We further derive limiting distributions of the proposed estimators and justify the application of the numerical bootstrap (Hong and Li (2020)) for making inference. A small-scale Monte Carlo study demonstrates that the proposed estimation and inference procedures perform well in finite samples.
Throughout this paper, we focus on the settings that allow the researcher to observe individual-level data. We note that our proposed methods can also be adjusted for application to models using aggregate data; that is, where the researcher can only observe the aggregated choice probabilities (e.g., market shares) in a number of markets and the market-level covariates for each alternative. Such models are often encountered in empirical industrial organizations (see, e.g., Fan (2013)). An in-depth discussion of this extension would involve different estimators and asymptotics.8 Because of space constraints, we leave this topic to a separate study.
Footnote 8: Shi et al. (2018) discussed semiparametric estimation of panel data multinomial choice models using aggregate data. Their estimator, defined as an optimizer of a linear programming problem, was shown to be \(\sqrt{N}\)-consistent. Hsieh, Shi, and Shum (2022) proposed an inference procedure for estimators of this type. Our approach, if extended to models using aggregate data, will give estimators of similar structure. We conjecture that their arguments, with certain modification, can be applied to our case.
The work here leaves many open questions for future research. For example, one may consider smoothing the objective functions (in the spirit of Horowitz (1992)) to attain faster rates of convergence and asymptotic normality for the panel data estimators. Besides, as pointed out in Sections 2 and 3, our matching-based approach provides set identification results for, e.g., preference coefficients on agent-specific regressors; however, consistent estimators for these sets are lacking in the literature.
## Appendix
This appendix is organized as follows. Appendix A provides technical lemmas and the proofs of Theorems 2.1, 2.2, and 2.3 for the cross-sectional model. Appendix B proves Theorems 3.2, 3.3, and 3.4 and their supporting lemmas. Appendix C collects all the tables in this paper. Appendix D provides technical details for bootstrap inference. Appendix E collects some additional results in the paper. We relegate the proofs of all technical lemmas used in Appendixes A and B to Appendix F.
Throughout, we use acronyms, SLLN and CMT, for Strong Law of Large Numbers and Continuous Mapping Theorem, respectively. For any (random) positive sequences \(\{a_{N}\}\) and \(\{b_{N}\}\), \(a_{N}=O(b_{N})\) (\(O_{P}(b_{N})\)) means that \(a_{N}/b_{N}\) is bounded (bounded in probability) and \(a_{N}=o(b_{N})\) (\(o_{P}(b_{N})\)) means that \(a_{N}/b_{N}\to 0\) (\(a_{N}/b_{N}\stackrel{{ P}}{{\to}}0\)). For any summation indexed by agents, we suppress the statement that the agent is in the set \(\{1,...,N\}\). For example, \(\sum_{i}\) means \(\sum_{i=1}^{N}\), and \(\sum_{i\neq m}\) represents \(\sum_{i=1}^{N}\sum_{m\neq i}\).
## Appendix A Proofs for the Cross-Sectional Model
### Proof of Theorem 2.1
Proof of Theorem 2.1.: It suffices to show the identification of \(\beta\) based on (2.4), as the same arguments can be applied to the identification of \(\beta\) and \(\gamma\) based on similar moment inequalities. To ease the notation, we denote \(\Omega_{im}=\{X_{i2}=X_{m2},W_{i}=W_{m}\}\). By Assumption C1, the monotonic relation (2.4) implies that for all \(d\in\mathcal{D}\) with \(d_{1}=1\), \(\beta\) maximizes
\[Q_{1}(b)\equiv\mathbb{E}[(P(Y_{i(1,d_{2})}=1|Z_{i})-P(Y_{m(1,d_{2})}=1|Z_{m})) \text{sgn}(X^{\prime}_{im1}b)|\Omega_{im}]\]
for each pair of \((i,m)\). To show that \(\beta\) attains a unique maximum, suppose that there is a \(b\in\mathcal{B}\) such that \(Q_{1}(b)=Q_{1}(\beta)\). In what follows, we assume \(\beta^{(1)}>0\) w.l.o.g. (the case \(\beta^{(1)}<0\) is symmetric). We want to show that \(b=\beta\) must hold. First, note that \(b^{(1)}>0\) must hold. Otherwise, we have \(P[\text{sgn}(X^{\prime}_{im1}b)\neq\text{sgn}(X^{\prime}_{im1}\beta)|\Omega_{ im}]=P[(\tilde{X}^{\prime}_{im1}\tilde{\beta}/\beta^{(1)}<-X^{(1)}_{im1},\tilde{X}^{ \prime}_{im1}\tilde{b}/b^{(1)}<-X^{(1)}_{im1})\cup(\tilde{X}^{\prime}_{im1} \tilde{b}/b^{(1)}>-X^{(1)}_{im1},\tilde{X}^{\prime}_{im1}\tilde{\beta}/\beta^ {(1)}>-X^{(1)}_{im1})|\Omega_{im}]>0\) by Assumption C2(i). Then, \(\beta\) and \(b\) yield different values of the \(\text{sgn}(\cdot)\) function in \(Q_{1}(\cdot)\) with strictly positive probability, and thus
\(Q_{1}(b)<Q_{1}(\beta)\), a contradiction. For the case \(b^{(1)}>0\), we write \(P[\mathrm{sgn}(X^{\prime}_{im1}b)\neq\mathrm{sgn}(X^{\prime}_{im1}\beta)|\Omega_{im}]=P[(\tilde{X}^{\prime}_{im1}\tilde{\beta}/\beta^{(1)}<-X^{(1)}_{im1}<\tilde{X}^{\prime}_{im1}\tilde{b}/b^{(1)})\cup(\tilde{X}^{\prime}_{im1}\tilde{b}/b^{(1)}<-X^{(1)}_{im1}<\tilde{X}^{\prime}_{im1}\tilde{\beta}/\beta^{(1)})|\Omega_{im}]\). This implies that for all \(b\) satisfying \(Q_{1}(b)=Q_{1}(\beta)\), \(P[(\tilde{X}^{\prime}_{im1}\tilde{\beta}/\beta^{(1)}<-X^{(1)}_{im1}<\tilde{X}^{\prime}_{im1}\tilde{b}/b^{(1)})\cup(\tilde{X}^{\prime}_{im1}\tilde{b}/b^{(1)}<-X^{(1)}_{im1}<\tilde{X}^{\prime}_{im1}\tilde{\beta}/\beta^{(1)})|\Omega_{im}]=0\) must hold, which is equivalent to \(P(\tilde{X}^{\prime}_{im1}\tilde{\beta}/\beta^{(1)}=\tilde{X}^{\prime}_{im1}\tilde{b}/b^{(1)}|\Omega_{im})=1\) under Assumption C2(i). Then it follows by Assumption C2(ii) that \(\tilde{\beta}/\beta^{(1)}=\tilde{b}/b^{(1)}\) and hence \(b^{(1)}\beta=\beta^{(1)}b\). As \(\|b\|=\|\beta\|=1\), we have \(b^{(1)}=\beta^{(1)}\) and thus \(b=\beta\). This completes the proof.
### Proof of Theorem 2.2
The proof of Theorem 2.2 uses results in a series of technical lemmas (i.e., Lemmas A.1-A.7), which we will present right above the main proof. Before delving into the technical details, we first introduce some new notations and outline the "roadmap" for the proof process.
The following notations will be adopted for notational convenience:
* \(V_{i}\equiv V_{i}(\beta)\), \(\hat{V}_{i}\equiv V_{i}(\hat{\beta})\), \(V_{im}\equiv V_{i}-V_{m}\), and \(\hat{V}_{im}\equiv\hat{V}_{i}-\hat{V}_{m}\).
* For realized values, \(v_{i}\equiv v_{i}(\beta)\), \(\hat{v}_{i}\equiv v_{i}(\hat{\beta})\), \(v_{im}\equiv v_{i}-v_{m}\), and \(\hat{v}_{im}\equiv\hat{v}_{i}-\hat{v}_{m}\).
* \(B(v_{i},v_{m},w_{i},w_{m})\equiv\mathbb{E}[Y_{im(1,1)}|V_{i}=v_{i},V_{m}=v_{m },W_{i}=w_{i},W_{m}=w_{m}]\).
* \(S_{im}(r)\equiv\mathrm{sgn}(w^{\prime}_{im}r)-\mathrm{sgn}(w^{\prime}_{im} \gamma)\).
* Define \[\varrho_{i}(b)\equiv-\sum_{d\in\mathcal{D}}\{\varrho_{i21d}(b)\cdot(-1)^{d_{1 }}+\varrho_{i12d}(b)\cdot(-1)^{d_{2}}\},\] (A.1) where \[\varrho_{ijld}(b)\equiv\mathbb{E}\left[Y_{imd}\left[\mathrm{sgn}(X^{\prime}_{ iml}b)-\mathrm{sgn}(X^{\prime}_{iml}\beta)\right]|Z_{i},X_{mj}=X_{ij},W_{m}=W_{i}\right]\] for \(j,l=1,2\) with \(j\neq l\).
* Define \[\tau_{i}(r) \equiv \mathbb{E}[Y_{im(1,1)}\left[\mathrm{sgn}(W^{\prime}_{im}r)- \mathrm{sgn}(W^{\prime}_{im}\gamma)\right]|V_{m}\left(\beta\right)=V_{i}\left( \beta\right),W_{i}],\text{ and }\] \[\mu(v_{1},v_{2},r)\] \[\equiv \mathbb{E}\left[\left.Y_{im(1,1)}\left[\mathrm{sgn}(W^{\prime}_{ im}r)-\mathrm{sgn}(W^{\prime}_{im}\gamma)\right]\left(\begin{array}{c}X^{\prime}_{ im1}\\ X^{\prime}_{im2}\end{array}\right)\right|V_{i}\left(\beta\right)=v_{1},V_{m} \left(\beta\right)=v_{2}\right]f_{V(\beta)}\left(v_{1}\right)\]
* Let \(\nabla_{1}\mu(v_{1},v_{2},r)\) denote the partial derivative of \(\mu(v_{1},v_{2},r)\) w.r.t. its first argument. Denote \(\nabla^{2}_{1k}\mu(v_{1},v_{2},r)\) as the partial derivative of \(\nabla_{1}\mu(v_{1},v_{2},r)\) w.r.t. its \(k\)-th argument and \(\nabla^{2}_{33}\nabla_{1}\mu(v_{1},v_{2},r)\) as the Hessian matrix of \(\nabla_{1}\mu(v_{1},v_{2},r)\) w.r.t. its third argument.
* Let \(K_{\beta}(\cdot)\) and \(K_{\gamma}(\cdot)\) be kernel functions such that \(K_{\beta}(\cdot/h_{N})=h_{N}^{k_{1}+k_{2}}\mathcal{K}_{h_{N}}(\cdot)\) and \(K_{\gamma}(\cdot/\sigma_{N})=\sigma_{N}^{2}\mathcal{K}_{\sigma_{N}}(\cdot)\), respectively.
In what follows, we will only show the asymptotics of \(\hat{\gamma}\). The proof for \(\hat{\beta}\) is omitted since one can derive the asymptotics of \(\hat{\beta}\) by repeating the proof process for \(\hat{\gamma}\) but skipping the step handling the plug-in first-step estimates (i.e., Lemma A.7). Throughout this section, we take as given that \(\hat{\beta}\) is \(\sqrt{N}\)-consistent and asymptotically normal.
For ease of illustration, and with a slight abuse of notation, we will work with the objective function
\[\hat{\mathcal{L}}_{N}^{K}(r)\equiv\frac{1}{\sigma_{N}^{2}N(N-1)}\sum_{i\neq m}K _{\sigma_{N},\gamma}(V_{im}(\hat{\beta}))h_{im}(r),\]
where \(K_{\sigma_{N},\gamma}(\cdot)\equiv\sigma_{N}^{2}\mathcal{K}_{\sigma_{N}}(\cdot)\) and \(h_{im}(r)\equiv Y_{im(1,1)}[\operatorname{sgn}(W^{\prime}_{im}r)-\operatorname {sgn}(W^{\prime}_{im}\gamma)]\).9 Note that here we subtract the term \(\operatorname{sgn}(W^{\prime}_{im}\gamma)\) from objective function (2.7), analogous to Sherman (1993). Doing this does not affect the value of the estimator, and will facilitate the proofs that follow. Besides, we define
Footnote 9: By definition, \(K_{\sigma_{N},\gamma}(\cdot)=K_{\gamma}(\cdot/\sigma_{N})\).
\[\mathcal{L}_{N}^{K}(r)=\frac{1}{\sigma_{N}^{2}N(N-1)}\sum_{i\neq m}K_{\sigma_{ N},\gamma}(V_{im}(\beta))h_{im}(r)\]
and
\[\mathcal{L}(r)\equiv f_{V_{im}(\beta)}(0)\mathbb{E}[h_{im}(r)|V_{im}(\beta)=0].\]
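To fix ideas, here is a minimal \(O(N^{2})\) Python sketch (not from the paper) of evaluating \(\hat{\mathcal{L}}_{N}^{K}(r)\) at a candidate \(r\). The array names and shapes are assumptions: `V_hat` holds the first-step index pairs \(V_{i}(\hat{\beta})\in\mathbb{R}^{2}\), and the infeasible centering term \(\operatorname{sgn}(W_{im}^{\prime}\gamma)\) is dropped since it does not depend on \(r\) and so does not affect the maximizer.

```python
import numpy as np

def second_step_objective(r, V_hat, W, Y11, sigma_N):
    """Kernel-weighted rank objective for the bundle coefficient.

    V_hat: (N, 2) first-step indices V_i(beta_hat)
    W:     (N, k2) bundle-specific covariates
    Y11:   (N,) indicators of choosing bundle (1, 1)
    """
    N = V_hat.shape[0]
    total = 0.0
    for i in range(N):
        for m in range(N):
            if m == i:
                continue
            # Product Epanechnikov weight on the matched indices V_im
            u = (V_hat[i] - V_hat[m]) / sigma_N
            kv = np.prod(0.75 * (1.0 - u ** 2) * (np.abs(u) <= 1.0))
            total += kv * (Y11[i] - Y11[m]) * np.sign((W[i] - W[m]) @ r)
    return total / (sigma_N ** 2 * N * (N - 1))
```

Since the criterion is a step function in \(r\), \(\hat{\gamma}\) would be computed by a derivative-free search (e.g., a grid or simplex search) over the normalized parameter space.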
We establish the consistency of \(\hat{\gamma}\) in Lemmas A.1-A.3 by applying Theorem 2.1 in Newey and McFadden (1994). The key step is to show the uniform convergence of \(\hat{\mathcal{L}}_{N}^{K}(r)\) for \(r\in\mathcal{R}\). To this end, we bound the differences among \(\hat{\mathcal{L}}_{N}^{K}(r)\), \(\mathcal{L}_{N}^{K}(r)\), and \(\mathcal{L}(r)\) in Lemmas A.1 and A.2.
The next step is to show the asymptotic normality of \(\hat{\gamma}\) by applying Theorem 2 of Sherman (1994a). Sufficient conditions for this theorem are that \(\hat{\gamma}-\gamma=O_{P}(N^{-1/2})\) and uniformly over a neighborhood of \(\gamma\) with a radius proportional to \(N^{-1/2}\),
\[\hat{\mathcal{L}}_{N}^{K}(r)=\frac{1}{2}(r-\gamma)^{\prime}\mathbb{V}_{\gamma} (r-\gamma)+\frac{1}{\sqrt{N}}(r-\gamma)^{\prime}\Psi_{N}+o_{P}(N^{-1}),\] (A.3)
where \(\mathbb{V}_{\gamma}\) is a negative definite matrix and \(\Psi_{N}\) is asymptotically normal, with mean zero and variance \(\mathbb{V}_{\Psi}\). To verify equation (A.3), we first show that uniformly over a neighborhood of \(\gamma\), \(\mathcal{R}_{N}\equiv\{r\in\mathcal{R}:\|r-\gamma\|\leq\delta_{N}\}\) with \(\{\delta_{N}\}=O(N^{-\delta})\) for some \(0<\delta\leq 1/2\),
\[\hat{\mathcal{L}}_{N}^{K}(r)=\frac{1}{2}(r-\gamma)^{\prime}\mathbb{V}_{\gamma} (r-\gamma)+\frac{1}{\sqrt{N}}(r-\gamma)^{\prime}\Psi_{N}+o_{P}(\|r-\gamma\|^ {2})+O_{P}(\varepsilon_{N}),\] (A.4)
which is the task of Lemmas A.4-A.7. The \(O_{P}(\varepsilon_{N})\) term in equation (A.4) can be shown to be of order \(o_{P}(N^{-1})\) uniformly over \(\mathcal{R}_{N}\). Further, by Theorem 1 of Sherman (1994b), equation (A.4)
implies \(\hat{\gamma}-\gamma=O_{P}(\sqrt{\varepsilon_{N}}\lor N^{-1/2})=O_{P}(N^{-1/2})\). This rate result, together with equation (A.4), further verifies equation (A.3).
To obtain equation (A.4), we will work with the following expansion
\[\hat{\mathcal{L}}_{N}^{K}(r) =\mathcal{L}_{N}^{K}(r)+\Delta\mathcal{L}_{N}^{K}(r)+R_{N}\] \[=\mathbb{E}[\mathcal{L}_{N}^{K}(r)]+\frac{2}{N}\sum_{i}\left\{ \mathbb{E}[\mathcal{L}_{N}^{K}(r)|Z_{i}]-\mathbb{E}[\mathcal{L}_{N}^{K}(r)] \right\}+\rho_{N}(r)+\Delta\mathcal{L}_{N}^{K}(r)+R_{N},\] (A.5)
where
\[\Delta\mathcal{L}_{N}^{K}(r)\equiv\frac{1}{\sigma_{N}^{3}N(N-1)}\sum_{i\neq m }\nabla K_{\sigma_{N},\gamma}(V_{im}(\beta))^{\prime}(V_{im}(\hat{\beta})-V_{ im}\left(\beta\right))h_{im}(r),\] (A.6)
\[\rho_{N}(r)=\mathcal{L}_{N}^{K}(r)-\frac{2}{N}\sum_{i}\mathbb{E}[\mathcal{L}_ {N}^{K}(r)|Z_{i}]+\mathbb{E}[\mathcal{L}_{N}^{K}(r)],\]
and \(R_{N}\) denotes the remainder term in the expansion of higher order (as \(\sqrt{N}\sigma_{N}\to\infty\)). The first three terms in (A.5) are the \(U\)-statistic decomposition for \(\mathcal{L}_{N}^{K}(r)\) (see e.g., Sherman (1993) and Serfling (2009)). Lemmas A.4-A.6 establish asymptotic properties of these three terms, respectively. A linear representation for the fourth term in (A.5) is derived in Lemma A.7.
Here we present Lemmas A.1-A.7, whose proofs are relegated to Appendix F.
**Lemma A.1**.: _Under Assumptions C1-C7, \(\sup_{r\in\mathcal{R}}|\hat{\mathcal{L}}_{N}^{K}(r)-\mathcal{L}_{N}^{K}(r)|=o _{P}(1).\)_
**Lemma A.2**.: _Under Assumptions C1, C5, C6, and C7, \(\sup_{r\in\mathcal{R}}|\mathcal{L}_{N}^{K}(r)-\mathcal{L}(r)|=o_{P}(1)\)._
**Lemma A.3**.: _Under Assumptions C1-C7, \(\hat{\gamma}\overset{P}{\to}\gamma\)._
**Lemma A.4**.: _Under Assumptions C1-C7, uniformly over \(\mathcal{R}_{N}\), we have_
\[\mathbb{E}[\mathcal{L}_{N}^{K}(r)]=\frac{1}{2}(r-\gamma)^{\prime}\mathbb{V}_ {\gamma}(r-\gamma)+o(\|r-\gamma\|^{2}),\]
_where \(\mathbb{V}_{\gamma}\equiv\mathbb{E}[\nabla^{2}\tau_{i}(\gamma)]\)._
**Lemma A.5**.: _Under Assumptions C1-C7, uniformly over \(\mathcal{R}_{N}\), we have_
\[\frac{2}{N}\sum_{m}\mathbb{E}[\mathcal{L}_{N}^{K}(r)|Z_{m}]-2\mathbb{E}[ \mathcal{L}_{N}^{K}(r)]=\frac{1}{\sqrt{N}}(r-\gamma)^{\prime}\Psi_{N,1}+o_{P} (\|r-\gamma\|^{2}),\]
_where \(\Psi_{N,1}\equiv N^{-1/2}\sum_{m}2\nabla\tau_{m}(\gamma)\)._
**Lemma A.6**.: _Under Assumptions C1, C6, and C7, uniformly over \(\mathcal{R}_{N}\), \(\rho_{N}(r)=O_{P}(N^{-1}\sigma_{N}^{-2})\)._
**Lemma A.7**.: _Suppose_
\[\hat{\beta}-\beta=\frac{1}{N}\sum_{i}\psi_{i,\beta}+o_{P}(N^{-1/2}),\] (A.7)
_where \(\psi_{i,\beta}\) is the influence function (of \(Z_{i}\)). \(\psi_{i,\beta}\) is i.i.d. across \(i\) with \(\mathbb{E}\left(\psi_{i,\beta}\right)=0\).10 Further, suppose Assumptions C1-C9 hold. Then uniformly over \(\mathcal{R}_{N}\), we have_
Footnote 10: In fact, the linear representation of \(\hat{\beta}-\beta\) in (A.7) can be obtained by applying the same arguments in Theorem 2 of Sherman (1993) to a representation for \(\mathcal{L}_{N,\beta}^{K}(b)\) analogous to (A.3).
\[\Delta\mathcal{L}_{N}^{K}(r)=\frac{1}{\sqrt{N}}(r-\gamma)^{\prime}\Psi_{N,2}+o_ {P}(\|r-\gamma\|^{2})+O_{P}(N^{-1}\sigma_{N}^{-2}),\]
_with \(\Psi_{N,2}=\frac{1}{\sqrt{N}}\sum_{i}\left(-\int\nabla_{13}^{2}\mu(v_{m},v_{m },\gamma)f_{V}(v_{m})dv_{m}\right)\psi_{i,\beta}\), where \(\mu(v_{i},v_{m},r)\equiv G(v_{i},v_{m},r)f_{V}(v_{i})\),_
\[G(v_{i},v_{m},r)\equiv\mathbb{E}\left[B(V_{i},V_{m},W_{i},W_{m})S_{im}(r) \left(\begin{array}{c}x^{\prime}_{im1}\\ x^{\prime}_{im2}\end{array}\right)|V_{i}=v_{i},V_{m}=v_{m}\right],\]
\(\nabla_{1}\mu(\cdot,\cdot,\cdot)\) _denotes the partial derivative of \(\mu(\cdot,\cdot,\cdot)\) w.r.t. its first argument, and \(\nabla_{13}^{2}\mu(\cdot,\cdot,\cdot)\) denotes the partial derivative of \(\nabla_{1}\mu(\cdot,\cdot,\cdot)\) w.r.t. its third argument._
Proof of Theorem 2.2.: Putting results in Lemmas A.4-A.7 together, we write equation (A.5) as
\[\hat{\mathcal{L}}_{N}^{K}(r)=\frac{1}{2}(r-\gamma)^{\prime}\mathbb{V}_{\gamma }(r-\gamma)+\frac{1}{\sqrt{N}}(r-\gamma)^{\prime}\Psi_{N}+o_{P}(\|r-\gamma\|^ {2})+O_{P}(\varepsilon_{N})\] (A.8)
where \(\Psi_{N}=\Psi_{N,1}+\Psi_{N,2}\) and \(\varepsilon_{N}=N^{-1}\sigma_{N}^{-2}\). Theorem 1 of Sherman (1994b) then implies that \(\hat{\gamma}-\gamma=O_{P}(\sqrt{\varepsilon_{N}})=O_{P}(N^{-1/2}\sigma_{N}^{ -1})\).
Next, take \(\delta_{N}=O(\sqrt{\varepsilon_{N}})\) and \(\mathcal{R}_{N}=\{r\in\mathcal{R}|\|r-\gamma\|\leq\delta_{N}\}\). We repeat the proof for Lemma A.6 and deduce from a Taylor expansion around \(\gamma\) that \(\sup_{r\in\mathcal{R}_{N}}\mathbb{E}[\rho_{im}^{*}(r)^{2}]=O(\sigma_{N}^{2} \delta_{N}^{2})\). Apply Theorem 3 of Sherman (1994b) to see that uniformly over \(\mathcal{R}_{N}\), \(\rho_{N}(r)=O_{P}(N^{-1}\sigma_{N}^{\lambda-2}\delta_{N}^{\lambda})\) where \(0<\lambda<1\). Then we have
\[\rho_{N}(r)=O_{P}(N^{-1}\sigma_{N}^{\lambda-2}\delta_{N}^{\lambda})=O_{P}(N^{ -1})O_{P}(N^{-\lambda/2}\sigma_{N}^{-2})=o_{P}(N^{-1})\]
by invoking Assumption C9 and choosing \(\lambda\) sufficiently close to \(1\). This result in turn implies that the \(O_{P}(\varepsilon_{N})\) term in (A.8) has order \(o_{P}(N^{-1})\), and hence \(\hat{\gamma}-\gamma=O_{P}(N^{-1/2})\) by applying Theorem 1 of Sherman (1994b) once again.
Now (A.8) can be expressed as
\[\hat{\mathcal{L}}_{N}^{K}(r)=\frac{1}{2}(r-\gamma)^{\prime}\mathbb{V}_{\gamma} (r-\gamma)+\frac{1}{\sqrt{N}}(r-\gamma)^{\prime}\Psi_{N}+o_{P}(N^{-1}).\]
Let \(\Delta_{i}\equiv 2\nabla\tau_{i}(\gamma)-\left(\int\nabla_{13}^{2}\mu(v_{m},v_{m},\gamma)f_{V}(v_{m})\mathrm{d}v_{m}\right)\psi_{i,\beta}\). Note that \(\mathbb{E}[\Delta_{i}]=0\) since \(\mathbb{E}[\nabla\tau_{i}(\gamma)]=0\) and \(\mathbb{E}[\psi_{i,\beta}]=0\). We deduce from Assumption C7 and the Lindeberg-Levy CLT that \(\Psi_{N}\stackrel{{d}}{{\rightarrow}}N(0,\mathbb{V}_{\Psi})\) where \(\mathbb{V}_{\Psi}=\mathbb{E}[\Delta_{i}\Delta_{i}^{\prime}]\). The asymptotic normality of \(\hat{\gamma}\) then follows from Theorem 2 of Sherman (1994a), i.e., \(\sqrt{N}(\hat{\gamma}-\gamma)\stackrel{{d}}{{\rightarrow}}N(0,\mathbb{V}_{\gamma}^{-1}\mathbb{V}_{\Psi}\mathbb{V}_{\gamma}^{-1})\).
### Proof of Theorem 2.3
Note that \(\hat{\mathcal{L}}_{N}^{K}\left(r\right)\) was redefined in Appendix A.2. We continue to use the definition in the main body of the paper for this section, specifically,
\[\hat{\mathcal{L}}_{N}^{K}\left(\hat{\gamma}\right)=\frac{1}{\sigma_{N}^{2}N \left(N-1\right)}\sum_{i\neq m}K_{\sigma_{N},\gamma}\left(V_{im}\left(\hat{ \beta}\right)\right)Y_{im(1,1)}\text{sgn}\left(W_{im}^{\prime}\hat{\gamma} \right).\]
**Lemma A.8**.: _Suppose Assumptions C1-C9 hold,_
\[\frac{1}{\sigma_{N}^{2}N\left(N-1\right)}\sum_{i\neq m}\left[K_{\sigma_{N}, \gamma}\left(V_{im}\left(\hat{\beta}\right)\right)-K_{\sigma_{N},\gamma}\left( V_{im}\left(\beta\right)\right)\right]Y_{im(1,1)}\text{sgn}\left(W_{im}^{\prime} \gamma\right)=O_{P}\left(N^{-1}\sigma_{N}^{-2}\right).\]
**Lemma A.9**.: _Suppose Assumptions C1-C9 hold, then uniformly over \(r\in\mathcal{R},\)_
\[\hat{\mathcal{L}}_{N}^{K}\left(r\right) = \frac{1}{2}\left(r-\gamma\right)^{\prime}\mathbb{V}_{\gamma} \left(r-\gamma\right)+O_{P}\left(N^{-1/2}\left\|r-\gamma\right\|\right)+o_{P} \left(\left\|r-\gamma\right\|^{2}\right)\] \[+\frac{1}{\sigma_{N}^{2}N\left(N-1\right)}\sum_{i\neq m}K_{ \sigma_{N},\gamma}\left(V_{im}\left(\beta\right)\right)Y_{im(1,1)}\text{sgn} \left(W_{im}^{\prime}\gamma\right)+o_{P}\left(N^{-1/2}\right).\]
**Lemma A.10**.: _Suppose Assumptions C5-C9 hold, then_
\[\mathbb{E}\left[\sigma_{N}^{-2}K_{\sigma_{N},\gamma}\left(V_{im} \left(\beta\right)\right)Y_{im(1,1)}\text{sgn}\left(W_{im}^{\prime}\gamma \right)\left|Z_{i}\right]\] \[= \mathbb{E}\left[Y_{im(1,1)}\text{sgn}\left(W_{im}^{\prime}\gamma \right)\left|Z_{i},V_{m}\left(\beta\right)=V_{i}\left(\beta\right)\right]+o_{P }\left(N^{-1/2}\right).\]
A note before the proof of the theorem: \(\hat{\beta}\) affects \(\hat{\mathcal{L}}_{N}^{K}\left(\hat{\gamma}\right)\) at the rate of \(N^{-1}\sigma_{N}^{-2}\), due to Lemma A.8, while \(\hat{\gamma}\) affects \(\hat{\mathcal{L}}_{N}^{K}\left(\hat{\gamma}\right)\) at the rate of \(N^{-1}\), due to the decomposition in Lemma A.9.
Proof of Theorem 2.3.: Theorem 2.2 shows that \(\hat{\gamma}-\gamma=O_{P}\left(N^{-1/2}\right)\). Together with the result in Lemma A.9, we obtain
\[\hat{\mathcal{L}}_{N}^{K}\left(\hat{\gamma}\right)=\frac{1}{\sigma_{N}^{2}N \left(N-1\right)}\sum_{i\neq m}K_{\sigma_{N},\gamma}\left(V_{im}\left(\beta \right)\right)Y_{im(1,1)}\text{sgn}\left(W_{im}^{\prime}\gamma\right)+o_{P} \left(N^{-1/2}\right).\]
Clearly
\[\hat{\mathcal{L}}_{N}^{K}\left(\hat{\gamma}\right)\overset{P}{\to}\bar{ \mathcal{L}}\left(\gamma\right)= f_{V_{im}\left(\beta\right)}\left(0\right)\mathbb{E}\left[Y_{ im(1,1)}\text{sgn}\left(W_{im}^{\prime}\gamma\right)\left|V_{im}\left(\beta \right)=0\right].\]
Some standard calculation of a second-order U-statistic and the results in Lemma A.10 imply
\[\hat{\mathcal{L}}_{N}^{K}\left(\hat{\gamma}\right)-\bar{\mathcal{L}} \left(\gamma\right)=\frac{1}{\sigma_{N}^{2}N\left(N-1\right)}\sum_{i\neq m}K_{ \sigma_{N},\gamma}\left(V_{im}\left(\beta\right)\right)Y_{im(1,1)}\mathrm{sgn }\left(W_{im}^{\prime}\gamma\right)-\bar{\mathcal{L}}\left(\gamma\right)+o_{P }\left(N^{-1/2}\right)\] \[= \frac{2}{N}\sum_{i=1}^{N}\left\{\mathbb{E}\left[\sigma_{N}^{-2}K_ {\sigma_{N},\gamma}\left(V_{im}\left(\beta\right)\right)Y_{im(1,1)}\mathrm{ sgn}\left(W_{im}^{\prime}\gamma\right)\left|Z_{i}\right]-\mathbb{E}\left[\sigma_{N}^{-2}K_ {\sigma_{N},\gamma}\left(V_{im}\left(\beta\right)\right)Y_{im(1,1)}\mathrm{ sgn}\left(W_{im}^{\prime}\gamma\right)\right]\right\}\] \[+\mathbb{E}\left[\sigma_{N}^{-2}K_{\sigma_{N},\gamma}\left(V_{im }\left(\beta\right)\right)Y_{im(1,1)}\mathrm{sgn}\left(W_{im}^{\prime}\gamma \right)\right]-\bar{\mathcal{L}}\left(\gamma\right)+o_{P}\left(N^{-1/2}\right)\] \[= \frac{2}{N}\sum_{i=1}^{N}\left\{\mathbb{E}\left[Y_{im(1,1)} \mathrm{sgn}\left(W_{im}^{\prime}\gamma\right)\left|Z_{i},V_{im}\left(\beta \right)=0\right]-\mathbb{E}\left[Y_{im(1,1)}\mathrm{sgn}\left(W_{im}^{\prime} \gamma\right)\left|V_{im}\left(\beta\right)=0\right]\right\}\] \[+o_{P}\left(N^{-1/2}\right).\]
Invoking the Central Limit Theorem on the leading term of \(\hat{\mathcal{L}}_{N}^{K}\left(\hat{\gamma}\right)-\bar{\mathcal{L}}\left(\gamma\right)\) in the above yields
\[\sqrt{N}\left(\hat{\mathcal{L}}_{N}^{K}\left(\hat{\gamma}\right)-\bar{\mathcal{ L}}\left(\gamma\right)\right)\overset{d}{\rightarrow}N\left(0,\Delta\right).\]
## Appendix B Proofs for the Panel Data Model
### Proof of Theorem 3.2
In this section, we provide the proof of Theorem 3.2, applying the asymptotic theory developed in Seo and Otsu (2018). Before the main proof, we first present two supporting lemmas, Lemmas B.1 and B.2, whose proofs are relegated to Appendix F. The outline of the proof process is as follows. Lemma B.1 verifies the technical conditions in Seo and Otsu (2018). Lemma B.2 obtains technical terms for the final asymptotics. Then we apply the results in Seo and Otsu (2018) and get the asymptotics of our estimators in the proof of Theorem 3.2.
**Lemma B.1**.: _Suppose Assumptions P1-P9 hold. Then \(\phi_{Ni}\left(b\right)\) and \(\varphi_{Ni}\left(r\right)\) satisfy Assumption M in Seo and Otsu (2018)._
**Lemma B.2**.: _Suppose Assumptions P1-P9 hold. Then_
\[\lim_{N\rightarrow\infty}(Nh_{N}^{k_{1}+k_{2}})^{2/3}\mathbb{E}[\phi_{Ni}(\beta +\rho\left(Nh_{N}^{k_{1}+k_{2}}\right)^{-1/3})]=\frac{1}{2}\rho^{\prime} \mathbb{V}\rho,\]
\[\lim_{N\rightarrow\infty}(N\sigma_{N}^{2k_{1}})^{2/3}\mathbb{E}[\varphi_{Ni}( \gamma+\delta(N\sigma_{N}^{2k_{1}})^{-1/3})]=\frac{1}{2}\delta^{\prime} \mathbb{W}\delta,\]
\[\lim_{N\rightarrow\infty}(Nh_{N}^{k_{1}+k_{2}})^{1/3}\mathbb{E}[h_{N}^{k_{1} +k_{2}}\phi_{Ni}(\beta+\rho_{1}(Nh_{N}^{k_{1}+k_{2}})^{-1/3})\phi_{Ni}(\beta+ \rho_{2}(Nh_{N}^{k_{1}+k_{2}})^{-1/3})]=\mathbb{H}_{1}\left(\rho_{1},\rho_{2} \right),\]
\[\lim_{N\rightarrow\infty}(N\sigma_{N}^{2k_{1}})^{1/3}\mathbb{E}[\sigma_{N}^{2k_{1}} \varphi_{Ni}(\gamma+\delta_{1}(N\sigma_{N}^{2k_{1}})^{-1/3})\varphi_{Ni}(\gamma+ \delta_{2}(N\sigma_{N}^{2k_{1}})^{-1/3})]=\mathbb{H}_{2}\left(\delta_{1}, \delta_{2}\right),\]
_where \(\rho,\)\(\rho_{1},\) and \(\rho_{2}\) are \(k_{1}\times 1\) vectors, \(\delta,\)\(\delta_{1},\) and \(\delta_{2}\) are \(k_{2}\times 1\) vectors, \(\mathbb{V}\) is a \(k_{1}\times\)\(k_{1}\) matrix defined as_
\[\mathbb{V}= -\sum_{t>s}\sum_{d\in\mathcal{D}}\left\{\int 1\left[x^{\prime} \beta=0\right]\left(\frac{\partial\kappa_{dts}^{(1)}\left(x\right)^{\prime}}{ \partial x}\beta\right)f_{X_{1ts}|\left\{X_{2ts}=0,W_{ts}=0\right\}}\left(x \right)xx^{\prime}\mathrm{d}\sigma_{0}^{(1)}\right.\] (B.1) \[\left.+\int 1\left[x^{\prime}\beta=0\right]\left(\frac{ \partial\kappa_{dts}^{(2)}\left(x\right)^{\prime}}{\partial x}\beta\right)f_ {X_{2ts}|\left\{X_{1ts}=0,W_{ts}=0\right\}}\left(x\right)xx^{\prime}\mathrm{d} \sigma_{0}^{(2)}\right\},\]
\(\mathbb{W}\) _is a \(k_{2}\times\)\(k_{2}\) matrix defined as_
\[\mathbb{W}=-\sum_{t>s}\int 1\left[w^{\prime}\gamma=0\right]\left(\frac{ \partial\kappa_{(1,1)ts}^{(3)}\left(w\right)^{\prime}}{\partial w}\gamma \right)f_{W_{ts}|\left\{X_{1ts}=0,X_{2ts}=0\right\}}\left(w\right)ww^{\prime} \mathrm{d}\sigma_{0}^{(3)},\] (B.2)
_and \(\mathbb{H}_{1}\) and \(\mathbb{H}_{2}\) are written respectively as_
\[\mathbb{H}_{1}\left(\rho_{1},\rho_{2}\right)\] (B.3) \[= \frac{1}{2}\sum_{t>s}\sum_{d\in\mathcal{D}}\left\{\int\kappa_{dts} ^{(4)}\left(\bar{x}\right)\left[\left|\bar{x}^{\prime}\rho_{1}\right|+\left| \bar{x}^{\prime}\rho_{2}\right|-\left|\bar{x}^{\prime}\left(\rho_{1}-\rho_{2} \right)\right|\right]f_{X_{1ts}|\left\{X_{2ts}=0,W_{ts}=0\right\}}\left(0,\bar{x }\right)\mathrm{d}\bar{x}f_{X_{2ts},W_{ts}}\left(0,0\right)\right.\] \[+\int\kappa_{dts}^{(5)}\left(\bar{x}\right)\left[\left|\bar{x}^{ \prime}\rho_{1}\right|+\left|\bar{x}^{\prime}\rho_{2}\right|-\left|\bar{x}^{ \prime}\left(\rho_{1}-\rho_{2}\right)\right|\right]f_{X_{2ts}|\left\{X_{1ts}=0,W_{ts}=0\right\}}\left(0,\bar{x}\right)\mathrm{d}\bar{x}f_{X_{1ts},W_{ts}} \left(0,0\right)\right\}\bar{K}_{2}^{k_{1}+k_{2}},\]
_and_
\[\mathbb{H}_{2}\left(\delta_{1},\delta_{2}\right)= \frac{1}{2}\sum_{t>s}\int\kappa_{(1,1)ts}^{(6)}\left(\bar{w} \right)\left[\left|\bar{w}^{\prime}\delta_{1}\right|+\left|\bar{w}^{\prime} \delta_{2}\right|-\left|\bar{w}^{\prime}\left(\delta_{1}-\delta_{2}\right) \right|\right]\] (B.4) \[\cdot f_{W_{ts}|\left\{X_{1ts}=0,X_{2ts}=0\right\}}\left(0,\bar{w }\right)\mathrm{d}\bar{w}f_{X_{1ts},X_{2ts}}\left(0,0\right)\bar{K}_{2}^{2k_{1}}.\]
_Technical terms that appear in \(\mathbb{V}\), \(\mathbb{W}\), \(\mathbb{H}_{1}\), and \(\mathbb{H}_{2}\) are defined as follows. \(\sigma_{0}^{(1)}\), \(\sigma_{0}^{(2)}\), and \(\sigma_{0}^{(3)}\) are the surface measures of \(\{X_{1ts}^{\prime}\beta=0\}\), \(\{X_{2ts}^{\prime}\beta=0\}\), and \(\{W_{ts}^{\prime}\gamma=0\}\), respectively. \(\bar{K}_{2}\equiv\int K\left(u\right)^{2}\mathrm{d}u\). We decompose \(X_{1ts}\), \(X_{2ts}\), and \(W_{ts}\) into \(X_{jts}=a\beta+\bar{X}_{jts}\), \(j=1,2\), and \(W_{ts}=a\gamma+\bar{W}_{ts}\), where \(\bar{X}_{jts}\), \(j=1,2\), are orthogonal to \(\beta\), and \(\bar{W}_{ts}\) is orthogonal to \(\gamma\). The density for \(X_{1ts}\) is written as \(f_{X_{1ts}}\left(a,\bar{X}_{1ts}\right)\) under this decomposition. Densities for \(X_{2ts}\) and \(W_{ts}\) are written similarly. Further,_
\[\kappa_{dts}^{(1)}\left(x\right)\equiv\mathbb{E}[Y_{idst}\left(-1 \right)^{d_{1}}|X_{i1ts}=x,X_{i2ts}=0,W_{its}=0],\] \[\kappa_{dts}^{(2)}\left(x\right)\equiv\mathbb{E}[Y_{idst}\left(- 1\right)^{d_{2}}|X_{i2ts}=x,X_{i1ts}=0,W_{its}=0],\] \[\kappa_{(1,1)ts}^{(3)}\left(w\right)\equiv\mathbb{E}\left[Y_{i(1, 1)ts}|W_{its}=w,X_{i1ts}=0,X_{i2ts}=0\right],\]
\[\kappa_{dts}^{(4)}\left(x\right)\equiv\mathbb{E}\left[\left|Y_{idst}\right|\,\middle|\,X_{i1ts}=x,X_{i2ts}=0,W_{its}=0\right],\] \[\kappa_{dts}^{(5)}\left(x\right)\equiv\mathbb{E}\left[\left|Y_{idst}\right|\,\middle|\,X_{i2ts}=x,X_{i1ts}=0,W_{its}=0\right],\]
_and_
\[\kappa_{(1,1)ts}^{(6)}\left(w\right)\equiv\mathbb{E}\left[\left|Y_{i(1,1)ts}\right|\,\middle|\,W_{its}=w,X_{i1ts}=0,X_{i2ts}=0\right].\]
Proof of Theorem 3.2.: Lemma B.1 verifies the technical conditions for applying the results in Seo and Otsu (2018), and shows that \(\phi_{Ni}\left(b\right)\) and \(\varphi_{Ni}\left(r\right)\) are _manageable_ in the sense of Kim and Pollard (1990). By Assumption P9, Lemma 1 in Seo and Otsu (2018) and its subsequent analysis, we have
\[\hat{\beta}-\beta=O_{P}((Nh_{N}^{k_{1}+k_{2}})^{-1/3})\text{ and }\hat{\gamma}- \gamma=O_{P}((N\sigma_{N}^{2k_{1}})^{-1/3}).\]
Notice that \(\hat{\beta}\) can be equivalently obtained from
\[\arg\max_{b\in\mathcal{B}}(Nh_{N}^{k_{1}+k_{2}})^{2/3}\cdot N^{-1}\sum_{i=1}^{ N}\phi_{Ni}(\beta+(Nh_{N}^{k_{1}+k_{2}})^{-1/3}\{(Nh_{N}^{k_{1}+k_{2}})^{1/3}(b- \beta)\}).\]
We get the asymptotics of \(\hat{\beta}\) if we can get the asymptotics of
\[\arg\max_{\rho\in\mathbb{R}^{k_{1}}}(Nh_{N}^{k_{1}+k_{2}})^{2/3}\cdot N^{-1} \sum_{i=1}^{N}\phi_{Ni}(\beta+(Nh_{N}^{k_{1}+k_{2}})^{-1/3}\rho).\]
Since \(\phi_{Ni}\left(b\right)\) is _manageable_ in the sense of Kim and Pollard (1990), by Theorem 1 in Seo and Otsu (2018) and Lemma B.2, we have the uniform convergence of the above stochastic process and
\[(Nh_{N}^{k_{1}+k_{2}})^{2/3}\cdot N^{-1}\sum_{i=1}^{N}\phi_{Ni}(\beta+(Nh_{N}^ {k_{1}+k_{2}})^{-1/3}\rho)\rightsquigarrow\mathcal{Z}_{1}\left(\rho\right),\]
where \(\mathcal{Z}_{1}\left(\rho\right)\) is a Gaussian process with continuous sample path, expected value \(\frac{1}{2}\rho^{\prime}\mathbb{V}\rho\), and covariance kernel \(\mathbb{H}_{1}\left(\rho_{1},\rho_{2}\right)\). Apply the CMT to obtain
\[(Nh_{N}^{k_{1}+k_{2}})^{1/3}(\hat{\beta}-\beta)\overset{d}{\to}\arg\max_{\rho }\mathcal{Z}_{1}\left(\rho\right).\]
Apply similar arguments to \(\hat{\gamma}\). By Theorem 1 in Seo and Otsu (2018) and Lemma B.2, we get
\[(N\sigma_{N}^{2k_{1}})^{1/3}\left(\hat{\gamma}-\gamma\right)\overset{d}{\to} \arg\max_{\delta\in\mathbb{R}^{k_{2}}}\mathcal{Z}_{2}\left(\delta\right),\]
where \(\mathcal{Z}_{2}\left(\delta\right)\) is a Gaussian process with continuous sample path, expected value \(\frac{1}{2}\delta^{\prime}\mathbb{W}\delta\), and covariance kernel \(\mathbb{H}_{2}\left(\delta_{1},\delta_{2}\right)\).
### Proof of Theorem 3.3
Proof of Theorem 3.3.: We show that the numerical bootstrap works for \(\beta\). The discussion on \(\gamma\) is omitted due to the similarity.
The key is to show that the results in Hong and Li (2020) hold after modifying condition (vi) in their Theorem 4.1 (using their notation); for example, for \(\beta\), that
\[\Sigma_{1/2}\left(\rho_{1},\rho_{2}\right)=\lim_{N\to\infty}(Nh_{N}^{k_{1}+k_{ 2}})^{1/3}\mathbb{E}[h_{N}^{k_{1}+k_{2}}\phi_{Ni}(\beta+\rho_{1}(Nh_{N}^{k_{1} +k_{2}})^{-1/3})\phi_{Ni}(\beta+\rho_{2}(Nh_{N}^{k_{1}+k_{2}})^{-1/3})]\]
exists for all \(\rho_{1},\rho_{2}\in\mathbb{R}^{k_{1}}\). This is indeed the case as shown by our Lemma B.2 in Appendix B. We now provide the details of the proof.
By some standard calculations as in Seo and Otsu (2018), we can show that the convergence rate of \(\hat{\beta}^{*}\) is \((\varepsilon_{N1}^{-1}h_{N}^{k_{1}+k_{2}})^{-1/3}\). Thus, if we manage to derive the limit of
\[(\varepsilon_{N1}^{-1}h_{N}^{k_{1}+k_{2}})^{2/3}\cdot N^{-1}\sum_ {i=1}^{N}\phi_{Ni}(\beta+\rho(\varepsilon_{N1}^{-1}h_{N}^{k_{1}+k_{2}})^{-1/3 })+(\varepsilon_{N1}^{-1}h_{N}^{k_{1}+k_{2}})^{2/3}\cdot(N\varepsilon_{N1})^{ 1/2}\] (B.5) \[\cdot[N^{-1}\sum_{i=1}^{N}\phi_{Ni}^{*}(\beta+\rho(\varepsilon_{N1 }^{-1}h_{N}^{k_{1}+k_{2}})^{-1/3})-N^{-1}\sum_{i=1}^{N}\phi_{Ni}(\beta+\rho( \varepsilon_{N1}^{-1}h_{N}^{k_{1}+k_{2}})^{-1/3})],\]
then the limiting distribution of \(\hat{\beta}^{*}\) can be established.
For the first term in equation (B.5),
\[(\varepsilon_{N1}^{-1}h_{N}^{k_{1}+k_{2}})^{2/3}\cdot N^{-1}\sum _{i=1}^{N}\phi_{Ni}(\beta+\rho(\varepsilon_{N1}^{-1}h_{N}^{k_{1}+k_{2}})^{-1/3})\] (B.6) \[= (\varepsilon_{N1}^{-1}h_{N}^{k_{1}+k_{2}})^{2/3}\mathbb{E}[\phi_{ Ni}(\beta+\rho(\varepsilon_{N1}^{-1}h_{N}^{k_{1}+k_{2}})^{-1/3})]\] \[+(\varepsilon_{N1}^{-1}h_{N}^{k_{1}+k_{2}})^{2/3}\cdot N^{-1} \sum_{i=1}^{N}[\phi_{Ni}(\beta+\rho(\varepsilon_{N1}^{-1}h_{N}^{k_{1}+k_{2}})^ {-1/3})-\mathbb{E}[\phi_{Ni}(\beta+\rho(\varepsilon_{N1}^{-1}h_{N}^{k_{1}+k_{2} })^{-1/3})]]\] \[= \frac{1}{2}\rho^{\prime}\mathbb{V}\rho+O_{P}((\varepsilon_{N1}^{ -1}h_{N}^{k_{1}+k_{2}})^{2/3}\cdot N^{-1/2}\cdot h_{N}^{-(k_{1}+k_{2})/2}( \varepsilon_{N1}^{-1}h_{N}^{k_{1}+k_{2}})^{-1/6})=\frac{1}{2}\rho^{\prime} \mathbb{V}\rho+o_{P}(1),\]
where the second equality follows by Lemma B.2, the assumption on \(\varepsilon_{N1}\) such that the bias term \(h_{N}^{2}\) is a small order term compared to \((\varepsilon_{N1}^{-1}h_{N}^{k_{1}+k_{2}})^{2/3}\), and Markov inequality, and the last equality holds by \(N\varepsilon_{N1}\to\infty\).
The second term in equation (B.5) is a sample average of mean \(0\) series under \(P^{*}\) multiplied by
\((\varepsilon_{N1}^{-1}h_{N}^{k_{1}+k_{2}})^{1/3}(N\varepsilon_{N1})^{1/2}.\) The covariance kernel of the second term is
\[(\varepsilon_{N1}^{-1}h_{N}^{k_{1}+k_{2}})^{4/3}\cdot(N\varepsilon_{ N1})\cdot N^{-1}\cdot\mathbb{E}^{*}[\phi_{Ni}^{*}(\beta+\rho_{1}(\varepsilon_{N1}^{-1 }h_{N}^{k_{1}+k_{2}})^{-1/3})\phi_{Ni}^{*}(\beta+\rho_{2}(\varepsilon_{N1}^{-1} h_{N}^{k_{1}+k_{2}})^{-1/3})]\] \[= (\varepsilon_{N1}^{-1}h_{N}^{k_{1}+k_{2}})^{1/3}\mathbb{E}^{*}[h_ {N}^{k_{1}+k_{2}}\phi_{Ni}^{*}(\beta+\rho_{1}(\varepsilon_{N1}^{-1}h_{N}^{k_{1} +k_{2}})^{-1/3})\phi_{Ni}^{*}(\beta+\rho_{2}(\varepsilon_{N1}^{-1}h_{N}^{k_{1} +k_{2}})^{-1/3})]\] \[= (\varepsilon_{N1}^{-1}h_{N}^{k_{1}+k_{2}})^{1/3}\mathbb{E}[h_{N}^ {k_{1}+k_{2}}\phi_{Ni}(\beta+\rho_{1}(\varepsilon_{N1}^{-1}h_{N}^{k_{1}+k_{2} })^{-1/3})\phi_{Ni}(\beta+\rho_{2}(\varepsilon_{N1}^{-1}h_{N}^{k_{1}+k_{2}})^{- 1/3})]+o_{P}(1)\] \[= \mathbb{H}_{1}(\rho_{1},\rho_{2})+o_{P}(1),\] (B.7)
by the i.i.d. sampling and Lemma B.2.
Equations (B.6) and (B.7) imply that the term in equation (B.5) converges to \(\mathcal{Z}_{1}^{*}\left(\rho\right)\) with mean \(\frac{1}{2}\rho^{\prime}\mathbb{V}\rho\) and covariance kernel \(\mathbb{H}_{1}\left(\rho_{1},\rho_{2}\right)\). Thus, \(\mathcal{Z}_{1}^{*}\left(\rho\right)\) is an independent copy of \(\mathcal{Z}_{1}\left(\rho\right)\) and
\[(\varepsilon_{N1}^{-1}h_{N}^{k_{1}+k_{2}})^{1/3}(\hat{\beta}^{*}-\beta) \stackrel{{ d}}{{\to}}\arg\max_{\rho\in\mathbb{R}^{k_{1}}} \mathcal{Z}_{1}^{*}\left(\rho\right).\]
Finally, note that, by \(N\varepsilon_{N1}\to\infty\) and
\[(\varepsilon_{N1}^{-1}h_{N}^{k_{1}+k_{2}})^{1/3}(\hat{\beta}^{*}- \hat{\beta}) =(\varepsilon_{N1}^{-1}h_{N}^{k_{1}+k_{2}})^{1/3}(\hat{\beta}^{*} -\beta)+(\varepsilon_{N1}^{-1}h_{N}^{k_{1}+k_{2}})^{1/3}(\hat{\beta}-\beta)\] \[=(\varepsilon_{N1}^{-1}h_{N}^{k_{1}+k_{2}})^{1/3}(\hat{\beta}^{*} -\beta)+O_{P}((\varepsilon_{N1}^{-1}h_{N}^{k_{1}+k_{2}})^{1/3}\cdot(Nh_{N}^{k_ {1}+k_{2}})^{-1/3})\] \[=(\varepsilon_{N1}^{-1}h_{N}^{k_{1}+k_{2}})^{1/3}(\hat{\beta}^{*} -\beta)+o_{P}(1),\]
we have
\[(\varepsilon_{N1}^{-1}h_{N}^{k_{1}+k_{2}})^{1/3}(\hat{\beta}^{*}-\hat{\beta}) \stackrel{{ d}}{{\to}}\arg\max_{\rho\in\mathbb{R}^{k_{1}}} \mathcal{Z}_{1}^{*}\left(\rho\right).\]
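In practice, the perturbed criterion in (B.5) that defines \(\hat{\beta}^{*}\) is straightforward to code once \(\phi_{Ni}\) can be evaluated. A minimal Python sketch (not from the paper), where `phi(b, data)` is an assumed user-supplied routine returning the \(N\) values \(\phi_{Ni}(b)\) and `data_star` is a resample of `data` drawn i.i.d. with replacement:

```python
import numpy as np

def numerical_bootstrap_criterion(b, phi, data, data_star, eps_N1):
    """Perturbed objective of the numerical bootstrap, cf. equation (B.5).

    The perturbation scale (N * eps_N1)^(1/2) is what slows the bootstrap
    estimator down to the rate (eps_N1^(-1) h_N^(k1+k2))^(-1/3).
    """
    N = len(data)
    avg = phi(b, data).mean()            # N^(-1) sum_i phi_Ni(b)
    avg_star = phi(b, data_star).mean()  # bootstrap counterpart
    return avg + np.sqrt(N * eps_N1) * (avg_star - avg)
```

Maximizing this criterion over \(b\) for many independent resamples and taking empirical quantiles of \(\hat{\beta}^{*}-\hat{\beta}\) yields the confidence intervals whose coverage is examined in the Monte Carlo section.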
### Proof of Theorem 3.4
Proof of Theorem 3.4.: Under Assumptions P1-P9, Lemma B.1 shows that our estimator satisfies the technical conditions needed in Seo and Otsu (2018). Applying Lemma 1 in Seo and Otsu (2018) to \(N^{-1}\mathcal{L}_{N,\gamma}^{P,K}(r)\), we get
\[N^{-1}\mathcal{L}_{N,\gamma}^{P,K}(r)-\bar{\mathcal{L}}^{P}\left(r\right)=N^{-1 }\mathcal{L}_{N,\gamma}^{P,K}(\gamma)-\bar{\mathcal{L}}^{P}\left(\gamma\right) +\varepsilon\left\|r-\gamma\right\|^{2}+O_{P}\left(\sigma_{N}^{2}\right)+O_{ P}\left(\left(N\sigma_{N}^{2k_{1}}\right)^{-2/3}\right),\] (B.8)
for any small \(\varepsilon>0\), where we use some results in Lemma B.2.
Some standard calculations as in the proof of Lemma B.2 yield
\[\bar{\mathcal{L}}^{P}\left(r\right)-\bar{\mathcal{L}}^{P}\left(\gamma\right)= \frac{1}{2}\left(r-\gamma\right)^{\prime}\mathbb{W}\left(r-\gamma\right)+o_{P} \left(\left\|r-\gamma\right\|^{2}\right),\] (B.9)
for \(r\) in a small neighborhood of \(\gamma\).
Note that we assume \(\left(N\sigma_{N}^{2k_{1}}\right)^{2/3}\sigma_{N}^{2}\to 0\) in P9, and \(\mathbb{W}\) is finite. Using these two conditions and with the results in (B.8) and (B.9), we obtain
\[N^{-1}\mathcal{L}_{N,\gamma}^{P,K}(r)=N^{-1}\mathcal{L}_{N,\gamma}^{P,K}( \gamma)+O_{P}\left(\left\|r-\gamma\right\|^{2}\right)+O_{P}\left(\left(N\sigma _{N}^{2k_{1}}\right)^{-2/3}\right),\]
which implies
\[N^{-1}\mathcal{L}_{N,\gamma}^{P,K}(\hat{\gamma})-N^{-1}\mathcal{L}_{N,\gamma}^ {P,K}(\gamma)=O_{P}\left(\left(N\sigma_{N}^{2k_{1}}\right)^{-2/3}\right),\]
because \(\hat{\gamma}-\gamma=O_{P}\left(\left(N\sigma_{N}^{2k_{1}}\right)^{-1/3} \right).\) That means:
\[\sqrt{N\sigma_{N}^{2k_{1}}}\left(N^{-1}\mathcal{L}_{N,\gamma}^{P,K}(\hat{ \gamma})-N^{-1}\mathcal{L}_{N,\gamma}^{P,K}(\gamma)\right)=o_{P}\left(1 \right).\] (B.10)
Lindeberg-Feller Central Limit Theorem implies that
\[\sqrt{N\sigma_{N}^{2k_{1}}}\left(N^{-1}\mathcal{L}_{N,\gamma}^{P,K}(\gamma)- \bar{\mathcal{L}}^{P}\left(\gamma\right)\right)\overset{d}{\to}N\left(0, \Delta^{P}\right).\]
Using (B.10), we get the desired result:
\[\sqrt{N\sigma_{N}^{2k_{1}}}\left(N^{-1}\mathcal{L}_{N,\gamma}^{P,K}(\hat{ \gamma})-\bar{\mathcal{L}}^{P}\left(\gamma\right)\right)\overset{d}{\to}N \left(0,\Delta^{P}\right).\]
## Appendix C Tables

\begin{table}
\begin{tabular}{l c c c c}
\hline
 & \multicolumn{2}{c}{\(\beta_{2}\)} & \multicolumn{2}{c}{\(\gamma_{2}\)} \\
\cline{2-5}
 & COVERAGE & LENGTH & COVERAGE & LENGTH \\
\hline
\(N=250\) & 0.982 & 0.434 & 0.959 & 0.789 \\
\(N=500\) & 0.977 & 0.329 & 0.960 & 0.498 \\
\(N=1000\) & 0.955 & 0.234 & 0.952 & 0.341 \\
\hline
\end{tabular}
\end{table}
Table 1B: Design 1, Nonparametric Bootstrap
\begin{table}
\begin{tabular}{l c c c c}
\hline
 & \multicolumn{2}{c}{\(\beta_{2}\)} & \multicolumn{2}{c}{\(\gamma_{2}\)} \\
\cline{2-5}
 & COVERAGE & LENGTH & COVERAGE & LENGTH \\
\hline
\(N=1000\) & 0.965 & 0.847 & 0.847 & 1.068 \\
\(N=2500\) & 0.968 & 0.751 & 0.934 & 0.992 \\
\(N=5000\) & 0.953 & 0.679 & 0.938 & 0.931 \\
\hline
\end{tabular}
\end{table}
Table 3B: Design 3, Numerical Bootstrap
\begin{table}
\begin{tabular}{l c c c c c c c c}
\hline
 & \multicolumn{4}{c}{\(\beta_{2}\)} & \multicolumn{4}{c}{\(\gamma_{2}\)} \\
\cline{2-9}
 & MBIAS & RMSE & MBIAS & RMSE & MBIAS & RMSE & MBIAS & RMSE \\
\hline
\(N=250\) & -0.028 & 0.131 & -0.011 & 0.090 & -0.035 & 0.182 & -0.001 & 0.102 \\
\(N=500\) & -0.019 & 0.099 & -0.010 & 0.069 & -0.022 & 0.126 & -0.009 & 0.091 \\
\(N=1000\) & -0.015 & 0.070 & -0.014 & 0.048 & -0.014 & 0.084 & -0.007 & 0.056 \\
\hline
\end{tabular}
\end{table}
Table 2A: Design 2, Performance of \(\hat{\beta}\) and \(\hat{\gamma}\)
## Appendix D Technical Details for Bootstrap Inference
In this section, we discuss the validity of the nonparametric bootstrap for the cross-sectional model (Section 2.3).
### Cross-Sectional Model
We provide an outline to show the validity of the nonparametric bootstrap for our estimators in the cross-sectional case. To fill in the gaps of the outline, one first needs to define a product probability space for both the original random series and the bootstrap series as in Wellner and Zhan (1996) (an application can be found in Abrevaya and Huang (2005)), define a suitable norm for functions, and then establish some uniform convergence results following Sherman (1993, 1994a,b) on this probability space with this norm. To apply Sherman's results, the objective functions need to be _manageable_ in the sense of Kim and Pollard (1990). This is indeed the case for our estimators. For example, equation (2.12) is a summation of
\[\mathcal{K}_{\sigma_{N}}(V_{im}(\hat{\beta}))Y_{im(1,1)}\text{sgn}\left(W^{ \prime}_{im}r\right)=\mathcal{K}_{\sigma_{N}}(V_{im}(\hat{\beta}))Y_{im(1,1)} \left(2\cdot 1\left[W^{\prime}_{im}r>0\right]-1\right).\]
The collection of indicator functions \(1\left[W^{\prime}_{im}r>0\right]\) for \(r\in\mathcal{R}\) is well known to be a Vapnik-Chervonenkis (VC) class. The kernel function \(\mathcal{K}\), when chosen to satisfy some mild conditions (e.g., uniformly bounded with compact support and continuously differentiable), is also _manageable_. The complication of the bandwidth can be handled in the same way as in Seo and Otsu (2018).
Now suppose we have handled the technical details mentioned above. In what follows, we present the proof outline.
To ease exposition, we assume \(\hat{\beta}\) and \(\hat{\gamma}\) are obtained from maximizing, respectively, \(\mathbb{L}_{N,\beta}^{K}\left(b\right)\) and \(\mathbb{L}_{N,\gamma}^{K}(r,\hat{\beta})\), which are defined as
\[\mathbb{L}_{N,\beta}^{K}\left(b\right)=\sum_{i=1}^{N-1}\sum_{m>i}\mathcal{K}_{ h_{N}}\left(X_{im}\right)\chi_{1}\left(Z_{i},Z_{m},b\right),\]
and
\[\mathbb{L}_{N,\gamma}^{K}(r,\hat{\beta})=\sum_{i=1}^{N-1}\sum_{m>i}\mathcal{K} _{\sigma_{N}}(X_{im}^{\prime}\hat{\beta})\chi_{2}\left(Z_{i},Z_{m},r\right).\]
We assume that \(X_{im}\) is a sub-vector of \(Z_{i}-Z_{m}\). Note that the objective functions here are different from the ones we used. But if we manage to show the consistency of the bootstrap here, it can be easily extended to our original estimators because they share the same structure.
We normalize such that \(\chi_{1}\left(Z_{i},Z_{m},\beta\right)=0\) and \(\chi_{2}\left(Z_{i},Z_{m},\gamma\right)=0\) for all \(Z\).11 Assume that \(\chi_{1}\left(Z_{i},Z_{m},b\right)=\chi_{1}\left(Z_{m},Z_{i},b\right)\), \(\beta\) uniquely maximizes \(\mathbb{E}\left[\chi_{1}\left(Z_{i},Z_{m},b\right)|X_{im}=0\right]\), and \(\mathcal{K}_{h_{N}}\) is a \(p\)-th order kernel with \(h_{N}\) that satisfies \(Nh_{N}^{p}=o\left(1\right)\). Similarly, we assume that \(\chi_{2}\left(Z_{i},Z_{m},r\right)=\chi_{2}\left(Z_{m},Z_{i},r\right)\), \(\gamma\) uniquely maximizes \(\mathbb{E}\left[\chi_{2}\left(Z_{i},Z_{m},r\right)|X_{im}^{\prime}\beta=0\right]\), and \(\mathcal{K}_{\sigma_{N}}\) is a \(p\)-th order kernel with \(\sigma_{N}\) that satisfies \(N\sigma_{N}^{p}=o\left(1\right)\). We similarly denote \(\mathbb{L}_{N,\beta}^{K\ast}\left(b\right)\) and \(\mathbb{L}_{N,\gamma}^{K\ast}(r,\hat{\beta}^{\ast})\) as the corresponding objective functions using the bootstrap series.
Footnote 11: Taking equation (2.11) as an example, this can be done by replacing \(\operatorname{sgn}\left(X^{\prime}b\right)\) with \(\operatorname{sgn}\left(X^{\prime}b\right)-\operatorname{sgn}\left(X^{\prime }\beta\right)\) in the objective function.
Before establishing the consistency of the bootstrap, it is helpful to recap how we derive the asymptotics for \((\hat{\beta},\hat{\gamma})\). \(\mathbb{L}_{N,\beta}^{K}\left(b\right)\) can be decomposed into
\[2\left[N\left(N-1\right)\right]^{-1}\mathbb{L}_{N,\beta}^{K}\left(b\right)\] (D.1) \[= \mathbb{E}\left[\mathcal{K}_{h_{N}}\left(X_{im}\right)\chi_{1} \left(Z_{i},Z_{m},b\right)\right]\] \[+\frac{2}{N}\sum_{i=1}^{N}\left\{\mathbb{E}\left[\mathcal{K}_{h_{ N}}\left(X_{im}\right)\chi_{1}\left(Z_{i},Z_{m},b\right)|Z_{i}\right]-\mathbb{E} \left[\mathcal{K}_{h_{N}}\left(X_{im}\right)\chi_{1}\left(Z_{i},Z_{m},b\right) \right]\right\}\] \[+\frac{1}{N\left(N-1\right)}\sum_{i=1}^{N}\sum_{m=1,m\neq i}^{N} \left\{\mathcal{K}_{h_{N}}\left(X_{im}\right)\chi_{1}\left(Z_{i},Z_{m},b\right) -\mathbb{E}\left[\mathcal{K}_{h_{N}}\left(X_{im}\right)\chi_{1}\left(Z_{i},Z_{ m},b\right)|Z_{i}\right]\right.\] \[\left.-\mathbb{E}\left[\mathcal{K}_{h_{N}}\left(X_{im}\right) \chi_{1}\left(Z_{i},Z_{m},b\right)|Z_{m}\right]+\mathbb{E}\left[\mathcal{K}_ {h_{N}}\left(X_{im}\right)\chi_{1}\left(Z_{i},Z_{m},b\right)\right]\right\}\] \[\equiv \mathcal{T}_{N1}+\mathcal{T}_{N2}+\mathcal{T}_{N3}.\]
Since \(\chi_{1}\left(Z_{i},Z_{m},\beta\right)=0\) and \(\beta\) uniquely maximizes \(\mathbb{E}\left[\chi_{1}\left(Z_{i},Z_{m},b\right)\left|X_{im}=0\right],\) we have
\[\mathcal{T}_{N1} =\mathbb{E}\left[\chi_{1}\left(Z_{i},Z_{m},b\right)|X_{im}=0\right]+O_{P}\left(h_{N}^{p}\right)\] \[=\frac{1}{2}\left(b-\beta\right)^{\prime}\mathbb{V}_{\beta}\left(b-\beta\right)+O_{P}\left(\left\|b-\beta\right\|^{3}\right)+O_{P}\left(h_{N}^{p}\right),\]
where the \(O_{P}(h_{N}^{p})\) is the bias term from matching and \(\mathbb{V}_{\beta}\) is a negative definite matrix. For the second term,
\[\mathcal{T}_{N2} =\frac{2}{N}\sum_{i=1}^{N}\left\{\mathbb{E}\left[\chi_{1}\left(Z _{i},Z_{m},b\right)\left|Z_{i},X_{im}=0\right]f_{X_{m}}\left(X_{i}\right)- \mathbb{E}\left[\chi_{1}\left(Z_{i},Z_{m},b\right)\left|X_{im}=0\right]\right\} +O_{P}\left(h_{N}^{p}\right)\right.\] \[=2\left(b-\beta\right)^{\prime}\frac{W_{N1}}{\sqrt{N}}+o_{P} \left(\left\|b-\beta\right\|^{2}\right)+O_{P}\left(h_{N}^{p}\right),\] (D.2)
where \(W_{N1}\) converges to a normal distribution. For the last term,
\[\mathcal{T}_{N3}=o_{P}\left(N^{-1}\right),\]
by similar arguments in Sherman (1993). Put all these results together to obtain
\[2\left[N\left(N-1\right)\right]^{-1}\mathbb{L}_{N,\beta}^{K}\left(b\right)= \frac{1}{2}\left(b-\beta\right)^{\prime}\mathbb{V}_{\beta}\left(b-\beta \right)+2\left(b-\beta\right)^{\prime}\frac{W_{N1}}{\sqrt{N}}+o_{P}\left( \left\|b-\beta\right\|^{2}\right)+o_{P}\left(N^{-1}\right).\]
\(\hat{\beta}-\beta\) can be shown to be \(O_{P}(N^{-1/2})\). Using the rate result and invoking the CMT, we have
\[\sqrt{N}(\hat{\beta}-\beta)=-2\mathbb{V}_{\beta}^{-1}W_{N1}+o_{P}\left(1 \right).\] (D.3)
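The decomposition (D.1) drives every argument in this appendix. As a quick numerical illustration (not from the paper), the following Python snippet verifies the three-term split for a toy symmetric kernel \(g(z_{i},z_{m})=|z_{i}-z_{m}|\) with standard normal data, standing in for \(\mathcal{K}_{h_{N}}\left(X_{im}\right)\chi_{1}\left(Z_{i},Z_{m},b\right)\): the projection term is \(O_{P}(N^{-1/2})\) while the degenerate remainder is \(O_{P}(N^{-1})\).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N = 500
Z = rng.standard_normal(N)

# Second-order U-statistic with toy kernel g(z_i, z_m) = |z_i - z_m|
G = np.abs(Z[:, None] - Z[None, :])
np.fill_diagonal(G, 0.0)
U = G.sum() / (N * (N - 1))

theta = 2.0 / np.sqrt(np.pi)  # E|Z_i - Z_m| for i.i.d. standard normals
# Exact conditional mean: E[g | Z_i = z] = sqrt(2/pi) e^{-z^2/2} + z (2 Phi(z) - 1)
cond = np.sqrt(2.0 / np.pi) * np.exp(-Z ** 2 / 2.0) + Z * (2.0 * norm.cdf(Z) - 1.0)

T1 = theta                        # deterministic lead term
T2 = 2.0 * (cond - theta).mean()  # Hajek projection, O_P(N^{-1/2})
T3 = U - T1 - T2                  # degenerate remainder, O_P(N^{-1})
print(f"U = {U:.4f}, T2 = {T2:.5f}, T3 = {T3:.6f}")
```

Rerunning with larger `N` shows `T2` shrinking at roughly the \(N^{-1/2}\) rate and `T3` at the \(N^{-1}\) rate; in the proofs, the additional structure \(\chi_{1}\left(Z_{i},Z_{m},\beta\right)=0\) is what sharpens the remainder to \(o_{P}\left(N^{-1}\right)\) near \(\beta\).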
Let \(\mathbb{E}^{*}\) and \(P^{*}\) denote the expectation and the density for the bootstrap series, respectively. By definition, \(P^{*}\) puts \(N^{-1}\) probability on each of \(\left\{Z_{i}\right\}_{i=1}^{N}.\) We can do a similar decomposition of
\(\mathbb{L}_{N,\beta}^{K^{*}}\left(b\right)\) as for \(\mathbb{L}_{N,\beta}^{K}\left(b\right):\)
\[2\left[N\left(N-1\right)\right]^{-1}\mathbb{L}_{N,\beta}^{K^{*}} \left(b\right)\] \[= \left[N\left(N-1\right)\right]^{-1}\sum_{i=1}^{N}\sum_{m=1,m\neq i }^{N}\mathcal{K}_{h_{N}}\left(X_{im}^{*}\right)\chi_{1}\left(Z_{i}^{*},Z_{m}^{ *},b\right)\] \[= \mathbb{E}^{*}\left[\mathcal{K}_{h_{N}}\left(X_{im}^{*}\right) \chi_{1}\left(Z_{i}^{*},Z_{m}^{*},b\right)\right]\] \[+\frac{2}{N}\sum_{i=1}^{N}\left\{\mathbb{E}^{*}\left[\mathcal{K}_ {h_{N}}\left(X_{im}^{*}\right)\chi_{1}\left(Z_{i}^{*},Z_{m}^{*},b\right)|Z_{i }^{*}\right]-\mathbb{E}^{*}\left[\mathcal{K}_{h_{N}}\left(X_{im}^{*}\right) \chi_{1}\left(Z_{i}^{*},Z_{m}^{*},b\right)\right]\right\}\] \[+\frac{1}{N\left(N-1\right)}\sum_{i=1}^{N}\sum_{m=1,m\neq i}^{N} \left\{\mathcal{K}_{h_{N}}\left(X_{im}^{*}\right)\chi_{1}\left(Z_{i}^{*},Z_{m }^{*},b\right)-\mathbb{E}^{*}\left[\mathcal{K}_{h_{N}}\left(X_{im}^{*}\right) \chi_{1}\left(Z_{i}^{*},Z_{m}^{*},b\right)|Z_{i}^{*}\right]\right.\] \[\left.-\mathbb{E}^{*}\left[\mathcal{K}_{h_{N}}\left(X_{im}^{*} \right)\chi_{1}\left(Z_{i}^{*},Z_{m}^{*},b\right)|Z_{m}^{*}\right]+\mathbb{E} ^{*}\left[\mathcal{K}_{h_{N}}\left(X_{im}^{*}\right)\chi_{1}\left(Z_{i}^{*},Z _{m}^{*},b\right)\right]\right\}\] \[\equiv \mathcal{T}_{N1}^{*}+\mathcal{T}_{N2}^{*}+\mathcal{T}_{N3}^{*}.\] (D.4)
For \(\mathcal{T}_{N1}^{*}\), by the definition of \(\mathbb{E}^{*}\),
\[\mathcal{T}_{N1}^{*}= \mathbb{E}^{*}\left[\mathcal{K}_{h_{N}}\left(X_{im}^{*}\right) \chi_{1}\left(Z_{i}^{*},Z_{m}^{*},b\right)\right]\] \[= \frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{m=1}^{N}\mathcal{K}_{h_{N}} \left(X_{im}\right)\chi_{1}\left(Z_{i},Z_{m},b\right)\] \[= \frac{1}{2}\left(b-\beta\right)^{\prime}\mathbb{V}_{\beta}\left(b -\beta\right)+2\left(b-\beta\right)^{\prime}\frac{W_{N1}}{\sqrt{N}}+O_{P} \left(\|b-\beta\|^{3}\right)+o_{P}\left(N^{-1}\right),\] (D.5)
where the last line holds because the second line is the same as \(2\left[N\left(N-1\right)\right]^{-1}\mathbb{L}_{N,\beta}^{K}\left(b\right)\) plus a small order term.
Both \(\mathcal{T}_{N2}\) and \(\mathcal{T}_{N2}^{*}\) are sample averages of mean zero series. Moreover, for \(\|b-\beta\|=O(N^{-1/2})\), \(N\mathcal{T}_{N2}=2\sqrt{N}\left(b-\beta\right)^{\prime}W_{N1}+o_{P}\left(1\right)\) and \(W_{N1}\) converges to a normal distribution. Then, we can apply the results in Giné and Zinn (1990) that \(N\mathcal{T}_{N2}^{*}\) approximates the distribution of \(N\mathcal{T}_{N2}\) for \(\|b-\beta\|=O(N^{-1/2})\). That is, for \(\|b-\beta\|=O(N^{-1/2})\),
\[\mathcal{T}_{N2}^{*}=2\left(b-\beta\right)^{\prime}\frac{W_{N1}^{*}}{\sqrt{N}} +o_{P}\left(N^{-1}\right),\] (D.6)
where \(W_{N1}^{*}\) is an independent copy of \(W_{N1}\).
For \(\mathcal{T}_{N3}^{*}\), terms across \(i\) and \(m\) are uncorrelated with each other under \(P^{*}\). Therefore, \(\left[\mathbb{E}^{*}\left(\mathcal{T}_{N3}^{*2}\right)\right]^{1/2}\) is of the same order as
\[N^{-1}\left\{\mathbb{E}^{*}\left[\left[\mathcal{K}_{h_{N}}\left(X_{im}^{*} \right)\chi_{1}\left(Z_{i}^{*},Z_{m}^{*},b\right)\right]^{2}\right]\right\}^{1 /2}=N^{-1}\left\{\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{m=1}^{N}\mathcal{K}_{h_{N}} \left(X_{im}\right)^{2}\chi_{1}\left(Z_{i},Z_{m},b\right)^{2}\right\}^{1/2}.\]
Note that \(\chi_{1}\left(Z_{i},Z_{m},\beta\right)=0,\) so the above term is \(o_{P}\left(N^{-1}\right)\) for \(\left\|b-\beta\right\|=O\left(N^{-1/2}\right).\) That means \(\left[\mathbb{E}^{*}\left(\mathcal{T}_{N3}^{*2}\right)\right]^{1/2}=o_{P}\left( N^{-1}\right).\) By Markov inequality,
\[\mathcal{T}_{N3}^{*}=o_{P}\left(N^{-1}\right).\] (D.7)
To summarize, equations (D.4)-(D.7) imply that for \(\left\|b-\beta\right\|=O\left(N^{-1/2}\right),\)
\[2\left[N\left(N-1\right)\right]^{-1}\mathbb{L}_{N,\beta}^{K*}\left(b\right)= \frac{1}{2}\left(b-\beta\right)^{\prime}\mathbb{V}_{\beta}\left(b-\beta \right)+2\left(b-\beta\right)^{\prime}\frac{W_{N1}}{\sqrt{N}}+2\left(b-\beta \right)^{\prime}\frac{W_{N1}^{*}}{\sqrt{N}}+o_{P}\left(N^{-1}\right).\]
We can similarly show that \(\hat{\beta}^{*}-\beta=O_{P}\left(N^{-1/2}\right).\) By the CMT again,
\[\sqrt{N}(\hat{\beta}^{*}-\beta)=-2\mathbb{V}_{\beta}^{-1}W_{N1}-2\mathbb{V}_{ \beta}^{-1}W_{N1}^{*}+o_{P}\left(1\right).\]
Substitute equation (D.3) into the equation above to get
\[\sqrt{N}(\hat{\beta}^{*}-\hat{\beta})=-2\mathbb{V}_{\beta}^{-1}W_{N1}^{*}+o_{ P}\left(1\right).\] (D.8)
Since \(W_{N1}^{*}\) is an independent copy of \(W_{N1},\)\(\sqrt{N}(\hat{\beta}^{*}-\hat{\beta})\) converges to the same distribution as the one \(\sqrt{N}(\hat{\beta}-\beta)\) converges to.
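Equation (D.8) is what licenses the usual resampling recipe in practice. A minimal Python sketch (not from the paper), where `estimate` is an assumed routine mapping a sample array to the scalar estimator:

```python
import numpy as np

def bootstrap_ci(data, estimate, B=500, alpha=0.05, seed=0):
    """Basic bootstrap CI justified by (D.8): the empirical quantiles of
    beta_hat_star - beta_hat mimic the distribution of beta_hat - beta.
    """
    rng = np.random.default_rng(seed)
    N = len(data)
    beta_hat = estimate(data)
    draws = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, N, size=N)  # P* puts mass 1/N on each Z_i
        draws[b] = estimate(data[idx]) - beta_hat
    lo, hi = np.quantile(draws, [alpha / 2.0, 1.0 - alpha / 2.0])
    return beta_hat - hi, beta_hat - lo
```

Intervals of this nonparametric-bootstrap type are the ones whose coverage and length are summarized in Table 1B.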
We turn to the inference for \(\hat{\gamma}\). Again, we decompose \(\mathbb{L}_{N,\gamma}^{K}(r,\hat{\beta})\) and analyze each term in the decomposition one by one. We start with
\[2\left[N\left(N-1\right)\right]^{-1}\mathbb{L}_{N,\gamma}^{K} \left(r,\hat{\beta}\right)\] (D.9) \[= \left[N\left(N-1\right)\right]^{-1}\sum_{i=1}^{N}\sum_{m=1,m\neq i }^{N}\mathcal{K}_{\sigma_{N}}\left(X_{im}^{\prime}\hat{\beta}\right)\chi_{2} \left(Z_{i},Z_{m},r\right)\] \[= \left[N\left(N-1\right)\right]^{-1}\sum_{i=1}^{N}\sum_{m=1,m\neq i }^{N}\mathcal{K}_{\sigma_{N}}\left(X_{im}^{\prime}\beta\right)\chi_{2}\left(Z _{i},Z_{m},r\right)\] \[+\left[N\left(N-1\right)\right]^{-1}\sum_{i=1}^{N}\sum_{m=1,m\neq i }^{N}\chi_{2}\left(Z_{i},Z_{m},r\right)\nabla\mathcal{K}_{\sigma_{N}}\left(X_ {im}^{\prime}\beta\right)X_{im}^{\prime}\frac{\left(\hat{\beta}-\beta\right)} {\sigma_{N}}\] \[+\left[N\left(N-1\right)\right]^{-1}\sum_{i=1}^{N}\sum_{m=1,m\neq i }^{N}\chi_{2}\left(Z_{i},Z_{m},r\right)\nabla^{2}\mathcal{K}_{\sigma_{N}} \left(X_{im}^{\prime}\tilde{\beta}\right)\left[X_{im}^{\prime}\frac{\left( \hat{\beta}-\beta\right)}{\sigma_{N}}\right]^{2}\] \[\equiv \mathcal{T}_{N4}+\mathcal{T}_{N5}+\mathcal{T}_{N6},\]
where \(\tilde{\beta}\) is a vector that lies between \(\beta\) and \(\hat{\beta}\) and makes the equality hold.
Note that \(\mathcal{T}_{N4}\) shares the same structure as \(2\left[N\left(N-1\right)\right]^{-1}\mathbb{L}_{N,\beta}^{K}\left(b\right),\) and so
\[\mathcal{T}_{N4}=\frac{1}{2}\left(r-\gamma\right)^{\prime}\mathbb{V}_{\gamma} \left(r-\gamma\right)+2\left(r-\gamma\right)^{\prime}\frac{W_{N2}}{\sqrt{N}}+O_ {P}\left(\left\|r-\gamma\right\|^{3}\right)+O_{P}\left(\sigma_{N}^{p}\right)+ o_{P}\left(N^{-1}\right),\] (D.10)
where \(\mathbb{V}_{\gamma}\) is a negative definite matrix, and \(W_{N2}\) converges to a normal distribution.
For term \(\mathcal{T}_{N5}\),
\[\mathcal{T}_{N5}=\left\{\left[N\left(N-1\right)\right]^{-1}\sum_{i=1}^{N}\sum_ {m=1,m\neq i}^{N}\chi_{2}\left(Z_{i},Z_{m},r\right)\sigma_{N}^{-1}\nabla \mathcal{K}_{\sigma_{N}}\left(X_{im}^{\prime}\beta\right)X_{im}^{\prime} \right\}\left(\hat{\beta}-\beta\right).\]
Note that
\[\left[N\left(N-1\right)\right]^{-1}\sum_{i=1}^{N}\sum_{m=1,m\neq i}^{N}\chi_{2}\left(Z_{i},Z_{m},r\right)\sigma_{N}^{-1}\nabla\mathcal{K}_{\sigma_{N}}\left(X_{im}^{\prime}\beta\right)X_{im}^{\prime}\] (D.11) \[=\mathbb{E}\left[\chi_{2}\left(Z_{i},Z_{m},r\right)\sigma_{N}^{-1}\nabla\mathcal{K}_{\sigma_{N}}\left(X_{im}^{\prime}\beta\right)X_{im}^{\prime}\right]\] \[+\frac{2}{N}\sum_{i=1}^{N}\left\{\mathbb{E}\left[\chi_{2}\left(Z_{i},Z_{m},r\right)\sigma_{N}^{-1}\nabla\mathcal{K}_{\sigma_{N}}\left(X_{im}^{\prime}\beta\right)X_{im}^{\prime}|Z_{i}\right]-\mathbb{E}\left[\chi_{2}\left(Z_{i},Z_{m},r\right)\sigma_{N}^{-1}\nabla\mathcal{K}_{\sigma_{N}}\left(X_{im}^{\prime}\beta\right)X_{im}^{\prime}\right]\right\}\] \[+\frac{1}{N\left(N-1\right)}\sum_{i=1}^{N}\sum_{m=1,m\neq i}^{N}\left\{\chi_{2}\left(Z_{i},Z_{m},r\right)\sigma_{N}^{-1}\nabla\mathcal{K}_{\sigma_{N}}\left(X_{im}^{\prime}\beta\right)X_{im}^{\prime}-\mathbb{E}\left[\chi_{2}\left(Z_{i},Z_{m},r\right)\sigma_{N}^{-1}\nabla\mathcal{K}_{\sigma_{N}}\left(X_{im}^{\prime}\beta\right)X_{im}^{\prime}|Z_{i}\right]\right.\] \[\left.-\mathbb{E}\left[\chi_{2}\left(Z_{i},Z_{m},r\right)\sigma_{N}^{-1}\nabla\mathcal{K}_{\sigma_{N}}\left(X_{im}^{\prime}\beta\right)X_{im}^{\prime}|Z_{m}\right]+\mathbb{E}\left[\chi_{2}\left(Z_{i},Z_{m},r\right)\sigma_{N}^{-1}\nabla\mathcal{K}_{\sigma_{N}}\left(X_{im}^{\prime}\beta\right)X_{im}^{\prime}\right]\right\}\]
with the lead term \(\mathbb{E}[\chi_{2}\left(Z_{i},Z_{m},r\right)\sigma_{N}^{-1}\nabla\mathcal{K }_{\sigma_{N}}\left(X_{im}^{\prime}\beta\right)X_{im}^{\prime}]\). As \(\int\nabla\mathcal{K}_{\sigma_{N}}\left(u\right)\mathrm{d}u=0\), the lead term cancels one more \(\sigma_{N}^{-1}\) when calculating the expectation. Since \(\chi_{2}\left(Z_{i},Z_{m},\gamma\right)=0\), we can write
\[\mathbb{E}\left[\chi_{2}\left(Z_{i},Z_{m},r\right)\sigma_{N}^{-1}\nabla \mathcal{K}_{\sigma_{N}}\left(X_{im}^{\prime}\beta\right)X_{im}\right]=\left( r-\gamma\right)^{\prime}\Gamma+O(\left\|r-\gamma\right\|^{2})+O\left(\sigma_{N}^{p} \right),\] (D.12)
for some matrix \(\Gamma\). Collecting all these results, we obtain
\[\mathcal{T}_{N5} =\left(r-\gamma\right)^{\prime}\Gamma\left(\hat{\beta}-\beta \right)+O_{P}\left(\left\|r-\gamma\right\|^{2}\left\|\hat{\beta}-\beta\right\| \right)+O\left(\sigma_{N}^{p}\right)\] (D.13) \[=-2\left(r-\gamma\right)^{\prime}\Gamma\mathbb{V}_{\beta}^{-1} \frac{W_{N1}}{\sqrt{N}}+O_{P}\left(\left\|r-\gamma\right\|^{2}\left\|\hat{ \beta}-\beta\right\|\right)+o_{P}\left(\frac{\left\|r-\gamma\right\|}{\sqrt{N }}\right)+O\left(\sigma_{N}^{p}\right),\]
where the second line follows by substituting in equation (D.3).
For term \(\mathcal{T}_{N6}\), it is not hard to see that
\[\mathcal{T}_{N6}=O_{P}\left(\left\|r-\gamma\right\|\left\|\hat{\beta}-\beta \right\|^{2}\Bigg{/}\sigma_{N}^{2}\right)=o_{P}\left(N^{-1}\right),\] (D.14)
for \(\left\|r-\gamma\right\|=O(N^{-1/2})\) and \(\sigma_{N}\) as in Assumption C7.
To summarize, equations (D.9), (D.10), (D.13), and (D.14) imply that for \(\left\|r-\gamma\right\|=O\left(N^{-1/2}\right),\)
\[2\left[N\left(N-1\right)\right]^{-1}\mathbb{L}_{N,\gamma}^{K}\left( r,\hat{\beta}\right)= \frac{1}{2}\left(r-\gamma\right)^{\prime}\mathbb{V}_{\gamma}\left( r-\gamma\right)\] \[+2\left(r-\gamma\right)^{\prime}\frac{W_{N2}}{\sqrt{N}}-2\left(r -\gamma\right)^{\prime}\Gamma\mathbb{V}_{\beta}^{-1}\frac{W_{N1}}{\sqrt{N}}+o _{P}\left(N^{-1}\right).\]
Once \(\hat{\gamma}-\gamma=O_{P}\left(N^{-1/2}\right)\) is established, applying the CMT gives
\[\sqrt{N}\left(\hat{\gamma}-\gamma\right)=-2\mathbb{V}_{\gamma}^{-1}W_{N2}+2 \mathbb{V}_{\gamma}^{-1}\Gamma\mathbb{V}_{\beta}^{-1}W_{N1}+o_{P}\left(1 \right).\] (D.15)
For \(\mathbb{L}_{N,\gamma}^{K*}\left(r,\hat{\beta}^{*}\right)\), we decompose it similarly:
\[2\left[N\left(N-1\right)\right]^{-1}\mathbb{L}_{N,\gamma}^{K*}\left(r,\hat{\beta}^{*}\right)\] (D.16) \[=\left[N\left(N-1\right)\right]^{-1}\sum_{i=1}^{N}\sum_{m=1,m\neq i}^{N}\mathcal{K}_{\sigma_{N}}\left(X_{im}^{*\prime}\hat{\beta}^{*}\right)\chi_{2}\left(Z_{i}^{*},Z_{m}^{*},r\right)\] \[=\left[N\left(N-1\right)\right]^{-1}\sum_{i=1}^{N}\sum_{m=1,m\neq i}^{N}\mathcal{K}_{\sigma_{N}}\left(X_{im}^{*\prime}\hat{\beta}\right)\chi_{2}\left(Z_{i}^{*},Z_{m}^{*},r\right)\] \[+\left[N\left(N-1\right)\right]^{-1}\sum_{i=1}^{N}\sum_{m=1,m\neq i}^{N}\chi_{2}\left(Z_{i}^{*},Z_{m}^{*},r\right)\nabla\mathcal{K}_{\sigma_{N}}\left(X_{im}^{*\prime}\hat{\beta}\right)\left(\frac{X_{im}^{*\prime}\left(\hat{\beta}^{*}-\hat{\beta}\right)}{\sigma_{N}}\right)\] \[+\left[N\left(N-1\right)\right]^{-1}\sum_{i=1}^{N}\sum_{m=1,m\neq i}^{N}\chi_{2}\left(Z_{i}^{*},Z_{m}^{*},r\right)\nabla^{2}\mathcal{K}_{\sigma_{N}}\left(X_{im}^{*\prime}\tilde{\beta}^{*}\right)\left(\frac{X_{im}^{*\prime}\left(\hat{\beta}^{*}-\hat{\beta}\right)}{\sigma_{N}}\right)^{2}\] \[\equiv\mathcal{T}_{N4}^{*}+\mathcal{T}_{N5}^{*}+\mathcal{T}_{N6}^{*},\]
where \(\tilde{\beta}^{*}\) is some vector that lies between \(\hat{\beta}\) and \(\hat{\beta}^{*}\) and makes the equality hold.
For \(\mathcal{T}_{N4}^{*}\), we have
\[\mathcal{T}_{N4}^{*}= \left[N\left(N-1\right)\right]^{-1}\sum_{i=1}^{N}\sum_{m=1,m\neq i}^ {N}\mathcal{K}_{\sigma_{N}}\left(X_{im}^{*\prime}\hat{\beta}\right)\chi_{2} \left(Z_{i}^{*},Z_{m}^{*},r\right)\] \[= \mathbb{E}^{*}\left[\mathcal{K}_{\sigma_{N}}\left(X_{im}^{* \prime}\hat{\beta}\right)\chi_{2}\left(Z_{i}^{*},Z_{m}^{*},r\right)\right]\] \[+\frac{2}{N}\sum_{i=1}^{N}\left\{\mathbb{E}^{*}\left[\mathcal{K}_ {\sigma_{N}}\left(X_{im}^{*\prime}\hat{\beta}\right)\chi_{2}\left(Z_{i}^{*},Z_ {m}^{*},r\right)\left|Z_{i}^{*}\right]-\mathbb{E}^{*}\left[\mathcal{K}_{ \sigma_{N}}\left(X_{im}^{*\prime}\hat{\beta}\right)\chi_{2}\left(Z_{i}^{*},Z_ {m}^{*},r\right)\right]\right\}\] \[+\frac{1}{N\left(N-1\right)}\sum_{i=1}^{N}\sum_{m=1,m\neq i}^{N} \left\{\mathcal{K}_{\sigma_{N}}\left(X_{im}^{*\prime}\hat{\beta}\right)\chi_{ 2}\left(Z_{i}^{*},Z_{m}^{*},r\right)-\mathbb{E}^{*}\left[\mathcal{K}_{\sigma_ {N}}\left(X_{im}^{*\prime}\hat{\beta}\right)\chi_{2}\left(Z_{i}^{*},Z_{m}^{*}, r\right)\left|Z_{i}^{*}\right]\right.\right.\] \[\left.\left.-\mathbb{E}^{*}\left[\mathcal{K}_{\sigma_{N}}\left(X _{im}^{*\prime}\hat{\beta}\right)\chi_{2}\left(Z_{i}^{*},Z_{m}^{*},r\right) \left|Z_{m}^{*}\right]+\mathbb{E}^{*}\left[\mathcal{K}_{\sigma_{N}}\left(X_{ im}^{*\prime}\hat{\beta}\right)\chi_{2}\left(Z_{i}^{*},Z_{m}^{*},r\right) \right]\right\}.\] \[\equiv \mathcal{T}_{N4,1}^{*}+\mathcal{T}_{N4,2}^{*}+\mathcal{T}_{N4,3} ^{*}.\] (D.17)
We analyze the three terms in (D.17) one by one.
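As a brief aside, the following minimal Python/NumPy sketch illustrates the orders of the three components in such a Hoeffding-type decomposition (here under the actual sampling distribution rather than the bootstrap measure \(\mathbb{E}^{*}\)). The kernel \(g(z_{i},z_{m})=z_{i}+z_{m}+z_{i}z_{m}\) and the Gaussian data are generic stand-ins, not the \(\mathcal{K}_{\sigma_{N}}\chi_{2}\) kernel of this paper; they are chosen so that the mean, the projection, and the degenerate remainder are available in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

def hoeffding_pieces(N, reps=2000):
    # Stand-in kernel g(z_i, z_m) = z_i + z_m + z_i * z_m with Z ~ N(0, 1),
    # so theta = E[g] = 0 and the Hajek projection of the U-statistic
    # U = [N(N-1)]^{-1} sum_{i != m} g(Z_i, Z_m) is (2/N) sum_i Z_i.
    proj, rem = np.empty(reps), np.empty(reps)
    for r in range(reps):
        Z = rng.normal(size=N)
        S = Z.sum()
        Q = S**2 - (Z**2).sum()            # sum_{i != m} Z_i * Z_m
        U = (2.0 * (N - 1) * S + Q) / (N * (N - 1))
        proj[r] = 2.0 * S / N              # projection term, O_P(N^{-1/2})
        rem[r] = U - proj[r]               # degenerate remainder, O_P(N^{-1})
    return proj.std(), rem.std()

for N in (100, 400):
    p, q = hoeffding_pieces(N)
    print(f"N={N:4d}  projection sd ~ {p:.4f}  remainder sd ~ {q:.5f}")
# Quadrupling N halves the projection sd (rate N^{-1/2}) and divides the
# remainder sd by roughly 4 (rate N^{-1}), mirroring the orders of the
# three components in (D.17).
```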
For \(\mathcal{T}_{N4,1}^{*}\), we have
\[\mathcal{T}_{N4,1}^{*} =\mathbb{E}^{*}\left[\mathcal{K}_{\sigma_{N}}\left(X_{im}^{* \prime}\hat{\beta}\right)\chi_{2}\left(Z_{i}^{*},Z_{m}^{*},r\right)\right]\] (D.18) \[=\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{m=1}^{N}\mathcal{K}_{\sigma_{ N}}\left(X_{im}^{\prime}\hat{\beta}\right)\chi_{2}\left(Z_{i},Z_{m},r\right)\] \[=\frac{1}{2}\left(r-\gamma\right)^{\prime}\mathbb{V}_{\gamma} \left(r-\gamma\right)+2\left(r-\gamma\right)^{\prime}\frac{W_{N2}}{\sqrt{N}}-2 \left(r-\gamma\right)^{\prime}\Gamma\mathbb{V}_{\beta}^{-1}\frac{W_{N1}}{ \sqrt{N}}+o_{P}\left(N^{-1}\right),\]
where the last line holds because the term in the second line is \(2\left[N\left(N-1\right)\right]^{-1}\mathbb{L}_{N,\gamma}^{K}(r,\hat{\beta})\) plus a small order term.
For \(\mathcal{T}_{N4,2}^{*}\), we first note that
\[\sum_{i=1}^{N}\left\{\mathbb{E}\left[\mathcal{K}_{\sigma_{N}} \left(X_{im}^{\prime}\beta\right)\chi_{2}\left(Z_{i},Z_{m},r\right)\left|Z_{i} \right.\right]-\mathbb{E}\left[\mathcal{K}_{\sigma_{N}}\left(X_{im}^{\prime}\beta \right)\chi_{2}\left(Z_{i},Z_{m},r\right)\right]\right\}\] (D.19) \[= \sqrt{N}\left(r-\gamma\right)^{\prime}W_{N2}+o_{P}\left(1\right),\]
for \(\left\|r-\gamma\right\|=O\left(N^{-1/2}\right),\) which holds for the same reason as equation (D.2). Since \(\hat{\beta}-\beta=O_{P}\left(N^{-1/2}\right),\) the stochastic equicontinuity claimed above allows us to replace \(\beta\) with \(\hat{\beta}\):
\[\sum_{i=1}^{N}\left\{\mathbb{E}\left[\mathcal{K}_{\sigma_{N}}\left(X_{im}^{\prime}\hat{\beta}\right)\chi_{2}\left(Z_{i},Z_{m},r\right)\left|Z_{i}\right.\right]-\mathbb{E}\left[\mathcal{K}_{\sigma_{N}}\left(X_{im}^{\prime}\hat{\beta}\right)\chi_{2}\left(Z_{i},Z_{m},r\right)\right]\right\}=\sqrt{N}\left(r-\gamma\right)^{\prime}W_{N2}+o_{P}\left(1\right).\] (D.20)
Then equations (D.19) and (D.20) imply that
\[2\sum_{i=1}^{N}\left\{\mathbb{E}_{N}\left[\mathcal{K}_{\sigma_{N}} \left(X_{im}^{\prime}\hat{\beta}\right)\chi_{2}\left(Z_{i},Z_{m},r\right)\left| Z_{i}\right.\right]-\mathbb{E}_{N}\left[\mathcal{K}_{\sigma_{N}}\left(X_{im}^{\prime} \hat{\beta}\right)\chi_{2}\left(Z_{i},Z_{m},r\right)\right]\right\}\] \[=2\sqrt{N}\left(r-\gamma\right)^{\prime}W_{N2}+o_{P}\left(1\right),\]
which is asymptotically normal for \(\left\|r-\gamma\right\|=O\left(N^{-1/2}\right).\) Thus, we can apply the results of Gine and Zinn (1990) to conclude that
\[N\mathcal{T}_{N4,2}^{*}=2\sum_{i=1}^{N}\left\{\mathbb{E}^{*}\left[\mathcal{K}_ {\sigma_{N}}\left(X_{im}^{*\prime}\hat{\beta}\right)\chi_{2}\left(Z_{i}^{*},Z _{m}^{*},r\right)\left|Z_{i}^{*}\right.\right]-\mathbb{E}^{*}\left[\mathcal{K}_{ \sigma_{N}}\left(X_{im}^{*\prime}\hat{\beta}\right)\chi_{2}\left(Z_{i}^{*},Z_{ m}^{*},r\right)\right]\right\}\]
conditionally approximates the distribution of the above term. That is,
\[\mathcal{T}_{N4,2}^{*}=2\left(r-\gamma\right)^{\prime}\frac{W_{N2}^{*}}{\sqrt {N}}+o_{P}\left(N^{-1}\right),\] (D.21)
where \(W_{N2}^{*}\) is an independent copy of \(W_{N2}\).
The same arguments as we used for \(\mathcal{T}_{N3}^{*}\) lead to
\[\mathcal{T}_{N4,3}^{*}=o_{P}\left(N^{-1}\right).\] (D.22)
Then, equations (D.17), (D.18), (D.21), and (D.22) together imply that for \(\left\|r-\gamma\right\|=O(N^{-1/2})\),
\[\mathcal{T}_{N4}^{*}= \frac{1}{2}\left(r-\gamma\right)^{\prime}\mathbb{V}_{\gamma} \left(r-\gamma\right)+2\left(r-\gamma\right)^{\prime}\frac{W_{N2}}{\sqrt{N}}- 2\left(r-\gamma\right)^{\prime}\Gamma\mathbb{V}_{\beta}^{-1}\frac{W_{N1}}{ \sqrt{N}}\] (D.23) \[+2\left(r-\gamma\right)^{\prime}\frac{W_{N2}^{*}}{\sqrt{N}}+o_{P }\left(N^{-1}\right).\]
For \(\mathcal{T}_{N5}^{*}\), we have
\[\mathcal{T}_{N5}^{*} =\left[N\left(N-1\right)\right]^{-1}\sum_{i=1}^{N}\sum_{m=1,m\neq i }^{N}\chi_{2}\left(Z_{i}^{*},Z_{m}^{*},r\right)\nabla\mathcal{K}_{\sigma_{N} }\left(X_{im}^{*\prime}\hat{\beta}\right)\left(\frac{X_{im}^{*\prime}\left( \hat{\beta}^{*}-\hat{\beta}\right)}{\sigma_{N}}\right)\] \[=\left\{\left[N\left(N-1\right)\right]^{-1}\sum_{i=1}^{N}\sum_{m= 1,m\neq i}^{N}\chi_{2}\left(Z_{i}^{*},Z_{m}^{*},r\right)\sigma_{N}^{-1} \nabla\mathcal{K}_{\sigma_{N}}\left(X_{im}^{*\prime}\hat{\beta}\right)X_{im}^ {*\prime}\right\}\left(\hat{\beta}^{*}-\hat{\beta}\right).\]
Following the same analysis as for \(\mathcal{T}_{N5}\), the lead term of \(\mathcal{T}_{N5}^{*}\) is
\[\mathbb{E}^{*}\left[\chi_{2}\left(Z_{i}^{*},Z_{m}^{*},r\right)\sigma_{N}^{-1} \nabla\mathcal{K}_{\sigma_{N}}\left(X_{im}^{*\prime}\hat{\beta}\right)X_{im}^ {*\prime}\right]\left(\hat{\beta}^{*}-\hat{\beta}\right).\]
Note that
\[\mathbb{E}^{*}\left[\chi_{2}\left(Z_{i}^{*},Z_{m}^{*},r\right) \sigma_{N}^{-1}\nabla\mathcal{K}_{\sigma_{N}}\left(X_{im}^{*\prime}\hat{\beta} \right)X_{im}^{*\prime}\right]\left(\hat{\beta}^{*}-\hat{\beta}\right)\] \[=\left[\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{m=1}^{N}\chi_{2}\left(Z_ {i},Z_{m},r\right)\sigma_{N}^{-1}\nabla\mathcal{K}_{\sigma_{N}}\left(X_{im}^{ \prime}\beta\right)X_{im}\right]\left(\hat{\beta}^{*}-\hat{\beta}\right)\] \[=\left(r-\gamma\right)^{\prime}\Gamma\left(\hat{\beta}^{*}-\hat{ \beta}\right)+O_{P}\left(\|r-\gamma\|^{2}\left\|\hat{\beta}^{*}-\hat{\beta} \right\|\right)+o_{P}\left(N^{-1}\right)\] \[=-2\left(r-\gamma\right)^{\prime}\Gamma\mathbb{V}_{\beta}^{-1} \frac{W_{N1}^{*}}{\sqrt{N}}+o_{P}\left(\frac{\|r-\gamma\|}{\sqrt{N}}\right)+O _{P}\left(\|r-\gamma\|^{2}\left\|\hat{\beta}^{*}-\hat{\beta}\right\|\right)+o _{P}\left(N^{-1}\right),\]
where we use equations (D.11) and (D.12) to get the third line, and we substitute equation (D.8) in the last line. Then we have
\[\mathcal{T}_{N5}^{*}=-2\left(r-\gamma\right)^{\prime}\Gamma\mathbb{V}_{\beta}^ {-1}\frac{W_{N1}^{*}}{\sqrt{N}}+o_{P}\left(\frac{\|r-\gamma\|}{\sqrt{N}}\right) +O_{P}\left(\|r-\gamma\|^{2}\left\|\hat{\beta}^{*}-\hat{\beta}\right\|\right)+ o_{P}\left(N^{-1}\right).\] (D.24)
For \(\mathcal{T}_{N6}^{*}\), it is not hard to see that
\[\mathcal{T}_{N6}^{*}=O_{P}\left(\frac{\|r-\gamma\|\left\|\hat{\beta}^{*}-\hat {\beta}\right\|^{2}}{\sigma_{N}^{2}}\right).\] (D.25)
Then, equations (D.16), (D.23), (D.24), and (D.25) together imply that for \(\|r-\gamma\|=O\left(N^{-1/2}\right)\),
\[2\left[N\left(N-1\right)\right]^{-1}\mathbb{L}_{N,\gamma}^{K*} \left(r,\hat{\beta}^{*}\right)\] \[= \frac{1}{2}\left(r-\gamma\right)^{\prime}\mathbb{V}_{\gamma} \left(r-\gamma\right)+2\left(r-\gamma\right)^{\prime}\frac{W_{N2}}{\sqrt{N}}- 2\left(r-\gamma\right)^{\prime}\Gamma\mathbb{V}_{\beta}^{-1}\frac{W_{N1}}{ \sqrt{N}}+2\left(r-\gamma\right)^{\prime}\frac{W_{N2}^{*}}{\sqrt{N}}\] \[-2\left(r-\gamma\right)^{\prime}\Gamma\mathbb{V}_{\beta}^{-1} \frac{W_{N1}^{*}}{\sqrt{N}}+o_{P}\left(N^{-1}\right),\]
from which we can show that \(\hat{\gamma}^{*}-\gamma=O_{P}\left(N^{-1/2}\right).\) With this result and by the CMT, we get
\[\sqrt{N}\left(\hat{\gamma}^{*}-\gamma\right)=-2\mathbb{V}_{\gamma}^{-1}W_{N2} +2\mathbb{V}_{\gamma}^{-1}\Gamma\mathbb{V}_{\beta}^{-1}W_{N1}-2\mathbb{V}_{ \gamma}^{-1}W_{N2}^{*}+2\mathbb{V}_{\gamma}^{-1}\Gamma\mathbb{V}_{\beta}^{-1}W _{N1}^{*}+o_{P}\left(1\right),\]
which implies
\[\sqrt{N}\left(\hat{\gamma}^{*}-\hat{\gamma}\right)=-2\mathbb{V}_{\gamma}^{-1}W _{N2}^{*}+2\mathbb{V}_{\gamma}^{-1}\Gamma\mathbb{V}_{\beta}^{-1}W_{N1}^{*}+o_ {P}\left(1\right)\]
by substituting equation (D.15) in.
Since \(W_{N1}^{*}\) and \(W_{N2}^{*}\) are independent copies of \(W_{N1}\) and \(W_{N2}\), respectively, \(\sqrt{N}\left(\hat{\gamma}^{*}-\hat{\gamma}\right)\) approximates the distribution of \(\sqrt{N}\left(\hat{\gamma}-\gamma\right).\)
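For intuition, the following minimal Monte Carlo sketch (Python/NumPy) illustrates this conclusion in the simplest possible setting: a sample mean stands in for the smoothed estimator, so that \(\hat{\gamma}-\gamma\) is exactly an average of i.i.d. terms, mirroring the role of the linear representation (D.15). All ingredients (sample size, number of resamples, the \(t\)-distribution) are illustrative choices, not objects from this paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N, B, reps = 500, 999, 2000
gamma = 1.0

# One observed sample and its estimate.
Z = gamma + rng.standard_t(df=5, size=N)
gamma_hat = Z.mean()

# Bootstrap world: law of sqrt(N)(gamma_hat* - gamma_hat) across resamples.
idx = rng.integers(0, N, size=(B, N))
boot = np.sqrt(N) * (Z[idx].mean(axis=1) - gamma_hat)

# Sampling world: law of sqrt(N)(gamma_hat - gamma) across fresh samples.
samp = np.sqrt(N) * rng.standard_t(df=5, size=(reps, N)).mean(axis=1)

for q in (0.05, 0.25, 0.5, 0.75, 0.95):
    print(f"q={q:4.2f}  bootstrap {np.quantile(boot, q):+.3f}"
          f"  sampling {np.quantile(samp, q):+.3f}")
# The two sets of quantiles agree closely, as the theorem predicts.
```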
## Some Additional Discussions or Results
We discuss the convergence rates of our cross-sectional and panel data estimators in Section E.1, and explain our choice of identification strategy for constructing the estimators in the panel data models in Section E.2.
### On the Convergence Rates
The rate of convergence of the estimators for the cross-sectional model is \(N^{-1/2}\). The intuition is that each observation can be matched with \(Nh_{N}^{k_{1}+k_{2}}\) and \(N\sigma_{N}^{2k_{1}}\) observations (in probability) for estimating \(\beta\) and \(\gamma\), respectively. Thus, all observations are useful and the curse of dimensionality is not a concern for the cross-sectional estimators. In contrast, an observation is useful for the panel data estimators only if all its covariates are very close to each other across two time periods. For a fixed \(T,\) the probability of an observation being useful is proportional to \(h_{N}^{k_{1}+k_{2}}\) or \(\sigma_{N}^{2k_{1}}\). Therefore, the number of "useful" observations is proportional to \(Nh_{N}^{k_{1}+k_{2}}\) and \(N\sigma_{N}^{2k_{1}}\), and the estimation suffers from the curse of dimensionality. This problem could be remedied if \(T\to\infty\) in such a way that \(Th_{N}^{k_{1}+k_{2}}\to\infty\) and \(T\sigma_{N}^{2k_{1}}\to\infty\), so that each observation can be "matched" at some pair of time periods with large probability. Since this paper focuses on models for short panel data, we leave this issue to future research. Further, the panel objective functions have a "sharp" edge as in Kim and Pollard (1990). As a result, the convergence rates for \(\hat{\beta}\) and \(\hat{\gamma}\) are expected to be \((Nh_{N}^{k_{1}+k_{2}})^{-1/3}\) and \((N\sigma_{N}^{2k_{1}})^{-1/3}\), respectively. The cross-sectional estimators do not suffer this "sharp" edge effect because their objective functions are U-statistics, and the edge effect vanishes after the objective functions are decomposed (see, e.g., the second line of equation (A.5)) when deriving the asymptotics. The small simulation sketch below illustrates the matching counts that drive these rates.
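The following minimal simulation (Python/NumPy, with uniform covariates, a max-norm matching window, and illustrative values of the sample size, dimension, and bandwidth, none of which come from this paper) makes the contrast concrete: per-unit matches grow with \(N\) in the cross-section, while the fraction of "useful" units in a short panel stays fixed at roughly \((2h)^{k}\).

```python
import numpy as np

rng = np.random.default_rng(2)
N, k, h = 5000, 3, 0.1        # illustrative sample size, dimension, bandwidth

# Cross-section: unit 0 can be matched with any other unit whose covariates
# fall in a max-norm window of width 2h, so it has ~ N (2h)^k matches,
# a count that grows with N.
X = rng.uniform(size=(N, k))
matches = (np.abs(X[1:] - X[0]).max(axis=1) < h).sum()
print("cross-section matches for unit 0:", matches, " N(2h)^k =", N * (2 * h) ** k)

# Short panel (T = 2): unit i contributes only if its own covariates nearly
# repeat across the two periods, which happens with probability ~ (2h)^k,
# leaving only ~ N (2h)^k "useful" units out of N in total.
X1, X2 = rng.uniform(size=(N, k)), rng.uniform(size=(N, k))
useful = (np.abs(X1 - X2).max(axis=1) < h).sum()
print("useful panel units:", useful, " N(2h)^k =", N * (2 * h) ** k)
```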
### Some Discussions on Constructing Estimators in the Panel Data Models
Unlike in the cross-sectional case, here we choose not to estimate \(\gamma\) by matching \(X^{\prime}_{jt}\hat{\beta}\) and \(X^{\prime}_{js}\hat{\beta}\), \(j=1,2\), because of the following observation. As demonstrated in Theorem 3.2 below, the convergence rate of \(\hat{\beta}\) is \((Nh_{N}^{k_{1}+k_{2}})^{-1/3}\). If we knew the true value of \(\beta\), \(\hat{\gamma}\) could be obtained by matching \(X^{\prime}_{jt}\beta\) and \(X^{\prime}_{js}\beta\). Using the same arguments, we can show that this infeasible estimator has a rate of convergence \((N\sigma_{N}^{2})^{-1/3}\). As a result, matching \(X^{\prime}_{jt}\hat{\beta}\) and \(X^{\prime}_{js}\hat{\beta}\) to estimate \(\gamma\) can be beneficial only when \(\sigma_{N}^{2}=o(h_{N}^{k_{1}+k_{2}})\); otherwise the asymptotics of \(\hat{\beta}\) would dominate the limiting distribution of \(\hat{\gamma}\). However, this requires a careful selection of the tuning parameters \(h_{N}\) and \(\sigma_{N}\). In more general cases, this selection depends not only on the dimension of the regressor space but also on the cardinality of the choice set, and thus should be made on a case-by-case basis. Lacking a universally applicable treatment, we choose to simply match the covariates to avoid this complexity and the ensuing discussion.
## Proofs of Technical Lemmas
We present proofs of Lemmas A.1-A.7 and Lemmas B.1-B.2 in order. For Lemmas A.1-A.7, we use the following results, which are direct consequences of the full rank conditions, smoothness conditions, and finite moment conditions in C2 and C5. Since the calculations are standard, we list them without proofs.
For \(\varrho_{i}(b)\), the following holds.
* Let \(\mathcal{N}_{\beta}\) denote a neighborhood of \(\beta.\) Then,
* All mixed second partial derivatives of \(\varrho_{i}(b)\) exist on \(\mathcal{N}_{\beta}\).
* There is an integrable function \(M_{1}(Z_{i})\) such that for all \(b\in\mathcal{N}_{\beta}\), \(\|\nabla^{2}\varrho_{i}(b)-\nabla^{2}\varrho_{i}(\beta)\|_{F}\leq M_{1}(Z_{i} )\|b-\beta\|.\)
* \(\mathbb{E}[\|\nabla\varrho_{i}(\beta)\|^{2}]<\infty\), \(\mathbb{E}[\|\nabla^{2}\varrho_{i}(\beta)\|_{F}]<\infty\), and \(\mathbb{E}[\nabla^{2}\varrho_{i}(\beta)]\) is negative definite.
For \(\tau_{i}(r)\), the following holds.
* Let \(\mathcal{N}_{\gamma}\) denote a neighborhood of \(\gamma\). Then,
* All mixed second partial derivatives of \(\tau_{i}(r)\) exist on \(\mathcal{N}_{\gamma}\).
* There is an integrable function \(M_{2}(Z_{i})\) such that for all \(r\in\mathcal{N}_{\gamma}\), \(\|\nabla^{2}\tau_{i}(r)-\nabla^{2}\tau_{i}(\gamma)\|_{F}\leq M_{2}(Z_{i})\|r-\gamma\|\).
* \(\mathbb{E}[\|\nabla\tau_{i}(\gamma)\|^{2}]<\infty\), \(\mathbb{E}[\|\nabla^{2}\tau_{i}(\gamma)\|_{F}]<\infty\), and \(\mathbb{E}[\nabla^{2}\tau_{i}(\gamma)]\) is negative definite.
For \(\mu(v_{1},v_{2},r)\), the following holds.
* All mixed second partial derivatives of \(\mu(v_{1},v_{2},r)\) exist on \(\mathcal{N}_{\gamma}\).
* There exists a continuous, integrable function \(M_{3}(v_{1},v_{2})\) such that for all \(r\in\mathcal{N}_{\gamma}\), \(\|\nabla^{2}_{11}\mu(v_{1},v_{2},r)-\nabla^{2}_{11}\mu(v_{1},v_{2},\gamma)\|_ {F}\leq M_{3}(v_{1},v_{2})\|r-\gamma\|\).
* \(\mathbb{E}[\|\nabla^{2}_{13}\mu(V_{i}\left(\beta\right),V_{i}\left(\beta \right),\gamma)\|_{F}^{2}]<\infty\) and \(\sup_{r\in\mathcal{N}_{\gamma}}\mathbb{E}[\|\nabla^{2}_{33}\nabla_{1}\mu(V_{i }\left(\beta\right),V_{i}\left(\beta\right),r)\|_{F}]<\infty\).
Further, using the definition of \(\tau_{i}(r)\), we can write \(\tau_{i}(r)\) as
\[\tau_{i}(r)=f_{V_{im}}(0)\mathbb{E}[h_{im}(r)|V_{i}=v_{i},W_{i}=w_{i},V_{im}=0 ]=\int B(v_{i},v_{i},w_{i},w_{m})S_{im}(r)f_{V,W}(v_{i},w_{m})dw_{m}.\]
Proof of Lemma A.1.: First, note that
\[\frac{1}{2}\sigma_{N}^{2}\sup_{r\in\mathcal{R}}|\hat{\mathcal{L}}_{N}^{K}(r)- \mathcal{L}_{N}^{K}(r)|\leq\frac{1}{N(N-1)}\sum_{i\neq m}|K_{\sigma_{N},\gamma }(\hat{V}_{im})-K_{\sigma_{N},\gamma}(V_{im})|.\] (F.1)
Applying a second-order Taylor expansion to the right-hand side of (F.1) yields the lead term of the form
\[\frac{1}{\sigma_{N}N(N-1)}\sum_{i\neq m}|\nabla_{1}K_{\sigma_{N}, \gamma}(V_{im})X^{\prime}_{im1}(\hat{\beta}-\beta)+\nabla_{2}K_{\sigma_{N}, \gamma}(V_{im})X^{\prime}_{im2}(\hat{\beta}-\beta)|\] \[\leq \frac{1}{\sigma_{N}N(N-1)}\sum_{i\neq m}\sum_{j=1}^{2}|\nabla_{j}K _{\gamma}(V_{im}/\sigma_{N})X^{\prime}_{imj}(\hat{\beta}-\beta)|,\]
where \(\nabla_{j}K_{\gamma}(\cdot)\) (\(\nabla_{j}K_{\sigma_{N},\gamma}(\cdot)\)) denotes the partial derivative of \(K_{\gamma}(\cdot)\) (\(K_{\sigma_{N},\gamma}(\cdot)\)) w.r.t. its \(j\)-th argument corresponding to alternative \(j\). It suffices to focus on the term with \(j=1\). Then by Assumptions C1-C7, and the Cauchy-Schwarz inequality, we conclude that
\[\frac{1}{N(N-1)}\sum_{i\neq m}\left|\nabla_{1}K_{\gamma}(V_{im}/ \sigma_{N})\frac{X^{\prime}_{im1}(\hat{\beta}-\beta)}{\sigma_{N}}\right|\leq \frac{1}{N(N-1)}\sum_{i\neq m}|\nabla_{1}K_{\gamma}(V_{im}/\sigma_{N})|\|X_{ im1}\|\frac{\|\hat{\beta}-\beta\|}{\sigma_{N}}\] \[= [O_{P}(\sigma_{N}^{2})+o_{P}(1)]O_{P}(\|\hat{\beta}-\beta\|/ \sigma_{N})=o_{P}(1),\]
where the penultimate equality follows from the SLLN for \(U\)-statistics (see, e.g., Serfling (2009)) together with \(\mathbb{E}[|\nabla_{1}K_{\gamma}(V_{im}/\sigma_{N})|\|X_{im1}\|]=O(\sigma_ {N}^{2})\), and the last equality is due to the \(\sqrt{N}\)-consistency of \(\hat{\beta}\). Then the desired convergence result follows.
Proof of Lemma A.2.: Note that Lemma A.2 follows from \(\sup_{r\in\mathcal{R}}|\mathcal{L}_{N}^{K}(r)-\mathbb{E}[\mathcal{L}_{N}^{K}( r)]|=o_{P}(1)\) and \(\sup_{r\in\mathcal{R}}|\mathbb{E}[\mathcal{L}_{N}^{K}(r)]-\mathcal{L}(r)|=o(1)\).
Define \(\mathcal{F}_{N}=\{K_{\sigma_{N},\gamma}(v_{im})h_{im}(r)|r\in\mathcal{R}\}\) with \(\sigma_{N}>0\) and \(\sigma_{N}\to 0\). \(\mathcal{F}_{N}\) is a sub-class of the fixed class \(\mathcal{F}\equiv\{K_{\gamma}(v_{im}/\sigma)h_{im}(r)|r\in\mathcal{R},\sigma >0\}=\mathcal{F}_{r}\mathcal{F}_{\sigma}\), where \(\mathcal{F}_{\sigma}\equiv\{K_{\gamma}(v_{im}/\sigma)|\sigma>0\}\) is Euclidean for the constant envelope \(\sup_{v\in\mathbb{R}^{2}}|K_{\gamma}(v)|\) by Lemma 22 in Nolan and Pollard (1987), and \(\mathcal{F}_{r}\equiv\{h_{im}(r)|r\in\mathcal{R}\}\). Noticing that \(h_{im}(r)\) is uniformly bounded by \(2\), Example 2.11 and Lemma 2.15 in Pakes and Pollard (1989) imply that \(\mathcal{F}_{r}\) is Euclidean for the constant envelope \(2\). Putting these results together, we conclude from Lemma 2.14 in Pakes and Pollard (1989) that \(\mathcal{F}\) is Euclidean for the constant envelope \(2\sup_{v\in\mathbb{R}^{2}}|K_{\gamma}(v)|\). Applying Corollary 4 in Sherman (1994a), we obtain \(\sup_{r\in\mathcal{R}}|\mathcal{L}_{N}^{K}(r)-\mathbb{E}[\mathcal{L}_{N}^{K}( r)]|=O_{P}(N^{-1}/\sigma_{N}^{2})=o_{P}(1)\) by Assumption C7.
It remains to show that \(\sup_{r\in\mathcal{R}}|\mathbb{E}[\mathcal{L}_{N}^{K}(r)]-\mathcal{L}(r)|=o(1)\). Let \(\eta(\cdot)\equiv f_{V_{im}}(\cdot)\mathbb{E}[h_{im}(r)|V_{im}=\cdot]\).
Then \(\mathcal{L}(r)=\eta(0)\) by definition. We can write by Assumptions C6 and C7 that
\[\sup_{r\in\mathcal{R}}|\mathbb{E}[\mathcal{L}_{N}^{K}(r)]-\mathcal{ L}(r)|=\sup_{r\in\mathcal{R}}|\sigma_{N}^{-2}\mathbb{E}[K_{\gamma}(V_{im}/\sigma_{N})h _{im}(r)]-\eta(0)|\] \[= \sup_{r\in\mathcal{R}}|\int\sigma_{N}^{-2}K_{\gamma}(v/\sigma_{N} )\eta(v)\mathrm{d}v-\eta(0)|=\sup_{r\in\mathcal{R}}|\int\sigma_{N}^{-2}K_{ \gamma}(v/\sigma_{N})[\eta(0)+v^{\prime}\nabla\eta(\bar{v})]\mathrm{d}v-\eta( 0)|\] \[= \sup_{r\in\mathcal{R}}|\int K_{\gamma}(u)[\eta(0)+\sigma_{N}u^{ \prime}\nabla\eta(u_{N})]\mathrm{d}u-\eta(0)|=\sup_{r\in\mathcal{R}}|\sigma_{N }\int K_{\gamma}(u)u^{\prime}\nabla\eta(u_{N})\mathrm{d}u|\] \[\leq \sigma_{N}\cdot\sup_{r\in\mathcal{R}}\int|K_{\gamma}(u)|\|u\|\| \nabla\eta(u_{N})\|\mathrm{d}u=O(\sigma_{N})=o(1),\]
where the third equality applies a mean-value expansion and the fourth equality uses a change of variables. Then, the desired result follows.
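The last step above is a standard kernel-bias calculation. The following one-dimensional numerical sketch (Python/NumPy, with a Gaussian kernel and a smooth stand-in for \(\eta\), both illustrative) reproduces it by quadrature.

```python
import numpy as np

# One-dimensional analogue of the bias bound: for a kernel with
# int K(u) du = 1 and a smooth eta, the smoothed value
#   int sigma^{-1} K(v/sigma) eta(v) dv = int K(u) eta(sigma u) du
# converges to eta(0) as sigma -> 0.
u = np.linspace(-40.0, 40.0, 200001)
du = u[1] - u[0]
K = np.exp(-u**2 / 2) / np.sqrt(2 * np.pi)      # Gaussian kernel
eta = lambda v: np.exp(-0.5 * v) * np.cos(v)    # smooth stand-in for f * E[h|V]

for sigma in (0.4, 0.2, 0.1, 0.05):
    smoothed = (K * eta(sigma * u)).sum() * du  # Riemann sum on a fine grid
    print(f"sigma={sigma:5.2f}  |bias| = {abs(smoothed - eta(0.0)):.2e}")
# The mean-value bound in the proof gives O(sigma); with a symmetric kernel
# the first-order term drops out and the bias is in fact O(sigma^2).
```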
Proof of Lemma A.3.: The proof proceeds by verifying the four sufficient conditions for Theorem 2.1 in Newey and McFadden (1994): (S1) \(\mathcal{R}\) is a compact set, (S2) \(\sup_{r\in\mathcal{R}}|\hat{\mathcal{L}}_{N}^{K}(r)-\mathcal{L}(r)|=o_{P}(1)\), (S3) \(\mathcal{L}(r)\) is continuous in \(r\), and (S4) \(\mathcal{L}(r)\) is uniquely maximized at \(\gamma\).
Condition (S1) is satisfied by Assumption C4. Lemmas A.1 and A.2 together imply that condition (S2) holds. Note that
\[\mathbb{E}[h_{im}(r)|V_{im}=0]=\mathbb{E}\left\{\mathbb{E}[Y_{im( 1,1)}(\mathrm{sgn}(W^{\prime}_{im}r)-\mathrm{sgn}(W^{\prime}_{im}\gamma))|Z_{i },Z_{m}]|V_{im}=0\right\}\] \[= \mathbb{E}\left[(P(Y_{i(1,1)}=1|Z_{i})-P(Y_{m(1,1)}=1|Z_{m}))( \mathrm{sgn}(W^{\prime}_{im}r)-\mathrm{sgn}(W^{\prime}_{im}\gamma))|V_{im}=0 \right].\]
Then the identification condition (S4) can be verified by similar arguments used in the proof of Theorem 2.1 given that \(f_{V_{im}}(0)>0\) by Assumption C5.
It remains to verify the continuity of \(\mathcal{L}(r)\) in \(r\). Assuming \(r^{(1)}>0\) w.l.o.g., \(\mathbb{E}[h_{im}(r)|V_{im}=0]\) can be expressed as the sum of terms like
\[P(Y_{im(1,1)}=d,W^{(1)}_{im}>-\tilde{W}^{\prime}_{im}\tilde{r}/ r^{(1)}|V_{im}=0)\] \[= \int\int_{-\tilde{W}^{\prime}_{im}\tilde{r}/r^{(1)}}^{\infty}P( Y_{im(1,1)}=d|W^{(1)}_{im}=w,\tilde{W}_{im},V_{im}=0)f_{W^{(1)}_{im}|\tilde{W}_{ im},V_{im}=0}(w)\mathrm{d}w\mathrm{d}F_{\tilde{W}_{im}|V_{im}=0}.\]
for some \(d\in\{-1,0,1\}\). Then \(\mathcal{L}(r)\) is continuous in \(r\) if \(f_{W^{(1)}_{im}|\tilde{W}_{im},V_{im}=0}(\cdot)\) does not have any mass points, which is guaranteed by Assumption C3. This completes the proof.
Proof of Lemma A.4.: First, we express \(\mathbb{E}[\mathcal{L}_{N}^{K}(r)]\) as the integral
\[\int\sigma_{N}^{-2}K_{\sigma_{N},\gamma}(v_{im})B(v_{i},v_{m},w_{ i},w_{m})S_{im}(r)\mathrm{d}F_{V,W}(v_{i},w_{i})\mathrm{d}F_{V,W}(v_{m},w_{m})\] (F.2) \[= \int K_{\gamma}(u_{im})B(v_{m}+u_{im}\sigma_{N},v_{m},w_{i},w_{m} )S_{im}(r)f_{V,W}(v_{m}+u_{im}\sigma_{N},w_{i})\mathrm{d}w_{i}\mathrm{d}u_{im} \mathrm{d}F_{V,W}(v_{m},w_{m}),\]
where we apply the change of variables \(u_{im}=v_{im}/\sigma_{N}\) to obtain the equality.
Take the \(\kappa_{\gamma}^{\text{th}}\)-order Taylor expansion inside the integral around \(v_{m}\) to obtain the lead term
\[\int K_{\gamma}(u_{im})B(v_{m},v_{m},w_{i},w_{m})S_{im}(r)f_{V,W}(v _{m},w_{i})\mathrm{d}w_{i}\mathrm{d}u_{im}\mathrm{d}F_{V,W}(v_{m},w_{m})\] \[= \int B(v_{m},v_{m},w_{i},w_{m})S_{im}(r)f_{V,W}(v_{m},w_{i}) \mathrm{d}w_{i}\mathrm{d}F_{V,W}(v_{m},w_{m})=\mathbb{E}[\tau_{m}(r)],\] (F.3)
where the first equality follows from \(\int K_{\gamma}(u)du=1\). All remaining terms are zero except the last one, which is of order \(O(\sigma_{N}^{\kappa_{\gamma}}\delta_{N})=o(\|r-\gamma\|^{2})\) by Assumptions C6 and C7.
Note that a second-order Taylor expansion around \(\gamma\) gives
\[\tau_{m}(r)-\tau_{m}(\gamma)=(r-\gamma)^{\prime}\nabla\tau_{m}( \gamma)+\frac{1}{2}(r-\gamma)^{\prime}\nabla^{2}\tau_{m}(\bar{r})(r-\gamma)\] \[= (r-\gamma)^{\prime}\nabla\tau_{m}(\gamma)+\frac{1}{2}(r-\gamma) ^{\prime}\nabla^{2}\tau_{m}(\gamma)(r-\gamma)+\frac{1}{2}(r-\gamma)^{\prime} \left[\nabla^{2}\tau_{m}(\bar{r})-\nabla^{2}\tau_{m}(\gamma)\right](r-\gamma),\]
and hence by Assumption C7
\[\mathbb{E}[\tau_{m}(r)]= (r-\gamma)^{\prime}\mathbb{E}[\nabla\tau_{m}(\gamma)]+\frac{1}{2 }(r-\gamma)^{\prime}\mathbb{E}[\nabla^{2}\tau_{m}(\gamma)](r-\gamma)+o(\|r- \gamma\|^{2})\] \[= \frac{1}{2}(r-\gamma)^{\prime}\mathbb{V}_{\gamma}(r-\gamma)+o(\|r -\gamma\|^{2}),\] (F.4)
where the second equality uses the fact that \(\mathbb{E}[\nabla\tau_{m}(\gamma)]=0\) since \(\mathbb{E}[\tau_{m}(r)]\) is maximized at \(\gamma\). Then, applying (F.2), (F.3), and (F.4) proves the lemma.
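The logic of (F.4) is just a second-order expansion around a maximizer. The following minimal numerical sketch (Python/NumPy, with a hypothetical two-dimensional \(\gamma\) and a negative definite matrix standing in for \(\mathbb{V}_{\gamma}\)) illustrates it.

```python
import numpy as np

# Numerical illustration of (F.4): if a smooth objective m(r) is maximized
# at gamma, its gradient there vanishes, so locally
#   m(r) - m(gamma) ~ (1/2)(r - gamma)' V_gamma (r - gamma).
gamma = np.array([0.5, -0.3])
H = np.array([[-2.0, 0.4], [0.4, -1.0]])        # negative definite Hessian

def m(r):
    d = r - gamma
    # smooth stand-in with maximum at gamma; the cubic term plays the role
    # of the o(||r - gamma||^2) remainder
    return 0.5 * d @ H @ d + 0.2 * d[0] ** 3

for eps in (0.1, 0.05, 0.025):
    d = eps * np.array([1.0, 1.0]) / np.sqrt(2)
    gap = m(gamma + d) - m(gamma) - 0.5 * d @ H @ d
    print(f"||r-gamma||={eps:.3f}  remainder/||r-gamma||^2 = {gap / eps**2:.4f}")
# The ratio shrinks linearly with ||r - gamma||, i.e. the remainder is
# o(||r - gamma||^2), exactly the form used in (F.4).
```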
Proof of Lemma A.5.: We can establish a representation for \(\mathbb{E}[\mathcal{L}_{N}^{K}(r)|Z_{m}]\) using the same arguments as in the proof of Lemma A.4, but without integrating over \(Z_{m}\). Specifically, the change of variables \(u_{im}=v_{im}/\sigma_{N}\) gives
\[\mathbb{E}[\mathcal{L}_{N}^{K}(r)|Z_{m}=z_{m}]=\int\sigma_{N}^{-2 }K_{\sigma_{N},\gamma}(v_{im})B(v_{i},v_{m},w_{i},w_{m})S_{im}(r)\mathrm{d}F_{ V,W}(v_{i},w_{i})\] \[= \int K_{\gamma}(u_{im})B(v_{m}+u_{im}\sigma_{N},v_{m},w_{i},w_{m} )S_{im}(r)f_{V,W}(v_{m}+u_{im}\sigma_{N},w_{i})\mathrm{d}w_{i}\mathrm{d}u_{im}.\]
The lead term of the \(\kappa_{\gamma}^{\text{th}}\)-order Taylor expansion inside the integral around \(v_{m}\) is
\[\int K_{\gamma}(u_{im})B(v_{m},v_{m},w_{i},w_{m})S_{im}(r)f_{V,W}( v_{m},w_{i})\mathrm{d}w_{i}\mathrm{d}u_{im}\] \[= \int B(v_{m},v_{m},w_{i},w_{m})S_{im}(r)f_{V,W}(v_{m},w_{i}) \mathrm{d}w_{i}=\tau_{m}(r)\]
by \(\int K_{\gamma}(u)du=1\), and the sample average of the bias term is of order \(o(\|r-\gamma\|^{2})\).
Then, applying Lemma A.4 and Assumption C7, we can write
\[\frac{2}{N}\sum_{m}\mathbb{E}[\mathcal{L}_{N}^{K}(r)|Z_{m}]-2 \mathbb{E}[\mathcal{L}_{N}^{K}(r)]\] \[= \frac{2}{N}\sum_{m}\tau_{m}(r)-(r-\gamma)^{\prime}\mathbb{V}_{ \gamma}(r-\gamma)+o_{P}(\|r-\gamma\|^{2})\] \[= \frac{2}{N}\sum_{m}(r-\gamma)^{\prime}\nabla\tau_{m}(\gamma)+ \frac{1}{N}\sum_{m}(r-\gamma)^{\prime}\left(\nabla^{2}\tau_{m}(\gamma)- \mathbb{V}_{\gamma}\right)(r-\gamma)+o_{P}(\|r-\gamma\|^{2}).\]
Then the desired result follows by noticing that \(N^{-1}\sum_{m}\nabla^{2}\tau_{m}(\gamma)-\mathbb{V}_{\gamma}=o_{P}(1)\) by the SLLN.
Proof of Lemma A.6.: By construction, we can write
\[\rho_{N}(r)=\frac{1}{N(N-1)}\sum_{i\neq m}\rho_{im}(r),\]
where \(\rho_{im}(r)\equiv(K_{\sigma_{N},\gamma}(V_{im})h_{im}(r)-2N^{-1}\sum_{i} \mathbb{E}[K_{\sigma_{N},\gamma}(V_{im})h_{im}(r)|Z_{i}]+\mathbb{E}[K_{\sigma_ {N},\gamma}(V_{im})h_{im}(r)])/\sigma_{N}^{2}\).
Note that \(\rho_{im}(\gamma)=0\) and \(|\rho_{im}(r)|\) is bounded by a multiple of \(M/\sigma_{N}^{2}\) where \(M\) is a positive constant. Define \(\mathcal{F}_{N}^{*}=\{\rho_{im}^{*}(r)|r\in\mathcal{R}\}\) where \(\rho_{im}^{*}(r)\equiv\sigma_{N}^{2}\rho_{im}(r)/M\). Then the Euclidean properties of the (\(P\)-degenerate) class of functions \(\mathcal{F}_{N}^{*}\) are deduced using similar arguments for proving Lemma A.2 in combination with Corollary 17 and Corollary 21 in Nolan and Pollard (1987). As \(\int K_{\sigma_{N},\gamma}(v)^{p}\mathrm{d}v=O(\sigma_{N}^{2})\) for \(p=1,2\), we have \(\sup_{r\in\mathcal{R}_{N}}\mathbb{E}[\rho_{im}^{*}(r)^{2}]=O(\sigma_{N}^{2})\). Then applying Theorem 3 of Sherman (1994b) gives
\[\frac{1}{N(N-1)}\sum_{i\neq m}\rho_{im}^{*}(r)=O_{P}(N^{-1}\sigma_{N}^{\alpha}),\]
where \(0<\alpha<1\), and hence \(\rho_{N}(r)=O_{P}(N^{-1}\sigma_{N}^{\alpha-2})=O_{P}(N^{-1}\sigma_{N}^{-2})\).
Proof of Lemma A.7.: The first step is to plug (A.5) into (A.4) so that \(\Delta\mathcal{L}_{N}^{K}(r)\) can be expressed as
\[\frac{1}{\sigma_{N}^{3}N(N-1)(N-2)}\sum_{i\neq m\neq l}[\nabla_{1}K_{\sigma_{ N},\gamma}(V_{im})X_{im1}^{\prime}\psi_{l,\beta}+\nabla_{2}K_{\sigma_{N}, \gamma}(V_{im})X_{im2}^{\prime}\psi_{l,\beta}]h_{im}(r)\] (F.5)
plus a remainder term of higher order since \(\sqrt{N}\sigma_{N}\to\infty\). (F.5) is a third-order \(U\)-process. We use the \(U\)-statistic decomposition in Sherman (1994b) (see also Serfling (2009)) to derive a linear representation. Note that (F.5) has unconditional mean zero, as is its mean conditional on each of its first two arguments. Furthermore, by Theorem 3 of Sherman (1994b) and arguments similar to those in the proof of Lemma A.6, the remainder term of the decomposition (projection) is of order \(O_{P}(N^{-1}\sigma_{N}^{-2})\). Hence it suffices to derive a linear representation for its mean conditional on
its third argument:
\[\frac{1}{\sigma_{N}^{3}N}\sum_{l}\int B(v_{i},v_{m},w_{i},w_{m})S_{im}(r)\nabla K_{ \sigma_{N},\gamma}(v_{im})^{\prime}\left(\begin{array}{c}x^{\prime}_{im1} \psi_{l,\beta}\\ x^{\prime}_{im2}\psi_{l,\beta}\end{array}\right)\mathrm{d}F_{X,W}(x_{i},w_{i}) \mathrm{d}F_{X,W}(x_{m},w_{m}).\]
where \(F_{X,W}(\cdot,\cdot)\) denotes the joint distribution function of \((X_{i},W_{i})\). While the above integral is expressed w.r.t. \((x_{i},w_{i})\) and \((x_{m},w_{m})\), it will prove convenient to express it in terms of \(v_{i}\) and \(v_{m}\). We do so as follows:
\[\frac{1}{\sigma_{N}^{3}N}\sum_{l}\left(\int\nabla K_{\sigma_{N},\gamma}(v_{im })^{\prime}G(v_{i},v_{m},r)f_{V}(v_{i})f_{V}(v_{m})\mathrm{d}v_{i}\mathrm{d}v_ {m}\right)\psi_{l,\beta}.\] (F.6)
Apply a change of variables in (F.6) with \(u_{im}=v_{im}/\sigma_{N}\) to obtain
\[\frac{1}{\sigma_{N}^{3}N}\sum_{l}\left(\int\nabla K_{\sigma_{N}, \gamma}(v_{im})^{\prime}G(v_{i},v_{m},r)f_{V}(v_{i})f_{V}(v_{m})\mathrm{d}v_{ i}\mathrm{d}v_{m}\right)\psi_{l,\beta}\] \[= \frac{1}{\sigma_{N}N}\sum_{l}\left(\int\nabla K_{\gamma}(u_{im}) ^{\prime}G(v_{m}+u_{im}\sigma_{N},v_{m},r)f_{V}(v_{m}+u_{im}\sigma_{N})f_{V}(v _{m})\mathrm{d}u_{im}\mathrm{d}v_{m}\right)\psi_{l,\beta}.\]
As \(\int\nabla K_{\gamma}(u)^{\prime}\mathrm{d}u=0\) and \(\int u_{j}\nabla_{j}K_{\gamma}(u)\mathrm{d}u=-1\) for \(j=1,2\), a second-order expansion inside the integral around \(u_{im}\sigma_{N}=0\) yields the lead term
Footnote 12: The second-order term is of order \(O_{P}(\delta_{N}\sigma_{N}/\sqrt{N})\), which will be \(o_{P}(\|r-\gamma\|^{2})\).
\[-\frac{1}{N}\sum_{l}\left(\int\nabla_{1}\mu(v_{m},v_{m},r)f_{V}(v_{m})\mathrm{ d}v_{m}\right)\psi_{l,\beta}.\]
We next apply a second-order Taylor expansion of \(\nabla_{1}\mu(v_{m},v_{m},r)\) around \(\gamma\) to yield
\[(r-\gamma)^{\prime}\frac{1}{N}\sum_{l}\left(-\int\nabla_{13}^{2}\mu(v_{m},v_ {m},\gamma)f_{V}(v_{m})\mathrm{d}v_{m}\right)\psi_{l,\beta}+o_{P}(\|r-\gamma \|^{2}),\]
which concludes the proof of this lemma.
Proof of Lemma A.8.: Recall that \(V_{im}\left(\beta\right)=\left(X^{\prime}_{im1}\beta,X^{\prime}_{im2}\beta \right).\) The following calculation is useful:
\[K_{\sigma_{N},\gamma}\left(V_{im}\left(\hat{\beta}\right)\right) -K_{\sigma_{N},\gamma}\left(V_{im}\left(\beta\right)\right)\] (F.7) \[= K\left(\frac{X^{\prime}_{im1}\hat{\beta}}{\sigma_{N}}\right)K \left(\frac{X^{\prime}_{im2}\hat{\beta}}{\sigma_{N}}\right)-K\left(\frac{X^{ \prime}_{im1}\beta}{\sigma_{N}}\right)K\left(\frac{X^{\prime}_{im2}\beta}{ \sigma_{N}}\right)\] \[= K^{\prime}\left(\frac{X^{\prime}_{im1}\beta}{\sigma_{N}}\right) K\left(\frac{X^{\prime}_{im2}\beta}{\sigma_{N}}\right)\frac{X^{\prime}_{im1} \left(\hat{\beta}-\beta\right)}{\sigma_{N}}+K\left(\frac{X^{\prime}_{im1}\beta }{\sigma_{N}}\right)K^{\prime}\left(\frac{X^{\prime}_{im2}\beta}{\sigma_{N}} \right)\frac{X^{\prime}_{im2}\left(\hat{\beta}-\beta\right)}{\sigma_{N}}+O_{P} \left(N^{-1}\sigma_{N}^{-2}\right),\]
where the last line holds by a Taylor expansion. Suppose we have a random variable \(q_{im}\) such
that \(\mathbb{E}\left(q_{im}|V_{im}\left(\beta\right)=v\right)\) exists, is \(\kappa_{\gamma}\)-times differentiable in \(v,\) and has derivatives up to order \(\kappa_{\gamma}\) that are uniformly bounded. We make the following claim
\[\frac{1}{\sigma_{N}^{2}N\left(N-1\right)}\sum_{i\neq m}K^{\prime}\left(\frac{X_ {im1}^{\prime}\beta}{\sigma_{N}}\right)K\left(\frac{X_{im2}^{\prime}\beta}{ \sigma_{N}}\right)q_{im}=O_{P}\left(N^{-1/2}\right).\] (F.8)
whose proof is deferred to the end. Using the above claim,
\[\frac{1}{\sigma_{N}^{2}N\left(N-1\right)}\sum_{i\neq m}K^{\prime} \left(\frac{X_{im1}^{\prime}\beta}{\sigma_{N}}\right)K\left(\frac{X_{im2}^{ \prime}\beta}{\sigma_{N}}\right)\frac{X_{im1}^{\prime}\left(\hat{\beta}- \beta\right)}{\sigma_{N}}Y_{im(1,1)}\mathrm{sgn}\left(W_{im}^{\prime}\gamma\right)\] (F.9) \[= \left[\frac{1}{\sigma_{N}^{2}N\left(N-1\right)}\sum_{i\neq m}K^{ \prime}\left(\frac{X_{im1}^{\prime}\beta}{\sigma_{N}}\right)K\left(\frac{X_{ im2}^{\prime}\beta}{\sigma_{N}}\right)X_{im1}^{\prime}Y_{im(1,1)}\mathrm{sgn} \left(W_{im}^{\prime}\gamma\right)\right]\frac{\left(\hat{\beta}-\beta\right) }{\sigma_{N}}\] \[= O_{P}\left(\frac{N^{-1/2}\left(\hat{\beta}-\beta\right)}{\sigma _{N}}\right)=O_{P}\left(N^{-1}\sigma_{N}^{-1}\right),\]
where we let \(q_{im}=X_{im1}^{\prime}Y_{im(1,1)}\mathrm{sgn}(W_{im}^{\prime}\gamma)\,,\) and use the result \(\hat{\beta}-\beta=O_{P}\left(N^{-1/2}\right).\) Clearly, this also holds if we replace \(X_{im1}^{\prime}\left(\hat{\beta}-\beta\right)\) with \(X_{im2}^{\prime}\left(\hat{\beta}-\beta\right).\)
Combining the results in (F.7) and (F.9), we obtain
\[\frac{1}{\sigma_{N}^{2}N\left(N-1\right)}\sum_{i\neq m}\left[K_{ \sigma_{N},\gamma}\left(V_{im}\left(\hat{\beta}\right)\right)-K_{\sigma_{N}, \gamma}\left(V_{im}\left(\beta\right)\right)\right]Y_{im(1,1)}\mathrm{sgn} \left(W_{im}^{\prime}\gamma\right)\] \[= 2\cdot O_{P}\left(N^{-1}\sigma_{N}^{-1}\right)+O_{P}\left(N^{-1 }\sigma_{N}^{-2}\right)=O_{P}\left(N^{-1}\sigma_{N}^{-2}\right),\]
as desired.
The remaining task is to show the claim in (F.8). Some standard results on U-statistics imply that the variance of the statistic in (F.8) is \(O\left(N^{-1}\right).\) So we only need to show that the expectation of the U-statistic is \(O\left(N^{-1/2}\right).\) We calculate the expectation as follows:
\[\mathbb{E}\left[K^{\prime}\left(\frac{X_{im1}^{\prime}\beta}{ \sigma_{N}}\right)K\left(\frac{X_{im2}^{\prime}\beta}{\sigma_{N}}\right)q_{im}\right]\] \[= \mathbb{E}\left[K^{\prime}\left(\frac{X_{im1}^{\prime}\beta}{ \sigma_{N}}\right)K\left(\frac{X_{im2}^{\prime}\beta}{\sigma_{N}}\right) \mathbb{E}\left(q_{im}|X_{im1}^{\prime}\beta,X_{im2}^{\prime}\beta\right)\right]\] \[= \int\int K^{\prime}\left(\frac{x_{1}}{\sigma_{N}}\right)K\left( \frac{x_{2}}{\sigma_{N}}\right)\mathbb{E}\left(q_{im}|X_{im1}^{\prime}\beta=x_{ 1},X_{im2}^{\prime}\beta=x_{2}\right)f_{X_{im1}^{\prime}\beta,X_{im2}^{\prime }\beta}\left(x_{1},x_{2}\right)dx_{1}dx_{2}\] \[= \int\int K^{\prime}\left(u_{1}\right)K\left(u_{2}\right)\mathbb{E }\left(q_{im}|X_{im1}^{\prime}\beta=u_{1}\sigma_{N},X_{im2}^{\prime}\beta=u_{2 }\sigma_{N}\right)f_{X_{im1}^{\prime}\beta,X_{im2}^{\prime}\beta}\left(u_{1} \sigma_{N},u_{2}\sigma_{N}\right)du_{1}du_{2}\] \[= \mathbb{E}\left(q_{im}|X_{im1}^{\prime}\beta=0,X_{im2}^{\prime} \beta=0\right)f_{X_{im1}^{\prime}\beta,X_{im2}^{\prime}\beta}\left(0,0\right) \int\int K^{\prime}\left(u_{1}\right)K\left(u_{2}\right)du_{1}du_{2}+o_{P} \left(N^{-1/2}\right)\] \[= 0+o_{P}\left(N^{-1/2}\right)=o_{P}\left(N^{-1/2}\right),\]
where the second-to-last line holds because the bias term after the Taylor expansion is \(o_{P}\left(N^{-1/2}\right),\) and the last line holds by \(\int K^{\prime}\left(u\right)du=0.\) This proves the claim.
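The role of \(\int K^{\prime}\left(u\right)du=0\) in killing the \(O(1)\) term can be checked numerically. The following sketch (Python/NumPy) evaluates the normalized moment by quadrature for a Gaussian kernel and a smooth stand-in for \(\mathbb{E}\left(q_{im}|V\right)f_{V}\); all ingredients are illustrative, not the objects of this paper.

```python
import numpy as np

# Numerical version of the final step of the claim (F.8): since
# int K'(u) du = 0, the normalized moment
#   J(sigma) = sigma^{-2} E[ K'(V1/sigma) K(V2/sigma) psi(V1, V2) ]
# picks up no O(1) term and shrinks linearly in sigma.
u = np.linspace(-8.0, 8.0, 1601)
du = u[1] - u[0]
K = np.exp(-u**2 / 2) / np.sqrt(2 * np.pi)
dK = -u * K                                     # derivative of the Gaussian kernel
print("int K'(u) du =", dK.sum() * du)          # ~ 0

psi = lambda v1, v2: np.exp(-(v1 - 0.3) ** 2 - (v2 + 0.2) ** 2)  # smooth stand-in
U1, U2 = np.meshgrid(u, u, indexing="ij")
for sigma in (0.4, 0.2, 0.1):
    # after the change of variables v = sigma * u, J(sigma) becomes
    # the double integral of K'(u1) K(u2) psi(sigma u1, sigma u2)
    J = (dK[:, None] * K[None, :] * psi(sigma * U1, sigma * U2)).sum() * du * du
    print(f"sigma={sigma:4.2f}  J = {J:+.5f}")
# Halving sigma roughly halves J, consistent with J = O(sigma), which is
# o(N^{-1/2}) once sigma_N shrinks fast enough.
```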
Proof of Lemma A.9.: Recall that \(\hat{\mathcal{L}}_{N}^{K}\left(r\right)\) was redefined in Appendix A.2. Based on that representation, equation (A.8) implies that
\[\hat{\mathcal{L}}_{N}^{K}\left(r\right) = \frac{1}{2}\left(r-\gamma\right)^{\prime}\mathbb{V}_{\gamma}\left( r-\gamma\right)+O_{P}\left(N^{-1/2}\left\|r-\gamma\right\|\right)+o_{P}\left( \left\|r-\gamma\right\|^{2}\right)\] \[+\frac{1}{\sigma_{N}^{2}N\left(N-1\right)}\sum_{i\neq m}K_{ \sigma_{N},\gamma}\left(V_{im}\left(\beta\right)\right)Y_{im(1,1)}\mathrm{sgn }\left(W_{im}^{\prime}\gamma\right)+O_{P}\left(N^{-1}\sigma_{N}^{-2}\right).\]
Using the result in Lemma A.8,
\[\hat{\mathcal{L}}_{N}^{K}\left(r\right) = \frac{1}{2}\left(r-\gamma\right)^{\prime}\mathbb{V}_{\gamma} \left(r-\gamma\right)+O_{P}\left(N^{-1/2}\left\|r-\gamma\right\|\right)+o_{P} \left(\left\|r-\gamma\right\|^{2}\right)\] \[+\frac{1}{\sigma_{N}^{2}N\left(N-1\right)}\sum_{i\neq m}K_{ \sigma_{N},\gamma}\left(V_{im}\left(\beta\right)\right)Y_{im(1,1)}\mathrm{ sgn}\left(W_{im}^{\prime}\gamma\right)+o_{P}\left(N^{-1/2}\right)+O_{P}\left(N^{-1} \sigma_{N}^{-2}\right).\]
Since \(N^{1/2}\sigma_{N}^{2}\rightarrow\infty\) by Assumption C9, \(O_{P}\left(N^{-1}\sigma_{N}^{-2}\right)=o_{P}\left(N^{-1/2}\right),\) and this completes the proof.
Proof of Lemma A.10.: Using the definition \(V_{im}\left(\beta\right)=\left(X_{im1}^{\prime}\beta,X_{im2}^{\prime}\beta\right)\), we calculate the following:
\[\mathbb{E}\left[\sigma_{N}^{-2}K_{\sigma_{N},\gamma}\left(V_{im} \left(\beta\right)\right)Y_{im(1,1)}\mathrm{sgn}\left(W_{im}^{\prime}\gamma \right)\left|Z_{i}\right.\right]\] \[= \mathbb{E}\left[\sigma_{N}^{-2}K\left(\frac{X_{im1}^{\prime}\beta }{\sigma_{N}}\right)K\left(\frac{X_{im2}^{\prime}\beta}{\sigma_{N}}\right) \mathbb{E}\left[Y_{im(1,1)}\mathrm{sgn}\left(W_{im}^{\prime}\gamma\right) \left|Z_{i},X_{m1},X_{m2}\right.\right]\left|Z_{i}\right.\right]\] \[= \mathbb{E}\left[Y_{im(1,1)}\mathrm{sgn}\left(W_{im}^{\prime} \gamma\right)\left|Z_{i},V_{m}\left(\beta\right)=V_{i}\left(\beta\right) \right.\right]+o_{P}\left(N^{-1/2}\right)\]
where the last line holds by standard calculations, a Taylor expansion, and Assumption C9. This shows the conclusion.
Proof of Lemma B.1.: We verify the conditions in Assumption M of Seo and Otsu (2018), specifically M.i, M.ii, and M.iii, one by one. Recall that
\[\phi_{Ni}\left(b\right) \equiv\sum_{t>s}\sum_{d\in\mathcal{D}}\{\mathcal{K}_{h_{N}}(X_{i2 ts},W_{its})Y_{idst}\left(-1\right)^{d_{1}}\left(1[X_{i1ts}^{\prime}b>0]-1[X_{i1ts}^{ \prime}\beta>0]\right)\] \[+\mathcal{K}_{h_{N}}(X_{i1ts},W_{its})Y_{idst}\left(-1\right)^{d_ {2}}\left(1[X_{i2st}^{\prime}b>0]-1[X_{i2st}^{\prime}\beta>0]\right)\},\]
The "\(h_{n}\)" in Seo and Otsu (2018) needs to be set as \(h_{N}^{k_{1}+k_{2}}\) for the term \(\phi_{Ni}\left(b\right)\). Similarly, the "\(h_{n}\)" in Seo and Otsu (2018) needs to be set as \(\sigma_{N}^{2k_{1}}\) for the term \(\varphi_{Ni}\left(r\right)\). We conduct the analysis for \(\phi_{Ni}\left(b\right)\) first, and the results for \(\varphi_{Ni}\left(r\right)\) follow similarly. For \(\phi_{Ni}\left(b\right),\) we analyze the term
\[\mathcal{K}_{h_{N}}(X_{i2ts},W_{its})Y_{idst}\left(-1\right)^{d_{1}}\left(1[X_{ i1ts}^{\prime}b>0]-1[X_{i1ts}^{\prime}\beta>0]\right),\]
and the remaining terms can be analyzed similarly, thanks to the similar structure of those terms.
**On Assumption M.i in Seo and Otsu (2018)**. Note that
\[\mathbb{E}[\mathcal{K}_{h_{N}}(X_{i2ts},W_{its})Y_{idst}\left(-1 \right)^{d_{1}}\left(1\left[X_{i1ts}^{\prime}b>0\right]-1\left[X_{i1ts}^{\prime }\beta>0\right]\right)]\] (F.10) \[= \int_{\mathbb{R}^{k_{2}}}\int_{\mathbb{R}^{k_{1}}}\mathbb{E}[Y_{ idst}\left(-1\right)^{d_{1}}\left(1\left[X_{i1ts}^{\prime}b>0\right]-1\left[X_{ i1ts}^{\prime}\beta>0\right]\right)|X_{i2ts}=x_{2},W_{its}=w]\] \[\cdot\mathcal{K}_{h_{N}}(x_{2},w)f_{X_{2ts},W_{ts}}\left(x_{2},w \right)\mathrm{d}x_{2}\mathrm{d}w\] \[= \mathbb{E}[Y_{idst}\left(-1\right)^{d_{1}}\left(1\left[X_{i1ts}^ {\prime}b>0\right]-1\left[X_{i1ts}^{\prime}\beta>0\right]\right)|X_{i2ts}=0,W_ {its}=0]f_{X_{2ts},W_{ts}}\left(0,0\right)+O(h_{N}^{2}),\]
where the second equality holds by a change of variables and a Taylor expansion, and the smaller-order term is controlled by the properties of the kernel function and the boundedness conditions imposed in Assumptions P5-P7. Further, by Assumption P8, the smaller-order term satisfies \(O(h_{N}^{2})=o((Nh_{N}^{k_{1}+k_{2}})^{-2/3}).\) As a result, we only need to focus on the lead term
\[\mathbb{E}[Y_{idst}\left(-1\right)^{d_{1}}\left(1\left[X_{i1ts}^{ \prime}b>0\right]-1\left[X_{i1ts}^{\prime}\beta>0\right]\right)|X_{i2ts}=0,W_{its }=0]\] \[= \mathbb{E}[\mathbb{E}[Y_{idst}\left(-1\right)^{d_{1}}|X_{i1ts},X_ {i2ts}=0,W_{its}=0]\left(1\left[X_{i1ts}^{\prime}b>0\right]-1\left[X_{i1ts}^{ \prime}\beta>0\right]\right)|X_{i2ts}=0,W_{its}=0]\] \[\equiv \mathbb{E}[\kappa_{dts}^{(1)}\left(X_{i1ts}\right)\left(1\left[X_ {i1ts}^{\prime}b>0\right]-1\left[X_{i1ts}^{\prime}\beta>0\right]\right)|X_{i2 ts}=0,W_{its}=0]\] \[= \int_{\mathbb{R}^{k_{1}}}\kappa_{dts}^{(1)}\left(x\right)\left(1 \left[x^{\prime}b>0\right]-1\left[x^{\prime}\beta>0\right]\right)f_{X_{1ts}|\{ X_{2ts}=0,W_{ts}=0\}}\left(x\right)\mathrm{d}x,\] (F.11)
where, to ease notation, we denote
\[\kappa_{dts}^{(1)}\left(x\right)=\mathbb{E}[Y_{idst}\left(-1\right)^{d_{1}}|X_ {i1ts}=x,X_{i2ts}=0,W_{its}=0]\]
in the third line.
We now derive the first and second derivatives of the above term w.r.t. \(b\) around \(\beta.\) Since the calculation is classical differential geometry and closely parallels that in Sections 5 and 6.4 of Kim and Pollard (1990) and Section B.1 of Seo and Otsu (2018), we only present the key steps and omit standard details. First, define the mapping
\[T_{b}=(I-\left\|b\right\|_{2}^{-2}bb^{\prime})(I-\beta\beta^{\prime})+\left\|b \right\|_{2}^{-2}b\beta^{\prime}.\]
Then \(T_{b}\) maps \(\left\{X^{\prime}_{i1ts}b>0\right\}\) onto \(\left\{X^{\prime}_{i1ts}\beta>0\right\}\) and \(\left\{X^{\prime}_{i1ts}b=0\right\}\) onto \(\left\{X^{\prime}_{i1ts}\beta=0\right\}.\) With equation (F.11), equations (5.2) and (5.3) in Kim and Pollard (1990) imply
\[\frac{\partial}{\partial b}\mathbb{E}\left[Y_{idst}\left(-1\right) ^{d_{1}}\left(1\left[X^{\prime}_{i1ts}b>0\right]-1\left[X^{\prime}_{i1ts}\beta> 0\right]\right)\right|X_{i2ts}=0,W_{its}=0\right]\] \[=\frac{\partial}{\partial b}\int_{\mathbb{R}^{k_{1}}}\kappa_{dts}^{ \left(1\right)}\left(x\right)\left(1\left[x^{\prime}b>0\right]-1\left[x^{ \prime}\beta>0\right]\right)f_{X_{1ts}\left|\left\{X_{2ts}=0,W_{ts}=0\right\}} \left(x\right)\mathrm{d}x\] \[=\left\|b\right\|_{2}^{-2}b^{\prime}\beta\left(I-\left\|b\right\| _{2}^{-2}bb^{\prime}\right)\int 1\left[x^{\prime}\beta=0\right]\kappa_{dts}^{ \left(1\right)}\left(T_{b}x\right)xf_{X_{1ts}\left|\left\{X_{2ts}=0,W_{ts}=0 \right\}}\left(T_{b}x\right)\mathrm{d}\sigma_{0}^{\left(1\right)},\]
where \(\sigma_{0}^{\left(1\right)}\) is the surface measure of \(\left\{X^{\prime}_{i1ts}\beta=0\right\}.\) Since \(T_{\beta}x=x,\) we have
\[1\left[x^{\prime}\beta=0\right]\kappa_{dts}^{\left(1\right)}\left(T_{\beta}x \right)=1\left[x^{\prime}\beta=0\right]\kappa_{dts}^{\left(1\right)}\left(x \right)=0\]
because
\[\kappa_{dts}^{\left(1\right)}\left(x\right)\Big{|}_{x^{ \prime}\beta=0} =\mathbb{E}\left[Y_{idst}\left(-1\right)^{d_{1}}\right|X^{\prime}_{ i1ts}\beta=0,X_{i2ts}=0,W_{its}=0\right]\] \[=\mathbb{E}\left[\left(Y_{idt}-Y_{ids}\right)\left(-1\right)^{d_{ 1}}\right|X^{\prime}_{i1t}\beta=X^{\prime}_{i1s}\beta,X_{i2t}=X_{i2s},W_{it}=W _{is}\right]\] \[=0,\]
where the last line holds by Assumption P2 that \(\xi_{s}\overset{d}{=}\xi_{t}|(\alpha,Z^{T}).\) This implies that
\[\frac{\partial}{\partial b}\mathbb{E}\left[Y_{idst}\left(-1\right)^{d_{1}} \left(1\left[X^{\prime}_{i1ts}b>0\right]-1\left[X^{\prime}_{i1ts}\beta>0\right] \right)\right|X_{i2ts}=0,W_{its}=0\right]\bigg{|}_{b=\beta}=0.\]
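As a quick sanity check on the construction of \(T_{b}\): the identity \(b^{\prime}T_{b}x=\beta^{\prime}x\), which underlies the hyperplane and half-space mapping used above, can be verified numerically. The sketch below (Python/NumPy, with arbitrary illustrative directions; \(\|\beta\|=1\) is the usual normalization, though the identity itself does not require it) does so.

```python
import numpy as np

# Check of the algebraic identity b' T_b x = beta' x behind the mapping T_b.
rng = np.random.default_rng(3)
k = 5
beta = rng.normal(size=k)
beta /= np.linalg.norm(beta)
b = beta + 0.1 * rng.normal(size=k)             # a nearby direction

I = np.eye(k)
T_b = (I - np.outer(b, b) / (b @ b)) @ (I - np.outer(beta, beta)) \
      + np.outer(b, beta) / (b @ b)

x = rng.normal(size=(1000, k))
lhs = (x @ T_b.T) @ b                           # b' (T_b x) for each row x
rhs = x @ beta                                  # beta' x
print("max |b'T_b x - beta'x| =", np.abs(lhs - rhs).max())   # ~ 1e-15
```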
For the same reason, the nonzero component of the second-order derivative at \(b=\beta\) only comes from the derivative of \(\kappa_{dts}^{\left(1\right)}\left(X_{i1ts}\right).\) Noticing that \(\left.\frac{\partial\kappa_{dts}^{\left(1\right)}\left(T_{b}x\right)}{\partial b }\right|_{b=\beta}=-\left(\frac{\partial\kappa_{dts}^{\left(1\right)}\left(x \right)^{\prime}}{\partial x}\beta\right)x,\) we have
\[\frac{\partial^{2}}{\partial b\partial b^{\prime}}\mathbb{E} \left[Y_{idst}\left(-1\right)^{d_{1}}\left(1\left[X^{\prime}_{i1ts}b>0\right]- 1\left[X^{\prime}_{i1ts}\beta>0\right]\right)\right|X_{i2ts}=0,W_{its}=0\bigg{]} \bigg{|}_{b=\beta}\] \[= -\int 1\left[x^{\prime}\beta=0\right]\left(\frac{\partial\kappa_{ dts}^{\left(1\right)}\left(x\right)^{\prime}}{\partial x}\beta\right)f_{X_{1ts} \left|\left\{X_{2ts}=0,W_{ts}=0\right\}}\left(x\right)xx^{\prime}\mathrm{d} \sigma_{0}^{\left(1\right)}\equiv\mathbb{V}_{dts}^{\left(1\right)}.\]
With the first and second derivatives obtained, we can move on to apply the Taylor expansion to the expectation term in equation (F.11) around \(b=\beta\) as
\[\mathbb{E}\left[Y_{idst}\left(-1\right)^{d_{1}}\left(1\left[X^{ \prime}_{i1ts}b>0\right]-1\left[X^{\prime}_{i1ts}\beta>0\right]\right)\right|X _{i2ts}=0,W_{its}=0\right]\] \[= \frac{1}{2}\left(b-\beta\right)^{\prime}\mathbb{V}_{dts}^{\left(1 \right)}\left(b-\beta\right)+o\left(\left\|b-\beta\right\|^{2}\right).\]
Substituting this back into equation (F.10), we have
\[\mathbb{E}\left[\mathcal{K}_{h_{N}}(X_{i2ts},W_{its})Y_{idst}\left(-1 \right)^{d_{1}}\left(1\left[X^{\prime}_{i1ts}b>0\right]-1\left[X^{\prime}_{i1ts} \beta>0\right]\right)\right]\] \[= \frac{1}{2}\left(b-\beta\right)^{\prime}\mathbb{V}_{dts}^{(1)} \left(b-\beta\right)+o\left(\left\|b-\beta\right\|^{2}\right)+o\left(\left(Nh _{N}^{k_{1}+k_{2}}\right)^{-2/3}\right).\]
We can similarly define
\[\mathbb{V}_{dts}^{(2)}\equiv-\int 1\left[x^{\prime}\beta=0\right]\left(\frac{ \partial\kappa_{dts}^{(2)}\left(x\right)^{\prime}}{\partial x}\,\beta\right) f_{X_{2ts}|\left\{X_{1ts}=0,W_{ts}=0\right\}}\left(x\right)xx^{\prime}\mathrm{d} \sigma_{0}^{(2)},\]
where
\[\kappa_{dts}^{(2)}\left(x\right)\equiv\mathbb{E}\left[\left.Y_{idst}\left(-1 \right)^{d_{2}}\right|X_{i2ts}=x,X_{i1ts}=0,W_{its}=0\right],\]
and \(\sigma_{0}^{(2)}\) is the surface measure of \(\{X^{\prime}_{i2ts}\beta=0\}\). A similar derivation yields
\[\mathbb{E}\left[\mathcal{K}_{h_{N}}(X_{i1ts},W_{its})Y_{idst}\left( -1\right)^{d_{2}}\left(1\left[X^{\prime}_{i2st}b>0\right]-1\left[X^{\prime}_{ i2st}\beta>0\right]\right)\right]\] \[= \frac{1}{2}\left(b-\beta\right)^{\prime}\mathbb{V}_{dts}^{(2)} \left(b-\beta\right)+o\left(\left\|b-\beta\right\|^{2}\right)+o\left(\left(Nh _{N}^{k_{1}+k_{2}}\right)^{-2/3}\right).\]
Putting everything together, we have
\[\mathbb{E}\left[\phi_{Ni}\left(b\right)\right]=\frac{1}{2}\left(b-\beta \right)^{\prime}\left[\sum_{t>s}\sum_{d\in\mathcal{D}}\left(\mathbb{V}_{dts}^ {(1)}+\mathbb{V}_{dts}^{(2)}\right)\right]\left(b-\beta\right)+o(\left\|b- \beta\right\|^{2})+o((Nh_{N}^{k_{1}+k_{2}})^{-2/3}).\] (F.12)
Assumption M.i in Seo and Otsu (2018) is then verified by
\[\mathbb{V}\equiv \sum_{t>s}\sum_{d\in\mathcal{D}}\left(\mathbb{V}_{dts}^{(1)}+ \mathbb{V}_{dts}^{(2)}\right)\] \[= -\sum_{t>s}\sum_{d\in\mathcal{D}}\left\{\int 1\left[x^{\prime} \beta=0\right]\left(\frac{\partial\kappa_{dts}^{(1)}\left(x\right)^{\prime}}{ \partial x}\,\beta\right)f_{X_{1ts}|\left\{X_{2ts}=0,W_{ts}=0\right\}}\left(x \right)xx^{\prime}\mathrm{d}\sigma_{0}^{(1)}\right.\] \[\left.+\int 1\left[x^{\prime}\beta=0\right]\left(\frac{\partial \kappa_{dts}^{(2)}\left(x\right)^{\prime}}{\partial x}\,\beta\right)f_{X_{2ts}| \left\{X_{1ts}=0,W_{ts}=0\right\}}\left(x\right)xx^{\prime}\mathrm{d}\sigma_{0 }^{(2)}\right\}.\]
We can obtain similar results for \(\varphi_{Ni}\left(r\right)\) using exactly the same line of analysis, i.e.,
\[\mathbb{E}\left[\varphi_{Ni}\left(r\right)\right]=\frac{1}{2}\left(r-\gamma \right)^{\prime}\mathbb{W}\left(r-\gamma\right)+o(\left\|r-\gamma\right\|^{2}) +o((N\sigma_{N}^{2k_{1}})^{-2/3}),\] (F.13)
where
\[\mathbb{W}\equiv -\sum_{t>s}\int 1\left[w^{\prime}\gamma=0\right]\left(\frac{ \partial\kappa_{(1,1)ts}^{(3)}\left(w\right)^{\prime}}{\partial w}\,\gamma \right)f_{W_{ts}|\left\{X_{1ts}=0,X_{2ts}=0\right\}}\left(w\right)ww^{\prime} \mathrm{d}\sigma_{0}^{(3)}\]
with
\[\kappa_{(1,1)ts}^{(3)}\left(w\right)\equiv\mathbb{E}\left[Y_{i(1,1)ts}|W_{its}=w,X_ {i1ts}=0,X_{i2ts}=0\right]\]
and \(\sigma_{0}^{(3)}\) being the surface measure of \(\left\{W_{its}^{\prime}\gamma=0\right\}\).
**On Assumption M.ii in Seo and Otsu (2018)**. Again, we first verify this condition for
\[\mathcal{K}_{h_{N}}(X_{i2ts},W_{its})Y_{idst}\left(-1\right)^{d_{1}}\left(1 \left[X_{i1ts}^{\prime}b>0\right]-1\left[X_{i1ts}^{\prime}\beta>0\right] \right),\]
and other components in \(\phi_{Ni}\left(b\right)\) follow similarly. We evaluate the norm of the difference of the above term at \(b=b_{1}\) and \(b=b_{2}\), multiplied by \(h_{N}^{\left(k_{1}+k_{2}\right)/2}\):
\[h_{N}^{\left(k_{1}+k_{2}\right)/2}\left\|\mathcal{K}_{h_{N}}(X_ {i2ts},W_{its})Y_{idst}\left(-1\right)^{d_{1}}\left(1\left[X_{i1ts}^{\prime}b _{1}>0\right]-1\left[X_{i1ts}^{\prime}b_{2}>0\right]\right)\right\|\] (F.14) \[= \left\{\mathbb{E}\left[\mathbb{E}\left[h_{N}^{k_{1}+k_{2}} \mathcal{K}_{h_{N}}^{2}(X_{i2ts},W_{its})Y_{idst}^{2}\left|X_{i1ts}\right.\right] \left(1\left[X_{i1ts}^{\prime}b_{1}>0\right]-1\left[X_{i1ts}^{\prime}b_{2}>0 \right]\right)^{2}\right]\right\}^{1/2}\] \[= \left\{\mathbb{E}\left[\mathbb{E}\left[h_{N}^{k_{1}+k_{2}} \mathcal{K}_{h_{N}}^{2}(X_{i2ts},W_{its})Y_{idst}^{2}\left|X_{i1ts}\right.\right] \left|1\left[X_{i1ts}^{\prime}b_{1}>0\right]-1\left[X_{i1ts}^{\prime}b_{2}>0 \right]\right|\right]\right\}^{1/2}\] \[\geq C_{1}\mathbb{E}\left[\left|1\left[X_{i1ts}^{\prime}b_{1}>0 \right]-1\left[X_{i1ts}^{\prime}b_{2}>0\right]\right|\right]\geq C_{2}\left\|b_{1}-b_{2}\right\|,\]
where the second equality holds by the fact that the difference of two indicator functions can only take values \(-1,0,\) or \(1\). \(C_{1}\) and \(C_{2}\) are two positive constants. Applying the same analysis to all other terms in \(\phi_{Ni}\left(b\right),\) we have
\[h_{N}^{\left(k_{1}+k_{2}\right)/2}\left\|\phi_{Ni}\left(b_{1}\right)-\phi_{Ni} \left(b_{2}\right)\right\|\geq C_{3}\left\|b_{1}-b_{2}\right\|\]
for some positive constant \(C_{3}\).
Applying similar analysis to \(\varphi_{Ni}\left(r\right)\) leads to
\[\sigma_{N}^{k_{1}}\left\|\varphi_{Ni}\left(r_{1}\right)-\varphi_{Ni}\left(r_{2 }\right)\right\|\geq C_{4}\left\|r_{1}-r_{2}\right\|\]
for some positive constant \(C_{4}\).
**On Assumption M.iii in Seo and Otsu (2018)**. We still begin with the analysis on
\[\mathcal{K}_{h_{N}}(X_{i2ts},W_{its})Y_{idst}\left(-1\right)^{d_{1}}\left(1 \left[X_{i1ts}^{\prime}b>0\right]-1\left[X_{i1ts}^{\prime}\beta>0\right] \right),\]
and other components in \(\phi_{Ni}\left(b\right)\) follow similarly. We evaluate the square of the difference of the above
term at \(b=b_{1}\) and \(b=b_{2}\), such that \(\|b_{1}-b_{2}\|<\varepsilon\), multiplied by \(h_{N}^{k_{1}+k_{2}}\):
\[h_{N}^{k_{1}+k_{2}}\mathbb{E}\left[\sup_{b_{1},b_{2}\in\mathcal{B},\|b_{1}-b_{2}\|<\varepsilon}\left[\mathcal{K}_{h_{N}}(X_{i2ts},W_{its})Y_{idst} \left(-1\right)^{d_{1}}\left(1\left[X_{i1ts}^{\prime}b_{1}>0\right]-1\left[X_{i1 ts}^{\prime}b_{2}>0\right]\right)\right]^{2}\right]\] \[\leq \mathbb{E}\left[\mathbb{E}\left[\left.h_{N}^{k_{1}+k_{2}}\mathcal{ K}_{h_{N}}^{2}(X_{i2ts},W_{its})Y_{idst}^{2}\right|X_{i1ts}\right]\sup_{b_{1},b_{2} \in\mathcal{B},\|b_{1}-b_{2}\|<\varepsilon}\left|1\left[X_{i1ts}^{\prime}b_{1}> 0\right]-1\left[X_{i1ts}^{\prime}b_{2}>0\right]\right|\right]\] \[\leq C_{5}\mathbb{E}\left[\sup_{b_{1},b_{2}\in\mathcal{B},\|b_{1}-b_{ 2}\|<\varepsilon}\left|1\left[X_{i1ts}^{\prime}b_{1}>0\right]-1\left[X_{i1ts} ^{\prime}b_{2}>0\right]\right|\right]\leq C_{6}\varepsilon,\]
where the second line follows from the fact that the maximum of the absolute value of the difference of two indicator functions is \(1\), and the last line holds by Assumptions P4 and P5. Applying the same analysis to all other terms in \(\phi_{Ni}\left(b\right)\), we obtain
\[h_{N}^{k_{1}+k_{2}}\mathbb{E}\left[\sup_{b_{1},b_{2}\in\mathcal{B},\|b_{1}-b_ {2}\|<\varepsilon}\left[\phi_{Ni}\left(b_{1}\right)-\phi_{Ni}\left(b_{2} \right)\right]^{2}\right]\leq C_{7}\varepsilon\]
for some positive constant \(C_{7}\).
Similar analysis on \(\varphi_{Ni}\left(r\right)\) leads to
\[\sigma_{N}^{2k_{1}}\mathbb{E}\left[\sup_{r_{1},r_{2}\in\mathcal{R},\|r_{1}-r_{2}\| <\varepsilon}\left[\varphi_{Ni}\left(r_{1}\right)-\varphi_{Ni}\left(r_{2} \right)\right]^{2}\right]\leq C_{8}\varepsilon\]
for some positive constant \(C_{8}\).
Proof of Lemma B.2.: The first two equalities are direct consequences of Lemma B.1 via Taylor expansions. Taking \(b=\beta+\rho(Nh_{N}^{k_{1}+k_{2}})^{-1/3}\) in equation (F.12) yields
\[\left(Nh_{N}^{k_{1}+k_{2}}\right)^{2/3}\mathbb{E}\left[\phi_{Ni}\left(\beta+ \rho\left(Nh_{N}^{k_{1}+k_{2}}\right)^{-1/3}\right)\right]=\frac{1}{2}\rho^{ \prime}\mathbb{V}\rho+o\left(1\right)\rightarrow\frac{1}{2}\rho^{\prime} \mathbb{V}\rho.\]
Similarly, setting \(r=\gamma+\delta(N\sigma_{N}^{2k_{1}})^{-1/3}\) in equation (F.13) gives
\[\left(N\sigma_{N}^{2k_{1}}\right)^{2/3}\mathbb{E}\left[\varphi_{Ni}\left( \gamma+\delta\left(N\sigma_{N}^{2k_{1}}\right)^{-1/3}\right)\right]=\frac{1}{ 2}\delta^{\prime}\mathbb{W}\delta+o\left(1\right)\rightarrow\frac{1}{2} \delta^{\prime}\mathbb{W}\delta.\]
\(\mathbb{H}_{1}\) and \(\mathbb{H}_{2}\) can be obtained in the same way as in Kim and Pollard (1990). We omit some similar yet tedious details and refer interested readers to Section 6.4 in Kim and Pollard (1990).
To get \(\mathbb{H}_{1}\), we let \(\upsilon_{N}\equiv(Nh_{N}^{k_{1}+k_{2}})^{1/3}\) and define
\[\mathbb{L}\left(\rho_{1}-\rho_{2}\right) \equiv\lim_{N\rightarrow\infty}\upsilon_{N}\mathbb{E}\left\{h_{N} ^{k_{1}+k_{2}}\left[\phi_{Ni}\left(\beta+\rho_{1}\upsilon_{N}^{-1}\right)-\phi _{Ni}\left(\beta+\rho_{2}\upsilon_{N}^{-1}\right)\right]^{2}\right\},\] \[\mathbb{L}\left(\rho_{1}\right) \equiv\lim_{N\rightarrow\infty}\upsilon_{N}\mathbb{E}\left\{h_{N }^{k_{1}+k_{2}}\left[\phi_{Ni}\left(\beta+\rho_{1}\upsilon_{N}^{-1}\right)-\phi _{Ni}\left(\beta\right)\right]^{2}\right\},\text{ and }\] \[\mathbb{L}\left(\rho_{2}\right) \equiv\lim_{N\rightarrow\infty}\upsilon_{N}\mathbb{E}\left\{h_{N }^{k_{1}+k_{2}}\left[\phi_{Ni}\left(\beta+\rho_{2}\upsilon_{N}^{-1}\right)-\phi _{Ni}\left(\beta\right)\right]^{2}\right\}.\]
Since \(\phi_{Ni}\left(\beta\right)=0\),
\[\mathbb{H}_{1}\left(\rho_{1},\rho_{2}\right)=\frac{1}{2}\left[\mathbb{L}\left( \rho_{1}\right)+\mathbb{L}\left(\rho_{2}\right)-\mathbb{L}\left(\rho_{1}-\rho _{2}\right)\right].\] (F.15)
We calculate \(\mathbb{L}\) as follows. Notice that
\[\mathbb{E}\left[h_{N}^{k_{1}+k_{2}}\mathcal{K}_{h_{N}}(X_{ijts},W _{its})^{2}\right]\] \[= \int\frac{1}{h_{N}^{k_{1}+k_{2}}}\left[\Pi_{\iota=1}^{k_{1}}K \left(\frac{X_{ijts,\iota}}{h_{N}}\right)\Pi_{\iota=1}^{k_{2}}K\left(\frac{W_{ its,\iota}}{h_{N}}\right)\right]^{2}\mathrm{d}F_{X_{ijts},W_{its}}=O(1),\] \[\mathbb{E}\left[h_{N}^{k_{1}+k_{2}}\mathcal{K}_{h_{N}}(X_{ijts},W _{its})\mathcal{K}_{h_{N}}(X_{ij^{\prime}t^{\prime}s^{\prime}},W_{it^{\prime }s^{\prime}})\right]\] \[= \int\int\frac{1}{h_{N}^{k_{1}+k_{2}}}\Pi_{\iota=1}^{k_{1}}K \left(\frac{X_{ijts,\iota}}{h_{N}}\right)\Pi_{\iota=1}^{k_{2}}K\left(\frac{W_{ its,\iota}}{h_{N}}\right)\] \[\cdot\Pi_{\iota=1}^{k_{1}}K\left(\frac{X_{ij^{\prime}t^{\prime}s^ {\prime},\iota}}{h_{N}}\right)\Pi_{\iota=1}^{k_{2}}K\left(\frac{W_{it^{\prime }s^{\prime},\iota}}{h_{N}}\right)\mathrm{d}F_{X_{ijts},W_{its}}\mathrm{d}F_{X_{ ij^{\prime}t^{\prime}s^{\prime}},W_{it^{\prime}s^{\prime}}}\] \[= o\left(1\right),\text{ for any }\left(j,t,s\right)\neq\left(j^{\prime},t^{\prime},s^{\prime}\right),\]
and
\[\phi_{Ni}\left(\beta+\rho_{1}\upsilon_{N}^{-1}\right)-\phi_{Ni} \left(\beta+\rho_{2}\upsilon_{N}^{-1}\right)\] \[= \sum_{t>s}\sum_{d\in\mathcal{D}}\left\{\mathcal{K}_{h_{N}}(X_{i2ts },W_{its})Y_{idst}\left(-1\right)^{d_{1}}\left(1\left[X_{i1ts}^{\prime}\left( \beta+\rho_{1}\upsilon_{N}^{-1}\right)>0\right]-1\left[X_{i1ts}^{\prime}\left( \beta+\rho_{2}\upsilon_{N}^{-1}\right)>0\right]\right)\] \[+\mathcal{K}_{h_{N}}(X_{i1ts},W_{its})Y_{idst}\left(-1\right)^{d_{ 2}}\left(1\left[X_{i2st}^{\prime}\left(\beta+\rho_{1}\upsilon_{N}^{-1}\right)> 0\right]-1\left[X_{i2st}^{\prime}\left(\beta+\rho_{2}\upsilon_{N}^{-1}\right)> 0\right]\right)\right\}.\]
Thus, the cross terms that arise after expanding the squares in the \(\mathbb{L}\)'s are asymptotically negligible, and we only need to focus on the square of each term above.
We now derive the square of the first term multiplied by \(h_{N}^{k_{1}+k_{2}}\); the remaining terms follow similarly.
\[\mathbb{E}\left\{h_{N}^{k_{1}+k_{2}}\left[\mathcal{K}_{h_{N}}(X_{i2 ts},W_{its})Y_{idst}\left(-1\right)^{d_{1}}\left(1\left[X_{i1ts}^{\prime}\left( \beta+\rho_{1}\upsilon_{N}^{-1}\right)>0\right]-1\left[X_{i1ts}^{\prime}\left( \beta+\rho_{2}\upsilon_{N}^{-1}\right)>0\right]\right)\right]^{2}\right\}\] \[= \mathbb{E}\left\{\mathbb{E}\left[\left|Y_{idst}\right|\left|1\left[X_{i1ts}^ {\prime}\left(\beta+\rho_{1}\upsilon_{N}^{-1}\right)>0\right]-1\left[X_{i1ts}^ {\prime}\left(\beta+\rho_{2}\upsilon_{N}^{-1}\right)>0\right]\right|\;\left|X_{ i2ts},W_{its}\right.\right]h_{N}^{k_{1}+k_{2}}\mathcal{K}_{h_{N}}^{2}(X_{i2ts},W_{its})\right\}\] \[= \mathbb{E}\left[\left|Y_{idst}\right|\left|1\left[X_{i1ts}^{ \prime}\left(\beta+\rho_{1}\upsilon_{N}^{-1}\right)>0\right]-1\left[X_{i1ts}^ {\prime}\left(\beta+\rho_{2}\upsilon_{N}^{-1}\right)>0\right]\right|\;\left|X_{ i2ts}=0,W_{its}=0\right.\right]\] \[\cdot f_{X_{2ts},W_{ts}}\left(0,0\right)\left[\int K^{2}\left(u \right)\mathrm{d}u\right]^{k_{1}+k_{2}}+O\left(h_{N}\right)\] \[= \mathbb{E}\left\{\mathbb{E}\left[\left|Y_{idst}\right|\;\left|X_{ i1ts},X_{i2ts}=0,W_{its}=0\right.\right]\right.\] \[\cdot\left.\left|1\left[X_{i1ts}^{\prime}\left(\beta+\rho_{1}\upsilon_{ N}^{-1}\right)>0\right]-1\left[X_{i1ts}^{\prime}\left(\beta+\rho_{2}\upsilon_{ N}^{-1}\right)>0\right]\right|\;\left|X_{i2ts}=0,W_{its}=0\right.\right\}\] \[\cdot f_{X_{2ts},W_{ts}}\left(0,0\right)\left[\int K^{2}\left(u \right)\mathrm{d}u\right]^{k_{1}+k_{2}}+O\left(h_{N}\right)\] \[= \int\kappa_{dts}^{\left(4\right)}\left(x\right)\left|1\left[x^{ \prime}\left(\beta+\rho_{1}\upsilon_{N}^{-1}\right)>0\right]-1\left[x^{\prime }\left(\beta+\rho_{2}\upsilon_{N}^{-1}\right)>0\right]\right|f_{X_{1ts}|\{X_{ 2ts}=0,W_{ts}=0\}}\left(x\right)\mathrm{d}x\] \[\cdot f_{X_{2ts},W_{ts}}\left(0,0\right)\bar{K}_{2}^{k_{1}+k_{2}}+ O\left(h_{N}\right),\]
where \(\kappa_{dts}^{\left(4\right)}\left(x\right)\equiv\mathbb{E}\left[\left|Y_{idst} \right|\;\left|X_{i1ts}=x,X_{i2ts}=0,W_{its}=0\right.\right]\) and \(\bar{K}_{2}\equiv\int K^{2}\left(u\right)\mathrm{d}u\). By Assumption P9, \(\upsilon_{N}h_{N}=(Nh_{N}^{k_{1}+k_{2}})^{1/3}h_{N}\to 0\), so the bias term is asymptotically negligible. To calculate the above integral, we follow Kim and Pollard (1990) and decompose \(X_{1ts}=a\beta+\bar{X}_{1ts}\), with \(\bar{X}_{1ts}\) orthogonal to \(\beta.\) We use \(f_{X_{1ts}}\left(a,\bar{x}\right)\) to denote the density of \(X_{1ts}\) at \(X_{1ts}=a\beta+\bar{x}\). Using the results in Kim and Pollard (1990),
\[\lim_{N\rightarrow\infty}\upsilon_{N}\cdot\int\kappa_{dts}^{\left(4 \right)}\left(x\right)\left|1\left[x^{\prime}\left(\beta+\rho_{1}\upsilon_{N}^{ -1}\right)>0\right]-1\left[x^{\prime}\left(\beta+\rho_{2}\upsilon_{N}^{-1} \right)>0\right]\right|f_{X_{1ts}|\{X_{2ts}=0,W_{ts}=0\}}\left(x\right)\mathrm{d}x\] \[= \int\kappa_{dts}^{\left(4\right)}\left(\bar{x}\right)\left|\bar{x} ^{\prime}\left(\rho_{1}-\rho_{2}\right)\right|f_{X_{1ts}|\{X_{2ts}=0,W_{ts}=0\}} \left(0,\bar{x}\right)\mathrm{d}\bar{x}.\]
To summarize, we have
\[\lim_{N\rightarrow\infty}\upsilon_{N}\mathbb{E}\left\{h_{N}^{k_{1 }+k_{2}}\left[\mathcal{K}_{h_{N}}(X_{i2ts},W_{its})Y_{idst}\left(-1\right)^{d_{1 }}\left(1\left[X_{i1ts}^{\prime}\left(\beta+\rho_{1}\upsilon_{N}^{-1}\right)>0 \right]-1\left[X_{i1ts}^{\prime}\left(\beta+\rho_{2}\upsilon_{N}^{-1}\right)>0 \right]\right)\right]^{2}\right\}\] \[= \int\kappa_{dts}^{\left(4\right)}\left(\bar{x}\right)\left|\bar{x} ^{\prime}\left(\rho_{1}-\rho_{2}\right)\right|f_{X_{1ts}|\{X_{2ts}=0,W_{ts}=0\}} \left(0,\bar{x}\right)\mathrm{d}\bar{x}f_{X_{2ts},W_{ts}}\left(0,0\right)\bar{K }_{2}^{k_{1}+k_{2}}.\]
Similarly, we have for the second term
\[\lim_{N\rightarrow\infty}\upsilon_{N}\mathbb{E}\left\{h_{N}^{k_{1 }+k_{2}}\left[\mathcal{K}_{h_{N}}(X_{i1ts},W_{its})Y_{idst}\left(-1\right)^{d_{2 }}\left(1\left[X_{i2st}^{\prime}\left(\beta+\rho_{1}\upsilon_{N}^{-1}\right)>0 \right]-1\left[X_{i2st}^{\prime}\left(\beta+\rho_{2}\upsilon_{N}^{-1}\right)>0 \right]\right)\right]^{2}\right\}\] \[= \int\kappa_{dts}^{\left(5\right)}\left(\bar{x}\right)\left|\bar{x} ^{\prime}\left(\rho_{1}-\rho_{2}\right)\right|f_{X_{2ts}|\{X_{1ts}=0,W_{ts}=0\}} \left(0,\bar{x}\right)\mathrm{d}\bar{x}f_{X_{1ts},W_{ts}}\left(0,0\right)\bar{K }_{2}^{k_{1}+k_{2}},\]
where
\[\kappa_{dts}^{\left(5\right)}\left(x\right)\equiv\mathbb{E}\left[\left|Y_{idst} \right|\;\left|X_{i2ts}=x,X_{i1ts}=0,W_{its}=0\right].\]
Therefore,
\[\mathbb{L}\left(\rho_{1}-\rho_{2}\right) =\lim_{N\rightarrow\infty}\upsilon_{N}\mathbb{E}\left\{h_{N}^{k_{1}+ k_{2}}\left[\phi_{Ni}\left(\beta+\rho_{1}\upsilon_{N}^{-1}\right)-\phi_{Ni}\left( \beta+\rho_{2}\upsilon_{N}^{-1}\right)\right]^{2}\right\}\] \[=\sum_{t>s}\sum_{d\in\mathcal{D}}\left\{\int\kappa_{dts}^{\left(4 \right)}\left(\bar{x}\right)\left|\bar{x}^{\prime}\left(\rho_{1}-\rho_{2} \right)\right|f_{X_{1ts}|\left\{X_{2ts}=0,W_{ts}=0\right\}}\left(0,\bar{x} \right)\mathrm{d}\bar{x}f_{X_{2ts},W_{ts}}\left(0,0\right)\right.\] \[\left.+\int\kappa_{dts}^{\left(5\right)}\left(\bar{x}\right) \left|\bar{x}^{\prime}\left(\rho_{1}-\rho_{2}\right)\right|f_{X_{2ts}|\left\{ X_{1ts}=0,W_{ts}=0\right\}}\left(0,\bar{x}\right)\mathrm{d}\bar{x}f_{X_{1ts},W_{ts}} \left(0,0\right)\right\}\bar{K}_{2}^{k_{1}+k_{2}}.\]
Finally, by equation (F.15),
\[\mathbb{H}_{1}\left(\rho_{1},\rho_{2}\right)\] \[= \frac{1}{2}\sum_{t>s}\sum_{d\in\mathcal{D}}\left\{\int\kappa_{dts }^{\left(4\right)}\left(\bar{x}\right)\left[\left|\bar{x}^{\prime}\rho_{1} \right|+\left|\bar{x}^{\prime}\rho_{2}\right|-\left|\bar{x}^{\prime}\left(\rho _{1}-\rho_{2}\right)\right|\right]f_{X_{1ts}|\left\{X_{2ts}=0,W_{ts}=0\right\}} \left(0,\bar{x}\right)\mathrm{d}\bar{x}f_{X_{2ts},W_{ts}}\left(0,0\right)\right.\] \[+\left.\int\kappa_{dts}^{\left(5\right)}\left(\bar{x}\right)\left[ \left|\bar{x}^{\prime}\rho_{1}\right|+\left|\bar{x}^{\prime}\rho_{2}\right|- \left|\bar{x}^{\prime}\left(\rho_{1}-\rho_{2}\right)\right|\right]f_{X_{2ts}| \left\{X_{1ts}=0,W_{ts}=0\right\}}\left(0,\bar{x}\right)\mathrm{d}\bar{x}f_{X_ {1ts},W_{ts}}\left(0,0\right)\right\}\bar{K}_{2}^{k_{1}+k_{2}}.\]
The same arguments can be applied to obtain \(\mathbb{H}_{2}\) as
\[\mathbb{H}_{2}\left(\delta_{1},\delta_{2}\right)= \frac{1}{2}\sum_{t>s}\int\kappa_{(1,1)ts}^{\left(6\right)}\left( \bar{w}\right)\left[\left|\bar{w}^{\prime}\delta_{1}\right|+\left|\bar{w}^{ \prime}\delta_{2}\right|-\left|\bar{w}^{\prime}\left(\delta_{1}-\delta_{2} \right)\right|\right]f_{W_{ts}|\left\{X_{1ts}=0,X_{2ts}=0\right\}}\left(0,\bar{w }\right)\mathrm{d}\bar{w}\] \[\cdot f_{X_{1ts},X_{2ts}}\left(0,0\right)\bar{K}_{2}^{2k_{1}},\]
where \(W_{ts}=a\gamma+\bar{W}_{ts}\) with \(\bar{W}_{ts}\) orthogonal to \(\gamma\), \(f_{W_{ts}}\left(a,\bar{w}\right)\) denotes the density of \(W_{ts}\) at \(W_{ts}=a\gamma+\bar{W}_{ts}\), and
\[\kappa_{(1,1)ts}^{\left(6\right)}\left(w\right)\equiv\mathbb{E}\left[\left|Y_{ i(1,1)ts}\right|\ \left|W_{its}=w,X_{i1ts}=0,X_{i2ts}=0\right].\]
|
2307.13760 | Carrollian limit of quadratic gravity | We study the Carrollian limit of the (general) quadratic gravity in four
dimensions. We find that in order for the Carrollian theory to be a
modification of the Carrollian limit of general relativity, the parameters in
the action must depend on the speed of light in a specific way. By focusing on
the leading and the next-to-leading orders in the Carrollian expansion, we show
that there are four such nonequivalent Carrollian theories. Imposing
conditions to remove tachyons (from the linearized theory), we end up with a
classification of Carrollian theories according to the leading-order and
next-to-leading-order actions. All modify the Carrollian limit of general
relativity with quartic terms of the extrinsic curvature. To the leading order,
we show that two theories are equivalent to general relativity, one to $R+R^2$
theory, and one to the general quadratic gravity. To the next-to-leading order,
two are equivalent to $R+R^2$ while the other two to the general quadratic
gravity. We study the two theories that are equivalent to $R+R^2$ to the
leading order and write their magnetic limit actions. | Poula Tadros, Ivan Kolář | 2023-07-25T18:38:49Z | http://arxiv.org/abs/2307.13760v2 | # Carrollian limit of quadratic gravity
###### Abstract
We study the Carrollian limit of the (general) quadratic gravity in four dimensions. We find that in order for the Carrollian theory to be a modification of the Carrollian limit of general relativity, the parameters in the action must depend on the speed of light in a specific way. By focusing on the leading and the next-to-leading orders in the Carrollian expansion, we show that there are four such nonequivalent Carrollian theories. Imposing conditions to remove tachyons (from the linearized theory), we end up with a classification of Carrollian theories according to the leading-order and next-to-leading-order actions. All modify the Carrollian limit of general relativity with quartic terms of the extrinsic curvature. To the leading order, we show that two theories are equivalent to general relativity, one to \(R+R^{2}\) theory, and one to the general quadratic gravity. To the next-to-leading order, two are equivalent to \(R+R^{2}\) while the other two to the general quadratic gravity. We study the two theories that are equivalent to \(R+R^{2}\) to the leading order and write their magnetic limit actions.
## I Introduction
The _quadratic gravity_ can be derived as an effective field theory by truncating the expansion of the bosonic sector of string theory, with the first order being _general relativity (GR)_ [1; 2; 3; 4; 5], or by imposing a maximal momentum on strings [6]. It has been studied even before the connection to string theory as a renormalizable theory of gravity [7; 8; 9]. It admits a wide class of black-hole and other spherically symmetric (exact) solutions [10; 11; 12; 13]. Nevertheless, in general, it suffers from the presence of unphysical ghost and tachyonic degrees of freedom [8].
The _Carrollian limit_ was first considered independently by Lévy-Leblond [14] and Sen Gupta [15] as the ultralocal limit of the Poincaré group where the speed of light \(c\) approaches zero, \(c\to 0\). However, due to the lack of physical applications of this limit at the time, it was studied only by mathematicians until, some 40 years later, the Carrollian limit was linked to many applications in physics. Now, Carrollian physics and Carrollian structures are studied in the context of representations of the Carroll group, i.e., Carroll particles [16; 17; 18; 19], condensed matter physics [20; 21; 22], field theory [23; 24; 25; 26], conformal field theory [27; 28; 29; 30], fluid mechanics [31; 32; 33; 34; 35; 36], cosmology [37; 38], string theory [39; 40; 41], gravity [42; 43; 44; 45; 46; 47; 48; 49; 50] (it is regarded as the strong coupling limit of gravity theories [51]), black holes [52; 53; 54; 55; 56; 19], null boundaries [57; 58; 28; 59] and dynamics of particles near black-hole horizons [60; 61; 62].
The connection between the Carrollian limit and physics near black-hole horizons was shown in [52] utilizing the membrane paradigm [63; 64; 65], which states that the physics of a black hole on a stretched horizon is dual to that of a relativistic fluid on a \((2+1)\)-dimensional submanifold. Taking the Carrollian limit of both sides gives a duality between physics on the horizon and a Carrollian fluid. It was shown afterwards that there are two nonequivalent Carrollian limits of a relativistic theory, called the _electric_ and _magnetic limits_. The electric limit comes directly from the _leading order (LO)_ in the _Carrollian expansion_, i.e., the expansion in \(c\), while the magnetic limit is a certain truncation of the _next-to-leading order (NLO)_ of this expansion.
In this paper we analyze the electric and the magnetic Carrollian limit of quadratic gravity, which is the first step towards the analysis of dynamics of particles near black-hole horizons. We study the electric limit of the general quadratic gravity theory, construct a classification of Carrollian theories from it, and the magnetic limit of the resulting ghost-free theories. Throughout the paper we use the units where Newton's constant \(G\) is set to \(G=1/(16\pi)\). The paper is organized as follows:
* In Sec. II, we review the pre-ultralocal (PUL) parametrization, which is suitable for the Carrollian expansion, and calculate the PUL versions of various tensors appearing in a general four-dimensional quadratic gravity action.
* In Sec. III, we review the electric Carrollian limit of GR and show the ultralocality of the spacetime evolution.
* In Sec. IV, we perform the Carrollian expansion of the quadratic gravity action. We show that the parameters \(\alpha\) and \(\beta\) in the action must depend on \(c\) in a specific way; otherwise, the resulting theory would be drastically different from the Carrollian limit of GR. Requiring the resulting theory to be a modification of the Carrollian limit of GR to LO or NLO gives four nonequivalent Carrollian theories.
* In Sec. V, we study those limits one by one and derive conditions on \(\alpha\) and \(\beta\) to remove tachyons (from the linearized theory) in each case to the LO and NLO.
* In Sec. VI, we study the magnetic limit of the ghost-free and tachyon-free theories.
* The paper is concluded with a brief summary and discussions of our results in Sec. VII.
* In Appendix A, we review the mathematical aspects of Carrollian physics from algebraic and geometric points of view and explain the duality to physics near black holes' horizons in more detail.
* In Appendix B, we review the basics of Carrollian transformations on flat spacetimes, and how they induce symmetries on a general (curved) Carrollian manifold. We also summarize the importance of truncation for the NLO Lagrangians to get Carrollian theories.
## II Pre-ultralocal parametrization
The _pre-ultralocal (PUL) parametrization_ is a parametrization of the metric on a manifold using the decomposition of its tangent bundle into vertical and horizontal subbundles (see below). It is the most convenient parametrization of the spacetime for the analysis of Carrollian gravity, since it is well adapted to the ultralocal structure of the Carrollian limit and it displays the speed of light \(c\) explicitly, which makes the calculations more transparent. In what follows, we briefly explain the mathematical background of the PUL parametrization. Following the calculations and notations from [46], we present the PUL version of the Riemann tensor, which will be used to calculate terms in the quadratic gravity action in later sections.
Let \((M,\mathbf{g})\) be a \((d+1)\)-dimensional Lorentzian manifold (with mostly positive signature). Let us denote the tangent bundle of \(M\) by \(TM\) and define two sub-bundles of \(TM\) according to the signature of the metric: The first is called the _vertical bundle_\(\mathrm{Ver}M\) (or the timelike bundle) and it corresponds to the timelike direction, i.e., each of its fibers is a one-dimensional vector space along the time direction. The second is referred to as the _horizontal bundle_\(\mathrm{Hor}M\) (or the spatial bundle) and it represents the remaining \(d\) spacelike directions. It is easy to prove that \(TM=\mathrm{Ver}M\oplus\mathrm{Hor}M\). Furthermore, this decomposition generates a foliation of the manifold whose slices are the submanifolds of constant time coordinate \(t\). The foliation allows us to define orthogonal spatial and timelike sections as follows: Consider a covector \(T_{\mu}\) and a vector \(V^{\mu}\) from \(\mathrm{Ver}M\), where \(\mu,\nu,\ldots=1,2,\ldots,d+1\) are tensor indices in \(TM\). Next, we consider a symmetric tensor \(\Pi_{\mu\nu}\) from \(\mathrm{Hor}M\), which is the induced metric (or the first fundamental form), and its inverse \(\Pi^{\mu\nu}\).
By construction of the sub-bundles and the foliation we have
\[T_{\mu}V^{\mu} =-1, \tag{1a}\] \[-V^{\mu}T_{\nu}+\Pi^{\rho\mu}\Pi_{\rho\nu} =\delta^{\mu}_{\nu},\] (1b) \[T_{\mu}\Pi^{\mu\nu} =0,\] (1c) \[\Pi_{\mu\nu}V^{\nu} =0. \tag{1d}\]
The PUL parametrization of the metric \(g_{\mu\nu}\) is given by
\[g_{\mu\nu} =-c^{2}T_{\mu}T_{\nu}+\Pi_{\mu\nu}, \tag{2a}\] \[g^{\mu\nu} =-\tfrac{1}{c^{2}}V^{\mu}V^{\nu}+\Pi^{\mu\nu}. \tag{2b}\]
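To make these relations concrete, the following minimal sympy check (our illustration, not part of the original derivation) verifies the compatibility conditions (1) and the parametrization (2) for flat spacetime, with \(T=\mathrm{d}t\), \(V=-\partial_{t}\) and \(\Pi\) the flat spatial metric:

```python
import sympy as sp

c = sp.symbols('c', positive=True)

# Flat Carrollian data: T = dt, V = -d/dt (so that T_mu V^mu = -1)
T = sp.Matrix([1, 0, 0, 0])      # covector T_mu
V = sp.Matrix([-1, 0, 0, 0])     # vector V^mu
Pi_lo = sp.diag(0, 1, 1, 1)      # Pi_{mu nu}
Pi_up = sp.diag(0, 1, 1, 1)      # Pi^{mu nu}

# PUL parametrization (2)
g = -c**2 * T * T.T + Pi_lo          # g = diag(-c^2, 1, 1, 1)
g_inv = -V * V.T / c**2 + Pi_up      # g^{-1} = diag(-1/c^2, 1, 1, 1)

assert (T.T * V)[0] == -1                      # (1a)
assert sp.simplify(g * g_inv) == sp.eye(4)     # completeness, cf. (1b)
assert T.T * Pi_up == sp.zeros(1, 4)           # (1c)
assert Pi_lo * V == sp.zeros(4, 1)             # (1d)
```

This reproduces the Minkowski metric with lapse \(c\), whose \(c\to 0\) degeneration is precisely the flat Carrollian structure.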
The metric, its inverse, and the spatial tensors can be written in terms of vielbeins as
\[g_{\mu\nu} =\eta_{AB}E^{A}_{\mu}E^{B}_{\nu}, \tag{3a}\] \[g^{\mu\nu} =\eta^{AB}\Theta^{\mu}_{A}\Theta^{\nu}_{B},\] (3b) \[\Pi_{\mu\nu} =\eta_{ab}E^{a}_{\mu}E^{b}_{\nu},\] (3c) \[\Pi^{\mu\nu} =\eta^{ab}\Theta^{\mu}_{a}\Theta^{\nu}_{b}, \tag{3d}\]
where \(E^{A}_{\mu}\) and \(\Theta^{\mu}_{A}\) are the vielbeins. Indices \(A,B,\dots\) are vielbein labels running from \(1\) to \(d+1\) (the dimension of \(TM\)), while \(a,b,\dots\) are vielbein labels running from \(1\) to \(d\) (the dimension of \(\text{Hor}M\)). Comparing the PUL parametrization with the vielbein definition, we get \(E^{A}_{\mu}=(cT_{\mu},E^{a}_{\mu})\) and \(\Theta^{\mu}_{A}=(-c^{-1}V^{\mu},\Theta^{\mu}_{a})\).
Following [46], we assume that all fields are analytic in \(c^{2}\) and expand them as follows:
\[V^{\mu} =v^{\mu}+c^{2}M^{\mu}+O(c^{4}), \tag{4a}\] \[T_{\mu} =\tau_{\mu}+c^{2}N_{\mu}+O(c^{4}),\] (4b) \[\Theta^{\mu}_{a} =\theta^{\mu}_{a}+c^{2}\pi^{\mu}_{a}+O(c^{4}),\] (4c) \[E^{a}_{\mu} =e^{a}_{\mu}+c^{2}F^{a}_{\mu}+O(c^{4}),\] (4d) \[\Pi^{\mu\nu} =h^{\mu\nu}+c^{2}\Phi^{\mu\nu}+O(c^{4}),\] (4e) \[\Pi_{\mu\nu} =h_{\mu\nu}+c^{2}\Phi_{\mu\nu}+O(c^{4}), \tag{4f}\]
where \(v^{\mu},M^{\mu},\tau_{\mu},N_{\mu},\theta^{\mu}_{a},\pi^{\mu}_{a},e^{a}_{\mu},F^{a}_{\mu},h^{\mu\nu},\Phi^{\mu\nu}\) are fields used to define geometries in the Carrollian limit. These fields are not all independent; they are related by two constraints, which allow us to write \(\tau_{\mu}\) and \(\theta^{\mu}_{a}\) in terms of the other fields. Including more orders in \(c^{2}\) leads to defining more fields that interpolate between the Carrollian theory (LO in the expansion) and the full theory on the manifold. Expanding (1a), we get
\[\tau_{\mu}v^{\mu}+c^{2}(\tau_{\mu}M^{\mu}+N_{\mu}v^{\mu})+c^{4}N_{\mu}M^{\mu}=-1. \tag{5}\]
Comparing the LO and NLO terms we arrive at
\[\tau_{\mu}v^{\mu} =-1, \tag{6a}\] \[\tau_{\mu}M^{\mu}+N_{\mu}v^{\mu} =0. \tag{6b}\]
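The order-by-order matching used here can be illustrated with a schematic sympy snippet (ours, not from the paper), treating the contractions in (5) as scalar placeholders:

```python
import sympy as sp

c2 = sp.symbols('c2', positive=True)  # stands for c^2
# placeholders for tau_mu v^mu, tau_mu M^mu, N_mu v^mu, N_mu M^mu
tau_v, tau_M, N_v, N_M = sp.symbols('tau_v tau_M N_v N_M')

# expansion (5) brought to the form (...) = 0
expr = tau_v + c2 * (tau_M + N_v) + c2**2 * N_M + 1

poly = sp.Poly(expr, c2)
print(poly.coeff_monomial(1))   # tau_v + 1   -> (6a): tau_mu v^mu = -1
print(poly.coeff_monomial(c2))  # tau_M + N_v -> (6b)
```

The same collection of powers of \(c^{2}\) yields (8), (10), and (11) from the remaining defining relations.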
Similarly, if we expand (1b), we obtain
\[-\tau_{\nu}v^{\mu}+h^{\mu\rho}h_{\rho\nu}+c^{2}(h^{\mu\rho}\Phi_{\rho\nu}+\Phi^{\mu\rho}h_{\rho\nu}-M^{\mu}\tau_{\nu}-v^{\mu}N_{\nu})+c^{4}\Phi^{\mu\rho}\Phi_{\rho\nu}=\delta^{\mu}_{\nu}, \tag{7}\]
which by comparison of LO and NLO terms gives
\[-\tau_{\nu}v^{\mu}+h^{\mu\rho}h_{\rho\nu}=\delta^{\mu}_{\nu}, \tag{8a}\] \[h^{\mu\rho}\Phi_{\rho\nu}+\Phi^{\mu\rho}h_{\rho\nu}-M^{\mu}\tau_{\nu}-v^{\mu}N_{\nu}=0. \tag{8b}\]
Now, by expanding (2a) we also get
\[h_{\mu\nu}+c^{2}\Phi_{\mu\nu}=\delta_{ab}e^{a}_{\mu}e^{b}_{\nu}+c^{2}\delta_{ ab}(F^{a}_{\mu}e^{b}_{\nu}+e^{a}_{\mu}F^{b}_{\nu})+c^{4}\delta_{ab}F^{a}_{\mu}F^{b}_ {\nu}. \tag{9}\]
After comparing the LO and NLO terms, we arrive at
\[h_{\mu\nu} =\delta_{ab}e^{a}_{\mu}e^{b}_{\nu}, \tag{10a}\] \[\Phi_{\mu\nu} =\delta_{ab}(F^{a}_{\mu}e^{b}_{\nu}+e^{a}_{\mu}F^{b}_{\nu}). \tag{10b}\]
Similarly, (2b) leads to
\[h^{\mu\nu} =\delta^{ab}\theta^{\mu}_{a}\theta^{\nu}_{b}, \tag{11a}\] \[\Phi^{\mu\nu} =\delta^{ab}(\theta^{\mu}_{a}\pi^{\nu}_{b}+\pi^{\mu}_{a}\theta^{ \nu}_{b}). \tag{11b}\]
Remark that the induced metric \(\mathbf{h}\) and the set of all \(\mathbf{v}\in\mathcal{V}\) give rise to the Carrollian spacetime \((\mathcal{C},\mathcal{V},\mathbf{h})\) from Appendix A.2, where \(\mathcal{C}\) represents the limit of \(M\).
To derive a connection compatible with the PUL parametrization [46; 59], we notice that \(V^{\mu}\) and \(\Pi_{\mu\nu}\) are invariant under Carroll boosts. Thus, they must be covariantly constant. Although this cannot determine a connection uniquely, it was argued in Appendix B of [46] that the most convenient choice is
\[C^{\rho}_{\mu\nu}=-V^{\rho}\partial_{(\mu}T_{\nu)}-V^{\rho}T_{(\mu}\pounds_{\mathbf{V}}T_{\nu)}+\tfrac{1}{2}\Pi^{\rho\lambda}\big{[}\partial_{\mu}\Pi_{\nu\lambda}+\partial_{\nu}\Pi_{\lambda\mu}-\partial_{\lambda}\Pi_{\mu\nu}\big{]}-\Pi^{\rho\lambda}T_{\nu}\mathcal{K}_{\mu\lambda}, \tag{12}\]
where \(\mathcal{K}_{\mu\lambda}=-\tfrac{1}{2}\pounds_{\mathbf{V}}\Pi_{\mu\lambda}\) is the extrinsic curvature (or the second fundamental form). The connection \(C^{\rho}_{\mu\nu}\) has a nonzero torsion given by
\[T^{\rho}_{\mu\nu}=2\Pi^{\rho\lambda}T_{[\mu}\mathcal{K}_{\nu]\lambda}, \tag{13}\]
which, to the LO, reads
\[T^{\rho}_{\mu\nu}=2h^{\rho\lambda}\tau_{[\mu}K_{\nu]\lambda}. \tag{14}\]
To proceed parameterizing the Riemann tensor of the Levi-Civita connection, we write its Christoffel symbols \(\Gamma^{\rho}_{\mu\nu}\) in terms of the PUL fields using (2) and (3). The result is
\[\begin{array}{c}\Gamma^{\rho}_{\mu\nu}=\frac{1}{c^{2}}\big{[}-\frac{1}{2}V^{\rho}V^{\lambda}\partial_{\mu}\Pi_{\nu\lambda}-\frac{1}{2}V^{\rho}V^{\lambda}\partial_{\nu}\Pi_{\lambda\mu}+\frac{1}{2}V^{\rho}V^{\lambda}\partial_{\lambda}\Pi_{\mu\nu}\big{]}+\frac{1}{2}\big{[}\Pi^{\rho\lambda}\partial_{\mu}\Pi_{\nu\lambda}\\ \qquad\qquad+\Pi^{\rho\lambda}\partial_{\nu}\Pi_{\lambda\mu}-\Pi^{\rho\lambda}\partial_{\lambda}\Pi_{\mu\nu}+V^{\rho}V^{\lambda}\partial_{\mu}(T_{\nu}T_{\lambda})+V^{\rho}V^{\lambda}\partial_{\nu}(T_{\mu}T_{\lambda})\\ \qquad\qquad-V^{\rho}V^{\lambda}\partial_{\lambda}(T_{\nu}T_{\mu})\big{]}+\frac{c^{2}}{2}\big{[}-\Pi^{\rho\lambda}\partial_{\mu}(T_{\nu}T_{\lambda})-\Pi^{\rho\lambda}\partial_{\nu}(T_{\mu}T_{\lambda})+\Pi^{\rho\lambda}\partial_{\lambda}(T_{\nu}T_{\mu})\big{]}\,.\end{array} \tag{15}\]
With the help of the coordinate expression of the Lie derivative we can rewrite \(\Gamma^{\rho}_{\mu\nu}\) as
\[\Gamma^{\rho}_{\mu\nu}=\frac{1}{c^{2}}\big{[}-V^{\rho}\mathcal{K}_{\mu\nu} \big{]}+\big{[}C^{\rho}_{\mu\nu}+\Pi^{\rho\lambda}T_{\nu}\mathcal{K}_{\mu\lambda }\big{]}+c^{2}\big{[}-T_{(\mu}\Pi^{\rho\lambda}B_{\nu)\lambda}\big{]}, \tag{16}\]
where \(B_{\mu\nu}=\partial_{\mu}T_{\nu}-\partial_{\nu}T_{\mu}\) is the exterior derivative of the covector \(T_{\mu}\), which is the same as Eq. (2.21) in [46]. Finally, we are equipped to parameterize the Riemann tensor of \(\Gamma^{\rho}_{\mu\nu}\),
\[R^{\rho}{}_{\lambda\mu\nu}=\partial_{\mu}\Gamma^{\rho}_{\nu\lambda}-\partial_{\nu}\Gamma^{\rho}_{\mu\lambda}+\Gamma^{\rho}_{\mu\sigma}\Gamma^{\sigma}_{\nu\lambda}-\Gamma^{\rho}_{\nu\sigma}\Gamma^{\sigma}_{\mu\lambda}. \tag{17}\]
Inserting (15), we obtain
\[\begin{array}{c}R^{\rho}{}_{\lambda\mu\nu}=\frac{1}{c^{2}}\big{[}-\partial_{\mu}(V^{\rho}\mathcal{K}_{\nu\lambda})+\partial_{\nu}(V^{\rho}\mathcal{K}_{\mu\lambda})-V^{\rho}C^{\sigma}_{\nu\lambda}\mathcal{K}_{\mu\sigma}+V^{\rho}C^{\sigma}_{\mu\lambda}\mathcal{K}_{\nu\sigma}-V^{\rho}\mathcal{K}_{\mu\sigma}\Pi^{\sigma\alpha}T_{\lambda}\mathcal{K}_{\nu\alpha}\\ \qquad\qquad+V^{\rho}\mathcal{K}_{\nu\sigma}\Pi^{\sigma\alpha}T_{\lambda}\mathcal{K}_{\mu\alpha}-C^{\rho}_{\mu\sigma}V^{\sigma}\mathcal{K}_{\nu\lambda}+C^{\rho}_{\nu\sigma}V^{\sigma}\mathcal{K}_{\mu\lambda}+\mathcal{K}_{\nu\lambda}\Pi^{\rho\alpha}\mathcal{K}_{\mu\alpha}-\mathcal{K}_{\mu\lambda}\Pi^{\rho\alpha}\mathcal{K}_{\nu\alpha}\big{]}\\ \qquad\qquad+\big{[}\partial_{\mu}C^{\rho}_{\nu\lambda}+\partial_{\mu}(\Pi^{\rho\alpha}T_{\lambda}\mathcal{K}_{\nu\alpha})-\partial_{\nu}C^{\rho}_{\mu\lambda}-\partial_{\nu}(\Pi^{\rho\alpha}T_{\lambda}\mathcal{K}_{\mu\alpha})+V^{\rho}\mathcal{K}_{\mu\sigma}T_{(\nu}\Pi^{\sigma\alpha}B_{\lambda)\alpha}\\ \qquad\qquad-V^{\rho}\mathcal{K}_{\nu\sigma}T_{(\mu}\Pi^{\sigma\alpha}B_{\lambda)\alpha}+T_{(\mu}\Pi^{\rho\alpha}B_{\sigma)\alpha}V^{\sigma}\mathcal{K}_{\nu\lambda}-T_{(\nu}\Pi^{\rho\alpha}B_{\sigma)\alpha}V^{\sigma}\mathcal{K}_{\mu\lambda}\\ \qquad\qquad+C^{\rho}_{\mu\sigma}C^{\sigma}_{\nu\lambda}-C^{\rho}_{\nu\sigma}C^{\sigma}_{\mu\lambda}+C^{\rho}_{\mu\sigma}\Pi^{\sigma\alpha}T_{\lambda}\mathcal{K}_{\nu\alpha}-C^{\rho}_{\nu\sigma}\Pi^{\sigma\alpha}T_{\lambda}\mathcal{K}_{\mu\alpha}+\Pi^{\rho\alpha}T_{\sigma}\mathcal{K}_{\mu\alpha}C^{\sigma}_{\nu\lambda}-\Pi^{\rho\alpha}T_{\sigma}\mathcal{K}_{\nu\alpha}C^{\sigma}_{\mu\lambda}\big{]}\\ \qquad\qquad+c^{2}\big{[}-\partial_{\mu}(T_{(\nu}\Pi^{\rho\alpha}B_{\lambda)\alpha})+\partial_{\nu}(T_{(\mu}\Pi^{\rho\alpha}B_{\lambda)\alpha})-C^{\rho}_{\mu\sigma}T_{(\nu}\Pi^{\sigma\alpha}B_{\lambda)\alpha}+C^{\rho}_{\nu\sigma}T_{(\mu}\Pi^{\sigma\alpha}B_{\lambda)\alpha}\\ \qquad\qquad-T_{(\mu}\Pi^{\rho\alpha}B_{\sigma)\alpha}C^{\sigma}_{\nu\lambda}+T_{(\nu}\Pi^{\rho\alpha}B_{\sigma)\alpha}C^{\sigma}_{\mu\lambda}-T_{(\mu}\Pi^{\rho\alpha}B_{\sigma)\alpha}\Pi^{\sigma\beta}T_{\lambda}\mathcal{K}_{\nu\beta}+T_{(\nu}\Pi^{\rho\alpha}B_{\sigma)\alpha}\Pi^{\sigma\beta}T_{\lambda}\mathcal{K}_{\mu\beta}\big{]}\\ \qquad\qquad+c^{4}\big{[}T_{(\mu}\Pi^{\rho\alpha}B_{\sigma)\alpha}T_{(\nu}\Pi^{\sigma\beta}B_{\lambda)\beta}-T_{(\nu}\Pi^{\rho\alpha}B_{\sigma)\alpha}T_{(\mu}\Pi^{\sigma\beta}B_{\lambda)\beta}\big{]}.\end{array} \tag{18}\]
## III Carrollian expansion of GR
Having derived the PUL parametrization of the Riemann tensor in (18), we can now review the Carrollian expansion of the GR following [46]. Recall that the Einstein-Hilbert action in four dimensions (\(d=3\)) is,
\[S=c^{3}\int R\sqrt{-g}d^{4}x. \tag{19}\]
Let us first calculate the PUL parametrization of the Ricci scalar \(R\). By contracting \(\rho\) and \(\mu\) in (18), we can write the Ricci tensor in the form
\[\begin{array}{c}R_{\lambda\nu}=\frac{1}{c^{2}}\big{[}-\nabla_{\mu}(V^{\mu} \mathcal{K}_{\nu\lambda})-2V^{\mu}C^{\sigma}_{[\mu\lambda]}\mathcal{K}_{\nu \sigma}+\mathcal{K}_{\nu\lambda}\mathcal{K}-\mathcal{K}_{\mu\lambda}\Pi^{\mu \alpha}\mathcal{K}_{\nu\alpha}\big{]}+\big{[}\overset{c}{R}_{\lambda\nu}+\nabla_{ \mu}(\Pi^{\mu\alpha}T_{\lambda}\mathcal{K}_{\nu\alpha})-\nabla_{\nu}(T_{\lambda} \mathcal{K})\\ \qquad\qquad+2C^{\mu}_{[\nu\beta]}\Pi^{\beta\alpha}T_{\lambda}\mathcal{K}_{ \mu\alpha}+\mathcal{K}^{\alpha}_{(\nu}B_{\lambda)\alpha}-\frac{1}{2}V^{\mu} \mathcal{K}_{\nu\sigma}T_{\lambda}\Pi^{\rho\alpha}B_{\mu\alpha}-\frac{1}{2}T_{ \nu}\Pi^{\mu\alpha}V^{\sigma}B_{\sigma\alpha}\mathcal{K}_{\mu\lambda}\big{]} \\ \qquad\qquad+c^{2}\big{[}-\nabla_{\mu}(T_{(\nu}\Pi^{\mu\alpha}B_{\lambda) \alpha})+2C^{\sigma}_{[\nu\mu}T_{(\sigma}\Pi^{\mu\alpha}B_{\lambda)\alpha}+T_{( \nu}\Pi^{\mu\alpha}B_{\sigma)\alpha}\Pi^{\sigma\beta}T_{\lambda}\mathcal{K}_{\mu \beta}\big{]}\\ \qquad\qquad+c^{4}\big{[}-\frac{1}{4}T_{\nu}\Pi^{\mu\alpha}B_{\sigma\alpha}T_{ \lambda}\Pi^{\sigma\beta}B_{\mu\beta}\big{]},\end{array} \tag{20}\]
where \(\nabla_{\mu}\) is the covariant derivative corresponding to the connection \(C^{\rho}_{\mu\nu}\). Here, we also introduced the trace of the extrinsic curvature, \(\mathcal{K}=\Pi^{\mu\nu}\mathcal{K}_{\mu\nu}\), and the Ricci tensor of the connection \(C^{\rho}_{\mu\nu}\),
\[\overset{c}{R}_{\lambda\nu}=\partial_{\mu}C^{\mu}_{\nu\lambda}-\partial_{\nu}C^{\mu}_{\mu\lambda}+C^{\mu}_{\mu\sigma}C^{\sigma}_{\nu\lambda}-C^{\mu}_{\nu\sigma}C^{\sigma}_{\mu\lambda}.\]
The PUL parametrization of the Ricci scalar is obtained by contraction with the inverse metric and employing \(\Pi^{\lambda\nu}\nabla_{\mu}(V^{\mu}\mathcal{K}_{\nu\lambda})=\nabla_{\nu}(V^{ \nu}\mathcal{K})\). The result is
\[\begin{split} R=&\tfrac{1}{c^{2}}\big{[}\mathcal{K}^{2}- \Pi^{\lambda\nu}\mathcal{K}_{\mu\lambda}\Pi^{\mu\alpha}\mathcal{K}_{\nu\alpha} -2\nabla_{\nu}(V^{\nu}\mathcal{K})\big{]}\\ +&\big{[}-\Pi^{\lambda\nu}\overset{c}{R}_{\lambda \nu}+\Pi^{\lambda\nu}\nabla_{\mu}(\Pi^{\mu\alpha}T_{\lambda}\mathcal{K}_{\nu \alpha})-\Pi^{\lambda\nu}\nabla_{\nu}(T_{\lambda}\mathcal{K})+V^{\lambda}V^{ \nu}\nabla_{\mu}(T_{(\nu}\Pi^{\mu\alpha}B_{\lambda)\alpha})-V^{\lambda}V^{\nu} \nabla_{\nu}(T_{(\mu}\Pi^{\mu\alpha}B_{\lambda)\alpha})\big{]}\\ +& c^{2}\big{[}-\Pi^{\lambda\nu}\nabla_{\mu}(T_{( \nu}\Pi^{\mu\alpha}B_{\lambda)\alpha})+\Pi^{\lambda\nu}\nabla_{\nu}(T_{(\mu} \Pi^{\mu\alpha}B_{\lambda)\alpha})-\tfrac{1}{4}B_{\mu\nu}B^{\mu\nu}\big{]}, \end{split} \tag{11}\]
where \(B^{\mu\nu}=\Pi^{\mu\alpha}\Pi^{\rho\beta}B_{\alpha\beta}\). We used \(V^{\mu}\overset{c}{R}_{\mu\nu}=0\) in the calculations.
Using the relation \(\nabla_{\rho}\Pi^{\mu\nu}=-V^{(\mu}\Pi^{\nu)\rho}B_{\sigma\lambda}[\delta^{ \lambda}_{\rho}-V^{\lambda}T_{\rho}]\), we can find that \(\Pi^{\lambda\nu}\nabla_{\mu}(T_{\lambda}\mathcal{K}_{\nu}^{\mu})=0\), \(\Pi^{\lambda\nu}\nabla_{\nu}(T_{\lambda}\mathcal{K})=0\), \(V^{\lambda}V^{\nu}\nabla_{\mu}(T_{(\nu}\Pi^{\mu\alpha}B_{\lambda)\alpha})=-V^ {\lambda}\nabla_{\mu}(B_{\lambda}^{\ \mu})\) and \(-\Pi^{\lambda\nu}\nabla_{\mu}(T_{(\nu}\Pi^{\mu\alpha}B_{\lambda)\alpha})=\tfrac {1}{2}B^{\mu\nu}B_{\mu\nu}\). Employing these identities, the Ricci scalar simplifies to
\[R=\tfrac{1}{c^{2}}\big{[}\mathcal{K}^{2}-\mathcal{K}^{\mu\nu}\mathcal{K}_{\mu \nu}-2\nabla_{\nu}(V^{\nu}\mathcal{K})\big{]}+\big{[}-\overset{c}{R}-\nabla_{ \mu}(V^{\nu}B_{\nu}^{\ \mu})\big{]}+c^{2}[\tfrac{1}{4}B^{\mu\nu}B_{\mu\nu} \big{]}, \tag{12}\]
where \(\overset{c}{R}=\Pi^{\mu\nu}\overset{c}{R}_{\mu\nu}\). Furthermore, we can separate the total derivative terms as they correspond to the boundary terms in actions of physical theories. Finally, the PUL parametrization of the Ricci scalar can be written in the form
\[R=\tfrac{1}{c^{2}}\big{[}\mathcal{K}^{2}-\mathcal{K}^{\mu\nu}\mathcal{K}_{\mu \nu}\big{]}+\big{[}-\overset{c}{R}\big{]}+c^{2}\big{[}\tfrac{1}{4}B^{\mu\nu}B _{\mu\nu}\big{]}+\text{boundary terms}, \tag{13}\]
where we collected all the boundary terms from all orders. Note that the boundary terms will be used in the calculation of quadratic curvature terms. (They are not important in this section since we compute the LO of GR.)
Hence, the PUL parametrization of the Einstein-Hilbert action is
\[S=c^{4}\int\tfrac{1}{c^{2}}\big{[}\mathcal{K}^{2}-\mathcal{K}^{\mu\nu}\mathcal{K}_{\mu\nu}\big{]}+\big{[}-\overset{c}{R}\big{]}+c^{2}\big{[}\tfrac{1}{4}B^{\mu\nu}B_{\mu\nu}\big{]}Ed^{4}x, \tag{14}\]
where \(E=\det(T_{\mu},E_{\mu}^{a})\), so after the Carrollian expansion to the LO, i.e. the electric limit, we arrive at
\[S=c^{2}\int\big{[}K^{2}-K^{\mu\nu}K_{\mu\nu}\big{]}ed^{4}x, \tag{15}\]
where \(K_{\mu\nu}=-\tfrac{1}{2}\mathcal{L}_{\mathbf{v}}h_{\mu\nu}\), \(K^{\mu\nu}=h^{\mu\alpha}h^{\nu\beta}K_{\alpha\beta}\) and \(e=\det(\tau_{\mu},e_{\mu}^{a})\). It is worth mentioning that although this integral is four-dimensional, one should be able to decompose it into a one-dimensional and a three-dimensional integral. This was done explicitly in the 'zero signature approach' for gravity and other field theories [26; 66; 67]. We will prove the equivalence between the two approaches (for any finite order gravity theory) in a future work.
In order to get the field equations, we first vary with respect to \(v^{\mu}\) and equate to zero,
\[\tfrac{1}{2}\tau_{\mu}\big{[}K^{2}-K^{\alpha\beta}K_{\alpha\beta}\big{]}-h^{\nu\alpha}\nabla_{\alpha}\big{[}K_{\mu\nu}-Kh_{\mu\nu}\big{]}=0. \tag{16}\]
Since \(\tau_{\mu}\) and \(h^{\mu\nu}\) are independent, we can write two field equations as
\[\begin{split} K^{2}-K^{\mu\nu}K_{\mu\nu}&=0,\\ h^{\nu\alpha}\nabla_{\alpha}\big{[}K_{\mu\nu}-Kh_{\mu\nu}\big{]}& =0.\end{split} \tag{17}\]
Now, varying (15) with respect to \(h^{\mu\nu}\) and equating to zero, we get
\[\tfrac{1}{2}h_{\mu\nu}\big{[}K^{2}-K^{\alpha\beta}K_{\alpha\beta}\big{]}-KK_{\mu\nu}+h_{\mu\nu}K^{2}-v^{\alpha}\nabla_{\alpha}(h_{\mu\nu}K-K_{\mu\nu})=0. \tag{18}\]
In equation (18), the first two terms (the constraint bracket) vanish by (17), and we end up with a third field equation,
\[-KK_{\mu\nu}+h_{\mu\nu}K^{2}-v^{\alpha}\nabla_{\alpha}(h_{\mu\nu}K-K_{\mu\nu})=0. \tag{19}\]
Following [46], we can rearrange the last equation into
\[\mathcal{L}_{\mathbf{v}}K_{\mu\nu}=-2K_{\mu}^{\alpha}K_{\nu\alpha}+KK_{\mu\nu}. \tag{20}\]
The left-hand side is the Lie derivative of the extrinsic curvature \(K_{\mu\nu}\) of the leaves of the foliation (equal-time submanifolds) with respect to \(\mathbf{v}\). Thus, it tells us about the evolution of each point as we move along the integral curves of \(\mathbf{v}\), i.e., the time evolution. Other than that, the equation only depends on the extrinsic curvature \(K_{\mu\nu}\) (the Lie derivative of the induced metric \(h_{\mu\nu}\)), so the evolution of the spacetime in GR is ultralocal. However, unlike the nonrelativistic limit [50], the three field equations at the LO are nontrivial and define a distinct theory. In order to support motion we need to go to the NLO terms, which represent corrections to the Carrollian theory towards the full Lorentzian theory of GR.
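As a quick illustration of how restrictive these equations are (our example, not part of the original text), consider spatially homogeneous data with isotropic extrinsic curvature \(K_{\mu\nu}=k\,h_{\mu\nu}\) in \(d=3\). The constraint \(K^{2}-K^{\mu\nu}K_{\mu\nu}=0\) then gives

\[K=h^{\mu\nu}K_{\mu\nu}=3k,\qquad K^{2}-K^{\mu\nu}K_{\mu\nu}=9k^{2}-3k^{2}=6k^{2}=0\quad\Rightarrow\quad k=0,\]

so isotropic Carrollian data cannot evolve at all. For anisotropic diagonal data with eigenvalues \(k_{1},k_{2},k_{3}\), the same constraint reads

\[(k_{1}+k_{2}+k_{3})^{2}=k_{1}^{2}+k_{2}^{2}+k_{3}^{2}\quad\Leftrightarrow\quad k_{1}k_{2}+k_{2}k_{3}+k_{3}k_{1}=0,\]

a Kasner-like condition, after which each point evolves independently according to the evolution equation above.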
## IV Carrollian expansion of quadratic gravity
Quadratic gravity is a theory where quadratic curvature terms are added to the action, which makes it renormalizable [7; 8]. It also emerges from string theory by imposing a cutoff for the maximum possible momenta [6]. The action for the theory is given by
\[S=c^{3}\int\big{[}R-\alpha R^{\mu\nu}R_{\mu\nu}+\beta R^{2}\big{]}\sqrt{-g}d^{4}x. \tag{4.1}\]
In Sec. III we computed the PUL parametrization of \(R\). Now, we will do the same also for the two other terms in the action, \(R^{\mu\nu}R_{\mu\nu}\) and \(R^{2}\). Using (3.2), we can find the PUL parametrization of \(R^{\mu\nu}R_{\mu\nu}\),
\[\begin{array}{l}R^{\mu\nu}R_{\mu\nu}=\frac{1}{c^{4}}\big{[}\Pi^{\nu\alpha} \Pi^{\lambda\beta}\nabla_{\mu}(V^{\mu}\mathcal{K}_{\alpha\beta})\nabla_{\rho}( V^{\rho}\mathcal{K}_{\nu\lambda})-2\mathcal{K}^{\alpha\beta}\mathcal{K}\nabla_{ \mu}(V^{\mu}\mathcal{K}_{\alpha\beta})+\mathcal{K}^{\lambda\nu}\mathcal{K}_{ \lambda\nu}\mathcal{K}^{2}\\ \qquad\qquad\qquad-V^{\mu}V^{\nu}\nabla_{\mu}\mathcal{K}\nabla_{\nu} \mathcal{K}+2\mathcal{K}_{\alpha\beta}\mathcal{K}^{\alpha\beta}V^{\nu}\nabla_ {\nu}\mathcal{K}-(\mathcal{K}^{\mu\nu}\mathcal{K}_{\mu\nu})^{2}\big{]}\\ \qquad\qquad\qquad+\frac{1}{c^{2}}\big{[}-2\overset{c}{R}\lambda^{\nu}\nabla_ {\mu}(V^{\mu}\mathcal{K}_{\lambda\nu})-\Pi^{\nu\alpha}\nabla_{\mu}(V^{\mu} \mathcal{K}_{\alpha\beta})\mathcal{K}^{\rho\beta}B_{\nu\rho}-\frac{1}{2}\Pi^{ \lambda\beta}\mathcal{K}^{\rho\alpha}B_{\lambda\rho}V^{\mu}\nabla_{\mu} \mathcal{K}_{\alpha\beta}\\ \qquad\qquad\qquad+2\overset{c}{R}\lambda_{\mu\nu}\mathcal{K}^{\lambda\nu} \mathcal{K}+\mathcal{K}^{\mu\nu}\mathcal{K}\mathcal{K}_{\alpha}^{\alpha}B_{ \nu\alpha}-\Pi^{\nu\alpha}\nabla_{\mu}\mathcal{K}_{\alpha}^{\alpha}\nabla_{ \rho}\mathcal{K}_{\nu}^{\rho}+2\Pi^{\nu\alpha}\nabla_{\mu}\mathcal{K}_{\alpha }^{\alpha}\nabla_{\nu}\mathcal{K}\\ \qquad\qquad\qquad+V^{\lambda}\nabla_{\mu}\mathcal{K}_{\alpha}^{\mu}\mathcal{K }^{\rho\alpha}B_{\lambda\rho}-\Pi^{\nu\alpha}\nabla_{\nu}\mathcal{K}\nabla_{ \alpha}\mathcal{K}-V^{\lambda}\nabla_{\alpha}\mathcal{K}\mathcal{K}^{\alpha \varepsilon}B_{\lambda\varepsilon}-2V^{\nu}V^{\alpha}\nabla_{\mu}(\Pi^{\mu \beta}B_{\alpha\beta})\nabla_{\nu}\mathcal{K}\\ \qquad\qquad\qquad-V^{\alpha}\mathcal{K}^{\alpha\lambda}B_{\sigma\alpha}V^{ \rho}\mathcal{K}_{\lambda}^{\beta}B_{\rho\beta}+2V^{\lambda}\nabla_{\mu}(\Pi^{ \mu\rho}(dT_{\lambda\rho}))\mathcal{K}^{\alpha\beta}\mathcal{K}_{\alpha\beta} -B^{\lambda\nu}\nabla_{\mu}(V^{\mu}\mathcal{K}_{\lambda\nu})\\ \qquad\qquad\qquad+\frac{3}{2}\Pi^{\beta\nu}\Pi^{\alpha\lambda}B_{\alpha\beta} \mathcal{K}_{\lambda\nu}\mathcal{K}\big{]}\\ \qquad\qquad\qquad+\big{[}\frac{1}{2}\mathcal{K}^{\alpha\beta}\mathcal{K}_{ \alpha\beta}B^{\mu\nu}B_{\mu\nu}-\Pi^{\nu\alpha}\nabla_{\alpha}\mathcal{K} \nabla_{\rho}(\Pi^{\rho\beta}B_{\nu\beta})-\frac{1}{4}V^{\sigma}\mathcal{K}^ {\nu\rho}B_{\sigma\rho}\nabla_{\mu}(\Pi^{\mu\alpha}B_{\nu\alpha})\\ \qquad\qquad\qquad-\frac{1}{2}V^{\lambda}V^{\alpha}\nabla_{\mu}(\Pi^{\mu\beta }B_{\alpha\beta})\nabla_{\nu}(\Pi^{\nu\rho}B_{\lambda\rho})+\frac{3}{2} \overset{c}{R}^{\alpha\lambda}\mathcal{K}_{\lambda}^{\beta}B_{\alpha\beta}\\ \qquad\qquad\qquad+\overset{c}{R}^{\mu\nu}\overset{c}{R}_{\mu\nu}+\frac{1}{4} \mathcal{K}_{\beta\lambda}\mathcal{K}^{\rho\lambda}B^{\nu\beta}B_{\nu\rho}+ \frac{1}{4}\mathcal{K}^{\beta\lambda}\mathcal{K}^{\alpha\rho}B_{\alpha\beta}B_{ \lambda\rho}\big{]}\\ \qquad\qquad\qquad+c^{4}\big{[}\frac{1}{16}(B^{\alpha}B_{\alpha\beta})^{2} \big{]}.\end{array} \tag{4.2}\]
By expanding this expression to the LO, we arrive at
\[\begin{array}{l}R^{\mu\nu}R_{\mu\nu}=\frac{1}{c^{4}}\big{[}h^{\nu\alpha}h^{\lambda\beta}\nabla_{\mu}(v^{\mu}K_{\alpha\beta})\nabla_{\rho}(v^{\rho}K_{\nu\lambda})-2K^{\alpha\beta}K\nabla_{\mu}(v^{\mu}K_{\alpha\beta})+K^{\lambda\nu}K_{\lambda\nu}K^{2}\\ \qquad\qquad\qquad-v^{\mu}v^{\nu}\nabla_{\mu}K\nabla_{\nu}K+2K_{\alpha\beta}K^{\alpha\beta}v^{\nu}\nabla_{\nu}K-(K^{\mu\nu}K_{\mu\nu})^{2}\big{]}.\end{array} \tag{4.3}\]
The PUL parametrization of \(R^{2}\) can be computed from (3.4),
\[\begin{array}{l}R^{2}=\frac{1}{c^{4}}\big{[}\mathcal{K}^{4}-2\mathcal{K}^{2}\mathcal{K}^{\mu\nu}\mathcal{K}_{\mu\nu}-4\mathcal{K}^{2}\nabla_{\nu}(V^{\nu}\mathcal{K})+(\mathcal{K}^{\mu\nu}\mathcal{K}_{\mu\nu})^{2}+4\mathcal{K}^{\mu\nu}\mathcal{K}_{\mu\nu}\nabla_{\nu}(V^{\nu}\mathcal{K})+4\nabla_{\mu}(V^{\mu}\mathcal{K})\nabla_{\nu}(V^{\nu}\mathcal{K})\big{]}\\ \qquad\qquad+\frac{1}{c^{2}}\big{[}-2\mathcal{K}^{2}\overset{c}{R}-2\mathcal{K}^{2}\nabla_{\mu}(V^{\lambda}B_{\lambda}^{\ \mu})+2\mathcal{K}_{\mu\nu}\mathcal{K}^{\mu\nu}\overset{c}{R}+2\mathcal{K}_{\mu\nu}\mathcal{K}^{\mu\nu}\nabla_{\rho}(V^{\lambda}B_{\lambda}^{\ \rho})+4\overset{c}{R}\nabla_{\mu}(V^{\mu}\mathcal{K})\\ \qquad\qquad+4\nabla_{\mu}(V^{\mu}\mathcal{K})\nabla_{\nu}(V^{\lambda}B_{\lambda}^{\ \nu})\big{]}+\big{[}\frac{1}{2}\mathcal{K}^{2}B^{\mu\nu}B_{\mu\nu}-\frac{1}{2}\mathcal{K}^{\mu\nu}\mathcal{K}_{\mu\nu}B^{\sigma\rho}B_{\sigma\rho}-B^{\mu\nu}B_{\mu\nu}\nabla_{\rho}(V^{\rho}\mathcal{K})\\ \qquad\qquad+\big{(}\overset{c}{R}\big{)}^{2}+\nabla_{\mu}(V^{\lambda}B_{\lambda}^{\ \mu})\nabla_{\rho}(V^{\sigma}B_{\sigma}^{\ \rho})+2\overset{c}{R}\nabla_{\mu}(V^{\lambda}B_{\lambda}^{\ \mu})\big{]}+c^{2}\big{[}-\frac{1}{2}\overset{c}{R}B^{\mu\nu}B_{\mu\nu}-\frac{1}{2}B^{\mu\nu}B_{\mu\nu}\nabla_{\rho}(V^{\sigma}B_{\sigma}^{\ \rho})\big{]}\\ \qquad\qquad+c^{4}\big{[}\frac{1}{16}(B_{\mu\nu}B^{\mu\nu})^{2}\big{]},\end{array} \tag{4.4}\]
and its Carrollian expansion to the LO is
\[R^{2}=\frac{1}{c^{4}}\big{[}K^{4}-2K^{2}K^{\mu\nu}K_{\mu\nu}-4K^{2}\nabla_{\nu}(v^{\nu}K)+( K^{\mu\nu}K_{\mu\nu})^{2}+4K^{\mu\nu}K_{\mu\nu}\nabla_{\nu}(v^{\nu}K)+4\nabla_{\mu}(v^{ \mu}K)\nabla_{\nu}(v^{\nu}K)\big{]}. \tag{4.5}\]
Substituting (3.6), (4.3), (4.5) into the action (4.1) we get
\[\begin{array}{l}S=\int\big{\{}c^{2}\big{[}K^{2}-K^{\mu\nu}K_{\mu\nu}\big{]}-\alpha\big{[}h^{\nu\alpha}h^{\lambda\beta}\nabla_{\mu}(v^{\mu}K_{\alpha\beta})\nabla_{\rho}(v^{\rho}K_{\nu\lambda})-2K^{\alpha\beta}K\nabla_{\mu}(v^{\mu}K_{\alpha\beta})+K^{\lambda\nu}K_{\lambda\nu}K^{2}\\ \qquad\qquad\qquad-v^{\mu}v^{\nu}\nabla_{\mu}K\nabla_{\nu}K+2K_{\alpha\beta}K^{\alpha\beta}v^{\nu}\nabla_{\nu}K-(K^{\mu\nu}K_{\mu\nu})^{2}\big{]}+\beta\big{[}K^{4}-2K^{2}K^{\mu\nu}K_{\mu\nu}-4K^{2}\nabla_{\nu}(v^{\nu}K)\\ \qquad\qquad\qquad+(K^{\mu\nu}K_{\mu\nu})^{2}+4K^{\mu\nu}K_{\mu\nu}\nabla_{\nu}(v^{\nu}K)+4\nabla_{\mu}(v^{\mu}K)\nabla_{\nu}(v^{\nu}K)\big{]}\big{\}}ed^{4}x.\end{array} \tag{16}\]

To rewrite the covariant derivatives in terms of Lie derivatives, we use the relation

\[\pounds_{\mathbf{v}}K_{\mu\nu}=v^{\sigma}\nabla_{\sigma}K_{\mu\nu}+K_{\sigma\nu}\nabla_{\mu}v^{\sigma}+K_{\mu\sigma}\nabla_{\nu}v^{\sigma}-K_{\sigma\nu}T^{\sigma}_{\mu\rho}v^{\rho}-K_{\mu\sigma}T^{\sigma}_{\nu\rho}v^{\rho}, \tag{17}\]
where \(T^{\rho}_{\mu\nu}\) is the torsion of the connection defined in Sec. II. Since the PUL-parameterization vector \(v^{\mu}\) is covariantly constant by definition and \(T^{\rho}_{\mu\nu}\) is given by (14), the relation reduces to
\[\pounds_{\mathbf{v}}K_{\mu\nu}=v^{\sigma}\nabla_{\sigma}K_{\mu\nu}-K^{\sigma}_{(\mu }K_{\nu)\sigma}. \tag{18}\]
Substituting in (16) and using the fact that \(v^{\sigma}\nabla_{\sigma}\) acts on scalars simply as \(\pounds_{\mathbf{v}}\), we get
\[\begin{split} S=\int&\big{\{}c^{2}\big{[}K^{2}-K^{\mu\nu}K_{\mu\nu}\big{]}-\alpha\big{[}h^{\nu\alpha}h^{\lambda\beta}\pounds_{\mathbf{v}}K_{\alpha\beta}\pounds_{\mathbf{v}}K_{\nu\lambda}+2\pounds_{\mathbf{v}}K_{\nu\lambda}K^{\sigma(\nu}K^{\lambda)}_{\sigma}\\ &+K^{\sigma}_{(\alpha}K_{\beta)\sigma}K^{\rho(\alpha}K^{\beta)}_{\rho}-2K^{\alpha\beta}K\pounds_{\mathbf{v}}K_{\alpha\beta}-2K^{\alpha\beta}KK^{\sigma}_{(\alpha}K_{\beta)\sigma}+K^{2}K^{\mu\nu}K_{\mu\nu}\\ &-(\pounds_{\mathbf{v}}K)^{2}+2K_{\mu\nu}K^{\mu\nu}\pounds_{\mathbf{v}}K-(K_{\mu\nu}K^{\mu\nu})^{2}\big{]}+\beta\big{[}K^{4}-2K^{2}K_{\mu\nu}K^{\mu\nu}\\ &-4K^{2}\pounds_{\mathbf{v}}K+(K_{\mu\nu}K^{\mu\nu})^{2}+4K_{\mu\nu}K^{\mu\nu}\pounds_{\mathbf{v}}K+4(\pounds_{\mathbf{v}}K)^{2}\big{]}\big{\}}ed^{4}x.\end{split} \tag{19}\]
Note that only the first two terms carry the factor \(c^{2}\). Thus, if \(\alpha\) and \(\beta\) were independent of \(c\), the Carrollian limit of the theory would exclude the first two terms, which come from the Carrollian limit of the Ricci scalar. This means that the resulting theory would not couple to \(R\) and would be drastically different from the Carrollian limit of GR [cf. (3.8)]. Hence, \(\alpha\) and \(\beta\) should depend on \(c\). In this case, we get an infinite number of nonequivalent Carrollian theories, but only four of them modify GR to LO or NLO. Notice that this limit is, as expected, ultralocal: there are no space derivatives in the Lagrangian, and therefore there will be none in the field equations. This means that the evolution of a point cannot be affected by neighboring points, no matter how close they are.
Similar calculations were done in [68] by rescaling specific terms in the action. However, our approach gives more freedom to rescale terms differently and gives more nonequivalent theories. Other papers considered specific solutions for \(f(R)\) gravity [69; 70; 71]. A general classification of theories for the most general quadratic gravity theory will be provided in the next section.
## V Theories from the Carrollian limit of quadratic gravity
In this section, we study Carrollian theories resulting from the Carrollian limit of quadratic gravity. Different (nonequivalent) theories arise from assuming different dependencies of \(\alpha\) and \(\beta\) on the speed of light \(c\) in (19). Thus, we classify them as such and denote them by \((n,m)\), where \(\alpha=c^{n}\alpha^{\prime}\) and \(\beta=c^{m}\beta^{\prime}\). The relevant theories are listed in Tab. 1. As mentioned above, not all theories are modifications of GR. For example, the theories with negative powers of \(c\) in \(\alpha\) or \(\beta\), but also (0,0), (0,2) and (2,0), are not physically interesting since they are drastically different from GR at LO. It is easy to see that dependencies with odd powers of \(c\) ultimately reduce to one of the theories in Tab. 1. Theories with higher-power dependencies on \(c\) cannot modify GR to the LO or the NLO, only to higher orders. However, since the dependence of \(\alpha\) and \(\beta\) on \(c\) is a nonperturbative assumption, having higher powers of \(c\) in the action that do not form an overall factor can lead to inconsistencies in the Galilean limit. Thus, in what follows, we focus only on the four interesting Carrollian theories (2,2), (2,4), (4,2), and (4,4).
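The \((n,m)\) bookkeeping can be automated. The following rough sketch (ours; the leading powers are read off from the PUL expansions above, with \(\sqrt{-g}\sim cE\)) reproduces the classification in Tab. 1:

```python
# Leading power of c for each term of c^3 * (R - alpha R_mn R^mn + beta R^2) * sqrt(-g),
# with alpha = c^n alpha', beta = c^m beta', sqrt(-g) ~ c E, and the PUL leading
# orders R ~ c^-2, R_mn R^mn ~ c^-4, R^2 ~ c^-4.
def leading_orders(n, m):
    base = 3 + 1  # c^3 prefactor plus one power of c from sqrt(-g)
    return {'R': base - 2, 'R_mn R^mn': base + n - 4, 'R^2': base + m - 4}

for n, m in [(2, 2), (2, 4), (4, 2), (4, 4)]:
    print((n, m), leading_orders(n, m))
# GR's LO sits at c^2; a quadratic term modifies GR at LO if its entry is 2
# and at NLO if it is 4, in agreement with Tab. 1.
```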
### (2,2) Carrollian theory
Consider the case where \(\alpha\) and \(\beta\) are quadratic in the speed of light, \(\alpha=c^{2}\alpha^{\prime}\), \(\beta=c^{2}\beta^{\prime}\), with \(\alpha^{\prime}\) and \(\beta^{\prime}\) being constants independent of \(c\). We will study the resulting action to the LO, i.e., the electric limit. From Tab. 1, the action is
\[S=c^{3}\int\big{[}R-\alpha R^{\mu\nu}R_{\mu\nu}+\beta R^{2}\big{]}\sqrt{-g}d^{ 4}x. \tag{20}\]
Writing \(\alpha=c^{2}\alpha^{\prime}\) and \(\beta=c^{2}\beta^{\prime}\), where \(\alpha^{\prime}\) and \(\beta^{\prime}\) are \(c\)-independent constants, we can write the action as
\[S=\int c^{3}\big{[}R-c^{2}\alpha^{\prime}R^{\mu\nu}R_{\mu\nu}+c^{2}\beta^{ \prime}R^{2}\big{]}\sqrt{-g}d^{4}x, \tag{21}\]
which in the LO of the Carrollian expansion gives
\[\begin{split} S=c^{2}\int&\big{\{}\big{[}K^{2}-K^{\mu\nu}K_{\mu\nu}\big{]}-\alpha^{\prime}\big{[}h^{\nu\alpha}h^{\lambda\beta}\pounds_{\mathbf{v}}K_{\alpha\beta}\pounds_{\mathbf{v}}K_{\nu\lambda}+2\pounds_{\mathbf{v}}K_{\nu\lambda}K^{\sigma(\nu}K^{\lambda)}_{\sigma}\\ &+K^{\sigma}_{(\alpha}K_{\beta)\sigma}K^{\rho(\alpha}K^{\beta)}_{\rho}-2K^{\alpha\beta}K\pounds_{\mathbf{v}}K_{\alpha\beta}-2K^{\alpha\beta}KK^{\sigma}_{(\alpha}K_{\beta)\sigma}+K^{2}K^{\mu\nu}K_{\mu\nu}\\ &-(\pounds_{\mathbf{v}}K)^{2}+2K_{\mu\nu}K^{\mu\nu}\pounds_{\mathbf{v}}K-(K_{\mu\nu}K^{\mu\nu})^{2}\big{]}+\beta^{\prime}\big{[}K^{4}-2K^{2}K_{\mu\nu}K^{\mu\nu}\\ &-4K^{2}\pounds_{\mathbf{v}}K+(K_{\mu\nu}K^{\mu\nu})^{2}+4K_{\mu\nu}K^{\mu\nu}\pounds_{\mathbf{v}}K+4(\pounds_{\mathbf{v}}K)^{2}\big{]}\big{\}}ed^{4}x.\end{split} \tag{5.3}\]
Since the Carrollian expansion and the weak-field regime are not in conflict, the conditions for the presence of tachyons remain the same. In [7] it was found that the additional degrees of freedom have masses of\({}^{1}\)
Footnote 1: Remark that \(\alpha\) and \(\beta\) in our convention have opposite signs to the convention used in [7].
\[m_{0} =\frac{1}{\sqrt{2}}\frac{1}{\sqrt{-\alpha}}, \tag{5.4a}\] \[m_{2} =\frac{1}{\sqrt{2}}\frac{1}{\sqrt{\alpha-3\beta}}. \tag{5.4b}\]
The conditions to avoid tachyons are (at any order of the Carrollian expansion)
\[\alpha\leq 0, \tag{5.5a}\] \[\alpha-3\beta\geq 0, \tag{5.5b}\]
which translates to
\[\alpha^{\prime}\leq 0, \tag{5.6a}\] \[\alpha^{\prime}-3\beta^{\prime}\geq 0, \tag{5.6b}\]
in the case of the (2,2) theory.
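A small numeric sanity check of these conditions (ours, not from the paper), using the squared masses implied by (5.4):

```python
def masses_squared(alpha, beta):
    """Squared masses from (5.4); None means the mode decouples (infinite mass)."""
    m0_sq = None if alpha == 0 else -1.0 / (2.0 * alpha)
    m2_sq = None if alpha == 3.0 * beta else 1.0 / (2.0 * (alpha - 3.0 * beta))
    return m0_sq, m2_sq

for alpha_p, beta_p in [(-1.0, -1.0), (-1.0, 1.0), (1.0, -1.0)]:
    m0_sq, m2_sq = masses_squared(alpha_p, beta_p)
    ok = alpha_p <= 0 and alpha_p - 3 * beta_p >= 0
    print((alpha_p, beta_p), m0_sq, m2_sq, 'tachyon-free' if ok else 'tachyonic')
# (-1,-1): both squared masses non-negative; (-1,1) and (1,-1): a negative
# squared mass appears, i.e., a tachyon, in line with (5.6).
```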
### (2,4) Carrollian theory
Let us now investigate the case where \(\alpha=c^{2}\alpha^{\prime}\) and \(\beta=c^{4}\beta^{\prime}\). The action is
\[S=c^{3}\int\big{[}R-c^{2}\alpha^{\prime}R^{\mu\nu}R_{\mu\nu}+c^{4}\beta^{ \prime}R^{2}\big{]}\sqrt{-g}d^{4}x. \tag{5.7}\]
\begin{table}
\begin{tabular}{|c|l|l|} \hline Theory & Action contributing to the LO & Type of modification to the Carrollian limit of GR \\ \hline (0,0) & \(S=c^{3}\int\big{[}-\alpha R^{\mu\nu}R_{\mu\nu}+\beta R^{2}\big{]}\sqrt{-g}d^{4}x\) & _Not a modification of GR_ \\ \hline (0,2) & \(S=c^{3}\int-\alpha R^{\mu\nu}R_{\mu\nu}\sqrt{-g}d^{4}x\) & _Not a modification of GR_ \\ \hline (2,0) & \(S=c^{3}\int\beta R^{2}\sqrt{-g}d^{4}x\) & _Not a modification of GR_ \\ \hline (2,2) & \(S=c^{3}\int\big{[}R-\alpha R^{\mu\nu}R_{\mu\nu}+\beta R^{2}\big{]}\sqrt{-g}d^{4}x\) & _Modifies GR to the LO_ \\ \hline (2,4) & \(S=c^{3}\int\big{[}R-\alpha R^{\mu\nu}R_{\mu\nu}\big{]}\sqrt{-g}d^{4}x\) & _Modifies GR to the LO with \(R^{\mu\nu}R_{\mu\nu}\) terms and the NLO by \(R^{2}\) terms_ \\ \hline (4,2) & \(S=c^{3}\int\big{[}R+\beta R^{2}\big{]}\sqrt{-g}d^{4}x\) & _Modifies GR to the LO with \(R^{2}\) terms and the NLO by \(R^{\mu\nu}R_{\mu\nu}\) terms_ \\ \hline (4,4) & \(S=c^{3}\int R\sqrt{-g}d^{4}x\) & _Modifies GR in the NLO_ \\ \hline \end{tabular}
\end{table}
Table 1: This table summarizes some possible Carrollian theories arising from quadratic gravity that couple to \(R\) at most in the NLO. We list the theories with non-negative powers of \(c\), since negative \(c\) dependencies are clearly not modifications of the Carrollian limit of GR. For example, although (0,0) cannot be a modification of the Carrollian limit of GR, we can say that \(R\) terms are a NLO modification of this theory. There are other theories which are modifications of the listed ones, like (0,4), which can be regarded as a next-to-next-to-leading order modification of (0,2), with GR itself being the NLO. We could extend the list indefinitely, adding more theories that modify GR at higher orders, but here we focus on the LO and NLO.
To the LO in the Carrollian expansion, we get the action
\[S=c^{2}\int \big{\{}\big{[}K^{2}-K^{\mu\nu}K_{\mu\nu}\big{]}-\alpha^{\prime} \big{[}h^{\nu\alpha}h^{\lambda\beta}\pounds_{\mathbf{v}}K_{\alpha\beta}\pounds_{\mathbf{v} }K_{\nu\lambda}+2\pounds_{\mathbf{v}}K_{\nu\lambda}K^{\sigma(\nu}K_{\sigma}^{\lambda)} \tag{5.8}\] \[\qquad+K_{(\alpha}^{\sigma}K_{\beta)\sigma}K^{\rho(\alpha}K_{\rho} ^{\beta)}-2K^{\alpha\beta}K\pounds_{\mathbf{v}}K_{\alpha\beta}-2K^{\alpha\beta}KK_{( \alpha}^{\sigma}K_{\beta)\sigma}+K^{2}K^{\mu\nu}K_{\mu\nu}\] \[\qquad-(\pounds_{\mathbf{v}}K)^{2}+2K_{\mu\nu}K^{\mu\nu}\pounds_{\mathbf{v }}K-(K_{\mu\nu}K^{\mu\nu})^{2}\big{]}\big{\}}ed^{4}x.\]
Notice that this theory is the same as the Carrollian limit of \(R-\alpha R_{\mu\nu}R^{\mu\nu}\). The conditions (5.5) to the LO reduce to \(\alpha^{\prime}=0\). Thus, to the LO, the theory without tachyons is the same as the Carrollian limit of GR.
Assuming \(\alpha^{\prime}\) and \(\beta^{\prime}\) to be of the same numerical order, the conditions to the LO and NLO respectively are
\[\alpha^{\prime} =0, \tag{5.9a}\] \[\beta^{\prime} \leq 0. \tag{5.9b}\]
Thus, the theory without tachyons to the NLO would be
\[S=c^{3}\int\big{[}R_{NLO}+c^{4}\beta^{\prime}(R^{2})_{LO}\big{]}\sqrt{-g}d^{4}x, \tag{5.10}\]
where \(R_{NLO}\) is the Ricci scalar expanded to the NLO and \((R^{2})_{LO}\) is the LO of the Carrollian expansion of \(R^{2}\).
### (4,2) Carrollian theory
Considering the dependencies are \(\alpha=c^{4}\alpha^{\prime}\) and \(\beta=c^{2}\beta^{\prime}\), the action is
\[S=c^{3}\int\big{[}R-c^{4}\alpha^{\prime}R^{\mu\nu}R_{\mu\nu}+c^{2}\beta^{ \prime}R^{2}\big{]}\sqrt{-g}d^{4}x. \tag{5.11}\]
The corresponding LO action reads
\[S=c^{2}\int \big{[}\big{(}K^{2}-K^{\mu\nu}K_{\mu\nu}\big{)}+\beta^{\prime} \big{[}K^{4}-2K^{2}K_{\mu\nu}K^{\mu\nu} \tag{5.12}\] \[\qquad-4K^{2}\pounds_{\mathbf{v}}K+(K_{\mu\nu}K^{\mu\nu})^{2}+4K_{\mu \nu}K^{\mu\nu}\pounds_{\mathbf{v}}K+(\pounds_{\mathbf{v}}K)^{2}\big{]}\big{]}ed^{4}x.\]
In this case, the conditions (5.5) reduce to \(\beta^{\prime}\leq 0\). Expanding the conditions to the NLO, we obtain
\[\beta^{\prime} \leq 0, \tag{5.13a}\] \[\alpha^{\prime} =0. \tag{5.13b}\]
Hence, this theory is equivalent to the Carrollian limit of the \(R+\beta R^{2}\) theory to all orders, with the NLO action being the same as (5.10).
### (4,4) Carrollian theory
If we consider \(\alpha=c^{4}\alpha^{\prime}\) and \(\beta=c^{4}\beta^{\prime}\), then the action reads
\[S=c^{3}\int\big{[}R-c^{4}\alpha^{\prime}R^{\mu\nu}R_{\mu\nu}+c^{4}\beta^{ \prime}R^{2}\big{]}\sqrt{-g}d^{4}x. \tag{5.14}\]
For this theory, the LO action is the same as GR. At the NLO and higher orders it will receive corrections from both \(R^{2}\) and \(R_{\mu\nu}R^{\mu\nu}\) terms. The conditions (5.5) are the same as in the (2,2) case.
## VI The magnetic limit
In this section we study the magnetic limit of the theories (2,4) and (4,2), because these two theories are free from tachyons and ghosts if \(\beta^{\prime}\leq 0\). The magnetic limit is obtained by truncating the NLO action such that the resulting action is invariant under Carroll symmetries; for more details about the importance of truncation, see Appendix B. In the case of quadratic gravity we have to truncate it the same way as in GR, i.e., we have to put all the NLO fields to zero. It is well known that the NLO captures all the dynamics of the Carrollian limit [46]. Thus, the field equations from the magnetic limit lead to corrections to the dynamics in GR and even to solutions that do not exist in GR.
### The magnetic limit of (2,4)
Imposing the truncation
\[M^{\mu}=N_{\mu}=\Phi_{\mu\nu}=\Phi^{\mu\nu}=0, \tag{6.1}\]
we get the LO and NLO of the terms of (5.10) to be
\[R_{LO} =K^{2}-K_{\mu\nu}K^{\mu\nu}, \tag{6.2a}\] \[R_{NLO} =-\overset{c}{R},\] (6.2b) \[(R^{2})_{LO} =(K^{2}-K_{\mu\nu}K^{\mu\nu})(K^{2}-K_{\mu\nu}K^{\mu\nu}+4 \pounds_{\mathbf{v}}K)-4(\pounds_{\mathbf{v}}K)^{2}. \tag{6.2c}\]
As shown in the previous section, the LO of this theory is identical to the LO of GR, i.e., the constraints and the evolution equation are the same as (3.10) and (3.13). In the NLO, the LO constraints and evolution equations must hold, so they serve as constraints to the NLO field equations. Thus, taking the trace of (3.13) we get
\[h^{\mu\nu}\pounds_{\mathbf{v}}K_{\mu\nu}=-2K_{\mu\nu}K^{\mu\nu}+K^{2}, \tag{6.3}\]
then noting that
\[\pounds_{\mathbf{v}}K=h^{\mu\nu}\pounds_{\mathbf{v}}K_{\mu\nu}+2K_{\mu\nu}K^{\mu\nu}, \tag{6.4}\]
we get
\[\pounds_{\mathbf{v}}K=K^{2}. \tag{6.5}\]
Thus, the constraints on the magnetic action Lagrangian are
\[K^{2}-K^{\mu\nu}K_{\mu\nu} =0, \tag{6.6a}\] \[h^{\nu\alpha}\nabla_{\alpha}\big{[}K_{\mu\nu}-Kh_{\mu\nu}\big{]} =0\] (6.6b) \[\pounds_{\mathbf{v}}K =K^{2}. \tag{6.6c}\]
Notice that the last constraint is not a general equation; it is valid only in (2,4) and in theories whose LO is identical to GR.
Using the above relations, we can write the action for the magnetic limit of (2,4) as
\[S=-\int d^{4}xe\big{[}-\overset{c}{R}+\beta^{\prime}\big{[}(K^{2 }-K_{\mu\nu}K^{\mu\nu})(K^{2}-K_{\mu\nu}K^{\mu\nu}+4\pounds_{\mathbf{v}}K) \tag{6.7}\] \[\qquad\qquad\qquad\qquad-4(\pounds_{\mathbf{v}}K)^{2}\big{]}+\lambda _{1}(K^{2}-K_{\mu\nu}K^{\mu\nu})+\beta^{\prime}\lambda_{2}(\pounds_{\mathbf{v}}K- K^{2})\big{]},\]
where \(\lambda_{1}\) and \(\lambda_{2}\) are Lagrange multipliers. As expected, the theory (2,4) modifies the magnetic limit of GR with quartic terms in the extrinsic curvature and imposes an additional constraint. It would be interesting to see how these terms modify the dynamics of different solutions of the field equations, especially black holes. We expect that this theory has more solutions than the Carrollian limit of GR, namely those corresponding to the Schwarzschild-Bach black holes [12; 13]. If this is the case, one should examine whether some terms can be interpreted as a flux analogous to the magnetic field in [61]. Here, however, the flux would come from the theory itself instead of being turned on by hand.
### The magnetic limit of (4,2)
This case is more complicated than (2,4) since the LO is more involved than that of GR. We first study the constraints and the evolution equations for the LO, then move on to the NLO. To the LO, the action is
\[S=\int d^{4}xe[(K^{2}-K_{\mu\nu}K^{\mu\nu})(1+\beta^{\prime}[K^{2}-K^{\mu\nu}K _{\mu\nu}+4\pounds_{\mathbf{v}}K])-\beta^{\prime}(\pounds_{\mathbf{v}}K)^{2}]. \tag{6.8}\]
Varying with respect to \(v^{\mu}\) and equating to zero, we get the constraints
\[(K^{2}-K_{\mu\nu}K^{\mu\nu})(1+\beta^{\prime}[K^{2}-K^{\mu\nu}K_ {\mu\nu}+4\pounds_{\mathbf{v}}K])-\beta^{\prime}(\pounds_{\mathbf{v}}K)^{2} =0, \tag{6.9a}\] \[h^{\mu\rho}\nabla_{\mu}(K_{\rho\nu}-Kh_{\rho\nu}+2\beta^{\prime}[K _{\rho\nu}(-3(K^{2}-K_{\alpha\beta}K^{\alpha\beta})+4\pounds_{\mathbf{v}}K)-Kh_{ \rho\nu}(K^{2}-K_{\alpha\beta}K^{\alpha\beta}+2\pounds_{\mathbf{v}}K)]) =0. \tag{6.9b}\]
Varying with respect to \(h^{\mu\nu}\) and using the constraints, the evolution equation is
\[2(KK_{\mu\nu}-K^{\sigma}_{\mu}K_{\nu\sigma})(1+\beta^{\prime}(2(K^{2 }-K_{\alpha\beta}K^{\alpha\beta})+4\pounds_{\mathbf{v}}K))+2(2\beta^{\prime}(K^{2}-K_ {\alpha\beta}K^{\alpha\beta})-\beta^{\prime}\pounds_{\mathbf{v}}K)(\pounds_{\mathbf{v}} K_{\mu\nu}-4K^{\sigma}_{\mu}K_{\sigma\nu}) \tag{6.10}\] \[+\pounds_{\mathbf{v}}\big{[}(Kh_{\mu\nu}-K_{\mu\nu})(1+\beta^{\prime}( 2(K^{2}-K_{\alpha\beta}K^{\alpha\beta})+4\pounds_{\mathbf{v}}K)\big{]}-8\beta^{ \prime}\pounds_{\mathbf{v}}\big{[}K_{\mu\nu}(2(K^{2}-K_{\alpha\beta}K^{\alpha\beta })-\pounds_{\mathbf{v}}K)\big{]}\] \[+2\beta^{\prime}\pounds_{\mathbf{v}}\pounds_{\mathbf{v}}\big{[}2(K^{2}-K_ {\alpha\beta}K^{\alpha\beta})-\pounds_{\mathbf{v}}K\big{]}=0.\]
As expected, setting \(\beta^{\prime}=0\), the equation reduces to the evolution equation of GR. The corrections to GR due to the \(R^{2}\) term are quartic in the extrinsic curvature.
After truncation the NLO action reads
\[S=c^{3}\int\!e\big{[}\overset{c}{R}+\beta^{\prime}(-K^{2}+K_{\mu\nu}K^{\mu\nu} +2\pounds_{\mathbf{v}}K)(\overset{c}{R}+\nabla_{\mu}(v^{\lambda}b_{\lambda}^{\;\mu} ))\big{]}d^{4}x. \tag{6.11}\]
However, the LO equations must also hold, so we have to add (6.9) to the Lagrangian as a constraint,
\[S=c^{3}\int\!e\big{[}\overset{c}{R}+\beta^{\prime}(-K^{2}+K_{\mu \nu}K^{\mu\nu}+2\pounds_{\mathbf{v}}K)(\overset{c}{R}+\nabla_{\mu}(v^{\lambda}b_{ \lambda}^{\;\mu})) \tag{6.12}\] \[\qquad\qquad+\lambda((K^{2}-K_{\mu\nu}K^{\mu\nu})(1+\beta^{\prime }[K^{2}-K^{\mu\nu}K_{\mu\nu}+4\pounds_{\mathbf{v}}K])-\beta^{\prime}(\pounds_{\mathbf{ v}}K)^{2})\big{]}d^{4}x,\]
where \(\lambda\) is a Lagrange multiplier. Notice that the field equations for this action must include (6.10).
Now, we study a special case of the above equations where we treat \(\pounds_{\mathbf{v}}K\) as an independent variable. Varying the action with respect to \(v^{\sigma}\), we get the equations
\[(K^{2}-K_{\mu\nu}K^{\mu\nu})(1+\beta^{\prime}[K^{2}-K^{\mu\nu}K_{ \mu\nu}+4\pounds_{\mathbf{v}}K])-\beta^{\prime}(\pounds_{\mathbf{v}}K)^{2} =0, \tag{6.13a}\] \[h^{\rho\sigma}\nabla_{\sigma}(Kh_{\rho\mu}-K_{\rho\mu}) =0. \tag{6.13b}\]
Varying the action with respect to \(\pounds_{\mathbf{v}}K\) and assuming \(\pounds_{\mathbf{v}}K\neq 0\), we get
\[\pounds_{\mathbf{v}}K=2(K^{2}-K_{\mu\nu}K^{\mu\nu}). \tag{6.14}\]
From (6.13a) and (6.14), we get the equations
\[\pounds_{\mathbf{v}}K =\tfrac{-2}{5\beta^{\prime}}, \tag{6.15a}\] \[K^{2}-K_{\mu\nu}K^{\mu\nu} =\tfrac{-1}{5\beta^{\prime}}. \tag{6.15b}\]
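The algebra behind (6.15) is easily verified symbolically. A minimal sympy check (ours), writing \(a=K^{2}-K_{\mu\nu}K^{\mu\nu}\) and substituting \(\pounds_{\mathbf{v}}K=2a\) from (6.14) into (6.13a):

```python
import sympy as sp

a, beta_p = sp.symbols('a beta_p')  # a = K^2 - K_mn K^mn, beta_p = beta'
LK = 2 * a                          # eq. (6.14)

# constraint (6.13a): a*(1 + beta'*(a + 4*Lie_v K)) - beta'*(Lie_v K)^2 = 0
constraint = a * (1 + beta_p * (a + 4 * LK)) - beta_p * LK**2

print(sp.factor(constraint))    # a*(5*a*beta_p + 1)
print(sp.solve(constraint, a))  # [0, -1/(5*beta_p)]
```

Since \(\pounds_{\mathbf{v}}K\neq 0\) excludes \(a=0\), the only remaining root is \(a=-1/(5\beta^{\prime})\), reproducing (6.15).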
Varying the action with respect to \(h^{\mu\nu}\) and using (6.15) we get
\[\pounds_{\mathbf{v}}K_{\mu\nu}=-2K^{\sigma}_{\mu}K_{\sigma\nu}+Kh_{\mu\nu}. \tag{6.16}\]
Collecting the independent field equations we get the system
\[\pounds_{\mathbf{v}}K =\tfrac{-2}{5\beta^{\prime}}, \tag{6.17a}\] \[K^{2}-K_{\mu\nu}K^{\mu\nu} =\tfrac{-1}{5\beta^{\prime}},\] (6.17b) \[-2K^{\sigma}_{\mu}K_{\sigma\nu}+Kh_{\mu\nu} =\pounds_{\mathbf{v}}K_{\mu\nu},\] (6.17c) \[h^{\rho\sigma}\nabla_{\sigma}(Kh_{\rho\mu}-K_{\rho\mu}) =0. \tag{6.17d}\]
It turns out that this system solves (6.9) and (6.10). Thus, the solutions to the system (6.17) are also solutions to the full (4,2) equations at LO. Notice, however, that the converse is not true: the set of solutions of (6.17) is only a subset of that of the full theory. It is also worth mentioning that this system cannot reproduce GR without a cosmological constant, i.e., it is not valid for \(\beta^{\prime}=0\).
Notice that (6.17) is similar to equations (4.18) in [46], which describe GR with a cosmological constant, except that the evolution equation is the same as in GR without a cosmological constant. Modifications to the gravitational sector that reproduce a cosmological constant (without adding a cosmological constant term to the Lagrangian) were studied in \(f(R)\) gravity [72]. Thus, we can interpret the effect of the \(R^{2}\) term as an effective cosmological constant with the value \(-1/(5\beta^{\prime})\). We leave the solutions of this system of equations to future work; here we use them as constraints to write the action for the magnetic limit.
For the special case where \(\pounds_{\mathbf{v}}K\) is considered independent, the NLO action reads
\[\begin{split} S=c^{3}\int\!e\bigl{[}\overset{c}{R}+\beta^{\prime}(-K^{2} +K_{\mu\nu}K^{\mu\nu}+2\pounds_{\mathbf{v}}K)(\overset{c}{R}+\nabla_{\mu}(v^{\lambda }b_{\lambda}^{\mu}))\\ +\lambda_{1}(\pounds_{\mathbf{v}}K+\tfrac{2}{5\beta^{\prime}})+\lambda_ {2}(K^{2}-K_{\mu\nu}K^{\mu\nu}+\tfrac{1}{5\beta^{\prime}})\bigr{]}d^{4}x,\end{split} \tag{6.18}\]
where \(\lambda_{1}\), \(\lambda_{2}\) are Lagrange multipliers, and \(b_{\mu\nu}=\partial_{\mu}\tau_{\nu}-\partial_{\nu}\tau_{\mu}\).
It is clear that the action contains a cosmological term. This is a direct result of the emergence of an effective cosmological constant in the LO equations. Like the magnetic limit action of (2,4), this action modifies the magnetic limit of GR, but with a nonzero cosmological constant. Applying this to the general magnetic limit action (6.12), we conclude that it includes a cosmological term in addition to terms that can be interpreted as flux. Notice that the magnetic limits are no longer ultralocal due to the presence of spatial derivatives of the metric, in the form of the Ricci scalar and terms containing the covariant derivative of \(b_{\mu\nu}\). This allows for dynamics that is absent in the electric limit.
## VII Conclusions
In the present paper, we studied the electric and magnetic Carrollian limits of quadratic gravity. We calculated the PUL parametrization of terms with quadratic curvature in the action. After the Carrollian expansion, we saw that such terms are of the order of \(c^{-4}\) while the Ricci scalar term is only of the order of \(c^{-2}\). From that, we concluded that the Carrollian limit of quadratic gravity requires \(\alpha\) and \(\beta\) to depend on \(c\) in a particular way so that the resulting theory is a modification of GR. We classified different limits according to the dependencies of \(\alpha\) and \(\beta\) on \(c\). For example, the three of them \((0,0)\) (no dependence on \(c\)), \((0,2)\), and \((2,0)\) are not GR modifications because to the LO only the terms of order \(c^{-4}\) survive, i.e., only the quadratic terms in curvature but not the Ricci scalar. The only four theories that are modifications of GR (to the LO and NLO) are summarized in Tab. 2 together with the corresponding modifications.
Focusing on the ghost-free theories, namely (2,4) and (4,2), we see that (2,4) is the same as GR to the LO, so the electric limit and the constraints on the magnetic limit are the same as those of GR. However, to the NLO the theory has extra terms which can be interpreted as an additional flux. In the case of (4,2), the LO and the NLO are equivalent to those of the \(R+\beta^{\prime}R^{2}\) theory. The constraints and the evolution equations are in general much more complicated. However, there is a special case where the LO equations reduce to GR with a cosmological constant; this means that the full theory gives rise to an emergent cosmological constant in addition to the extra terms which, as in the (2,4) case, can be interpreted as an additional flux.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{Carrollian theories from quadratic gravity after removing tachyons} \\ \hline Theory & Action contributing to the LO & Action contributing to the NLO & Conditions \\ \hline (2,2) & \(S=c^{3}\!\int\!\bigl{[}R_{LO}-c^{2}\alpha^{\prime}(R^{\mu\nu}R_{\mu\nu})_{LO} \bigr{]}\) & \(S=c^{3}\!\int\!\bigl{[}R_{NLO}-c^{2}\alpha^{\prime}(R^{\mu\nu}R_{\mu\nu})_{ NLO}\bigr{]}\) & \(\alpha^{\prime}\leq 0\), \\ & \(+c^{2}\beta^{\prime}(R^{2})_{LO}\bigr{]}\sqrt{-g}d^{4}x\) & \(+c^{2}\beta^{\prime}(R^{2})_{NLO}\bigr{]}\sqrt{-g}d^{4}x\) & \(\alpha^{\prime}-3\beta^{\prime}\geq 0\) \\ \hline (2,4) & \(S=c^{3}\!\int\!\bigl{[}R_{LO}\bigr{]}\sqrt{-g}d^{4}x\) & \(S=c^{3}\!\int\!\bigl{[}R_{NLO}+c^{4}\beta^{\prime}(R^{2})_{LO}\bigr{]}\sqrt{-g} d^{4}x\) & \(\alpha^{\prime}=0\) \\ & & \(\beta^{\prime}\leq 0\) \\ \hline (4,2) & \(S=c^{3}\!\int\!\bigl{[}R_{LO}+c^{2}\beta^{\prime}(R^{2})_{LO}\bigr{]}\sqrt{-g} d^{4}x\) & \(S=c^{3}\!\int\!\bigl{[}R_{NLO}+c^{4}\beta^{\prime}(R^{2})_{NLO}\bigr{]}\sqrt{-g} d^{4}x\) & \(\alpha^{\prime}=0\) \\ & & \(\beta^{\prime}\leq 0\) \\ \hline (4,4) & \(S=c^{3}\!\int\!\bigl{[}R_{LO}\bigr{]}\sqrt{-g}d^{4}x\) & \(S=c^{3}\!\int\!\bigl{[}R_{NLO}-c^{4}\alpha^{\prime}(R^{\mu\nu}R_{\mu\nu})_{ NLO}\bigr{]}\) & \(\alpha^{\prime}\leq 0\), \\ & & \(+c^{4}\beta^{\prime}(R^{2})_{NLO}\bigr{]}\sqrt{-g}d^{4}x\) & \(\alpha^{\prime}-3\beta^{\prime}\geq 0\) \\ \hline \end{tabular}
\end{table}
Table 2: After imposing the conditions to remove tachyons, the set of resulting theories consists either of the full Stelle’s gravity to various orders or of variations of \(R+R^{2}\) theories. It is worth mentioning that, as said before, theories with odd powers of \(c\) are equivalent to one of the theories above, and higher powers of \(c\) may be problematic in the Galilean limit. Note that the LO actions possess Carrollian symmetries by construction, so they are Carrollian theories, but the NLO actions do not. The NLO of the Carrollian expansion does not preserve Carrollian symmetry in general; however, a certain truncation recovers the symmetries, resulting in the magnetic Carrollian limit of the theory.
More work has to be done to study the field equations of these theories to the LO and NLO. It would be interesting to compare each case with GR to understand what modifications can arise from the different quartic terms of the extrinsic curvature. Another direction for future research is to calculate the Galilean limit of quadratic gravity. Since the dependence of \(\alpha\) and \(\beta\) on \(c\) is not a perturbative assumption, the higher powers of \(c\) in the action may be problematic in the Galilean limit. In the current classification the most attractive options for future study are \((2,4)\) and \((4,2)\) since, after imposing the tachyon-removing conditions, we get the Carrollian limit of \(R+\beta R^{2}\), a renormalizable theory with no ghosts or tachyons (provided \(\beta\) is positive) which is deduced directly from string theory. Since \(R+R^{2}\) theories have more black hole solutions than GR, a direction for future work is to study black hole solutions of these actions; these should coincide with the Carrollian limit of the Schwarzschild-Bach solutions. It would also be interesting to analyze the dynamics of Carrollian particles on the horizons of various black-hole solutions, to compare the dynamics with that of [61], and to study the modifications arising from the quartic terms.
## Acknowledgements
The authors would like to thank Eric Bergshoeff (Groningen, Netherlands), Pavel Krtous, David Kubiznak (Prague, Czechia), and Marc Henneaux (Brussels, Belgium) for stimulating discussions. P.T. and I.K. were supported by Primus grant PRIMUS/23/SCI/005 from Charles University.
## Appendix A Mathematical Overview and relation to black hole physics
The Carroll algebra is given by the Carrollian limit of the Poincare group, also viewed as the ultralocal Inonu-Wigner contraction of the Poincare group [73]. It was first constructed independently by Levy-Leblond [14] and Sen Gupta [15] as the limit of the Poincare group as the speed of light tends to zero or, equivalently, when the time separation is much smaller than the space separation. In this limit the light cone collapses into a line, making any motion of particles with nonzero energy impossible. However, due to the lack of physical applications of this limit, the Carroll group and the geometry associated with it, i.e. Carrollian geometry, were for a long time studied solely by mathematicians and mathematical physicists. The study of Carrollian physics by physicists began when a connection between the Carrollian limit and physics near black-hole horizons was established [52], where it was shown that any null hypersurface is endowed with a Carrollian structure. Since then many papers have studied the Carrollian limit of GR, the dynamics of particles near black-hole horizons, and the geometry on horizons of different gravity theories, as well as mathematical aspects [74; 75; 76; 77; 45; 46; 47]. For a review of Carrollian geometry and its relation to Galilean geometry, i.e. Newton-Cartan geometry, see [78]. In this appendix, we review the mathematical structure and properties of the Carroll group and of Carrollian geometric structures, and their relation to black holes.
### Algebraic structure
We begin with the Poincare group \(ISO(1,3)\). The group can be decomposed into \(ISO(1)\), generated by the time translation generator \(P_{0}\), and \(ISO(3)\), generated by the space translation generators \(P_{i}\) and the space rotation generators \(M_{ij}\), where \(i,j,\ldots=1,2,3\), together with the Lorentz boost generators \(M_{0i}\) relating the two subgroups. To perform the ultralocal contraction2 we define a parameter \(\omega\) and rescale \(ISO(1)\) and the boosts as follows: \(P_{0}\rightarrow\omega P_{0}\), \(M_{0i}\rightarrow\omega M_{0i}\). Then, we take the limit \(\omega\rightarrow\infty\). To derive the commutation relations between the generators of the new algebra, we begin with the Poincare algebra and keep only the relations with a consistent dependence on \(\omega\). The resulting commutation relations are
Footnote 2: In the literature (for example in [79; 44; 27]), this is called the ultrarelativistic contraction and the ultrarelativistic limit. However, we use the term ‘ultralocal’ instead, since ‘ultrarelativistic’ has been used for the case \(v\to c\), not \(c\to 0\), and the defining feature of the Carrollian limit is ultralocality.
\[\begin{split}\big{[}M_{ij},M_{0k}\big{]}&=2\delta_{k [i}M_{j]0},\\ \big{[}M_{0i},M_{0j}\big{]}&=0,\\ \big{[}P_{i},M_{0j}\big{]}&=\delta_{ij}P_{0},\\ \big{[}M_{ij},P_{k}\big{]}&=2\delta_{k[i}P_{j]},\\ \big{[}M_{ij},M_{kl}\big{]}&=2\big{(}\delta_{i[l}M_{ k]j}-\delta_{j[l}M_{k]i}\big{)}.\end{split} \tag{12}\]
This algebra is called the Carroll algebra and it is the symmetry algebra of all Carrollian theories. The described contraction aligns with our intuition about Carrollian limits, since we rescale the time translation and boost generators and send them to infinity, implying that the space generators are very small in comparison. Another aspect is that, unlike in the full Poincare algebra, the commutation relations show that space generators (space rotations and space translations) get transformed into time generators (time translation) or boosts, but not the other way around. This means that motion in space eventually gets swept away in favour of time translations and boosts. This is equivalent to saying that space translations are negligible compared to time translations and boosts, i.e. motion in space is negligible compared to motion in time, or to the closure of the light cone into a line. This is opposite to the Galilean contraction, where we take the limit \(\omega\to 0\), or equivalently rescale space translations and boosts and send the parameter to infinity, i.e. time translations are negligible compared to space translations and boosts. This results in the opening of the light cone into a sheet.
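As an illustration of this selection rule (with signs depending on conventions), compare two representative Poincare relations under the rescaling \(P_{0}\rightarrow\omega P_{0}\), \(M_{0i}\rightarrow\omega M_{0i}\):

\[\begin{split}\big{[}P_{i},M_{0j}\big{]}=\delta_{ij}P_{0}&\rightarrow\big{[}P_{i},\omega M_{0j}\big{]}=\delta_{ij}\,\omega P_{0}\Rightarrow\big{[}P_{i},M_{0j}\big{]}=\delta_{ij}P_{0},\\ \big{[}P_{0},M_{0i}\big{]}=P_{i}&\rightarrow\big{[}\omega P_{0},\omega M_{0i}\big{]}=P_{i}\Rightarrow\big{[}P_{0},M_{0i}\big{]}=\omega^{-2}P_{i}\to 0.\end{split}\]

Relations in which \(\omega\) cancels survive the limit unchanged, while relations with an inconsistent dependence on \(\omega\) are suppressed as \(\omega\rightarrow\infty\).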
Note that the ultralocal limit of a relativistic theory will result in a theory with Carrollian symmetries by construction. However, not all Carrollian theories originate from the ultralocal limit of a relativistic theory. There exist other ways to construct Carrollian theories, for example by defining a theory directly on a Carrollian manifold. This leads to a richer symmetry group and geometric structures [44, 79].
### Geometric structure
By geometric structure we mean the description of the mathematical structure using manifolds and bundles. Carrollian geometric structures can be described as an _intrinsic_ or an _extrinsic_ structure of a manifold. For a more detailed mathematical description see [45, 59].
The intrinsic description defines a _weak Carrollian spacetime_ \(\mathcal{C}\) of dimension \(d\) as a fiber bundle with a degenerate metric, with base space a sphere \(S\) and a one-dimensional fiber. This structure comes with two mappings: \(\pi:\mathcal{C}\to S\) such that \(\pi^{-1}(S)\) is one-dimensional (this represents time), and \(d\pi:T\mathcal{C}\to TS\), where \(T\mathcal{C}\) is the tangent bundle of \(\mathcal{C}\), containing at least one nowhere vanishing vector field for every spatial surface, and \(TS\) is the tangent bundle of \(S\). A _Carrollian spacetime_ is a weak Carrollian spacetime with an Ehresmann connection. It defines a smooth decomposition of the tangent space of the Carrollian spacetime into a _vertical_ part (thought of as time) and a _horizontal_ part (thought of as space), \(T\mathcal{C}=\mathrm{Ver}\oplus\mathrm{Hor}\), where \(\mathrm{Ver}=\mathrm{ker}(d\pi)\). This decomposition is similar to the ADM decomposition; however, here the metric is decomposed into a timelike vector and the induced metric on spatial surfaces, rather than in terms of a lapse function, a shift vector, and the induced metric.3 Equivalently, a weak Carrollian spacetime is the triple \((\mathcal{C},\mathcal{V},\mathbf{h})\) where \(\mathcal{C}\) is a manifold, \(\mathcal{V}\) is a vector bundle on \(\mathcal{C}\) and \(\mathbf{h}\) is a degenerate metric on \(\mathcal{C}\) such that \(\mathbf{h}(\mathbf{v},.)=0\) for every \(\mathbf{v}\in\mathcal{V}\). The Carrollian theories obtained in this paper by means of the Carrollian expansion (in LO) are defined on Carrollian spacetimes.

Footnote 3: In some cases, as in the adapted coordinates used in [59; 81; 82], we can identify this decomposition with the ADM decomposition with zero shift vector.

Figure 1: In the ultralocal contraction, the space generators are much smaller than the time ones, resulting in the collapse of the light cone into a line, while the Galilean contraction is the opposite, resulting in the opening of the light cone. A detailed explanation can be found in [80].
Alternatively, we can characterize the Carrollian structure extrinsically, by means of the rigged structure on a \(d\)-dimensional timelike submanifold \(H\) of a \((d+1)\)-dimensional Lorentzian manifold \((M,\mathbf{g})\) (a smooth manifold \(M\) equipped with a Lorentzian metric \(\mathbf{g}\)). Let us define a normal covector \(\mathbf{n}\) to \(H\) and a vector \(\mathbf{k}\) that is dual to it, \(n_{\mu}k^{\mu}=1\). The pair \((\mathbf{n},\mathbf{k})\) is called the _rigged structure_. Due to the Frobenius theorem, its existence is equivalent to a foliation of \(M\) with \(d\)-dimensional leaves corresponding to surfaces of constant coordinate \(r\) (such that \(\mathbf{n}=\mathbf{d}r\)), which we choose to be copies of \(H\) and call the _stretched horizons_. Furthermore, we assume that the leaf \(N\) representing the limit \(r\to 0\) is null and call it the _true horizon_. The rigged structure on \(H\) defines a projection operator from \(TM\) to \(TH\), called the _rigging projector_,
\[P^{\mu}_{\nu}=g^{\mu}_{\nu}-k^{\mu}n_{\nu}. \tag{10}\]
Let us denote the norm of \(\mathbf{n}\) by \(g^{\mu\nu}n_{\mu}n_{\nu}=2\rho\) and define a tangential vector \(\mathbf{v}\) to \(H\) as
\[v^{\mu}=P^{\mu}_{\nu}n^{\nu}=n^{\mu}-2\rho k^{\mu}. \tag{11}\]
With this definition we can decompose the rigged projector as
\[P^{\mu}_{\nu}=h^{\mu}_{\nu}+k_{\nu}v^{\mu}, \tag{12}\]
where \(\mathbf{h}\) is the induced metric on \(H\). Notice that \(g_{\mu\nu}v^{\mu}v^{\nu}=-2\rho+4\rho^{2}k_{\mu}k^{\mu}\). Although it is not necessary here, in black hole applications \(\mathbf{k}\) is typically chosen to be null [63]. Taking the limit \(\rho\to 0\), which corresponds to \(r\to 0\), of the triple \((\mathbf{P},\mathbf{v},\mathbf{h})\), given by (10), (11), and (12), defines the _Carrollian structure_ on \(N\). Here, \(k_{\mu}=g_{\mu\nu}k^{\nu}\) plays the role of the Ehresmann connection. In other words, \(\rho\), indicating the distance between \(H\) and \(N\) (in the limiting sense when \(H\) is close to \(N\)), plays the role of the speed of light. This procedure is depicted in Fig. 2. Remark that the metric \(\mathbf{h}\) is regular in the limit \(\rho\to 0\). Note that the splitting done by (12) is equivalent to the splitting done by the mapping \(d\pi\); hence, the extrinsic and intrinsic descriptions are equivalent, where \(N\) in the extrinsic description plays the role of the Carrollian manifold \(\mathcal{C}\) in the intrinsic description.
The extrinsic description of Carrollian structures, with the concept of stretched horizons, is useful for understanding the relationship between the Carrollian limit of gravitational theories and the dynamics near black-hole horizons. The physics on a stretched horizon of a black hole is equivalent to a relativistic fluid on a \((2+1)\)-dimensional submanifold, in what is known as the membrane paradigm [63; 64; 65]. However, when one tries to define the same quantities on the true horizon, they diverge. These divergences can be regularized, but the regularization depends on the foliation of stretched horizons used. The way to define finite quantities on and near the true horizon is to use the identification provided by the membrane paradigm. Since the stretched horizons converge to the true horizon in the Carrollian limit, we can identify the physics on and near the true horizon by taking the Carrollian limit of the dual fluid on a \((2+1)\)-dimensional submanifold. This was shown explicitly in [52]. The quantities are well defined since the metrics on the stretched horizons converge to a regular metric on the true horizon, as shown in the previous section. The relation between Carrollian limits and black holes just described is visualized in Fig. 3.
## Appendix B Carrollian transformations and symmetries
In this appendix, we review the notion of Carrollian transformations, compare it to the Galilean transformations, and then define Carrollian symmetries on a Carrollian manifold.
### Galilean and Carrollian transformations
In this section we give a quick description of the Galilean and Carrollian transformations on a flat spacetime. We begin with the familiar Galilean boosts
\[x^{\prime}=x+vt, \tag{13a}\] \[t^{\prime}=t, \tag{13b}\]
where \(x\) denotes the space coordinates, \(t\) is the time and \(v\) is the velocity. Along with space rotations, these transformations form the Galilean algebra, which features an absolute time and no upper bound on the velocity. The Galilean algebra can be deduced from the Poincare algebra by the Inonu-Wigner contraction with \(c\to\infty\). This aligns with the opening up of the light cone discussed in the previous appendix.
Carrollian boosts are defined by interchanging space and time in the Galilean boosts, i.e.
\[t^{\prime} =t+vx, \tag{10a}\] \[x^{\prime} =x. \tag{10b}\]
These boosts feature an absolute space and a zero spatial velocity. Along with space rotations, they form the Carrollian algebra (10). As discussed, it is the result of the Inonu-Wigner contraction opposite to the Galilean one, i.e. \(c\to 0\).
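Explicitly, the Carrollian boosts follow from the Lorentz boosts by holding \(b\equiv v/c^{2}\) fixed as \(c\to 0\) (a standard computation, included for illustration):

\[\begin{split} t^{\prime}&=\gamma\Big{(}t+\frac{v}{c^{2}}x\Big{)}=\gamma\,(t+bx)\rightarrow t+bx,\\ x^{\prime}&=\gamma\,(x+vt)=\gamma\,(x+bc^{2}t)\rightarrow x,\end{split}\]

since \(\gamma=(1-b^{2}c^{2})^{-1/2}\to 1\), reproducing (10a) and (10b) with \(b\) playing the role of the Carrollian boost velocity.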
### Carrollian symmetries on a general Carrollian manifold
Here, we take a look at how to generalise (10) to a general curved Carrollian manifold.
Figure 3: The membrane paradigm identifies physics on a stretched horizon with a relativistic fluid. Taking the limit of \(\rho\) on one side is shown to be equivalent to taking the limit \(c\to 0\) on the other side of the duality. Thus, physics near black-hole horizons can be described by the Carrollian limit of the theory.
Figure 2: In the extrinsic description, we have an infinite set of stretched horizons given by \(r=\text{constant}\), with tangent vector \(\mathbf{v}\) and transverse vector \(\mathbf{k}\). The quantity \(\rho\) indicates the distance between the stretched and the true horizon (in the limiting sense when \(H\) is close to \(N\)) and plays the role of the speed of light. Here, the true horizon is the limiting case \(r\to 0\), or equivalently \(\rho\to 0\), for which the vector \(\mathbf{v}_{\rho=0}\) is tangent to the true horizon. Hence, the Carrollian structure on the stretched horizons defines the Carrollian structure on the true horizon by taking the limits above.
Let \(\mathcal{C}\) be a Carrollian manifold with \(h_{\mu\nu}\) being the components of the induced metric and \(v^{\mu}\) the preferred "time" vector field. For the boosts to preserve the Carrollian structure, they must leave \(h_{\mu\nu}\) and \(v^{\mu}\) invariant, but not necessarily the inverse metric \(h^{\mu\nu}\) nor the one-form associated with \(v^{\mu}\), which we call \(\tau_{\mu}\).
First, we define an analogue of the velocity through the action of the boost on \(\tau_{\mu}\), i.e.
\[\delta\tau_{\mu}=\lambda_{\mu}(x), \tag{10}\]
where \(\lambda_{\mu}\) depends on the position on the spatial slice on which it is defined, as well as on the slice itself. It is easy to see from (6) that \(v^{\mu}\lambda_{\mu}=0\), i.e. it is a spatial covector. Using this definition and performing the Inonu-Wigner contraction, the action of the Carrollian boosts on \(h^{\mu\nu}\) is given by
\[\delta h^{\mu\nu}=h^{\mu\rho}\lambda_{\rho}v^{\nu}+h^{\nu\rho}\lambda_{\rho}v ^{\mu}. \tag{11}\]
These symmetries are called the Carrollian symmetries. Every Carrollian theory must have these symmetries; otherwise, the theory could not have been defined on a Carrollian manifold in the first place.
### Carrollian gravity and the NLO fields
Here, we explain why the truncation of the NLO of gravity theories is crucial to obtain a Carrollian theory. At the LO, straightforward calculations using (10) and (11) show that \(K_{\mu\nu}\), \(K\), \(K_{\mu\nu}K^{\mu\nu}\) and \(\pounds_{v}K\) are all invariant, i.e. the LO is Carrollian without the need for truncation. However, we can see from (6), (8) and (11) that the NLO fields transform in a highly nontrivial way under Carrollian transformations. Thus, in general, terms in the NLO Lagrangians are not invariant under Carrollian transformations, so the theory cannot be properly defined on a Carrollian manifold. For example, in (4,2), without truncation, we have terms in the NLO Lagrangian of the form \(K^{3}h_{\mu\nu}\pounds_{\Psi}\Phi^{\mu\nu}\) (and various similar combinations of an invariant part with one non-invariant NLO field) which cannot be invariant unless the NLO fields are set to zero. So, truncation is necessary.
|
2303.11480 | Inferring ocean transport statistics with probabilistic neural networks | Using a probabilistic neural network and Lagrangian observations from the
Global Drifter Program, we model the single particle transition probability
density function (pdf) of ocean surface drifters. The transition pdf is
represented by a Gaussian mixture whose parameters (weights, means and
covariances) are continuous functions of latitude and longitude determined to
maximise the likelihood of observed drifter trajectories. This provides a
comprehensive description of drifter dynamics allowing for the simulation of
drifter trajectories and the estimation of a wealth of dynamical statistics
without the need to revisit the raw data. As examples, we compute global
estimates of mean displacements over four days and lateral diffusivity. We use
a probabilistic scoring rule to compare our model to commonly used transition
matrix models. Our model outperforms others globally and in three specific
regions. A drifter release experiment simulated using our model shows the
emergence of concentrated clusters in the subtropical gyres, in agreement with
previous studies on the formation of garbage patches. An advantage of the
neural network model is that it provides a continuous-in-space representation
and avoids the need to discretise space, overcoming the challenges of dealing
with nonuniform data. Our approach, which embraces data-driven probabilistic
modelling, is applicable to many other problems in fluid dynamics and
oceanography. | Martin T. Brolly | 2023-03-20T22:20:15Z | http://arxiv.org/abs/2303.11480v2 | # Inferring ocean transport statistics with probabilistic neural networks
###### Abstract
Using a probabilistic neural network and Lagrangian observations from the Global Drifter Program, we model the single particle transition probability density function (pdf) of ocean surface drifters. The transition pdf is represented by a Gaussian mixture whose parameters (weights, means and covariances) are continuous functions of latitude and longitude determined to maximise the likelihood of observed drifter trajectories. This provides a comprehensive description of drifter dynamics allowing for the simulation of drifter trajectories and the estimation of a wealth of dynamical statistics without the need to revisit the raw data. As examples, we compute global estimates of mean displacements over four days and lateral diffusivity. We use a probabilistic scoring rule to compare our model to commonly used transition matrix models. Our model outperforms others globally and in three specific regions. A drifter release experiment simulated using our model shows the emergence of concentrated clusters in the subtropical gyres, in agreement with previous studies on the formation of garbage patches. An advantage of the neural network model is that it provides a continuous-in-space representation and avoids the need to discretise space, overcoming the challenges of dealing with nonuniform data. Our approach, which embraces data-driven probabilistic modelling, is applicable to many other problems in fluid dynamics and oceanography.
## 1 Introduction
The motion of turbulent fluids can be characterised usefully by dynamical statistics such as dispersion, energy spectra and velocity structure functions (e.g., Batchelor 1953, Monin & Yaglom 1971). In oceanography much effort has been directed towards inferring such statistics from observations (e.g., LaCasce 2008, van Sebille et al. 2018). In many cases, these inference tasks can be related to problems in conditional probability density estimation. For example, estimating single-particle dispersion is related to estimating the conditional density
\[p\left(\mathbf{X}(t+\tau)-\mathbf{X}(t)\mid\mathbf{X}(t),\,t,\,\tau\right), \tag{1}\]
where \(\mathbf{X}(t)\) is the position of a particle at time \(t\), in that the dispersion is the variance of this distribution. Similarly, the velocity structure functions are moments of the conditional
density
\[p\left(\mathbf{u}(\mathbf{x}_{1},\,t)-\mathbf{u}(\mathbf{x}_{2},\,t)\mid\mathbf{x}_{1},\, \mathbf{x}_{2},\,t\right), \tag{2}\]
where \(\mathbf{u}(\mathbf{x},\,t)\) is the fluid velocity at position \(\mathbf{x}\). By estimating full conditional densities like (1) and (2), it is possible to estimate simultaneously a number of related statistics. For instance, (1) describes entirely the single-particle displacement statistics, while (2) encodes velocity structure functions of all orders, providing two-point Eulerian velocity statistics. It is no surprise, then, that estimating these conditional densities accurately is a nontrivial task.
In this work we consider a particular tool for conditional density estimation, the mixture density network (MDN) (Bishop, 1994), and test its performance in learning fluid statistics from observations. MDNs are machine learning models, which combine artificial neural networks with probabilistic mixture models to represent conditional densities (Bishop, 2006). Their use has increased rapidly in recent years with applications in a variety of fields for a range of reduced order modelling and emulation tasks, including surrogate modelling of fluid flow (Maulik et al., 2020), parameterisation of subgrid momentum forcing in ocean models (Guillaumin and Zanna, 2021), emulation of complex stochastic models in epidemiology (Davis et al., 2020) and multi-scale models of chemical reaction networks (Bortolussi and Palmieri, 2018), and subgrid scale closures in large eddy simulations of turbulent combustion (Shin et al., 2021).
We focus on learning the single-particle transition density (1) in the ocean near-surface using Lagrangian trajectory data collected as part of the Global Drifter Program (Lumpkin and Centurioni, 2019). A model of the transition density provides, at every point in the ocean, a probabilistic forecast for drifter displacements from that location. We show that the MDN model outperforms existing stochastic models of drifter dynamics based on Ulam's method (Ulam, 1960; Froyland, 2001), as well as another simple benchmark model, and eliminates the difficulty of designing appropriate discretisations of space needed for such models.
From the transition density it is possible to derive estimates of a range of single-particle statistics. As examples, we provide maps of the mean displacement over four days as a function of initial position \(\mathbf{X}_{0}\), as well as the lateral diffusivity. The transition density produces highly non-Gaussian statistics in some regions. By calculating the Kullback-Leibler divergence between our full model and a simplified Gaussian model, we quantify and map non-Gaussianity in drifter displacements.
The MDN model also provides the basis for a discrete-time Markov process model of drifter dynamics, offering a continuous space alternative to Markov chain models which have been used in numerous studies (Maximenko et al., 2012; van Sebille et al., 2012; Miron et al., 2017, 2021). We perform a global simulation of drifters for a period of ten years with initial positions given on a uniform grid, and reproduce the 'garbage patches' in subtropical gyres seen in previous studies.
The article is structured as follows. In §2 we discuss conditional density estimation and the estimation of conditional statistics. In §3 we introduce MDNs. In §4 we describe the MDN model of the single-particle transition density from drifter observations. We compare its performance with alternative models, present derived single-particle statistics and simulate the clustering of drifters in subtropical gyres. In §5 we conclude and suggest further problems where MDNs may be a useful tool.
## 2 Conditional modelling
While the aim of regression is to model \(\mathbb{E}[\mathbf{Y}\mid\mathbf{X}]\), where \(\mathbf{X}\) and \(\mathbf{Y}\) are random variables, conditional modelling (or conditional density estimation, CDE) is the task of inferring the full conditional probability density \(p(\mathbf{Y}\mid\mathbf{X})\)1. By modelling conditional densities, rather than just conditional means, we incorporate information about the variability of \(\mathbf{Y}\mid\mathbf{X}\); more than this, conditional models can capture skewness, excess kurtosis and multimodality. This comprehensive description of conditional statistics is valuable in applications where single point-estimates are insufficient due to inherent variability, and where there is interest in non-Gaussian statistics, including those associated with rare events. Conditional models can be used in two ways: (i) as stochastic surrogate models (or emulators), and (ii) as a tool for estimating conditional statistics.
Footnote 1: We restrict attention to the case of continuous random variables.
Parametric conditional models (such as MDNs) assume that, for each possible value of \(\mathbf{X}\), the distribution of \(\mathbf{Y}\mid\mathbf{X}\) belongs to a certain family of parametric distributions, i.e.
\[p(\mathbf{Y}\mid\mathbf{X})=\rho(\mathbf{Y}\,;\,\mathbf{\theta}(\mathbf{X})), \tag{3}\]
where \(\rho(\,\cdot\,;\,\mathbf{\theta})\) is the probability density corresponding to a family of distributions parameterised by \(\mathbf{\theta}\). In this case, not only must the form of \(\rho\) be chosen, but the dependence on the conditioned variable must also be modelled by some representation of \(\mathbf{\theta}(\mathbf{X})\).
### Estimating conditional statistics
Given data \(\{\mathbf{X}_{i},\mathbf{Y}_{i}\}\), a standard approach to estimating conditional statistics \(\mathbb{E}[\mathbf{f}(\mathbf{Y})\mid\mathbf{X}]\) is to first discretise (or 'bin') in \(\mathbf{X}\) and produce local estimates \(\mathbb{E}[\overline{\mathbf{f}(\mathbf{Y})}](\tilde{\mathbf{X}})\) for each value of the discretised variable \(\tilde{\mathbf{X}}\), typically by Monte Carlo estimation, such that
\[\mathbb{E}[\overline{\mathbf{f}(\mathbf{Y})}](\tilde{\mathbf{X}}):=\frac{\sum_{i}\mathbf{f}( \mathbf{Y}_{i})\,\,\mathbb{I}_{B}(\mathbf{X}_{i})}{\sum_{i}\,\mathbb{I}_{B}(\mathbf{X}_{i })}, \tag{4}\]
where \(\mathbb{I}_{B}\) is the indicator function of \(B\), the set of values of \(\mathbf{X}\) whose discretised value is \(\tilde{\mathbf{X}}\). For estimates (4) to be useful, one must design a suitable discretisation of the domain of \(\mathbf{X}\), which balances the need to choose a fine enough discretisation to resolve details in \(\mathbf{X}\) with the need to take sufficiently large bins to have enough data for these estimates to have reasonably small variance. This can be especially challenging when data is sparse, or when the density of data is highly inhomogeneous.
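As a minimal sketch of the estimator (4) for a scalar \(\mathbf{X}\) on a regular grid (the variable names and synthetic data below are illustrative, not the drifter data):

```python
import numpy as np

# Binned Monte Carlo estimate (4) of E[f(Y) | X] on a regular grid.
def binned_estimate(X, fY, edges):
    idx = np.digitize(X, edges) - 1          # bin index of each X_i
    est = np.full(len(edges) - 1, np.nan)    # NaN where a bin has no data
    for b in range(len(edges) - 1):
        in_bin = idx == b                    # indicator function of bin b
        if in_bin.any():
            est[b] = fY[in_bin].mean()       # sample mean within the bin
    return est

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, 10_000)
Y = np.sin(2 * np.pi * X) + 0.1 * rng.standard_normal(10_000)
print(binned_estimate(X, Y, np.linspace(0.0, 1.0, 21)))
```

The tension described above is visible here: refining `edges` resolves more structure in \(\mathbf{X}\), but leaves fewer samples per bin and so increases the variance of each estimate.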
Conditional modelling offers an alternative approach wherein one first constructs a model of the conditional density, as in (3), that is continuous in both \(\mathbf{X}\) and \(\mathbf{Y}\), then computes estimates
\[\mathbb{E}_{\mathcal{M}}[\mathbf{f}(\mathbf{Y})\mid\mathbf{X}]:=\int\mathbf{f}(\mathbf{Y})\,\, \rho(\mathbf{Y}\,;\,\mathbf{\theta}(\mathbf{X}))\,\mathrm{d}\mathbf{Y} \tag{5}\]
for as many statistics as desired at any value of \(\mathbf{X}\) in the domain, without the need to revisit the raw data. In some cases the expectations \(\mathbb{E}_{\mathcal{M}}\) can be calculated using a closed-form expression. Where no such expression is known, the expectation can be computed by numerical integration or a Monte Carlo method. Since these calculations rely only on evaluating the modelled conditional density, or sampling from it, they are not limited by sparsity of data. Also, for a given \(\mathbf{X}^{*}\), estimates of the form (4) are informed only by observations in the same
bin as \(\mathbf{X}^{*}\), whereas in a conditional model, all observations are used to fit \(\rho(\mathbf{Y}\,;\,\mathbf{\theta}(\mathbf{X}^{*}))\). The schematic in figure 1 contrasts the standard approach and the conditional modelling approaches.
## 3 Mixture density networks
A mixture density network (Bishop 1994, 2006) is a conditional model where an artificial neural network is employed to represent the function \(\mathbf{\theta}(\mathbf{X})\) in (3) and the parametric form \(\rho(\,\cdot\,;\,\mathbf{\theta})\) corresponds to a mixture distribution. The density of a general mixture distribution is
\[\rho(\,\cdot\,;\,\,\mathbf{\theta})=\sum_{i=1}^{N_{\mathrm{c}}}\alpha_{i}\ \rho_{i}(\,\cdot\,;\,\,\mathbf{\theta}_{i}), \tag{6}\]
where \(N_{\mathrm{c}}\) is the number of components in the mixture, the \(i^{\mathrm{th}}\) component has density \(\rho_{i}(\,\cdot\,;\,\,\mathbf{\theta}_{i})\) with parameters \(\mathbf{\theta}_{i}\), \(\mathbf{\theta}=[(\alpha_{1},\ \mathbf{\theta}_{1}),\ \cdots,\ (\alpha_{N_{\mathrm{c}}},\ \mathbf{\theta}_{N_{\mathrm{c}}})]\) and the \(\alpha_{i}\) are component weights subject to the constraint
\[\sum_{i=1}^{N_{\mathrm{c}}}\alpha_{i}=1. \tag{7}\]
Commonly, the component densities \(\rho_{i}\) are chosen from the same family and, in particular, Gaussian, but components can be chosen differently. In the Gaussian case, the \(\mathbf{\theta}_{i}\) are conditional means and covariances.
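For example, the density (6) for a two-component bivariate Gaussian mixture can be evaluated as follows (a sketch with arbitrary parameter values):

```python
import numpy as np
from scipy.stats import multivariate_normal

# A two-component 2-D Gaussian mixture density, eq. (6).
alphas = np.array([0.7, 0.3])                    # weights, summing to 1
means = [np.zeros(2), np.array([1.0, -1.0])]     # component means
covs = [np.eye(2), np.diag([0.5, 2.0])]          # component covariances

def mixture_pdf(y):
    return sum(a * multivariate_normal(m, C).pdf(y)
               for a, m, C in zip(alphas, means, covs))

print(mixture_pdf(np.array([0.2, 0.1])))
```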
The neural network representation of \(\mathbf{\theta}(\,\cdot\,)\) is itself parametric with parameters \(\mathbf{w}\); hence, MDNs model \(p(\mathbf{Y}\,\mid\,\mathbf{X})\) with \(\rho(\mathbf{Y}\,;\,\mathbf{\theta}(\mathbf{X}\,;\,\mathbf{w}))\). The network can have any architecture, but that of a multilayer perceptron (Rumelhart et al., 1985) (also known as a fully connected multilayer feedforward neural network) with nonlinear activation functions is common -- in this case \(\mathbf{w}\) consists of the weights and biases.

Figure 1: Estimating conditional statistics by a standard approach versus by first constructing a model for the conditional density.
A natural loss function for conditional models, which quantifies how well they fit data, is the negative (conditional) log likelihood of observations \(\mathcal{D}=\{\mathbf{X}_{i},\mathbf{Y}_{i}\}\) under the model. In MDNs this is
\[\mathcal{L}(\mathbf{w}\,;\,\mathcal{D})=\sum_{i}-\log\rho\left(\mathbf{Y}_{i}\,;\,\bm {\theta}(\mathbf{X}_{i}\,;\,\mathbf{w})\right). \tag{8}\]
Training an MDN then amounts to finding optimal values for the neural network's parameters
\[\mathbf{w}^{*}=\operatorname*{arg\,min}_{\mathbf{w}}\;\mathcal{L}(\mathbf{w}\,;\,\mathcal{ D}). \tag{9}\]
Minimising the negative log likelihood is equivalent to maximising the log likelihood of training data, also referred to as the log score in probabilistic forecasting (Bernardo, 1979; Gneiting and Raftery, 2007; Brocker and Smith, 2007). Maximum likelihood estimation in this context differs from the more familiar setting of fitting an unconditional model for \(p(\mathbf{Y})\) given observed data \(\{\mathbf{Y}_{i}\}\) -- here, there is generically only one observed value of \(\mathbf{Y}\mid\mathbf{X}\) corresponding to each observed value of \(\mathbf{X}\), and for most values of \(\mathbf{X}\) there are no observations at all. It is clear, then, that, for each value of \(\mathbf{X}\), we are certainly not in the large-data regime that would allow one to invoke asymptotic properties of maximum likelihood estimates. The quality of parametric conditional models (3) depends critically on how well \(\mathbf{\theta}(\mathbf{X}\,;\,\mathbf{w}^{*})\) represents how the distribution of \(\mathbf{Y}\mid\mathbf{X}\) varies with \(\mathbf{X}\). In particular, since MDNs employ a neural network to model \(\mathbf{\theta}(\mathbf{X})\), and neural networks are highly flexible models, it is common for MDNs to exhibit poor generalisation unless regularisation techniques are used. In the following section we employ a widely used regularisation technique known as early stopping (see e.g. Prechelt (2012)), wherein a small proportion of training data (referred to as the test set) are not used to inform steps in the optimisation scheme, but are instead used to track the evolution of an estimate of the model's generalisation error (the value of the loss function evaluated on data outside the training set). The guiding heuristic is that it is typical for the generalisation error of neural networks to reach a minimum as training progresses before increasing due to overfitting -- early stopping is a strategy where one terminates model training when the generalisation error is believed to have reached this minimum. Details of our implementation are given in the following section.
## 4 Application to single-particle statistics of the ocean near-surface
In this section we present an MDN model of the single-particle transition density (1) of ocean surface drifting buoys (drifters). The model's parameters are inferred from trajectory data collected as part of the Global Drifter Program (Lumpkin and Centurioni, 2019).
### Data
We use the Global Drifter Program quality-controlled 6-hour interpolated dataset, which includes positions (latitude and longitude) and sea-surface temperatures. Drifter velocity estimates are also provided, though these are obtained by simple finite-differencing of position
in time and subject to error. The raw measurements are treated according to the procedure of Hansen & Poulain (1996), which involves the removal of suspected spurious values and interpolation to regular 6-hour intervals. The interpolation method, which is a form of kriging (Hansen & Herman 1989), assumes contamination by an uncorrelated zero-mean noise and makes assumptions about the structure functions of the discretised position process. We leave as a caveat to our results that this preprocessing of the data could be questioned, and proceed taking the interpolated data as our ground truth. Only position observations are used in our modelling. Figure 2 shows how many observed displacements are recorded per square kilometre in each \(1^{\circ}\) latitude \(\times 1^{\circ}\) longitude square. These data were recorded between 1989 and 2021 and include a total of 23893 drifter trajectories. We split the data in two parts, by selecting approximately half (11946) of the drifter trajectories at random to use for creating the model, and set the remaining data aside for validation. The overall dataset contains over 18 million observations of 6-hour displacements.
In section 4.3 we perform a model comparison. Skill scores are computed for the full training and validation datasets with global coverage, as well as for three restricted regions, labelled \(A\), \(B\), and \(C\), shown in figure 3, having extents \(20-50^{\circ}\) W, \(30-50^{\circ}\) N; \(145-175^{\circ}\) E, \(20-40^{\circ}\) N; and \(110-130^{\circ}\) W, \(10^{\circ}\) S-\(10^{\circ}\) N.
### Model
The transition density is not modelled in the most general form. Instead, we (i) consider, at first, a fixed value of the time-lag \(\tau\), so that the transition density may be written
\[p(\mathbf{X}_{n+1}\mid\mathbf{X}_{n}), \tag{10}\]
where \(\mathbf{X}_{n}=\mathbf{X}(t_{0}+n\tau)\), and (ii) assume the process \(\mathbf{X}(t)\) is time-homogeneous, such that (1) is independent of the initial time \(t\), (10) is independent of \(n\) and
\[p(\Delta\mathbf{X}\mid\mathbf{X}_{0}), \tag{11}\]

where \(\Delta\mathbf{X}\) is the displacement of a drifter from its position at the previous timestep, denoted \(\mathbf{X}_{0}\). By assuming time-homogeneity we neglect the effects of seasonality and low-frequency variability in ocean dynamics. If, additionally, an assumption of Markovianity is made, then (10) is enough to construct a discrete-time Markov process model (\(\mathbf{X}_{n}\)) for drifter position (Pavliotis, 2014). For a Markov assumption to be accurate, the discretisation timescale \(\tau\) must be chosen appropriately. We choose a timescale of 4 days on the basis that the Lagrangian velocity decorrelation time (or integral timescale) at the surface was previously estimated from drifters to be approximately 2-3 days in all four ocean basins (Rupolo, 2007).

Figure 2: Count of drifter observations per square kilometre.
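Under these assumptions, drifter trajectories can be generated by iterating draws from the transition density. A minimal sketch, where `sample_displacement` (a draw from the modelled \(p(\Delta\mathbf{X}\mid\mathbf{X}_{0})\)) is an assumed helper:

```python
import numpy as np

# Simulate one trajectory of the discrete-time Markov process (X_n).
def simulate_trajectory(X0, n_steps, sample_displacement):
    X = np.empty((n_steps + 1, 2))
    X[0] = X0                                         # (longitude, latitude)
    for n in range(n_steps):
        X[n + 1] = X[n] + sample_displacement(X[n])   # one 4-day transition
    return X
```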
We model (11) using an MDN; see the schematic in figure 4. The model takes as input \(\mathbf{X}_{0}\), given in longitude-latitude coordinates, and its output is a Gaussian mixture distribution with \(N_{c}=32\) mixture components modelling \(\Delta\mathbf{X}\mid\mathbf{X}_{0}\), also in degrees of longitude and latitude from \(\mathbf{X}_{0}\). The neural network part of the model thus encodes
\[\mathbf{\theta}(\cdot)=\left\{\alpha_{i}(\cdot),\ \mathbf{\mu}_{i}(\cdot),\ \mathbf{C}_{i}( \cdot)\right\}_{i=1}^{N_{c}} \tag{12}\]
such that
\[\begin{split}\dot{p}(\Delta\mathbf{X}\mid\mathbf{X}_{0})=\sum_{i=1}^{N_{ c}}\alpha_{i}(\mathbf{X}_{0})\det\left(2\pi\mathbf{C}_{i}\left(\mathbf{X}_{0}\right) \right)^{-\frac{1}{2}}\\ \times\exp\left[-\frac{1}{2}\left(\Delta\mathbf{X}-\mathbf{\mu}_{i} \left(\mathbf{X}_{0}\right)\right)^{\mathrm{T}}\mathbf{C}_{i}^{-1}\left(\mathbf{X}_{0} \right)\ \left(\Delta\mathbf{X}-\mathbf{\mu}_{i}\left(\mathbf{X}_{0}\right)\right)\right],\end{split} \tag{13}\]
where \(\mathbf{\mu}_{i}\) and \(\mathbf{C}_{i}\) are the mean vector and covariance matrix of mixture component \(i\). The number of mixture components is a hyperparameter which could be optimised. We chose \(N_{c}=32\) on the basis that 32-component mixtures were found to be sufficiently expressive in trial experiments with MDNs.
Figure 3: Regions considered for model comparison in section 4.3.

The architecture chosen for the neural network is the standard multilayer perceptron, with six hidden (i.e. interior) layers. The first four hidden layers have 256 neurons and the remaining two have 512. The activation function \(\tanh(x)\) is applied to each of the hidden layers. Thus, the activity of hidden layer \(i\) is
\[\mathbf{h}_{i}=\tanh\left(W_{i}\mathbf{h}_{i-1}+\mathbf{b}_{i}\right) \tag{14}\]
for \(i>1\), and
\[\mathbf{h}_{1}=\tanh\left(W_{1}\mathbf{X}_{0}+\mathbf{b}_{1}\right). \tag{15}\]
Here, \(\mathbf{W}_{i}\in\mathbb{R}^{d_{i}\times d_{i-1}}\) and \(\mathbf{b}_{i}\in\mathbb{R}^{d_{i}}\) are the weight and bias parameters corresponding to the \(i^{\text{th}}\) layer, having \(d_{i}\) neurons. Note that \(\mathbf{w}=\{\mathbf{W}_{i},\mathbf{b}_{i}\}\). The final layer has custom activation functions designed to enforce the natural constraints on the components of \(\mathbf{\theta}\). In particular, the softmax activation function \(a_{\text{sm}}(\mathbf{x})=\exp(\mathbf{x})\,/\,\sum_{i}\exp(x_{i})\) is applied to the neural network outputs which correspond to the mixture component weights, \(\mathbf{\alpha}\), to ensure that these are positive and satisfy the constraint (7). Each covariance matrix \(\mathbf{C}_{i}\) is represented by the components of a lower triangular Cholesky factor; positivity of the diagonal elements is enforced by taking an exponential. When \(N_{c}=32\) we have \(\dim(\mathbf{\theta})=192\), and the total number of neural network parameters, i.e. weights and biases, is \(\dim(\mathbf{w})=690,880\). We train the model by minimising the negative log likelihood loss function (8) using the Adam algorithm (Kingma & Ba, 2015). We note that the number and widths of the hidden layers are further hyperparameters, which we chose after experimentation with test problems. We do not attempt to find optimal values for these in this work.
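For concreteness, this construction can be sketched in a few lines of TensorFlow Probability (the library used for our implementation, as noted below). The helper names are ours; parameterising the weights through categorical logits is equivalent to applying the softmax described above, and with the stated layer widths the sketch reproduces the parameter count \(\dim(\mathbf{w})=690,880\).

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd, tfb = tfp.distributions, tfp.bijectors

N_C, DIM = 32, 2                            # mixture components; (lon, lat)
# Per component: 1 weight + 2 means + 3 Cholesky entries = 6 parameters.
N_PARAMS = N_C * (1 + DIM + DIM * (DIM + 1) // 2)

def make_mdn():
    inputs = tf.keras.Input(shape=(DIM,))   # standardised X_0
    h = inputs
    for width in (256, 256, 256, 256, 512, 512):   # six tanh hidden layers
        h = tf.keras.layers.Dense(width, activation="tanh")(h)
    return tf.keras.Model(inputs, tf.keras.layers.Dense(N_PARAMS)(h))

def mixture_from(theta):
    """Map raw network outputs to a Gaussian mixture over displacements."""
    logits, means, chol = tf.split(
        theta, [N_C, N_C * DIM, N_C * DIM * (DIM + 1) // 2], axis=-1)
    means = tf.reshape(means, (-1, N_C, DIM))
    # Fill lower-triangular Cholesky factors, exponentiating the diagonal
    # so that each covariance matrix is positive definite.
    chol = tfb.FillScaleTriL(diag_bijector=tfb.Exp())(
        tf.reshape(chol, (-1, N_C, DIM * (DIM + 1) // 2)))
    return tfd.MixtureSameFamily(
        mixture_distribution=tfd.Categorical(logits=logits),
        components_distribution=tfd.MultivariateNormalTriL(
            loc=means, scale_tril=chol))

def nll(y, theta):
    """Negative log likelihood loss, eq. (8), averaged over the batch."""
    return -tf.reduce_mean(mixture_from(theta).log_prob(y))

model = make_mdn()
model.compile(optimizer=tf.keras.optimizers.Adam(), loss=nll)
# model.fit(X0_train, dX_train, validation_data=(X0_test, dX_test),
#           callbacks=[tf.keras.callbacks.EarlyStopping(patience=50)],
#           epochs=1000)   # early stopping as described in this section
```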
As is common in machine learning, we standardise the data before training (LeCun et al., 2012), that is we transform both the input data, \(\{\mathbf{X}_{0i}\}\), and output data, \(\{\Delta\mathbf{X}_{i}\}\), separately, by subtracting the mean of the training data and dividing each component by its standard deviation in the training data, so that each component of the transformed data has zero mean and unit variance. While theoretical justifications for this practice are lacking or unsatisfactory, we found that it did improve noticeably the numerical stability of the optimisation procedure. In any case, the transformation that we apply is invertible, although care must be taken to correctly invert the rescaling of the transition density. For example, if we denote the standardised variables by \(\widehat{\mathbf{X}_{0}}\) and \(\widehat{\Delta\mathbf{X}}\), then the model approximates \(p(\widehat{\Delta\mathbf{X}}\mid\widehat{\mathbf{X}_{0}})\), and we can recover the transition density with the correct units as
\[p(\Delta\mathbf{X}\mid\mathbf{X}_{0})=\frac{p\left(\widehat{\Delta\mathbf{X}}\mid\widehat {\mathbf{X}_{0}}\right)}{\widehat{\text{std}}(\Delta X)\,\widehat{\text{std}}( \Delta Y)}, \tag{16}\]
where \(\widehat{\text{std}}(\,\cdot\,)\) denotes the sample standard deviation among the training data.
Figure 4: Schematic of the MDN model of the single-particle transition density of drifters.
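The standardisation and the inverse rescaling (16) amount to a few lines (a sketch; the training arrays and the callable `log_p_hat`, the model's log density in standardised variables, are assumptions):

```python
import numpy as np

# Rescale the learned log density back to physical units via eq. (16).
def rescaled_log_density(X0, dX, X0_train, dX_train, log_p_hat):
    mu_X0, sd_X0 = X0_train.mean(0), X0_train.std(0)
    mu_dX, sd_dX = dX_train.mean(0), dX_train.std(0)
    X0_hat = (X0 - mu_X0) / sd_X0            # standardised input
    dX_hat = (dX - mu_dX) / sd_dX            # standardised output
    # Change of variables: dividing the density by the product of the
    # output standard deviations subtracts their log from the log density.
    return log_p_hat(X0_hat, dX_hat) - np.log(sd_dX).sum()
```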
One aspect of neural networks that is particularly relevant to the problem at hand is that they struggle to represent periodic functions (Liu et al., 2020). Given that we operate in longitude-latitude coordinates, a model of the transition density ought to be periodic in longitude. However, since the neural network model receives the initial position \(\mathbf{X}_{0}\) as simply a vector in \(\mathbb{R}^{2}\), the concept of a spherical domain is not built in to the representation. Indeed, the MDN model produces discontinuities in \(p(\Delta\mathbf{X}\mid\mathbf{X}_{0})\) at the dateline due to model error on either side. To improve continuity at the dateline we employ a crude technique, wherein we replicate the data twice, once shifted by \(360^{\circ}\) longitude west, and once shifted by \(360^{\circ}\) east.
The model is implemented in Python using TensorFlow (TensorFlow Developers, 2021) and TensorFlow Probability (TensorFlow Probability Developers, 2021) and trained using four NVIDIA Tesla V100 16GB GPUs in parallel. Of the 50% of data used to construct the model, 90%, again chosen randomly, was used to inform iterations of the optimisation procedure, and 10% was used for early stopping -- we refer to these portions of the data as the training and test sets, respectively. Training took approximately 90 minutes. The evolution during training of the loss function on the training and test sets is shown in figure 5 as a function of epoch. An epoch is the number of iterations taken for all data to be used once in the Adam algorithm. The stopping criterion used for the optimisation, an example of early stopping, was that the test loss had not decreased in the previous 50 epochs.
Figure 5: Evolution of the training and test loss in the MDN model during optimisation. The loss shown is the mean negative log likelihood per datapoint (i.e. a normalised form of (8)) in terms of the standardised variables \(\widehat{\mathbf{X}_{0}}\) and \(\widehat{\Delta\mathbf{X}}\).

### Model evaluation and comparison

Since the MDN model is probabilistic, its performance should be assessed using skill scores for probabilistic forecasts, as opposed to performance metrics commonly used for deterministic models, such as the mean squared error. As discussed above, minimising the negative log likelihood is equivalent to maximising the log score, since this is exactly the log likelihood. The log score has attractive properties, namely strict propriety (Brocker and Smith, 2007) and locality (Du, 2021). Indeed it is the only smooth local strictly proper scoring rule for continuous variables up to affine transformation (Bernardo, 1979). A scoring rule is strictly proper if its expectation (with respect to data) is maximised uniquely by the correct/perfect model (assuming it exists). A scoring rule is local if it is a function only of the value of the forecast probability distribution evaluated at the observed data, and does not depend for example on other features of the forecast distribution, such as its shape. For validation purposes we can compute the log score on our validation data set. However, while the value of the log score can be easily interpreted in the case of forecasts of discrete/categorical variables, its value in the case of continuous variables is not immediately meaningful, since it refers to probability density, which has dimensions inverse to the area of its support, meaning that the scale of the log score is problem-dependent. On the other hand, the log score can be more easily interpreted when used as a relative score between models -- in particular, the mean difference of log scores between models reflects the average additional probability the first model places on observed outcomes compared to the other model, measured in units of information, nats (or shannons when \(\log_{2}\) is used in the definition of the score). The difference of log scores is invariant under smooth transformations of the forecast variable (Du, 2021); this means, in particular, that differences in log scores are unaffected by a change of units. Thus, in order to evaluate the MDN model we compare it with alternative models. We describe two alternative models, one used extensively in the literature, and one proposed here as a simple but reasonable alternative. We also compare with a simplified version of the MDN model, which features only one mixture component, i.e. for which \(N_{c}=1\). The log score of all models is computed on both training and validation data to assess relative performance.
#### 4.3.1 Transition matrix model
Previous work (Maximenko et al., 2012; van Sebille et al., 2012; Miron et al., 2017, 2021) modelled drifter dynamics with a discrete-time Markov chain using Ulam's method (Ulam, 1960; Froyland, 2001). This requires discretising space into bins \(\{B_{i}\}\) and estimating the transition matrix
\[P_{ij}=\mathbb{P}(\mathbf{X}_{n+1}\in B_{j}\mid\mathbf{X}_{n}\in B_{i}), \tag{17}\]
which is the discrete analogue of the transition density (10). Indeed the primary difference between a Markov chain model and our Markov process model is that ours is continuous in space. The elements of the transition matrix are usually estimated by the standard approach sketched in figure 1, where we have \(\mathbf{Y}=\mathbf{X}_{n+1}\) and \(f(\mathbf{Y})=1_{B_{j}}(\mathbf{X}_{n+1})\) -- this corresponds to the maximum-likelihood estimate for each \(P_{ij}\) and, hence, maximises the log score on the training dataset. Note that the transition matrix can be used to construct a corresponding transition density which is piecewise constant on gridcells in \(\mathbf{X}_{n}\) and \(\mathbf{X}_{n+1}\) via
\[p(\Delta\mathbf{X}\mid\mathbf{X}_{0})=\frac{P_{ij}}{A(B_{j})},\quad\text{ when }\mathbf{X}_{0}\in B_{i},\ \mathbf{X}_{0}+\Delta\mathbf{X}\in B_{j}, \tag{18}\]
where \(A(B_{j})\) is the area2 of \(B_{j}\). This is important for allowing comparison with models which are continuous in space.
Footnote 2: For consistency with the transition density as given by the MDN model, these areas must be calculated in terms of the same variables, i.e. degrees longitude by degrees latitude.
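A maximum-likelihood estimate of (17) on a regular grid takes only a few lines (a sketch; the array names are ours, and a sparse representation is used since a \(1^{\circ}\) global grid has \(360\times 180=64800\) cells):

```python
import numpy as np
from scipy import sparse

RES = 1.0                                     # grid cell side length (deg)
N_LAT = int(180 / RES)

def bin_index(X):
    i = np.floor((X[:, 0] + 180.0) / RES).astype(int)   # longitude bin
    j = np.floor((X[:, 1] + 90.0) / RES).astype(int)    # latitude bin
    return i * N_LAT + j                                # flattened index

def transition_matrix(X_n, X_np1, n_bins=int(360 / RES) * N_LAT):
    src, dst = bin_index(X_n), bin_index(X_np1)
    counts = sparse.coo_matrix((np.ones(len(src)), (src, dst)),
                               shape=(n_bins, n_bins)).tocsr()
    row_sums = np.asarray(counts.sum(axis=1)).ravel()
    inv = sparse.diags(np.where(row_sums > 0,
                                1.0 / np.maximum(row_sums, 1.0), 0.0))
    return inv @ counts    # row-stochastic wherever a bin was visited
```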
An advantage of Markov chain models is that analysis of their long time behaviour is straightforward -- the left and right eigenvectors of the transition matrix can be studied to identify almost-invariant sets, as in Miron et al. (2017). This has been called the eigenvector method (Froyland et al., 2014). The extension of this analysis to the continuous-space setting
using our model, which we leave for future work, requires the calculation of eigenfunctions of the relevant Perron-Frobenius operator \(\mathcal{P}\), which acts on probability density functions to evolve them forward in time, such that
\[p(\mathbf{X}_{n+1}) =\mathcal{P}(p(\mathbf{X}_{n})) \tag{19}\] \[:=\int_{\Omega}p(\mathbf{X}_{n})\,p(\mathbf{X}_{n+1}\mid\mathbf{X}_{n})\, \mathrm{d}\mathbf{X}_{n}. \tag{20}\]
Alternatively, it is worth noting that the MDN model can be used to construct a transition matrix, by numerical integration of the transition density, that is by computing numerically
\[P_{ij}=\int_{B_{i}}\int_{B_{j}}p(\mathbf{X}_{n+1}\mid\mathbf{X}_{n})\,\mathrm{d}\mathbf{X}_{ n+1}\,\mathrm{d}\mathbf{X}_{n}. \tag{21}\]
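The integral (21) is easily approximated for a given pair of cells by simple Monte Carlo (a sketch; `transition_density` wraps the trained MDN and is an assumed helper; note that we average over \(B_{i}\), i.e. divide by its area, so that each row of the resulting matrix sums to one):

```python
import numpy as np

# Monte Carlo estimate of the cell-to-cell transition probability (21).
def P_ij(lo_i, hi_i, lo_j, hi_j, transition_density, n=10_000, seed=0):
    rng = np.random.default_rng(seed)
    X0 = rng.uniform(lo_i, hi_i, size=(n, 2))   # uniform samples in B_i
    X1 = rng.uniform(lo_j, hi_j, size=(n, 2))   # uniform samples in B_j
    area_j = np.prod(np.asarray(hi_j) - np.asarray(lo_j))
    return area_j * transition_density(X1 - X0, X0).mean()
```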
In figure 6 we show the log transition density \(\log p(\Delta\mathbf{X}\mid\mathbf{X}_{0})\) derived from the transition matrix model via (18) for two different initial positions \(\mathbf{X}_{0}\). The first is located within the core of the Gulf Stream at \(34.85^{\circ}\) N, \(74.50^{\circ}\) W, and the second is just outside the Gulf stream at \(33.67^{\circ}\) N, \(72.55^{\circ}\) W. Notice that in each case the support of the density is the set of grid cells to which transitions were observed in the training data. In other words, transitions to other grid cells have probability zero under the model. We return to this point in section 4.3.3.
#### 4.3.2 Gaussian transitions with gridded parameters (GTGP)
A simple model for the transition density (11) is that, given initial positions \(\mathbf{X}_{0}\), transitions are conditionally Gaussian with conditional mean and covariance given by functions of \(\mathbf{X}_{0}\) which are piecewise constant on grid cells, i.e.
\[\Delta\mathbf{X}\mid\mathbf{X}_{0}\sim\mathcal{N}\left(\mathbf{\mu}\left(\mathbf{X}_{0}\right),\,\mathbf{C}\left(\mathbf{X}_{0}\right)\right), \tag{22}\]
with \(\mathbf{\mu}\left(\mathbf{X}_{0}\right)\) and \(\mathbf{C}\left(\mathbf{X}_{0}\right)\) piecewise constant in \(\mathbf{X}_{0}\). The parameters \(\mathbf{\mu}\) and \(\mathbf{C}\) are estimated by sorting the observations into bins and computing the sample mean and sample covariance for each bin. The sample mean is the maximum likelihood estimate of \(\mathbf{\mu}\), while the sample
Figure 6: Maps of the log transition probability density function, \(\log p(\Delta\mathbf{X}\mid\mathbf{X}_{0})\), for initial positions, \(\mathbf{X}_{0}\), (a) in the Gulf Stream (\(34.85^{\circ}\) N, \(74.50^{\circ}\) W), and (b) adjacent to the Gulf Stream (\(33.67^{\circ}\) N, \(72.55^{\circ}\) W), derived from the transition matrix model with \(\tau=4\) days via (18). Yellow dots indicate \(\mathbf{X}_{0}\).
covariance differs from the maximum likelihood estimate of \(\mathbf{C}\) only by a factor of \(\frac{N-1}{N}\approx 1\), where \(N\) is the number of training data in the given bin. Hence, the parameter estimates used are very close to those which maximise the log score on training data.
Figure 7 shows the mean of displacements from a GTGP model with a regular \(1^{\circ}\times 1^{\circ}\) longitude-latitude grid and \(\tau=4\) days, as a function of initial position.
Figure 7: Mean of displacements from the GTGP model, with \(\tau=4\) days, as a function of initial position.

#### 4.3.3 Model scores

We compute skill scores for the full training and validation datasets with global coverage, and for regions \(A\), \(B\), and \(C\). For both the transition matrix and GTGP models it is necessary to choose a spatial discretisation; herein we consider only square latitude-longitude grids, so that the only parameter to be chosen is the grid cell side length. This choice affects their performance. If a relatively high resolution discretisation is used, the models attain relatively high scores in training, but generalise poorly, as reflected in poor scores on validation data. In the case of the GTGP model, an issue arises when validation data falls in grid cells not visited by drifters in the training set, since sample means and covariances cannot be estimated in bins where data is absent. As a simple solution, we set the value of \(\mathbf{\mu}\) and \(\mathbf{C}\) on unvisited grid cells equal to a global (or regional) estimate. A flaw of the transition matrix model, with the transition matrix estimated as discussed above, is that validation data can have zero probability under the model, and hence achieve a log score of minus infinity. This situation is avoided by taking sufficiently large grid cells, but this leads to exceptionally low scores. On the other hand, a validation score can be computed with smaller grid cells if one is prepared to simply discard validation data which have zero probability under the model. This seems overly generous, as the transition matrix model will be scored increasingly highly as the grid cell size is reduced to zero and an increasing number of the validation data are neglected. As a compromise, we fix the grid cell size for the transition matrix model to \(1^{\circ}\times 1^{\circ}\), the resolution used in some previous studies (van Sebille et al., 2012) where the transition matrix model was used, and discard validation data with zero probability -- the proportion of validation data discarded was 7% globally, and 2%, 9% and 4% in regions \(A\), \(B\) and \(C\), respectively. For the GTGP model the grid cell size was optimised to maximise validation scores using grid search cross validation: a common procedure which amounts to trying a range of values of a model hyperparameter (in this case the grid cell size) and choosing the value which optimises the validation score. The optimal grid cell size found ranged from \(1.1^{\circ}\) in region A to \(5^{\circ}\) globally. The scores are presented in table 1. In all regions the MDN models outperform the alternatives, with the 32-component model achieving slightly higher scores than the single-component model. Note that the scores reported happen to be negative; this is not by convention, but instead reflects that log probability densities are often negative. A higher score is a better score.
### Results
Once trained, the model can be used in at least two ways: (i) to derive estimates of single-particle displacement statistics, and (ii) to simulate drifter trajectories. However, we first ex
\begin{table}
\begin{tabular}{l l|c c c c} & & TM & GTGP & MDN1 & MDN32 \\ \hline Global: & Training & \(-1.61\) & \(-1.20\) & \(-1.07\) & \(-1.02\) \\ & Validation & \(-1.74\) & \(-1.30\) & \(-1.14\) & \(-1.11\) \\ \(A\): & Training & \(-1.78\) & \(-1.25\) & \(-1.35\) & \(-1.30\) \\ & Validation & \(-1.89\) & \(-1.42\) & \(-1.40\) & \(-1.35\) \\ \(B\): & Training & \(-1.95\) & \(-1.90\) & \(-1.91\) & \(-1.85\) \\ & Validation & \(-2.16\) & \(-2.01\) & \(-2.01\) & \(-1.96\) \\ \(C\): & Training & \(-1.93\) & \(-1.59\) & \(-1.63\) & \(-1.56\) \\ & Validation & \(-2.00\) & \(-1.66\) & \(-1.61\) & \(-1.57\) \\ \end{tabular}
\end{table}
Table 1: Training and validation scores for the transition matrix and GTGP models, as well as the single-component MDN and full 32-component MDN models, in each of the regions considered (see the map in figure 3). The scores are the mean log score (i.e. mean log likelihood) per datapoint calculated in terms of the variables \(\mathbf{X}_{0}\) and \(\Delta\mathbf{X}\) in their original degrees longitude/latitude units.
In figure 8 we show the log transition density \(\log p(\Delta\mathbf{X}\mid\mathbf{X}_{0})\) for two different initial positions \(\mathbf{X}_{0}\). In the first case, where \(\mathbf{X}_{0}\) is located within the core of the Gulf Stream at \(34.85^{\circ}\) N, \(74.50^{\circ}\) W, the transition density is strongly non-Gaussian, with contours extending roughly to the south and northeast, showing the influence of the Gulf Stream on drifters. In the second case, where \(\mathbf{X}_{0}\) is just outside the Gulf Stream at \(33.67^{\circ}\) N, \(72.55^{\circ}\) W, the transition density is closer to Gaussian.
In order to quantify the extent to which the transition density deviates from being Gaussian, and how this varies from one region of the ocean to another, we computed the Kullback-Leibler (KL) divergence3 of the single-component MDN model, which is Gaussian, from the full 32-component model as a function of initial position. The result is shown in figure 9. Note that, since a closed-form expression for the KL divergence between two Gaussian mixtures is not known (Cui & Datcu, 2015), we provide simple Monte Carlo estimates based on 5000 samples at each of the vertices of a \(1^{\circ}\times 1^{\circ}\) grid. Where the KL divergence is zero, the two models agree exactly, indicating that displacements are Gaussian. The larger the KL divergence is, the greater the disagreement between the models, and the further from Gaussian the full model is. As a point of reference for interpreting the magnitude of the KL divergence, note that if \(Z_{0}\sim\mathcal{N}(m_{0},\,1)\) and \(Z_{1}\sim\mathcal{N}(m_{1},\,1)\), then, writing their pdfs as \(p_{0}\) and \(p_{1}\), \(D_{\text{KL}}(p_{1}\parallel p_{0})=(m_{1}-m_{0})^{2}/2\). Non-Gaussianity of displacements is likely due primarily to inhomogeneity of ocean velocities -- drifters can explore a range of flow statistics as they move, and the convolved effects of these are reflected in observed displacements. An alternative explanation is that the underlying velocity field is non-Gaussian -- evidence of non-Gaussian velocities in the North Atlantic has been presented by Bracco et al. (2000) and LaCasce (2005) on the basis of observations from both subsurface current meters and subsurface floats.
Footnote 3: The KL divergence of \(p\) from \(q\), also known as the relative entropy, defined as \(D_{\text{KL}}(q\parallel p)=\int q(x)\,\log\frac{q(x)}{p(x)}\,\mathrm{d}x\), is a measure of the divergence of a probability density \(p\) from a reference probability density \(q\) -- often interpreted as the amount of information lost when \(p\) is used to approximate \(q\).
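The Monte Carlo estimator itself is short; a sketch, where the sampler for the 32-component model and the two log-density functions are assumed to exist under these (our own) names:

```python
import numpy as np

def kl_mc(sample_full, log_p_full, log_p_gauss, n=5000):
    """Monte Carlo estimate of D_KL(full || Gaussian) at one grid vertex.

    Draws n displacements from the full 32-component model and averages
    log q(x) - log p(x), i.e. the defining expectation under q.
    """
    dx = sample_full(n)
    return float(np.mean(log_p_full(dx) - log_p_gauss(dx)))
```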
As can be seen in figure 8, the model assigns nonzero probability to drifter displacements intersecting land. This is unavoidable given that the support of the assumed parametric form, that of a Gaussian mixture, extends to infinity; moreover, this may not be entirely spurious, given that some drifters do run aground. Lumpkin et al. (2012) reevaluated drifter data to study the causes of drifter deaths. They concluded that approximately 27% of drifter deaths were due to running aground, with a further 10% being picked up by humans, and the remainder failing due to internal faults. Outside of coastal regions this issue is unlikely to have a strong effect on the estimates of displacement statistics considered in section 4.4.1. The implications for drifter simulations are discussed further in section 4.4.2.
Figure 8: Maps of the log transition density, \(\log p(\Delta\mathbf{X}\mid\mathbf{X}_{0})\), for initial positions, \(\mathbf{X}_{0}\), (a) in the Gulf Stream (\(34.85^{\circ}\) N, \(74.50^{\circ}\) W), and (b) adjacent to the Gulf Stream (\(33.67^{\circ}\) N, \(72.55^{\circ}\) W), derived from the MDN model with \(\tau=4\) days. Yellow dots indicate \(\mathbf{X}_{0}\).
#### 4.4.1 Displacement statistics
In this section we present maps of single-particle statistics derived from the model. As a first example, we show the mean of displacements over the 4-day time increment of our model. We further provide global estimates of lateral diffusivity.
Figure 10 shows the mean of drifter displacements as a function of initial position. While the output of the model is in longitude-latitude coordinates \((\lambda,\phi)\), we apply a simple conversion to kilometres based on a local tangent-plane approximation
\[\Delta X =R\,\Delta\phi \tag{23a}\] \[\Delta Y =R\,\Delta\lambda\,\cos\phi_{0}, \tag{23b}\]
where \(R\) is the radius of the Earth at the equator. The imprint of several features of the surface dynamics, such as the western boundary currents and equatorial (counter) currents, is clear.
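In code, and following the convention of equation (23) as printed, the conversion is straightforward (a sketch; the equatorial radius value is our own choice of constant, and angles are converted from degrees to radians before scaling):

```python
import numpy as np

R_KM = 6378.137  # equatorial radius of the Earth in km (assumed value)

def displacement_km(dlon_deg, dlat_deg, lat0_deg):
    """Tangent-plane conversion of Eq. (23): degree displacements to km."""
    dX = R_KM * np.deg2rad(dlat_deg)                                 # Eq. (23a)
    dY = R_KM * np.deg2rad(dlon_deg) * np.cos(np.deg2rad(lat0_deg))  # Eq. (23b)
    return dX, dY
```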
Figure 9: Kullback-Leibler divergence of the single-component MDN model from the full 32-component MDN model, as a function of initial position. Larger values indicate stronger deviations from Gaussianity in displacements.
Figure 10: Mean of displacements from the MDN model, with \(\tau=4\) days, as a function of initial position.
For the sake of comparison with previous work, we consider the estimation of lateral diffusivity from our model, though we emphasise that by modelling the full transition density, we provide a more accurate description of Lagrangian statistics than can be captured by the familiar advective-diffusive model of dispersion put forward by Davis (1987, 1991). The estimation of ocean diffusivity by various methods has been the subject of numerous papers (Oh et al. 2000, Zhurbas & Oh 2003, 2004, Klocker et al. 2012, Abernathey & Marshall 2013, Klocker & Abernathey 2014, Ying et al. 2019). The estimation of diffusivity from drifter displacements is straightforward only when there exists a suitable sampling time, which is larger than the time for drifter velocities to decorrelate, i.e. for drifter motion to become diffusive, and such that the scale of drifter displacements over that time scale is small relative to spatial variations in the diffusivity. In this case a simple estimate of the lateral diffusivity tensor \(\mathbf{K}(\mathbf{x})\) is
\[\mathbf{K}(\mathbf{x})=\frac{1}{2\,\tau}\,\text{Cov}(\Delta\mathbf{X}\mid\mathbf{X}_{0}=\mathbf{x}), \tag{24}\]
where \(\tau\) is the suitably chosen time scale, and the conditional covariance is estimated by either one of the approaches sketched in figure 1. Unfortunately, such a time scale may not exist in the ocean, and, if it does exist, it likely varies in space, making its determination difficult. This challenge has been borne out in previous studies (LaCasce et al. 2014, Zhurbas et al. 2014). Oh et al. (2000) proposed a method to circumvent the issues created by inhomogeneity: isolating the cross-flow component of the displacement covariance, identified by the minor principal component (the smaller eigenvalue of the displacement covariance), and using it to provide a scalar estimate of diffusivity, since the cross-flow component is less affected by shear in the mean flow. In figure 11(a) we provide a similar estimate, derived from the MDN model with \(\tau=4\) days,
\[K(\mathbf{x})=\frac{1}{2\,\tau}\,\lambda_{2}(\mathbf{x}), \tag{25}\]
where \(\lambda_{2}(\mathbf{x})\) is the smallest eigenvalue of
\[\text{Cov}(\Delta\mathbf{X}\mid\mathbf{X}_{0}=\mathbf{x})=\sum_{i}\alpha_{i}\mathbf{C}_{i}+\sum_{i}\alpha_{i}\left(\mathbf{\mu}_{i}-\bar{\mathbf{\mu}}\right)\left(\mathbf{\mu}_{i}-\bar{\mathbf{\mu}}\right)^{T},\qquad\bar{\mathbf{\mu}}=\sum_{j}\alpha_{j}\mathbf{\mu}_{j}. \tag{26}\]
The result agrees very well with estimates provided by Zhurbas & Oh (2004) for the Atlantic and Pacific oceans. Figure 11(b) shows the difference between estimates of the form (25) with \(\tau=14\) days and \(\tau=4\) days, respectively. In many areas, the diffusivity estimates are slightly amplified by taking a larger time lag \(\tau\), with greater differences visible in some particularly energetic regions; however, the effect is much weaker than that observed with analogous along-flow diffusivity estimates derived from the largest eigenvalue of the displacement covariance matrix.
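Equations (25)-(26) are straightforward to evaluate from the MDN outputs at a given initial position; a minimal sketch, assuming arrays of mixture weights, means and covariances already converted to km and days:

```python
import numpy as np

def cross_flow_diffusivity(alpha, mu, C, tau=4.0):
    """Scalar diffusivity of Eq. (25) from the mixture covariance of Eq. (26).

    alpha: (K,) mixture weights; mu: (K, 2) component means; C: (K, 2, 2)
    component covariances, all conditional on X0 = x. Returns lambda_2 / (2 tau).
    """
    mu_bar = np.einsum('k,ki->i', alpha, mu)             # mixture mean
    dev = mu - mu_bar
    cov = (np.einsum('k,kij->ij', alpha, C)              # within-component part
           + np.einsum('k,ki,kj->ij', alpha, dev, dev))  # between-component part
    return np.linalg.eigvalsh(cov)[0] / (2.0 * tau)      # smallest eigenvalue
```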
Leaving aside the challenges of estimating diffusivity from displacements, which are common to all methods, we highlight at this point some advantages of our approach. Using the MDN model, trained with maximum likelihood and effectively regularised by the use of early-stopping, removes the difficulty of tuning the resolution of bins. Instead, the effective resolution of our mean displacement and diffusivity estimates is set automatically by the resolution of the data, and is free to vary optimally in space. This allows us to produce at once global estimates, which resolve well-sampled flow features very well and are forgiving in regions where data is relatively sparse, with the exception of very high-latitude regions, where there is simply no data to constrain the model.
#### 4.4.2 Drifter simulations
In this section we demonstrate the simulation of drifter trajectories using the MDN model as the basis for a discrete-time Markov process. In a discrete-time setting, assuming Markovianity means assuming that \(p(\mathbf{X}_{n+1}\mid\mathbf{X}_{n},\,\mathbf{X}_{n-1},\,\cdots)=p(\mathbf{X}_{n+1}\mid\mathbf{X}_{n})\). In this case, sampling trajectories amounts to repeatedly sampling displacements in sequence according to the transition density, since, given the current position, displacements are statistically independent of previous positions.
Figure 11: Global estimate of lateral diffusivity derived from the MDN model of the transition density.
A complication of simulating drifters in this way is that, for reasons discussed above, drifters can hit land. In this work we do not attempt to model the beaching of drifters, since it is not clear that the Global Drifter Program dataset contains sufficient reliable information -- in particular, it remains a challenge to determine whether drifters have run aground or not (Lumpkin et al., 2012). To exclude the possibility of running aground in our drifter simulations we implement a simple rejection sampling scheme, wherein displacements sampled from the transition density which would bring a drifter onto land are rejected, and a new displacement is sampled until one which keeps the drifter in the ocean is drawn. This amounts to sampling according to the conditional density \(p(\Delta\mathbf{X}\mid\mathbf{X}_{0},\mathbf{X}_{0}+\Delta\mathbf{X}\not\in\text{land})\), and is equivalent to the standard practice when using transition matrix models of restricting the domain considered to the ocean and normalising probability estimates correspondingly. To determine whether a proposed new position is on land, we check intersection with a 110m-resolution land mask.
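The rejection scheme itself is short; a sketch, in which `sample_displacement` (drawing from the MDN transition density) and `on_land` (querying the land mask) are assumed to be provided elsewhere:

```python
def step_with_land_rejection(x0, sample_displacement, on_land, max_tries=1000):
    """One Markov step sampled from p(dX | X0, X0 + dX not on land)."""
    for _ in range(max_tries):
        x1 = x0 + sample_displacement(x0)  # propose a displacement
        if not on_land(x1):
            return x1                      # accept the first ocean-bound proposal
    raise RuntimeError("no ocean-bound displacement drawn; check the land mask")
```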
We simulated the evolution of a set of drifters initialised on the vertices of a \(2^{\circ}\times 2^{\circ}\) grid for a period of 10 years. Note that the evolution of each drifter is simulated independently. This means that multi-particle statistics that would characterise the joint evolution of drifters released simultaneously in the ocean are not represented and, in particular, that the current model is not appropriate for simulating the release of a cloud of tracer particles on short time scales; however, it can be expected to represent the behaviour of drifters or buoyant tracers over large spatial and temporal scales. Similar experiments, carried out by Maximenko et al. (2012) and van Sebille et al. (2012) using transition matrix models trained on Global Drifter Program data, studied the clustering of simulated drifters due to near-surface convergence and the formation of so-called garbage patches, including the North Pacific Garbage Patch (Moore et al., 2001) and others corresponding to the other subtropical ocean gyres. The simulations of van Sebille et al. (2012) showed a further cluster in the Barents Sea which formed only after several decades.
The results of our model simulation are largely in agreement with these previous studies. The distribution of the simulated drifters is shown in figure 12 at the beginning of the simulation and after one, three, and ten years of evolution under the MDN model. After one year the drifters have become relatively sparse in equatorial regions. After three years clusters in the subtropical gyres have begun to appear, and after ten years, these are very well defined. Smaller clusters are also seen to appear, notably in the North Sea and in the seas south of Papua, as well as in some high-latitude regions including along the west coast of Greenland and off Antarctica around \(100\)-\(130^{\circ}\) E. Validating these clusters, that is, assessing whether marine debris is likely to accumulate in these areas, is difficult, because in situ observations remain sparse (Ryan et al., 2009). It may be that the dynamics in these regions, which are poorly sampled by GDP drifters, are simply under-resolved by the MDN model, leading to spurious convergence zones. We note, for example, in figure 10 that mean displacements do not appear to represent the detail of known currents in the southern portion of the North Sea, which is not visited by drifters in the GDP data (see figure 2). In general, as is true for any data-driven model, caution should be exercised when interpreting model outputs in regions where data is lacking.
## 5 Conclusions
This work demonstrates the use of conditional density estimation, and, in particular, stochastic neural networks, in a fluid dynamical problem, namely that of diagnosing single-particle
statistics from trajectory data. We show how such probabilistic models are useful both as emulators, and as an indirect means of estimating conditional statistics. By operating in the framework of probabilistic modelling we are able to appeal to the extensive literature on statistical inference, probabilistic forecasting, model comparison and validation, and thereby avoid ad hoc choices of loss functions and performance metrics. Our model is compared, using a probabilistic scoring rule, to alternative models, including a Markov chain model used extensively in the literature, and is shown to outperform these, both globally and in three specific regions.
By modelling the single-particle transition density of surface drifters, we gain estimates of a range of conditional statistics simultaneously, which capture the occurrence of strongly non-Gaussian statistics in some areas of the ocean. We provide global maps of mean displacement and lateral diffusivity, but emphasise that these examples provide only a limited summary of the information contained in the transition density; further statistics, including higher moments of displacements, can readily be computed from our model. Interpreted as the basis for a discrete-time Markov process, our model is also used to simulate the evolution of a set of drifters seeded globally on a uniform grid, and shows the emergence of clusters of drifters in the subtropical gyres, in agreement with previous work on the formation of garbage patches.
The approach espoused in this work is equally applicable to other problems in fluid dynamics and oceanography. One example is the estimation of structure functions from either Eulerian or Lagrangian velocity data. Another is the estimation of multi-particle statistics, such as relative dispersion, via modelling of multi-particle transition densities. Yet another is the learning of stochastic parameterisations in climate/atmosphere/ocean models. Guillaumin & Zanna (2021) made progress on the parameterisation of subgrid momentum forcing in an ocean model with a single-component MDN model, but the approach is applicable more broadly, e.g. to the parameterisation of subgrid transport.
Figure 12: Histograms of simulated drifters initially and after one, three, and ten years of evolution under the MDN model, respectively.
In this work we have largely neglected the need to quantify uncertainty in model parameters and to incorporate prior knowledge in our modelling. These needs would be met by a Bayesian approach, where, instead of estimating parameters by maximum likelihood, we apply Bayesian inference to obtain posterior distributions on parameters, which account for prior knowledge. Indeed, all of the results presented herein would benefit from uncertainty quantification. In the case of conditional statistics, a Bayesian approach would, e.g., allow us to identify where there is not enough data to inform reliable estimates of lateral diffusivity; and in general, incorporating prior knowledge may help to regularise our model of the transition density, so that, in the case of drifter simulations, spurious convergence zones can be avoided. The application of Bayesian inference to MDNs remains challenging, but we consider this an important future direction.
**Acknowledgements.** I am grateful to Jacques Vanneste, James Maddison and Aretha Teckentrup for their advice, input and overall support of this work. I also thank Dhruv Balwada for helpful discussions. Thanks are also due to the reviewers for their suggestions which have improved the manuscript.
**Funding**. The author was supported by the MAC-MIGS Centre for Doctoral Training under EPSRC grant EP/S023291/1. This work used the Cirrus UK National Tier-2 HPC Service at EPCC (www.cirrus.ac.uk) funded by the University of Edinburgh and EPSRC (EP/P020267/1).
**Declaration of interests.** The author reports no conflict of interest.
**Data availability statement.** The code required to reproduce the results herein is available at doi.org/10.5281/zenodo.7737161, along with the trained MDN model and a Jupyter Notebook which demonstrates its use. The processed GDP data used and drifter simulation data are available at doi.org/10.7488/ds/3821.
|
2305.15568 | PCA-aided calibration of systems comprising multiple unbiased sensors | The calibration of sensors comprising inertial measurement units is crucial
for reliable and accurate navigation. Such calibration is usually performed
with specialized expensive rotary tables or requires sophisticated signal
processing based on iterative minimization of nonlinear functions, which is
prone to get stuck at local minima. We propose a novel calibration algorithm
based on principal component analysis. The algorithm results in a closed-form
formula for the sensor sensitivity axes and scale factors. We illustrate the
proposed algorithm with simulation experiments, in which we assess the
calibration accuracy in the case of calibration of a system consisting of 12
single-axis gyroscopes. | Marek W. Rupniewski | 2023-05-24T21:01:21Z | http://arxiv.org/abs/2305.15568v2 | # PCA-aided calibration of systems comprising multiple unbiased sensors
###### Abstract
The calibration of sensors comprising inertial measurement units is crucial for reliable and accurate navigation. Such calibration is usually performed with specialized expensive rotary tables or requires sophisticated signal processing based on iterative minimization of nonlinear functions, which is prone to get stuck at local minima. We propose a novel calibration algorithm based on principal component analysis. The algorithm results in a closed-form formula for the sensor sensitivity axes and scale factors. We illustrate the proposed algorithm with simulation experiments, in which we assess the calibration accuracy in the case of calibration of a system consisting of 12 single-axis gyroscopes.
inertial measurement unit (IMU) calibration, principal component analysis (PCA), multiple-sensor system
## I Introduction
Accelerometers and gyroscopes are commonly known as inertial sensors. Inertial measurement units typically consist of multiple such sensors and are often augmented by magnetometers to better estimate inclination. Before a navigation system is used, it must be calibrated. Calibration is especially vital for sensors produced in micro-electro-mechanical systems (MEMS) technology. Such sensors are generally delivered uncalibrated [1] to reduce the production costs of IMUs for mass-market products. There are two predominant classes of methods for IMU sensor calibration. One is based on expensive specialized equipment, such as precise mechanical platforms [2] or optical tracking systems (e.g., [3]). The other class, called multi-position calibration, relies on measurements carried out under static conditions and utilizes knowledge of the magnitude of the measured vector quantity, such as the Earth's gravity, e.g., [4, 5]. To the best of our knowledge, the methods that fall into the latter class require nonlinear optimization realized by iterative algorithms, e.g., the Gauss-Newton algorithm [6], the Newton-Raphson algorithm [4], the Levenberg-Marquardt algorithm [1], or other algorithms provided by numerical toolboxes [5]. Our study falls into the multi-position class as well. However, we propose measurement signal processing that leads to a closed form for the calibration parameters. We consider systems that consist of \(m\) single-axis sensors, each of which measures the projection of a given vector-valued quantity onto the sensor's sensitive axis. Such systems may consist of, e.g., accelerometers, gyroscopes, or magnetometers. We assume the following sensor measurement model
\[m_{ij}=\boldsymbol{a}_{i}^{T}\boldsymbol{v}_{j}+\eta_{ij},\quad i=1,\,\ldots, \,m,\quad j=1,\,\ldots,\,n, \tag{1}\]
where \(m_{ij}\) is the read-out of the \(i\)-th sensor, \(\boldsymbol{v}_{j}\in\mathbf{R}^{d}\) is the measured vector quantity at the \(j\)-th position of the system, \(\boldsymbol{a}_{i}\in\mathbf{R}^{d}\) is a vector that encodes the scale and the sensitive axis of the sensor, and \(\eta_{ij}\) is Gaussian noise with zero mean. In our study, we treat dimension \(d\) as an arbitrary positive integer. In practice, two values of \(d\) are predominant, i.e., \(d=3\) for the case of a \(3\)-dimensional state space, and \(d=2\) for a state space that takes the form of a plane. The absence of bias terms in (1) may be an inherent property of the system's sensors; it can also result from bias estimation during a pre-calibration procedure (see Section IV). We assume that the measured quantity stays constant in magnitude for all \(n\) positions of the considered system, i.e.,
\[\|\boldsymbol{v}_{j}\|=c,\quad j=1,\,\ldots,\,n, \tag{2}\]
where \(\|\boldsymbol{v}_{j}\|\) denotes the Euclidean norm of vector \(\boldsymbol{v}_{j}\), and \(c\) is an arbitrary scalar constant. Earth's gravity or magnetic fields measured at a given point on Earth exemplify such quantities. The angular rate of a rotary table that rotates with a fixed angular speed is another such quantity. In the former case, we may consider the calibration of a system consisting of accelerometers or magnetometers, and in the latter case a system comprising gyroscopes.
The paper is organized as follows. The next section presents the main contribution of our study, i.e., the algorithm for finding the sensitive vectors of a sensor set. Section III discusses the number of positions required for the proposed calibration procedure. Section IV presents the results of numerical experiments, in which we have used the proposed algorithm to calibrate a system that consists of four triads of single-axis gyroscopes, see Fig. 1. Section V concludes our paper.
## II Calibration procedure
Throughout the paper, we denote matrices with small bold letters with no subscript, e.g., \(\boldsymbol{a}\), matrix columns with single-subscripted bold letters, e.g., \(\boldsymbol{a}_{i}\), and matrix entries with
regular double-subscripted letters, e.g. \(a_{ij}\). Equation (1) takes the following form in matrix notation.
\[\mathbf{m}=\mathbf{p}+\mathbf{\eta}=\mathbf{a}^{T}\mathbf{v}+\mathbf{\eta}. \tag{3}\]
### _Noiseless case_
Let us first consider the noiseless case, i.e., the case where \(\mathbf{\eta}=\mathbf{0}\) in Eq. (3). Assume that matrix \(\mathbf{p}\) is of the maximum possible rank, which is \(d\). Thus, so are the ranks of matrices \(\mathbf{a}\) and \(\mathbf{v}\). Let vectors \(\mathbf{e}_{1}\),..., \(\mathbf{e}_{d}\in\mathbf{R}^{m}\) constitute a linear basis of the subspace \(V\subset\mathbf{R}^{m}\) that is spanned by the columns of matrix \(\mathbf{p}\), and let matrix \(\mathbf{b}\) define the decomposition of matrix \(\mathbf{p}\) columns in this basis, i.e.,
\[\mathbf{p}=\mathbf{a}^{T}\mathbf{v}=\mathbf{e}\mathbf{b}. \tag{4}\]
There must exist vectors \(\mathbf{f}_{1}\),..., \(\mathbf{f}_{d}\in\mathbf{R}^{d}\) such that
\[\mathbf{e}_{k}=\mathbf{a}^{T}\mathbf{f}_{k},\quad k=1,\,\ldots,\,d. \tag{5}\]
By combining Eqs. (4) and (5) we get
\[\mathbf{v}=\mathbf{f}\mathbf{b} \tag{6}\]
and, in particular,
\[c^{2}=\|\mathbf{v}_{j}\|^{2}=\mathbf{b}_{j}^{T}\mathbf{f}^{T}\mathbf{f}\mathbf{b}_{j}=\mathbf{b}_{j}^{ T}\mathbf{g}\mathbf{b}_{j},\quad j=1,\,\ldots,\,n. \tag{7}\]
We may treat Eqs. (7) as a set of \(n\) scalar equations for the entries of a symmetric matrix \(\mathbf{g}=\mathbf{f}^{T}\mathbf{f}\). Once the equation set is solved for these entries, one may compute matrix \(\mathbf{f}\) by eigendecomposition of \(\mathbf{g}\):
\[\mathbf{g}=\mathbf{f}^{T}\mathbf{f}=\mathbf{q}^{T}\mathbf{\lambda}\mathbf{q}, \tag{8}\]
and thus
\[\mathbf{f}=\mathbf{\lambda}^{\frac{1}{2}}\mathbf{q}, \tag{9}\]
where \(\mathbf{\lambda}\) is a diagonal matrix with non-negative entries, and \(\mathbf{q}\) is an orthogonal matrix. By combining Eqs. (6) and (9)
\[\mathbf{v}=\mathbf{\lambda}^{\frac{1}{2}}\mathbf{q}\mathbf{b}. \tag{10}\]
Eventually, by substituting (10) into (4) and solving it for \(\mathbf{a}\), we get
\[\mathbf{a}=\mathbf{\lambda}^{-\frac{1}{2}}\mathbf{q}\mathbf{e}^{T}. \tag{11}\]
The following algorithm concludes this subsection.
**Algorithm 1**.: **Inputs:** _Noiseless sensor readings in the form of matrix \(\mathbf{p}\in\mathbf{R}^{m\times n}\) of rank \(d\), the magnitude \(c\) of the measured vector quantity (see Eqs. (1)-(3))_
**Output:** _Matrices \(\hat{\mathbf{v}}\in\mathbf{R}^{d\times n}\) and \(\hat{\mathbf{a}}\in\mathbf{R}^{d\times m}\) such that \(\mathbf{p}=\hat{\mathbf{a}}^{T}\hat{\mathbf{v}}\)._
1. _Choose an arbitrary linear basis_ \(\mathbf{e}_{1}\)_,...,_ \(\mathbf{e}_{d}\) _for the subspace spanned by the columns of matrix_ \(\mathbf{p}\) _and decompose these columns relative to the basis:_ \[\mathbf{p}=\mathbf{e}\mathbf{b}.\]
2. _Solve the following set of linear equations for the entries of symmetric matrix_ \(\mathbf{g}\in\mathbf{R}^{d\times d}\)_:_ \[c^{2}=\mathbf{b}_{j}^{T}\mathbf{g}\mathbf{b}_{j},\quad j=1,\,\ldots,\,n.\]
3. _Compute the eigendecomposition of matrix_ \(\mathbf{g}\)_:_ \[\mathbf{g}=\mathbf{q}^{T}\mathbf{\lambda}\mathbf{q}.\]
4. _Compute_ \(\hat{\mathbf{v}}=\mathbf{\lambda}^{\frac{1}{2}}\mathbf{q}\mathbf{b}\) _and_ \(\hat{\mathbf{a}}=\mathbf{\lambda}^{-\frac{1}{2}}\mathbf{q}\mathbf{e}^{T}\)_._
**Remark 1**.: _If Eq. (7) has a unique solution \(\mathbf{g}\), then columns of matrices \(\hat{\mathbf{v}}\) and \(\hat{\mathbf{a}}\) are determined uniquely up to an orthogonal transformation, i.e., they are equal to \(\mathbf{v}\) and \(\mathbf{a}\), respectively, up to the multiplication from the left by an arbitrary orthogonal matrix \(\mathbf{r}\in\mathbf{R}^{d\times d}\). If Eq. (7) fails to have a unique solution \(\mathbf{g}\), then the algorithm cannot recover the original matrix \(\mathbf{v}\) from the readings even up to an orthogonal transformation._
**Remark 2**.: _If constant \(c\) is not known, then Algorithm 1 cannot determine the scale factors of sensors. However, by taking any value of \(c\), e.g., \(c=1\), we may at least reconstruct the sensors' sensitive axes up to an orthogonal transformation, and find the scale factors of the sensors up to a common factor, provided that Eq. (7) has a unique solution._
### _Noisy measurements_
Due to the noise terms, matrix \(\mathbf{m}\) of Eq. (3) is of full rank with probability \(1\). Consequently, if the number of positions \(n\) is bigger than the number of sensors \(m\), then the columns of matrix \(\mathbf{m}\) span the whole space \(\mathbf{R}^{m}\). However, if the noise terms are small, the columns \(\mathbf{m}_{j}\), treated as points in \(\mathbf{R}^{m}\), must lie close to the subspace \(V\) spanned by the columns of matrix \(\mathbf{p}\). Therefore, we may estimate the subspace \(V\) as the \(d\)-dimensional subspace \(\hat{V}\subset\mathbf{R}^{m}\) that is closest to the columns of matrix \(\mathbf{m}\) in terms of the mean squared Euclidean distance. This task can be accomplished by the Principal Component Analysis (PCA) method as presented in Pearson's seminal paper [7]. If
\[\mathbf{t}=\mathbf{m}\mathbf{w},\quad\mathbf{w}\in\mathbf{R}^{n\times d} \tag{12}\]
is the PCA transformation of \(\mathbf{m}\) truncated to the first \(d\) principal axes, then the columns of matrix \(\mathbf{t}\) span subspace \(\hat{V}\). Before we follow the procedure introduced in the previous subsection, we need to approximate the columns of matrix \(\mathbf{m}\) with those of matrix \(\mathbf{t}\). Let us recall that the truncated PCA transformation can be obtained by truncated Singular Value Decomposition (SVD):
\[\mathbf{m}\approx\mathbf{us}\mathbf{w}^{T},\quad\mathbf{s}\in\mathbf{R}^{d\times d}, \tag{13}\]
where the columns of matrices \(\mathbf{u}\in\mathbf{R}^{m\times d}\) and \(\mathbf{w}\in\mathbf{R}^{n\times d}\) are orthonormal, and \(\mathbf{s}\) is a diagonal matrix. The truncated SVD gives
\[\mathbf{t}=\mathbf{m}\mathbf{w}=\mathbf{u}\mathbf{s}. \tag{14}\]
Fig. 1: Sensitive vectors of four triads of gyroscopes considered in Section IV
By the Eckart-Young theorem [8], matrix \(\hat{\mathbf{m}}=\mathbf{us}\mathbf{w}^{T}=\mathbf{t}\mathbf{w}^{T}\) is the best rank \(d\) approximation to matrix \(\mathbf{m}\) with respect to the Frobenius norm, i.e., the sum of squares of the entries of matrix \(\mathbf{m}-\mathbf{x}\), where \(\mathbf{x}\) is of rank \(d\), attains its minimum at \(\mathbf{x}=\hat{\mathbf{m}}\). Once we have approximated matrix \(\mathbf{m}\) with the rank \(d\) matrix \(\mathbf{t}\mathbf{w}^{T}\), we can apply the procedure presented in the previous subsection. Note that by having the SVD decomposition of matrix \(\hat{\mathbf{m}}\):
\[\hat{\mathbf{m}}=\underbrace{\mathbf{u}}_{\mathbf{e}}\underbrace{\mathbf{s}\mathbf{w}^{T}}_{\mathbf{b }}, \tag{15}\]
we can compute the analog of Eq. (4) by taking \(\mathbf{e}=\mathbf{u}\) and \(\mathbf{b}=\mathbf{s}\mathbf{w}^{T}\) as depicted in Eq. (15).
The following algorithm concludes this subsection.
**Algorithm 2**.: **Inputs:** _Sensor readings in the form of matrix \(\mathbf{m}\in\mathbf{R}^{m\times n}\), the length \(c\) of vectors \(\mathbf{v}_{j}\) (see Eqs. (1)-(3))_
**Output:** _Matrices \(\hat{\mathbf{v}}\in\mathbf{R}^{d\times n}\) and \(\hat{\mathbf{a}}\in\mathbf{R}^{d\times m}\) that, by the product \(\hat{\mathbf{a}}^{T}\hat{\mathbf{v}}\), form the best rank \(d\) approximation to matrix \(\mathbf{m}\)._
1. _Compute truncated SVD of rank_ \(d\) _for matrix_ \(\mathbf{m}\)_:_ \[\mathbf{m}\approx\mathbf{us}\mathbf{w}^{T}\]
2. _Solve the following set of linear equations for the entries of symmetric matrix_ \(\mathbf{g}\in\mathbf{R}^{d\times d}\)_:_ \[c^{2}=\mathbf{b}_{j}^{T}\mathbf{g}\mathbf{b}_{j},\quad j=1,\,\ldots,\,n,\] _where_ \(\mathbf{b}_{j}\) _are the columns of matrix_ \(\mathbf{b}=\mathbf{s}\mathbf{w}^{T}\)_,_
3. _Compute the eigendecomposition of matrix_ \(\mathbf{g}\)_:_ \[\mathbf{g}=\mathbf{q}^{T}\mathbf{\lambda}\mathbf{q}.\]
4. _Compute_ \(\hat{\mathbf{v}}=\mathbf{\lambda}^{\frac{1}{2}}\mathbf{q}\mathbf{b}\) _and_ \(\hat{\mathbf{a}}=\mathbf{\lambda}^{-\frac{1}{2}}\mathbf{q}\mathbf{u}^{T}\)_._
Note that Algorithm 2 generalizes Algorithm 1, i.e., Algorithm 2 may also be used in the absence of noise. Also, Remarks 1 and 2 stay valid except for the necessary change of the corresponding equalities that hold up to an orthogonal transformation into approximate equalities \(\hat{\mathbf{a}}\approx\mathbf{a}\) and \(\hat{\mathbf{v}}\approx\mathbf{v}\).
**Remark 3**.: _If the sensor readings are noisy and the system of equations in Step 2 of Algorithm 2 is overdetermined, then the solution referred to in that step is meant to be the least square solution._
**Remark 4**.: _Algorithm 2 is robust to small divergence from the assumption on equal length of vectors \(\mathbf{v}_{j}\), as the disparity between the lengths of the vectors can be considered as an extra noise that contributes to \(\mathbf{\eta}\) in Eq. (3)._
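For reference, Algorithm 2 admits a compact NumPy realization; the sketch below is our own illustration (not the author's implementation) and assumes that Eq. (7) has a unique, positive-definite solution \(\mathbf{g}\). The least-squares step realizes Remark 3.

```python
import numpy as np

def calibrate(m, c, d=3):
    """PCA-aided calibration following Algorithm 2.

    m : (num_sensors, num_positions) array of bias-free sensor readings
    c : known magnitude of the measured vector quantity
    Returns (a_hat, v_hat) with m approximately equal to a_hat.T @ v_hat,
    both determined up to a common orthogonal transformation.
    """
    # Step 1: rank-d truncated SVD of the readings, m ~ u s w^T.
    u, s, wt = np.linalg.svd(m, full_matrices=False)
    u, b = u[:, :d], np.diag(s[:d]) @ wt[:d, :]   # b = s w^T, shape (d, n)

    # Step 2: least-squares solve c^2 = b_j^T g b_j for symmetric g
    # (each off-diagonal unknown g_kl, k < l, enters every equation twice).
    n = b.shape[1]
    idx = [(k, l) for k in range(d) for l in range(k, d)]
    A = np.stack([b[k] * b[l] * (1.0 if k == l else 2.0) for k, l in idx], axis=1)
    sol, *_ = np.linalg.lstsq(A, np.full(n, c**2), rcond=None)
    g = np.zeros((d, d))
    for (k, l), val in zip(idx, sol):
        g[k, l] = g[l, k] = val

    # Step 3: eigendecomposition g = q^T lam q (rows of q are eigenvectors).
    lam, vecs = np.linalg.eigh(g)
    lam, q = np.clip(lam, 0.0, None), vecs.T

    # Step 4: v_hat = lam^{1/2} q b and a_hat = lam^{-1/2} q u^T.
    root = np.sqrt(lam)[:, None]
    v_hat = root * (q @ b)
    a_hat = (q @ u.T) / root
    return a_hat, v_hat
```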
## III The number of required measurements
As stated in Remark 1, Algorithm 1 reconstructs matrices \(\mathbf{a}\) and \(\mathbf{v}\) up to an orthogonal transformation, provided that Eq. (7) has a unique solution \(\mathbf{g}\). The same holds for an approximate reconstruction in the case of Algorithm 2. Since matrix \(\mathbf{g}\in\mathbf{R}^{d\times d}\) is symmetric, the number of linear equations required to specify the entries of \(\mathbf{g}\) uniquely is \(\frac{d(d+1)}{2}\). In other words, in order to reconstruct matrices \(\mathbf{a}\) and \(\mathbf{v}\) with Algorithms 1 or 2 the number of positions \(n\) has to satisfy the following inequality
\[n\geq\frac{d(d+1)}{2}. \tag{16}\]
In this section, we show that this bound cannot be loosened in general, i.e., there exist cases in which no algorithm can reconstruct matrices \(\mathbf{a}\) and \(\mathbf{v}\) with a smaller number of measurement setups \(n\).
The number of scalar measurements that form matrix \(\mathbf{p}\) is \(nm\). The known magnitude of vectors \(\boldsymbol{v}_{j}\) provides \(n\) additional scalar equations. The number of unknown entries of matrices \(\mathbf{a}\) and \(\mathbf{v}\) is \(d(n+m)\). These matrices are to be determined up to an orthogonal transformation of \(\mathbf{R}^{d}\). The dimension of the group of such transformations of \(\mathbf{R}^{d}\) is \(\frac{d(d-1)}{2}\). Thus, for the desired reconstruction of \(\mathbf{a}\) and \(\mathbf{v}\), the following inequality must hold: \(nm+n\geq(m+n)d-\frac{d(d-1)}{2}\). By rearranging this inequality, we get
\[\left(n-d\right)\left(m-d+1\right)\geq\frac{d(d-1)}{2}. \tag{17}\]
In particular, the number of setups \(n\) has to be greater than the dimension \(d\), and the number of sensors \(m\) must be at least \(d\). Moreover, for the minimal number of sensors \(m=d\), Inequality (17) results in (16). For example, \(n\geq 3\) for dimension \(d=2\), and \(n\geq 6\) for \(d=3\).
## IV Calibration of multiple gyroscopes
Fig. 2: Calibration error of four triads of gyroscopes
Gyroscopes are often produced in presumably orthonormal triads. Multiple instances of such triads can be used to reduce measurement errors after data fusion [9]. We have considered a system of four gyroscope triads in the configuration shown in Fig. 1, i.e., with sensitive vectors of the gyroscopes constituting the following matrix:
\[\mathbf{a}_{\text{model}}=\begin{pmatrix}1&0&0&0&1&0&0&-1&0&1&0&0\\ 0&1&0&-1&0&0&1&0&0&0&0&-1\\ 0&0&1&0&0&1&0&0&1&0&1&0\end{pmatrix}.\]
Such a system needs calibration because of the internal sensitive-axes misalignment, scale factor spread between sensors comprising every single triad, and the finite precision of the multi-triad assembly. One may perform the needed calibration with the help of a rotary table that can turn with a known angular rate. We propose the following measurements for each of \(n\) different positions of the system on the table.
1. Place the system in the \(i\)-th position on the stationary rotary table and record the readings of the sensors. The time-averaged readings \(\mathbf{b}^{\prime}_{i}\) are used as estimates for the biases of the gyroscopes.
2. Switch on the rotary table and wait until it rotates steadily.
3. Record and time-average the readings of the sensors to form vectors \(\mathbf{m}^{\prime}_{i}\).
We remove the measurement bias by subtracting the steady-state read-outs from the readings obtained during the rotary movement, i.e., we set
\[\mathbf{m}_{i}=\mathbf{m}^{\prime}_{i}-\mathbf{b}^{\prime}_{i}. \tag{18}\]
To simulate a real scenario, we assumed that each of the sensor sensitivity axes, represented by the columns of matrix \(\mathbf{a}\), differs from the corresponding column of \(\mathbf{a}_{\text{model}}\) by a random Gaussian vector with zero mean and a diagonal covariance matrix with \(\sigma=0.01\) on the diagonal. We picked at random \(n\) different positions (orientations) of the system and simulated the sensor readings according to Eq. (3). Then, we invoked Algorithm 2 to compute the matrix \(\hat{\mathbf{a}}\) of sensitive vectors of the gyroscopes. Matrix \(\hat{\mathbf{a}}\) is expected to approximate \(\mathbf{a}\) up to an orthogonal transformation. Therefore, to compare these matrices, we first find two orthogonal matrices \(\mathbf{h}\) and \(\mathbf{k}\) such that \(\mathbf{h}\mathbf{a}\) and \(\mathbf{k}\hat{\mathbf{a}}\) are upper-triangular with positive entries on the main diagonal. Then, we compute the calibration error \(\epsilon\), which we define as the Frobenius norm of matrix \(\mathbf{h}\mathbf{a}-\mathbf{k}\hat{\mathbf{a}}\), i.e.,
\[\epsilon=\|\mathbf{h}\mathbf{a}-\mathbf{k}\hat{\mathbf{a}}\|_{F}, \tag{19}\]
where \(\|\mathbf{x}\|_{F}\) denotes the square root of the sum of squares of the entries of matrix \(\mathbf{x}\). For each of the considered values of \(n\) and noise standard deviation \(\sigma\), we have repeated the simulation of the calibration procedure \(1000\) times to assess its statistical behavior. Fig. 2 shows the boxplot of the calibration error. The line plots of Fig. 3 indicate that the median and the interquartile range of the calibration error are approximately proportional to the standard deviation \(\sigma\) of the measurement error. Figure 4 shows that these statistics drop with the number \(n\) of measurements at the rate \(n^{-0.5}\) approximately.
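The upper-triangularization can be realized, for example, with a QR factorization of the leading \(d\) columns, which are assumed linearly independent; the following sketch is one possible construction, not necessarily the author's:

```python
import numpy as np

def triangularizer(a):
    """Orthogonal h such that h @ a is upper-triangular with positive diagonal.

    Built from the QR factorization of the leading d columns of a; the sign
    flips enforce positivity of the diagonal of h @ a.
    """
    d = a.shape[0]
    q, r = np.linalg.qr(a[:, :d])
    signs = np.sign(np.diag(r))
    signs[signs == 0] = 1.0
    return (q * signs).T

def calibration_error(a, a_hat):
    """Frobenius-norm error of Eq. (19) with the rotation ambiguity removed."""
    return np.linalg.norm(triangularizer(a) @ a - triangularizer(a_hat) @ a_hat)
```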
## V Conclusion
We have proposed a novel method for calibrating multiple-sensor systems with the help of a constant-magnitude vector quantity. The crucial merit of the method is the closed form for the computed parameters under calibration. The method requires the sensors to be unbiased or the sensor bias to be computed before applying the proposed calibration procedure. We have noted that the proposed algorithm allows for the calibration of the sensors up to an orthogonal transformation, provided the magnitude of the measured quantity is known. In the opposite case, the calibration procedure leaves a common scale factor unresolved. We have deduced the minimum number of positions needed to complete the calibration with the above-specified degree of ambiguity. The accuracy of the proposed calibration method depends on the number of considered positions and the measurement noise level. The results of the conducted numerical experiments indicate that the calibration error is linearly proportional to the standard deviation of measurement errors and inversely proportional to the square root of the number of positions.
Fig. 4: Calibration error statistics (for four triads of gyroscopes) as functions of the number of calibration positions \(n\)
Fig. 3: Calibration error statistics (for four triads of gyroscopes) as functions of the standard deviation \(\sigma\) of the measurement noise |
2306.14326 | Computational Asymmetries in Robust Classification | In the context of adversarial robustness, we make three strongly related
contributions. First, we prove that while attacking ReLU classifiers is
$\mathit{NP}$-hard, ensuring their robustness at training time is
$\Sigma^2_P$-hard (even on a single example). This asymmetry provides a
rationale for the fact that robust classifications approaches are frequently
fooled in the literature. Second, we show that inference-time robustness
certificates are not affected by this asymmetry, by introducing a
proof-of-concept approach named Counter-Attack (CA). Indeed, CA displays a
reversed asymmetry: running the defense is $\mathit{NP}$-hard, while attacking
it is $\Sigma_2^P$-hard. Finally, motivated by our previous result, we argue
that adversarial attacks can be used in the context of robustness
certification, and provide an empirical evaluation of their effectiveness. As a
byproduct of this process, we also release UG100, a benchmark dataset for
adversarial attacks. | Samuele Marro, Michele Lombardi | 2023-06-25T19:41:14Z | http://arxiv.org/abs/2306.14326v1 | # Computational Asymmetries in Robust Classification
###### Abstract
In the context of adversarial robustness, we make three strongly related contributions. First, we prove that while attacking ReLU classifiers is \(NP\)-hard, ensuring their robustness at training time is \(\Sigma_{P}^{2}\)-hard (even on a single example). This asymmetry provides a rationale for the fact that robust classifications approaches are frequently fooled in the literature. Second, we show that inference-time robustness certificates are not affected by this asymmetry, by introducing a proof-of-concept approach named Counter-Attack (CA). Indeed, CA displays a reversed asymmetry: running the defense is \(NP\)-hard, while attacking it is \(\Sigma_{2}^{P}\)-hard. Finally, motivated by our previous result, we argue that adversarial attacks can be used in the context of robustness certification, and provide an empirical evaluation of their effectiveness. As a byproduct of this process, we also release UG100, a benchmark dataset for adversarial attacks.
Machine Learning, Robustness
## 1 Introduction
Adversarial attacks, i.e. algorithms designed to fool machine learning models, represent a significant threat to the applicability of such models in real-world contexts (Brown et al., 2017; Brendel et al., 2019; Wu et al., 2020). Despite years of research effort, countermeasures (i.e. "defenses") to adversarial attacks are frequently fooled by applying small tweaks to existing techniques (Carlini and Wagner, 2016; 2017; He et al., 2017; Hosseini et al., 2019; Tramer et al., 2020; Croce et al., 2022). We argue that this pattern is due to differences between the fundamental mathematical problems that defenses and attacks need to tackle, and we investigate this topic by providing three contributions.
First, we prove a set of theoretical results about the complexity of attack and training-time defense problems, including the fact that _attacking a ReLU classifier is \(NP\)-hard in the general case, while finding a parameter set that makes a ReLU classifier robust on even a single input is \(\Sigma_{2}^{P}\)-hard_. To the best of our knowledge, this is the first complexity bound for general ReLU classifiers, and the main contribution of this work. We also provide more general bounds for non-polynomial classifiers, and show in particular that an \(A\)-time classifier can be attacked in \(\mathit{NP}^{A}\) time. Instead of using a PAC-like formalization, we rely on a worst-case semantic of robustness. This approach results in a formalization that is both easier to deal with and independent of data distribution assumptions, while still _providing a rationale for difficulties in training robust classifiers_ that are well-known in the related literature. Our proofs also lay the groundwork for identifying tractable classes of defenses.
Second, we prove by means of an example that _inference-time defenses can sidestep the asymmetry_. Our witness is a proof-of-concept approach, referred to as Counter-Attack (CA), that evaluates robustness on the fly for a specific input (w.r.t. to a maximum distance \(\varepsilon\)) by running an adversarial attack. Properties enjoyed by this technique are likely to extend to other inference-time defense methods, if they are based on similar principles. Notably, when built over an exact attack, _generating a certificate is \(NP\)-hard_ in the worst case, \(\varepsilon\)_-bounded attacks are impossible_, and _attacking using perturbations of magnitude \(\varepsilon^{\prime}>\varepsilon\) is \(\Sigma_{2}^{P}\)-hard_. On the other hand, using a non-exact attack results in partial guarantees (no false positives for heuristic attacks, no false negatives for bounding techniques).
Finally, since our results emphasize the connection between verification and attack problems, we provide an empirical investigation of the use of heuristic attacks for verification. _We found heuristic attacks to be high-quality approximators for exact decision boundary distances_: a pool of seven heuristic attacks provided an accurate (average over-estimate between 2.04% and 4.65%) and predictable (average \(R^{2}>0.99\)) approximation of the true optimum for small-scale Neural Networks trained on the MNIST and CIFAR10 datasets. We release1 our benchmarks and adversarial examples (both exact and heuristic) in a new dataset, named UG100.
Footnote 1: All our code, models, and data are available under MIT license at [https://github.com/samuelemarro/counter-attack](https://github.com/samuelemarro/counter-attack).
Overall, we hope our contributions can support future research by highlighting potential structural challenges, pointing out key sources of complexity, inspiring research on heuristics and tractable classes, and suggesting alternative perspectives on how to build robust classifiers.
## 2 Background and Formalization
In this section, we introduce key definitions (adapted from Dreossi et al. (2019)) that we will use to frame our results. Our aim is to capture the key traits shared by most of the literature on adversarial attacks, so as to identify properties that are valid under broad assumptions.
**Adversarial Attacks and Robustness.** We start by defining the concept of _adversarial example_, which intuitively represents a modification of a legitimate input that is so limited as to be inconsequential for a human observer, but sufficient to mislead a target model. Formally, let \(f:X\rightarrow\{1,\dots,N\}\) be a discrete classifier. Let \(B_{p}(\mathbf{x},\varepsilon)=\{\mathbf{x}^{\prime}\in X\,|\,\|\mathbf{x}-\mathbf{x}^{\prime}\|_{p}\leq\varepsilon\}\) be an \(L^{p}\) ball of radius \(\varepsilon\) and center \(\mathbf{x}\). Then we have:
**Definition 2.1** (Adversarial Example).: Given an input \(\mathbf{x}\), a threshold \(\varepsilon\), and a \(L^{p}\) norm2, an adversarial example is an input \(\mathbf{x}^{\prime}\in B_{p}(\mathbf{x},\varepsilon)\) such that \(f(\mathbf{x}^{\prime})\in C(\mathbf{x})\), where \(C(\mathbf{x})\subseteq\{1,\dots,N\}\setminus\{f(\mathbf{x})\}\).
Footnote 2: We use the term “norm” for \(0<p<1\) even if in such cases the \(L^{p}\) function is not subadditive.
This definition is a simplification compared to human perception, but it is adequate for a sufficiently small \(\varepsilon\), and it is adopted in most of the relevant literature. An _adversarial attack_ can then be viewed as an optimization procedure that attempts to find an adversarial example. We define an adversarial attack for a classifier \(f\) as a function \(a_{f,p}:X\to X\) that solves the following optimization problem:
\[\operatorname*{arg\,min}_{\mathbf{x}^{\prime}\in X}\{\|\mathbf{x}^{\prime}-\mathbf{x}\|_{ p}\mid f(\mathbf{x}^{\prime})\in C(\mathbf{x})\} \tag{1}\]
The attack is considered successful if the returned solution \(\mathbf{x}^{\prime}=a_{f,p}(\mathbf{x})\) also satisfies \(\|\mathbf{x}^{\prime}-\mathbf{x}\|_{p}\leq\varepsilon\). We say that an attack is _exact_ if it solves Equation (1) to optimality (or, in the case of its decision variant, if it succeeds if and only if a solution exists); otherwise, we say that the attack is _heuristic_. An attack is said to be _targeted_ if \(C(\mathbf{x})=C_{t,y^{\prime}}(\mathbf{x})=\{y^{\prime}\}\) with \(y^{\prime}\neq f(\mathbf{x})\); it is instead _untargeted_ if \(C_{u}(\mathbf{x})=\{1,\dots,N\}\setminus\{f(\mathbf{x})\}\). We define the _decision boundary distance_ \(d^{*}_{p}(\mathbf{x})\) of a given input \(\mathbf{x}\) as the minimum \(L^{p}\) distance between \(\mathbf{x}\) and another input \(\mathbf{x}^{\prime}\) such that \(f(\mathbf{x})\neq f(\mathbf{x}^{\prime})\). This is also the value of \(\|a_{f,p}(\mathbf{x})-\mathbf{x}\|_{p}\) for an exact, untargeted attack.
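Note that any successful attack yields a feasible point of Equation (1), so its perturbation norm upper-bounds \(d^{*}_{p}(\mathbf{x})\); a pool of heuristic attacks therefore gives a simple estimate of the decision boundary distance. A sketch (the pool interface is our own illustration):

```python
import numpy as np

def boundary_distance_upper_bound(x, attack_pool, p=np.inf):
    """Tightest upper bound on d*_p(x) from a pool of untargeted attacks.

    Each attack maps x to a candidate adversarial example; any successful
    candidate is feasible for Eq. (1), so its perturbation norm bounds the
    true decision boundary distance from above.
    """
    norms = [np.linalg.norm((a(x) - x).ravel(), ord=p) for a in attack_pool]
    return min(norms)
```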
Intuitively, a classifier is _robust w.r.t. an example_\(\mathbf{x}\) iff \(\mathbf{x}\) cannot be successfully attacked. Formally:
**Definition 2.2** ((\(\varepsilon\), \(p\))-Local Robustness).: A discrete classifier \(f\) is (\(\varepsilon\), \(p\))-locally robust w.r.t. an example \(\mathbf{x}\in X\) iff \(\forall\mathbf{x}^{\prime}\in B_{p}(\mathbf{x},\varepsilon)\) we have \(f(\mathbf{x}^{\prime})=f(\mathbf{x})\).
Under this definition, finding a parameter set \(\mathbf{\theta}\) that makes a classifier \(f_{\mathbf{\theta}}\) robust on \(\mathbf{x}_{0}\) can be seen as solving the following constraint satisfaction problem:
\[\text{find }\mathbf{\theta}\text{ s.t. }\forall\mathbf{x}^{\prime}\in B_{p}(\mathbf{x}_{0},\varepsilon).\,f_{\mathbf{\theta}}(\mathbf{x}^{\prime})=f_{\mathbf{\theta}}(\mathbf{x}_{0}) \tag{2}\]
which usually features an additional constraint on the minimum clean accuracy of the model (although we make no assumptions on this front). Note that classifiers are usually expected to be robust on more than one point. However, we will show that the computational asymmetry exists even if we require robustness on a single point.
A common optimization reformulation of Equation (2), which enforces robustness _and_ accuracy, is the nested optimization problem used for adversarial training in Madry et al. (2018). Specifically, if we have a single ground truth data point \(\langle\mathbf{x}_{0},y_{0}\rangle\), the optimization problem is:
\[\operatorname*{arg\,min}_{\mathbf{\theta}}\max_{\mathbf{x}^{\prime}\in B_{p}(\mathbf{x}_{0 },\varepsilon)}\mathcal{L}(\mathbf{\theta},\mathbf{x}^{\prime},y_{0}) \tag{3}\]
where \(\mathcal{L}\) is a proxy for \(f_{\mathbf{\theta}}(\mathbf{x}^{\prime})=y_{0}\) (e.g. the cross entropy loss between \(f_{\mathbf{\theta}}(\mathbf{x}^{\prime})\) and \(y_{0}\)). The link between \(\exists\forall\) queries (such as that in Equation (2)) and nested optimization problems (such as that in Equation (3)) underlies the intuition of several of our theoretical results (see Section 3.1).
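In practice the inner maximization is approximated heuristically; the sketch below shows the standard \(L^{\infty}\) PGD approximation from Madry et al. (2018), written for a generic PyTorch model. Since PGD only lower-bounds the true inner maximum, training against it carries no robustness guarantee, which foreshadows the asymmetry studied next.

```python
import torch

def pgd_inner_max(model, loss_fn, x0, y0, eps, alpha=None, steps=10):
    """Heuristic L-infinity inner maximization for Eq. (3) via PGD.

    Returns a perturbed input in the eps-ball around x0 that approximately
    maximizes loss_fn(model(x'), y0); a lower bound on the true max.
    """
    alpha = alpha if alpha is not None else 2.5 * eps / steps
    x_adv = x0 + torch.empty_like(x0).uniform_(-eps, eps)  # random start
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y0)
        (grad,) = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + alpha * grad.sign()                # gradient ascent step
        x_adv = x0 + torch.clamp(x_adv - x0, -eps, eps)    # project onto the ball
    return x_adv.detach()
```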
**ReLU Networks and FSFP Spaces.** Additionally, our results rely on definitions of ReLU networks and FSFP spaces.
**Definition 2.3** (ReLU network).: A ReLU network is a composition of sum, multiplication by a constant, and ReLU activation, where \(ReLU:\mathbb{R}\rightarrow\mathbb{R}_{0}^{+}\) is defined as \(ReLU(x)=max(x,0)\).
Note that any hardness result for ReLU classifiers also extends to general classifiers.
Fixed-Size Fixed-Precision (FSFP) spaces, on the other hand, capture two common assumptions about real-world input spaces: all inputs can be represented with the same number of bits and there exists a positive minorant of the distance between inputs.
**Definition 2.4** (Fixed-Size Fixed-Precision space).: Given a real \(p>0\), a space \(X\subseteq\mathbb{R}^{n}\) is FSFP if there exists a \(\nu\in\mathbb{N}\) such that \(\forall\mathbf{x}\in X.|r(\mathbf{x})|\leq\nu\) (where \(|r(\mathbf{x})|\) is the size of the representation of \(\mathbf{x}\)) and there exists a \(\mu\in\mathbb{R}\) such that \(\mu>0\) and \(\forall\mathbf{x},\mathbf{x}^{\prime}\in X.\left(\|\mathbf{x}^{\prime}-\mathbf{x}\|_{p}<\mu\implies\mathbf{x}=\mathbf{x}^{\prime}\right)\).
Examples of FSFP spaces include most image encodings, as well as 32-bit and 64-bit IEEE 754 tensors. Examples of non-FSFP spaces include the set of all rational numbers in an interval. Similarly to ReLU networks, hardness results for FSFP spaces also apply to more general spaces.
**\(\Sigma_{2}^{P}\) Complexity.** Several of our theoretical results concern complexity classes in the Polynomial Hierarchy such as \(\Sigma_{2}^{P}\). \(\Sigma_{2}^{P}\) is the class of problems that can be solved in \(\mathit{NP}\) time if we have an oracle that solves an \(\mathit{NP}\)-time problem in \(O(1)\). \(\Sigma_{2}^{P}\)-hard problems include finding a strong Nash equilibrium (Gottlob et al., 2011) and \(\mathit{co}\Pi_{2}3\)SAT (Stockmeyer, 1976). A notable conjecture is the Polynomial Hierarchy conjecture (Stockmeyer, 1976), a generalization of the \(P\neq\mathit{NP}\) conjecture which states that the Polynomial Hierarchy does not collapse (i.e. \(P\subsetneq\mathit{NP}\subsetneq\Sigma_{2}^{P}\subsetneq\Sigma_{3}^{P}\dots\)). In other words, under broad assumptions, we cannot solve a \(\Sigma_{2}^{P}\)-hard problem efficiently even if we can solve \(\mathit{NP}\)-hard problems in constant time.
## 3 An Asymmetrical Setting
In this section, we prove the existence of a structural asymmetry between the computational classes of attack and training-time defense problems (barring the collapse of the Polynomial Hierarchy) by studying their decision versions3. While the asymmetry is worst-case in nature, it holds under broad assumptions and provides an explanation for why attacks seem to outperform defenses in practice.
Footnote 3: Note that hardness results for decision problems trivially extend to their corresponding optimization variants.
### Intuition
The intuition behind our theorems rests on three main observations:
* ReLU networks, due to their expressive power, are capable of computing input-output relations that are _at least as complex_ as Boolean formulae;
* Attacking usually requires solving an optimization problem, whose decision variant (finding _any_ adversarial example) can be expressed as an \(\exists\) query;
* Training a robust classifier, on the other hand, usually requires solving a nested optimization problem, whose decision variant (finding _any_ robust parameter set) can be expressed as an \(\exists\forall\) query.
From these considerations, we show that solving \(3\)SAT can be reduced to attacking the ReLU classifier that computes the corresponding Boolean formula, and thus that attacking a ReLU classifier is \(\mathit{NP}\)-hard (Theorem 3.1).
We then prove that, given a \(3\)CNF formula \(z(\mathbf{x},\mathbf{y})\), it is possible to build a ReLU classifier \(f_{\mathbf{x}}(\mathbf{y})\) (where \(\mathbf{x}\) are parameters and \(\mathbf{y}\) are inputs) that computes the same formula. We use this result to prove that \(\mathit{co}\Pi_{2}3\)SAT (a subclass of \(\mathit{TQBF}\) that is known to be \(\Sigma_{2}^{P}\)-hard) can be reduced to finding a parameter set that makes \(f\) robust, which means that the latter is \(\Sigma_{2}^{P}\)-hard (Theorem 3.7).
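To make the first observation concrete, the following minimal example (our own illustration, not the construction used in the proofs) computes a 3CNF formula using only the operations of Definition 2.3:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def clause(idx, signs, x):
    """OR of three literals on Boolean inputs x in {0,1}^n.

    A positive literal contributes x[i], a negated one 1 - x[i];
    OR(l1, l2, l3) = min(1, l1 + l2 + l3) = 1 - relu(1 - l1 - l2 - l3).
    """
    s = sum(x[i] if sg else 1.0 - x[i] for i, sg in zip(idx, signs))
    return 1.0 - relu(1.0 - s)

def formula(clauses, x):
    """AND of m clauses with 0/1 values: relu(sum of clauses - (m - 1))."""
    vals = [clause(i, s, x) for i, s in clauses]
    return relu(sum(vals) - (len(vals) - 1))

# phi = (x1 or not x2 or x3) and (not x1 or x2 or x3)
phi = [((0, 1, 2), (True, False, True)), ((0, 1, 2), (False, True, True))]
assert formula(phi, [1, 1, 0]) == 1.0  # satisfying assignment
assert formula(phi, [1, 0, 0]) == 0.0  # falsifying assignment
```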
Note that, when performing the reductions, we choose the ReLU networks that we need to solve the corresponding problem without considering how likely they are to arise in natural settings. This approach (which is common in proofs by reduction) allows us to study the worst-case complexity of both tasks without making assumptions on the training distribution or the specifics of the learning algorithm. Studying the average-case complexity of such tasks would of course be of great importance, however: 1) such an approach would require to introduce assumptions about the training distribution; and 2) despite the recent advancements in fields such as PAC learning, average case proof in this setting are still very difficult to obtain except in very specific cases (see Section 3.4). We hope that our theoretical contributions will allow future researchers to extend our work to average-case results.
In short, while our theorems rely on specific instances of ReLU classifiers, they capture very general phenomena: ReLU networks can learn functions that are at least as complex as Boolean formulae, and robust training requires solving a nested optimization problem. The proofs thus provide an intuition on the formal mechanisms that underly the computational asymmetries, while at the same time outlining directions for studying tractable classes (since both \(3SAT\) and \(\mathit{TQBF}\) are extensively studied in the literature).
### Preliminaries
We begin by extending the work of Katz et al. (2017), who showed that proving linear properties of ReLU networks is \(\mathit{NP}\)-complete. Specifically, we prove that the theorem holds even in the special case of adversarial attacks:
**Theorem 3.1**4 (Untargeted \(L^{\infty}\) attacks against ReLU classifiers are \(\mathit{NP}\)-complete).: _Let \(U\)-\(ATT_{p}\) be the set of all tuples \(\langle\mathbf{x},\varepsilon,f\rangle\) such that:_
Footnote 4: The proofs of all our theorems and corollaries can be found in the appendices.
\[\exists\mathbf{x}^{\prime}\in B_{p}(\mathbf{x},\varepsilon).f(\mathbf{x}^{\prime})\neq f( \mathbf{x}) \tag{4}\]
_where \(\mathbf{x}\in X\), \(X\) is an FSFP space and \(f\) is a ReLU classifier. Then \(U\)-\(ATT_{\infty}\) is \(\mathit{NP}\)-complete._
**Corollary 3.2**.: _For every \(0<p\leq\infty\), \(U\)-\(ATT_{p}\) is \(\mathit{NP}\)-complete._
**Corollary 3.3**.: _Targeted \(L^{p}\) attacks (for \(0<p\leq\infty\)) against ReLU classifiers are \(\mathit{NP}\)-complete._
**Corollary 3.4**.: _Theorem 3.1 holds even if we consider the more general set of polynomial-time classifiers w.r.t. the size of the tuple._
A consequence of Theorem 3.1 is that the complementary task of attacking, i.e. proving that no adversarial example exists (which is equivalent to proving that the classifier is locally robust on an input), is \(\mathit{co}\mathit{NP}\)-complete.
We then provide a more general upper bound that holds for classifiers in any complexity class:
**Theorem 3.5** (Untargeted \(L^{p}\) attacks against \(A\)-time classifiers are in \(NP^{A}\)).: _Let \(A\) be a complexity class, let \(f\) be a classifier, let \(Z_{f}=\{\langle\mathbf{x},y\rangle\mid y=f(\mathbf{x}),\mathbf{x}\in X\}\) and let \(U\)-\(ATT_{p}(f)=\{\langle\mathbf{x},\varepsilon,g\rangle\in U\)-\(ATT_{p}^{T}\mid g=f\}\), where \(U\)-\(ATT_{p}^{T}\) is the same as \(U\)-\(ATT_{p}\) but without the ReLU classifier restriction. If \(Z_{f}\in A\), then for every \(0<p\leq\infty\), \(U\)-\(ATT_{p}(f)\in NP^{A}\)._
**Corollary 3.6**.: _For every \(0<p\leq\infty\), if \(Z_{f}\in\Sigma_{n}^{P}\), then \(U\)-\(ATT_{p}(f)\in\Sigma_{n+1}^{P}\)._
As a consequence, if \(Z_{f}\in P\), then \(U\)-\(ATT_{p}(f)\in NP\). Informally, Theorem 3.5 establishes that, under broad assumptions, evaluating and attacking a general classifier are in complexity classes that are strongly conjectured to be distinct, with the attack problem being the harder one. Note that, in some special cases, one can obtain polynomial-time classifiers with polynomial-time attacks by placing additional restrictions on the input distribution and/or the structure of the classifier. Refer to Section 3.4 for an overview of such approaches.
### Complexity of Robust Training
We then proceed to prove our main result, i.e. that _finding a robust parameter set_, as formalized by our semantics, is in a distinct complexity class compared to the attack problem.
**Theorem 3.7** (Finding a set of parameters that make a ReLU network \((\varepsilon,p)\)-locally robust on an input is \(\Sigma_{2}^{P}\)-complete).: _Let \(PL\)-\(ROB_{p}\) be the set of tuples \(\langle\mathbf{x},\varepsilon,f_{\mathbf{\theta}},v\rangle\) such that:_
\[\exists\mathbf{\theta}^{\prime}.\left(v_{f}(\mathbf{\theta}^{\prime})=1\implies\forall \mathbf{x}^{\prime}\in B_{p}(\mathbf{x},\varepsilon).f_{\mathbf{\theta}^{\prime}}(\mathbf{x} ^{\prime})=f_{\mathbf{\theta}^{\prime}}(\mathbf{x})\right) \tag{5}\]
_where \(\mathbf{x}\in X\), \(X\) is an FSFP space and \(v_{f}\) is a polynomial-time function that is 1 iff the input is a valid parameter set for \(f\). Then \(PL\)-\(ROB_{\infty}\) is \(\Sigma_{2}^{P}\)-complete._
**Corollary 3.8**.: \(PL\)_-\(ROB_{p}\) is \(\Sigma_{2}^{P}\)-complete for all \(0<p\leq\infty\)._
**Corollary 3.9**.: _Theorem 3.7 holds even if, instead of ReLU classifiers, we consider the more general set of polynomial-time classifiers w.r.t. the size of the tuple._
The \(\Sigma_{2}^{P}\) complexity class includes \(NP\) and is conjectured to be strictly harder (as part of the Polynomial Hierarchy conjecture). In other words, if the Polynomial Hierarchy conjecture holds, **robustly training a general ReLU classifier is strictly harder than attacking it**. Note that our results hold _in the worst-case_, meaning there can be specific circumstances under which guaranteed robustness could be achieved with reasonable effort. However, in research fields where similar asymmetries are found, they tend to translate into practically meaningful difficulty gaps: for example, \(\exists\forall\) Quantified Boolean Formula problems (which are \(\Sigma_{2}^{P}\)-complete) are in practice much harder to solve than pure SAT problems (which are \(NP\)-complete).
We conjecture this is also the case for our result, as it mirrors the key elements in the SAT/TQBF analogy. First, generic classifiers can learn (and are known to learn) _complex input-output mappings with many local optima_. Second, while attacks rely on existential quantification (finding an example), _achieving robustness requires addressing a universally quantified problem_ (since we need to guarantee the same prediction on all neighboring points).
### Relevance of the Result and Related Work
In this section we discuss the significance of our results, both on the theoretical and the practical side.
**Theoretical Relevance.** As we mentioned, results about polynomial-time attack and/or robustness certificates are available, but under restrictive assumptions. For example, Mahloujifar & Mahmoody (2019) showed that there exist exact polynomial-time attacks against classifiers trained on product distributions. Similarly, Awasthi et al. (2019) showed that for degree-2 polynomial threshold functions there exists a polynomial-time algorithm that either proves that the model is robust or finds an adversarial example.
Other complexity lower bounds also exist, but again they apply under specific conditions. Degwekar et al. (2019), extending the work of Bubeck et al. (2018) and Bubeck et al. (2019), showed that there exist certain cryptography-inspired classification tasks such that learning a classifier with a robust accuracy of 99% is as hard as solving the Learning Parity with Noise problem (which is \(NP\)-hard). On the other hand, Song et al. (2021) showed that learning a single periodic neuron over noisy isotropic Gaussian distributions in polynomial time would imply that the Shortest Vector Problem (conjectured to be \(NP\)-hard) can be solved in polynomial time.
Finally, Garg et al. (2020) provided an average-case complexity analysis, by introducing assumptions on the data-generation process. In particular, by requiring attackers to provide a valid cryptographic signature for inputs, it is possible to prevent attacks with limited computational resources from fooling the model in polynomial time.
Compared to the above results, both Theorem 3.1 and Theorem 3.7 apply to a wider class of models. In fact, to the best of our knowledge, **Theorem 3.7 is the first robust training complexity bound for general ReLU classifiers**.
**Empirical Relevance.** Theorems 3.1 and 3.7 imply that training-time defenses can be strictly (and significantly) harder than attacks. This result is consistent with a recurring
pattern in the literature where new defenses are routinely broken. For example, defensive distillation (Papernot et al., 2016) was broken by Carlini and Wagner (2016). Carlini also showed that several adversarial example detectors (Carlini and Wagner, 2017), as well as model-based purifiers (Carlini and Wagner, 2017) can be fooled. Similarly, He et al. (2017) showed that ensembles of weak defenses can be fooled, while the defense of Roth et al. (2019) was fooled by Hosseini et al. (2019). Finally, Tramer et al. (2020) and Croce et al. (2022) broke a variety of adaptive defenses.
While our theorems formally hold only in the worst case, they rely at their core on two properties that can be expected to be practically relevant, namely: 1) that NNs can learn response surfaces that are as complex as Boolean formulas, and 2) that robustness involves universal rather than existential quantification. For this reason, we think that **the asymmetry we identified can provide valuable insight into a large body of empirical work**.
### Additional Sources of Asymmetry
On top of our identified structural difference, there are additional factors that may provide an advantage to the attacker, despite the fact that they lack a formal characterization at the time of writing. We review them in this section, both as promising directions for future theoretical research and because awareness of them can support efforts to build more robust defenses.
First, the attacker can gather information about the target model, e.g. by using genuine queries (Papernot et al., 2017), while the defender does not have such an advantage. As a result, the defender often needs to either make assumptions about adversarial examples (Hendrycks and Gimpel, 2017; Roth et al., 2019) or train models to identify common properties (Feinman et al., 2017; Grosse et al., 2017). These assumptions can be exploited, such as in the case of Carlini and Wagner (2017), who generated adversarial examples that did not have the expected properties.
Second, the attacker can focus on one input at a time, while the defender has to guarantee robustness on a large subset of the input space. This weakness can be exploited: for example, MagNet (Meng and Chen, 2017) relies on a model of the entire genuine distribution, which can sometimes be inaccurate. Carlini and Wagner (2017) broke MagNet by searching for examples that were both classified differently and mistakenly considered genuine.
Finally, defenses cannot significantly compromise the accuracy of a model. Adversarial training, for example, often reduces the clean accuracy of the model (Madry et al., 2018), leading to a trade-off between accuracy and robustness.
All of these factors can, depending on the application context, exacerbate the effects of the structural asymmetry; for this reason, minimizing their impact represents another important research direction.
## 4 Sidestepping the Asymmetry
An important aspect of our theoretical results is that they apply only to building robust classifiers at training time. This leaves open the possibility to _sidestep the asymmetry by focusing on defenses that operate at inference time_. Here, we prove that this is indeed the case by means of an example, and characterize its properties since they can be expected to hold for other systems based on the same principles.
Our witness is a proof-of-concept robustness checker, called Counter-Attack (CA), that relies on adversarial attacks to compute robustness certificates at inference time, w.r.t. a maximum \(p\)-norm \(\varepsilon\). CA can compute certificates in \(NP\)-time, and attacking it beyond its intended certification radius is \(\Sigma_{2}^{P}\)-hard, proving that **inference-time defenses can flip the attack-defense asymmetry**. While an argument can be made that CA is usable as it is, our main aim is to pave the way for future approaches with the same strengths and, hopefully, better scalability.
### Inference-Time Defenses can Flip the Asymmetry: the Case of Counter-Attack
The main idea in CA is to evaluate robustness on a case-by-case basis, flagging inputs as potentially unsafe if a robust answer cannot be provided. Specifically, given a norm-order \(p\) and threshold \(\varepsilon\), CA operates as follows:
* For a given input \(\mathbf{x}\), we determine if the model is \((\varepsilon,p)\)-locally robust by running an untargeted adversarial attack on \(\mathbf{x}\);
* If the attack succeeds, we flag the input.
In a practical usage scenario, flagged inputs would then be processed by a slower, but more robust, model (e.g. a human) or rejected; this behavior is similar to that of approaches for learning with rejection, but with a semantic tied to adversarial robustness5.
Footnote 5: Note that the learning-with-rejection approach usually involves some form of confidence score; while the decision boundary distance might be seen as a sort of score, it does not have a probabilistic interpretation. Studying CA under this light represents a promising research direction.
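A minimal sketch of this check (our own, with `model` and `attack` as placeholder callables rather than components from the paper):

```python
import numpy as np

def counter_attack(model, attack, x, eps, p=np.inf):
    # Run an untargeted attack on x; if it finds an adversarial example
    # within the L^p ball of radius eps, the model is not (eps, p)-locally
    # robust on x and the input is flagged.
    x_adv = attack(model, x)  # candidate adversarial example, or None
    if x_adv is not None and np.linalg.norm((x_adv - x).ravel(), ord=p) <= eps:
        return "flagged"
    return model(x)  # no adversarial example within eps: keep the prediction
```

With an exact attack this matches the behavior formalized in Section 4.2; with a heuristic attack the guarantee is only one-sided, as discussed there.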
Similarly, it is possible to draw comparisons between robust transductive learning (e.g. the work of Chen et al. (2021)) and CA. While the two techniques use different approaches, we believe that parts of our analysis might be adapted to study existing applications of transductive learning to robust classification. Refer to Appendix G for a more in-depth comparison.
Finally, note that the flagging rate depends on the model
robustness: a model that is locally robust on the whole input distribution would have a flagging rate of 0, while in the opposite case all inputs would be flagged. As a consequence, this form of inference-time defense is best thought of as a _complement_ to training-time robustness approaches, designed to catch those cases that are hard to handle due to Theorem 3.7. A technique such as CA would indeed benefit from most advances in the field of adversarial robustness: training-time defenses for a better flagging rate, and attack algorithms for more effective and efficient certificates.
### Formal Properties
The formal properties of the CA approach depend on the kind of attack used to perform the robustness check. Specifically, when used with an exact attack, such as those from Carlini et al. (2017) and Tjeng et al. (2019), CA provides formal robustness guarantees for an arbitrary \(p\) and \(\varepsilon\):
**Theorem 4.1**.: _Let \(0<p\leq\infty\) and let \(\varepsilon>0\). Let \(f:X\rightarrow\{1,\ldots,N\}\) be a classifier and let \(a\) be an exact attack. Let \(f^{a}_{CA}:X\rightarrow\{1,\ldots,N\}\cup\{\star\}\) be defined as:_
\[f^{a}_{CA}(\mathbf{x})=\begin{cases}f(\mathbf{x})&\|a_{f,p}(\mathbf{x})-\mathbf{x}\|_{p}> \varepsilon\\ \star&\text{otherwise}\end{cases} \tag{6}\]
_Then \(\forall\mathbf{x}\in X\) an \(L^{p}\) attack on \(\mathbf{x}\) with radius less than or equal to \(\varepsilon\) and with \(\star\not\in C(\mathbf{x})\) fails._
The notation \(f^{a}_{CA}(\mathbf{x})\) refers to the classifier \(f\) combined with CA, relying on attack \(a\). The condition \(\star\not\in C(\mathbf{x})\) requires that the input generated by the attack should not be flagged by CA. Intuitively, CA guarantees robustness due to the fact that, if \(\mathbf{x}^{\prime}\) is an adversarial example for an input \(\mathbf{x}\), \(\mathbf{x}\) is also an adversarial example for \(\mathbf{x}^{\prime}\), which means that \(\mathbf{x}^{\prime}\) will be flagged.
Due to the properties of \(L^{p}\) norms, CA also guarantees a degree of robustness against attacks with a different norm:
**Corollary 4.2**.: _Let \(1\leq p\leq\infty\) and let \(\varepsilon>0\). Let \(f\) be a classifier on inputs with \(n\) elements that uses CA with norm \(p\) and radius \(\varepsilon\). Then for all inputs and for all \(1\leq r<p\), \(L^{r}\) attacks of radius less than or equal to \(\varepsilon\) and with \(\star\not\in C(\mathbf{x})\) will fail. Similarly, for all inputs and for all \(r>p\), \(L^{r}\) attacks of radius less than or equal to \(n^{\frac{1}{r}-\frac{1}{p}}\varepsilon\) and with \(\star\not\in C(\mathbf{x})\) will fail (treating \(\frac{1}{\infty}\) as 0)._
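The scaling factor in Corollary 4.2 follows from the standard inequalities between \(L^p\) norms on \(\mathbb{R}^n\); a short derivation sketch (ours, for the reader's convenience):

```latex
\|v\|_b \le \|v\|_a \le n^{\frac{1}{a}-\frac{1}{b}}\,\|v\|_b
\qquad \text{for } v \in \mathbb{R}^n,\; 1 \le a < b \le \infty.
```

For \(r<p\), taking \(a=r\), \(b=p\) gives \(\|v\|_p\le\|v\|_r\), so an \(L^r\) ball of radius \(\varepsilon\) is contained in the \(L^p\) ball of radius \(\varepsilon\) and CA's guarantee applies directly. For \(r>p\), taking \(a=p\), \(b=r\) gives \(\|v\|_p\le n^{\frac{1}{p}-\frac{1}{r}}\|v\|_r\), so containment in the \(L^p\) ball of radius \(\varepsilon\) holds whenever the \(L^r\) radius is at most \(n^{\frac{1}{r}-\frac{1}{p}}\varepsilon\).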
Note that since the only expensive step in CA consists in applying an adversarial attack to an input, the complexity is the same as that of a regular attack.
**Attacking with a Higher Radius.** In addition to robustness guarantees for a chosen \(\varepsilon\), CA provides a form of computational robustness even beyond its intended radius. To prove this statement, we first formalize the task of attacking CA (referred to as Counter-CA, or CCA). This involves finding, given a starting point \(\mathbf{x}\), an input \(\mathbf{x}^{\prime}\in B_{p}(\mathbf{x},\varepsilon^{\prime})\) that is adversarial but not flagged by CA, i.e. such that \(f(\mathbf{x}^{\prime})\in C(\mathbf{x})\wedge\forall\mathbf{x}^{\prime\prime}\in B_{p}( \mathbf{x}^{\prime},\varepsilon).f(\mathbf{x}^{\prime\prime})=f(\mathbf{x}^{\prime})\). Note that, _for \(\varepsilon^{\prime}\leq\varepsilon\), no solution exists_, since \(\mathbf{x}\in B_{p}(\mathbf{x}^{\prime},\varepsilon)\) and \(f(\mathbf{x})\neq f(\mathbf{x}^{\prime})\).
**Theorem 4.3** (Attacking CA with a higher radius is \(\Sigma_{2}^{P}\)-complete).: _Let \(CCA_{p}\) be the set of all tuples \(\langle\mathbf{x},\varepsilon,\varepsilon^{\prime},C,f\rangle\) such that:_
\[\exists\mathbf{x}^{\prime}\in B_{p}(\mathbf{x},\varepsilon^{\prime}). \tag{7}\] \[(f(\mathbf{x}^{\prime})\in C(\mathbf{x})\wedge\forall\mathbf{x}^{\prime\prime }\in B_{p}(\mathbf{x}^{\prime},\varepsilon).f(\mathbf{x}^{\prime\prime})=f(\mathbf{x}^{ \prime}))\]
_where \(\mathbf{x}\in X\), \(X\) is an FSFP space, \(\varepsilon^{\prime}>\varepsilon\), \(f(\mathbf{x})\not\in C(\mathbf{x})\), \(f\) is a ReLU classifier and whether an output is in \(C(\mathbf{x}^{\star})\) for some \(\mathbf{x}^{\star}\) can be decided in polynomial time. Then \(CCA_{\infty}\) is \(\Sigma_{2}^{P}\)-complete._
**Corollary 4.4**.: \(CCA_{p}\) _is \(\Sigma_{2}^{P}\)-complete for all \(0<p\leq\infty\)._
**Corollary 4.5**.: _Theorem 4.3 also holds if, instead of ReLU classifiers, we consider the more general set of polynomial-time classifiers w.r.t. the size of the tuple._
In other words, under our assumptions, fooling CA can be harder than running it, thus flipping the computational asymmetry. Corollary 3.6 also implies that it is impossible to obtain a better gap between running the model and attacking it, from a Polynomial Hierarchy point of view (e.g. a \(P\)-time model that is \(\Sigma_{2}^{P}\)-hard to attack). Note that, due to the worst-case semantics of Theorem 4.3, fooling CA can be expected to be easy in practice when \(\varepsilon^{\prime}\gg\varepsilon\): this is however a very extreme case, where the threshold might have been poorly chosen or the adversarial examples might be very different from genuine examples.
**Partial Robustness.** While using exact attacks with CA is necessary for the best formal behavior, the approach remains capable of providing partial guarantees when used with either heuristic or lower-bounding approaches.
In particular, if a heuristic attack returns an example \(\mathbf{x}^{\prime}\) with \(\|\mathbf{x}-\mathbf{x}^{\prime}\|_{p}\leq\varepsilon\), then \(f\) is guaranteed to be locally non-robust on \(\mathbf{x}\). However, a heuristic attack failing to find an adversarial example does not guarantee that the model is locally robust.
Conversely, if we replace the attack with an optimization method capable of returning a lower bound \(lb(\mathbf{x})\) on the decision boundary distance (e.g. a Mathematical Programming solver), we get the opposite result: if the method proves that \(lb(\mathbf{x})>\varepsilon\), then \(f\) is locally robust on \(\mathbf{x}\), but \(f\) might be robust even if the method fails to prove it.
In other words, with heuristic attacks false positives are impossible, while with lower-bound methods false negatives are impossible. Note that these two methods can be combined to improve scalability while retaining some formal guarantees.
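A minimal sketch of such a combination (our own; `attack_distance` and `lower_bound` are placeholder callables for, e.g., a heuristic attack and a Mathematical Programming solver):

```python
def check_local_robustness(x, eps, attack_distance, lower_bound):
    # attack_distance(x): distance of the best adversarial example found by a
    # heuristic attack, or None on failure; an upper bound on the true
    # decision boundary distance.
    # lower_bound(x): certified lower bound on the decision boundary distance.
    d_ub = attack_distance(x)
    if d_ub is not None and d_ub <= eps:
        return "non-robust"  # concrete counterexample: no false positives
    if lower_bound(x) > eps:
        return "robust"      # certified: no false negatives
    return "unknown"         # neither method was conclusive on this input
```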
These considerations provide further motivation for research in heuristic attacks, since every improvement in that field could lead to more reliable or faster robustness "certificates". Additionally, they emphasize the potential of lower bounding techniques (e.g. guaranteed approximation algorithms) as efficient certification tools. Finally, while we think that CA is an interesting technique per se, we reiterate that the main appeal of the approach is to prove by means of an example that it is possible to circumvent the computational asymmetry we identified. We hope that future work will expand on this research direction, developing approaches that are both more efficient and with more formal guarantees.
## 5 An Evaluation of Adversarial Attacks as Certification Tools
CA highlights an interesting aspect of adversarial attacks: since attacking a classifier and certifying its local robustness are complementary tasks, **adversarial attacks can be used to build inference-time certification techniques**. This observation raises interest in evaluating existing (heuristic) attack algorithms in terms of their ability to serve as defenses (of which CA is just one of many possible applications). For example, in contexts where provable robustness is too resource-intensive, one could use sufficiently powerful heuristic attacks to determine with great accuracy if the model is locally robust (but without formal guarantees).
From this point of view, it should be noted that checking robustness _only requires evaluating the decision boundary distance_, and not necessarily finding the adversarial example that is closest to an input \(\mathbf{x}\), i.e. the optimal solution of Equation (1). As a consequence, an attack does not need to perform well to be usable as a defense, but just to come _predictably close_ to the decision boundary. For example, an algorithm that consistently overestimates the decision boundary distance by a 10% factor would be as good as an exact attack for many practical purposes, since we could simply apply a correction to obtain an exact estimate. This kind of evaluation is natural when viewing the issue from the perspective of our CA method, but to the best of our knowledge it has never been observed in the literature.
In this section, we thus empirically evaluate the quality of heuristic attacks. Specifically, we test whether \(\|\mathbf{x}-\mathbf{x}_{h}\|_{p}\), where \(\mathbf{x}_{h}\) is an adversarial example found by a heuristic attack, is predictably close to the true decision boundary distance \(d_{p}^{*}(\mathbf{x})\). To the best of our knowledge, the only other work that performed a somewhat similar evaluation is Carlini et al. (2017), which evaluated the optimality of the Carlini & Wagner attack on 90 MNIST samples for a \(\sim\)20k parameter network.
Consistent with Athalye et al. (2018) and Weng et al. (2018), we focus on the \(L^{\infty}\) norm. Additionally, we focus on _pools_ of heuristic attacks. The underlying rationale is that different adversarial attacks should be able to cover for their reciprocal blind spots, providing a more reliable estimate. Since this evaluation is empirical, it requires sampling from a chosen distribution, in our case specific classifiers and the MNIST (LeCun et al., 1998) and CIFAR10 (Krizhevsky et al., 2009) datasets. This means that the results are not guaranteed for other distributions, or for other defended models: studying how adversarial attacks fare in these cases is an important topic for future work.
**Experimental Setup.** We randomly selected \(\sim\)2.3k samples each from the test set of two datasets, MNIST and CIFAR10. We used three architectures per dataset (named A, B and C), each trained in three settings, namely standard training, PGD adversarial training (Madry et al., 2018) and PGD adversarial training with ReLU loss and pruning (Xiao et al., 2019) (from now on referred to as ReLU training), for a total of nine configurations per dataset.
Since our analysis requires computing exact decision boundary distances, and size and depth both have a strong adverse impact on solver times, we used small and relatively shallow networks with parameter counts between \(\sim\)2k and \(\sim\)80k. For this reason, the natural accuracies for standard training are significantly below the state of the art (89.63% - 95.87% on MNIST and 47.85% - 55.81% on CIFAR10). Adversarial training also had a negative effect on natural accuracies (84.54% - 94.24% on MNIST and 45.19% - 51.35% on CIFAR10), similarly to ReLU training (83.69% - 93.57% on MNIST and 32.27% - 37.33% on CIFAR10). Note that using reachability analysis tools for NNs, such as (Gehr et al., 2018), capable of providing _upper bounds_ on the decision boundary in a reasonable time would not be sufficient for our goal: indeed both lower and upper bounds on the decision boundary distance could be arbitrarily far from \(d^{*}(\mathbf{x})\), thus preventing us from drawing any firm conclusion.
We first ran a pool of heuristic attacks on each example, namely BIM (Kurakin et al., 2017), Brendel & Bethge (Brendel et al., 2019), Carlini & Wagner (Carlini & Wagner, 2017), Deepfool (Moosavi-Dezfooli et al., 2016), Fast Gradient (Goodfellow et al., 2015) and PGD (Madry et al., 2018), in addition to simply adding uniform noise to the input. Our main choice of attack parameters (from now on referred to as the "strong" parameter set) prioritizes finding adversarial examples at the expense of computational time. For each example, we considered the nearest feasible adversarial example found by any attack in the pool. We then ran the exact solver-based attack MIPVerify (Tjeng et al., 2019), which is able to find the nearest adversarial example to a given input. The entire process (including test runs) required \(\sim\)45k core-hours on an HPC cluster. Each node of the cluster has 384 GB of RAM and features two Intel CascadeLake 8260 CPUs, each with 24 cores and a clock
frequency of 2.4GHz. We removed the examples for which MIPVerify crashed in at least one setting, obtaining 2241 examples for MNIST and 2269 for CIFAR10. We also excluded from our analysis all adversarial examples for which MIPVerify did not find optimal bounds (atol = 1e-5, rtol = 1e-10), which represent on average 11.95% of the examples for MNIST and 16.30% for CIFAR10. Additionally, we ran the same heuristic attacks with a faster parameter set (from now on referred to as the "balanced" set) on a single machine with an AMD Ryzen 5 1600X six-core 3.6 GHz processor, 16 GBs of RAM and an NVIDIA GTX 1060 6 GB GPU. The process took approximately 8 hours. Refer to Appendix H for a more comprehensive overview of our experimental setup.
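As an illustration of the pool's role, a minimal sketch of how the per-example distance estimate is obtained (our own; the `attacks` list stands in for BIM, Brendel & Bethge, Carlini & Wagner, etc.):

```python
import numpy as np

def pool_distance(attacks, model, x, p=np.inf):
    # Each attack returns a feasible adversarial example for x, or None.
    distances = [
        np.linalg.norm((x_adv - x).ravel(), ord=p)
        for attack in attacks
        if (x_adv := attack(model, x)) is not None
    ]
    # The pool's estimate is the nearest adversarial example found by any
    # attack; it always upper-bounds the true decision boundary distance.
    return min(distances) if distances else None
```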
**Distance Approximation.** Across all settings, the mean distance found by the strong attack pool is 4.09\(\pm\)2.02% higher for MNIST and 2.21\(\pm\)1.16% higher for CIFAR10 than the one found by MIPVerify. For 79.81\(\pm\)15.70% of the MNIST instances and 98.40\(\pm\)1.63% of the CIFAR10 ones, the absolute difference is less than 1/255, which is the minimum distance in 8-bit image formats. The balanced attack pool performs similarly, finding distances that are on average 4.65\(\pm\)2.16% higher for MNIST and 2.04\(\pm\)1.13% higher for CIFAR10. The difference is below 1/255 for 77.78\(\pm\)16.08% of MNIST examples and 98.74\(\pm\)1.13% of CIFAR10 examples. We compare the distances found by the strong attack pool for MNIST A and CIFAR10 (using standard training) with the true decision boundary distances in Figure 1. Refer to Appendix J for the full data.
For all datasets, architectures and training techniques there appears to be a **strong, linear correlation between the distances found by the heuristic attacks and the true decision boundary distance**. We chose to measure this by training a linear regression model linking the two distances. For the strong parameter set, we find that the average \(R^{2}\) across all settings is 0.992\(\pm\)0.004 for MNIST and 0.997\(\pm\)0.003 for CIFAR10. The balanced parameter set performs similarly, achieving an \(R^{2}\) of 0.990\(\pm\)0.006 for MNIST and 0.998\(\pm\)0.002 for CIFAR10. From these results, we conjecture that increasing the computational budget of heuristic attacks does not necessarily improve predictability, although further tests would be needed to confirm such a claim. Note that such a linear model can also be used to correct decision boundary distance overestimates in the context of heuristic CA. Another (possibly more reliable) procedure would consist in using quantile fitting; results for this approach are reported in Appendix I.
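As an illustration of this correction step, a minimal sketch with simulated stand-ins for the measured distances (in the experiments, `d_true` would come from MIPVerify and `d_heur` from the attack pool):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated measurements: a pool that overestimates the true decision
# boundary distance by a few percent, mimicking the observed behavior.
d_true = rng.uniform(0.01, 0.2, size=500)
d_heur = d_true * (1.0 + rng.normal(0.04, 0.01, size=500))

# Least-squares fit d_true ~ a * d_heur + b, usable to correct overestimates.
A = np.vstack([d_heur, np.ones_like(d_heur)]).T
(a, b), *_ = np.linalg.lstsq(A, d_true, rcond=None)

pred = a * d_heur + b
r2 = 1.0 - ((d_true - pred) ** 2).sum() / ((d_true - d_true.mean()) ** 2).sum()
print(f"slope={a:.3f}, intercept={b:.4f}, R^2={r2:.4f}")
```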
**Attack Pool Ablation Study.** Due to the nontrivial computational requirements of running several attacks on the same input, we now study whether it is possible to drop some attacks from the pool without compromising its predictability. Specifically, we consider all possible pools of size \(n\) (with a success rate of 100%) and pick the one with the highest average \(R^{2}\) value over all architectures and training techniques. As shown in Figure 2, adding attacks _does_ increase predictability, although with diminishing returns. For example, the pool composed of the Basic Iterative Method, the Brendel & Bethge Attack and the Carlini & Wagner attack achieves on its own an \(R^{2}\) value of 0.988\(\pm\)0.004 for MNIST+strong, 0.986\(\pm\)0.005 for MNIST+balanced, 0.935\(\pm\)0.048 for CIFAR10+strong and 0.993\(\pm\)0.003 for CIFAR10+balanced. Moreover, dropping both the Fast Gradient Sign Method and uniform noise leads to negligible (\(\ll 0.001\)) absolute variations in the mean \(R^{2}\). These findings suggest that, as far as consistency is concerned, **the choice of attacks represents a more important factor than the number of attacks** in a pool. Refer to Appendix K for a more in-depth overview of how different attack selections affect consistency and accuracy.

Figure 1: Distances of the nearest adversarial example found by the strong attack pool compared to those found by MIPVerify on MNIST A and CIFAR10 A with standard training. The black line represents the theoretical optimum. Note that no samples are below the black line.
**Efficient Attacks.** We then explore whether it is possible to increase the efficiency of attacks by optimizing for fast, rather than accurate, results. We pick three new parameter sets (namely Fast-100, Fast-1k and Fast-10k) designed to find the nearest adversarial examples within the respective number of calls to the model. We find that while Deepfool is not the strongest adversarial attack (see Appendix J), it provides adequate results in very few model calls. For details on these results see Appendix L.
**UG100 Dataset.** We collect all the adversarial examples found by both MIPVerify and the heuristic attacks into a new dataset, which we name UG100. UG100 can be used to benchmark new adversarial attacks. Specifically, we can determine how strong an attack is by comparing it to both the theoretical optimum and heuristic attack pools. Another potential application involves studying factors that affect whether adversarial attacks perform sub-optimally.
## 6 Conclusion
In this work, we provided three contributions in the context of adversarial robustness.
First, we proved that attacking a ReLU classifier is \(NP\)-hard, while training a robust model of the same type is \(\Sigma_{2}^{P}\)-hard. This result implies that defending is in the worst case harder than attacking; moreover, due to the broad applicability assumptions and the structure of its proof, it represents a reasonable explanation for the difficulty gap often encountered when building robust classifiers. The intuition behind our proofs can also help to pave the way for research into more tractable classes.
Second, we showed how inference-time techniques can sidestep the aforementioned computational asymmetry, by introducing a proof-of-concept defense called Counter-Attack (CA). The central idea in CA is to check robustness by relying on adversarial attacks themselves: this strategy provides robustness guarantees, can invert the computational asymmetry, and may serve as the basis for devising more advanced inference-time defenses.
Finally, motivated by the last observation, we provided an empirical evaluation of heuristic attacks in terms of their ability to consistently approximate the decision boundary distance. We found that state-of-the-art heuristic attacks are indeed very reliable approximators of the decision boundary distance, suggesting that even heuristic attacks might be used in defensive contexts.
Our theoretical results highlight a structural challenge in adversarial ML, one that could be sidestepped through not only our CA approach, but potentially many more. Additionally, we showed that adversarial attacks can also play a role in asymmetry-free robustness, thus opening up new research directions on their defensive applications. We hope that our observations, combined with our formal analysis and our UG100 benchmark, can serve as the starting point for future research into these two important areas.
## Acknowledgements
The project leading to this application has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No. 101070149. We also acknowledge the CINECA award under the ISCRA initiative, for the availability of high performance computing resources and support. Finally, we thank Andrea Borghesi, Andrea Iacco and Rebecca Montanari for their advice and support.
Figure 2: Best mean \(R^{2}\) value in relation to the number of attacks in the pool. |
2308.14852 | SynthDistill: Face Recognition with Knowledge Distillation from
Synthetic Data | State-of-the-art face recognition networks are often computationally
expensive and cannot be used for mobile applications. Training lightweight face
recognition models also requires large identity-labeled datasets. Meanwhile,
there are privacy and ethical concerns with collecting and using large face
recognition datasets. While generating synthetic datasets for training face
recognition models is an alternative option, it is challenging to generate
synthetic data with sufficient intra-class variations. In addition, there is
still a considerable gap between the performance of models trained on real and
synthetic data. In this paper, we propose a new framework (named SynthDistill)
to train lightweight face recognition models by distilling the knowledge of a
pretrained teacher face recognition model using synthetic data. We use a
pretrained face generator network to generate synthetic face images and use the
synthesized images to learn a lightweight student network. We use synthetic
face images without identity labels, mitigating the problems in the intra-class
variation generation of synthetic datasets. Instead, we propose a novel dynamic
sampling strategy from the intermediate latent space of the face generator
network to include new variations of the challenging images while further
exploring new face images in the training batch. The results on five different
face recognition datasets demonstrate the superiority of our lightweight model
compared to models trained on previous synthetic datasets, achieving a
verification accuracy of 99.52% on the LFW dataset with a lightweight network.
The results also show that our proposed framework significantly reduces the gap
between training with real and synthetic data. The source code for replicating
the experiments is publicly released. | Hatef Otroshi Shahreza, Anjith George, Sébastien Marcel | 2023-08-28T19:15:27Z | http://arxiv.org/abs/2308.14852v1 | # SynthDistill: Face Recognition with Knowledge Distillation from Synthetic Data
###### Abstract
State-of-the-art face recognition networks are often computationally expensive and cannot be used for mobile applications. Training lightweight face recognition models also requires large identity-labeled datasets. Meanwhile, there are privacy and ethical concerns with collecting and using large face recognition datasets. While generating synthetic datasets for training face recognition models is an alternative option, it is challenging to generate synthetic data with sufficient intra-class variations. In addition, there is still a considerable gap between the performance of models trained on real and synthetic data. In this paper, we propose a new framework (named SynthDistill) to train lightweight face recognition models by distilling the knowledge of a pre-trained teacher face recognition model using synthetic data. We use a pretrained face generator network to generate synthetic face images and use the synthesized images to learn a lightweight student network. We use synthetic face images without identity labels, mitigating the problems in the intra-class variation generation of synthetic datasets. Instead, we propose a novel dynamic sampling strategy from the intermediate latent space of the face generator network to include new variations of the challenging images while further exploring new face images in the training batch. The results on five different face recognition datasets demonstrate the superiority of our lightweight model compared to models trained on previous synthetic datasets, achieving a verification accuracy of 99.52% on the LFW dataset with a lightweight network. The results also show that our proposed framework significantly reduces the gap between training with real and synthetic data. The source code for replicating the experiments is publicly released.
## 1 Introduction
Recent advancements in face recognition systems have been driven by deep neural networks trained on large-scale datasets, leading to remarkable progress in accuracy [16, 27]. However, the state-of-the-art face recognition networks are often computationally heavy and the deployment of these networks on edge devices poses practical challenges. Nevertheless, it is possible to develop efficient networks from these large models that achieve comparable accuracy with significantly reduced computational load, making them suitable for edge device deployment.
One strategy is training lightweight and efficient networks on large-scale face recognition datasets [1, 6, 9, 18, 30]. However, training an efficient face recognition model using large-scale face recognition datasets requires access to such a dataset. Nonetheless, large-scale face recognition datasets, such as VGGFace2 [11], MS-Celeb [19], WebFace [56], etc., were collected by crawling images from the Internet, thus raising legal, ethical, and privacy concerns [10]. To address such concerns, several recent works proposed generating synthetic face datasets and using the synthetic face images for training face recognition models [3, 7, 28]. However, generating synthetic face datasets with sufficient inter-class and intra-class variations is still a challenging problem. Our experimental results also show that there is still a large gap in recognition performance between training a lightweight face recognition model on real data and on existing synthetic face datasets.

Figure 1: Schematic showing the proposed approach (SynthDistill). The latent space of StyleGAN is first sampled from the \(\mathcal{Z}\) space, and then dynamically re-sampled from the \(\mathcal{W}\) space based on teacher-student agreement. This dynamic re-sampling leads to the generation of challenging samples that facilitate efficient learning.
Another strategy to train a lightweight face recognition model is to transfer the knowledge of a model trained on a large dataset to a lightweight network through knowledge distillation [21]. However, knowledge distillation from a teacher model often requires access to the original or another large-scale real dataset. Meanwhile, access to a real dataset for knowledge distillation may not always be feasible due to the size of the datasets. Even if there is access to a real large-scale dataset, there remain ethical and legal concerns about using large-scale face recognition datasets crawled from the Internet. In this work, we propose a new framework to distill the knowledge of a pretrained teacher using synthetic face images without identity labels, thus mitigating the need for real identity-labeled data during the distillation phase. We propose dynamic sampling from the intermediate latent space of a StyleGAN to generate new images and enhance training.
In contrast to previous approaches that rely on the static generation of synthetic face datasets [3, 7, 28] and then use the generated dataset for training the FR model, we combine these two steps, generating synthetic images online and training the lightweight network within the image generation loop of a knowledge distillation based framework. This avoids the requirement of hard identity labels for the generated images, and further assists the generation network to produce challenging samples through a feedback mechanism while exploring more image variations, thus enabling the training of more robust models. In addition, compared to previous works for the training of face recognition models on synthetic datasets, our proposed knowledge distillation framework does not require identity labels in the training, simplifying the process of generating synthetic face images. We should also note that previous synthetic datasets still used a face recognition model in the dataset generation pipeline.
In our case, we also employ a pre-trained face recognition model in our pipeline, but in the role of a teacher. However, instead of generating a static synthetic dataset with identity labels, we dynamically create synthetic face images during the knowledge distillation process. This novel approach allows us to frame our knowledge distillation as a label-free training paradigm, utilizing synthetic data to effectively train lightweight face recognition models.
It is noteworthy that we do not need access to the complete whitebox knowledge of the teacher network in our proposed knowledge distillation approach, and thus our method can also be used with blackbox access to the teacher model, as long as the teacher's embeddings are available. We adapt the TinyNet [20] architecture and train lightweight face recognition models (called _TinyFaR_) in our knowledge distillation approach. We provide an extensive experimental evaluation on five different face recognition benchmarking datasets, including LFW [23], CA-LFW [54], CP-LFW [53], CFP-FP [40] and AgeDB-30 [36]. Our experimental results demonstrate the effectiveness of our approach in achieving efficient face recognition systems with reduced computational requirements, while avoiding the use of real data for knowledge distillation. This opens new possibilities for developing privacy-aware and resource-efficient face recognition models suitable for edge devices. Fig. 1 illustrates the general block diagram of our proposed knowledge distillation framework with dynamic sampling.
The main contributions of this work are listed below:
* We propose a novel framework to train a lightweight face recognition model using knowledge distillation. The proposed knowledge distillation framework is based on synthetic face images and does not require real training data. In addition, we do not need identity-labeled training data in our knowledge distillation framework, mitigating problems in generating synthetic face recognition datasets.
* Our proposed knowledge distillation framework is based on dynamic sampling of difficult samples during training to enhance the training. Dynamic sampling helps the student network to simultaneously learn from new images (i.e., increase generalization) while focusing on difficult samples. Therefore, the training images are synthesized online and during the distillation process.
* We provide extensive experimental results on different face recognition datasets, showing superior recognition accuracy for lightweight face recognition models trained in our framework compared to training lightweight face recognition from scratch using other synthetic datasets.
The remainder of the paper is organized as follows. In Section 2 we review the related works in the literature. We describe our proposed framework for knowledge distillation with synthetic data using dynamic latent sampling in Section 3. We report our experimental results in Section 4 and also discuss our results in Section 5. Finally, the paper is concluded in Section 6.
## 2 Related works
In this section, we discuss the relevant literature on synthetic datasets, light-weight face recognition networks, and
knowledge distillation in the context of face recognition.
### Synthetic Datasets
Several works have explored the generation of synthetic datasets for training face recognition models. It is worth noting that many large-scale datasets are typically collected through web-crawling without explicit informed consent. By leveraging synthetic datasets, it becomes possible to mitigate concerns regarding the privacy of individuals while also potentially addressing issues such as bias [24, 41]. These synthetic datasets are often generated using variations of StyleGAN, 3D models, and diffusion models.
Several prior works, including FaceID-GAN [42] and identity-preserving face generation [4, 51], have employed synthesis techniques to generate facial images. Notably, FF-GAN [51] and DiscoFaceGAN [17] leverage 3D priors (e.g., 3DMM [5]). In [37], the authors proposed an approach called SynFace which incorporates identity mixup (IM) and domain mixup (DM) techniques to address the performance gap. They use a small portion of labeled real data in the training process to reduce the domain gap between real and synthetic data and improve the performance. Additionally, the controllable face synthesis model provides a convenient means to manipulate various aspects of synthetic face generation, such as pose, expression, illumination, the number of identities, and samples per identity. Boutros et al. [7] presented a method to generate synthetic data using a class-conditional generative adversarial network. The authors trained the StyleGAN2-ADA model [25] on the CASIA-WebFace [49] dataset, using identities as class labels. They have conducted experiments using the generated SFace dataset to show its utility in training face recognition models. Bae et al. [3] introduced a large-scale synthetic dataset for face recognition named DigiFace-1M. This dataset was created by utilizing a computer graphics pipeline to render digital faces. Each identity within the dataset is generated by incorporating randomized variations in facial geometry, texture, and hairstyle. The rendered faces exhibit diverse attributes such as different poses, expressions, hair color, hair thickness, and density, as well as accessories. Through the implementation of aggressive data augmentation techniques, they reduced the domain gap between the generated images and real face images, leading to gains in face recognition performance. In [28], the authors proposed a Dual Condition Face Generator (DCFace) utilizing a diffusion model. This approach incorporates a novel patch-wise style extractor and time-step dependent ID loss, enabling DCFace to consistently generate face images depicting the same individual in different styles, while maintaining precise control over the process.
Despite the advantages of synthetic data in terms of privacy and consent, the performance of face recognition models trained on these datasets falls short when compared to models trained on real data. This severely limits the real-world usage of models trained on synthetic datasets. To address these challenges, we propose a novel strategy for training face recognition models using synthetic data within a knowledge distillation framework. Our method generates data dynamically online and eliminates the need for real data during the distillation phase.
### Efficient Face Recognition
As edge computing has gained prevalence, there is an increased focus on developing lightweight face recognition models without compromising accuracy. In the initial phase of efficient model development, Wu et al. introduced LightCNN, a lightweight architecture [47]. MobileNets [22, 39] employed depth-wise separable convolutions to improve efficiency. Building upon the MobileNet architecture, MobileFaceNets were designed for real-time face verification tasks [15]. The concept of MixConv, which incorporates multiple kernel sizes in a single convolution, was used to develop MixFaceNet networks for lightweight face recognition [44, 6]. Inspired by ShuffleNetV2 [34], ShuffleFaceNet models were proposed for face recognition, with parameter counts ranging from 0.5M to 4.5M and verification accuracies exceeding 99.20% on the LFW dataset [35]. Neural architecture search (NAS) was utilized in [9] to automatically design an efficient network called PocketNet for face recognition. The PocketNet architecture was learned using the differentiable architecture search (DARTS) algorithm on the CASIA-WebFace dataset, and knowledge distillation (KD) was employed during training. Yan et al. [48] employed knowledge distillation (KD) and variable group convolutions to address computational intensity imbalances in face recognition networks. Alansari et al. proposed GhostFaceNets, which exploit redundancy in convolutional layers to create compact networks [1]. These modules generate a fixed percentage of convolutional feature maps using computationally inexpensive depth-wise convolutions. Recently, George et al. introduced EdgeFace, a hybrid CNN-Transformer architecture that achieved strong verification performance with minimal FLOP and parameter complexity [18].
### Knowledge Distillation
The concept of knowledge distillation was first introduced by Hinton et al. [21]. The primary goal of knowledge distillation is to transfer the knowledge from a pre-trained, complex "teacher" model to a simpler, more efficient "student" model. In classification tasks, distillation is primarily performed using soft labels from the teacher together with the ground truth [21]. Another approach involves feature-based learning, where the student aims to match the intermediate layers of the teacher [38]. Additionally, contrastive-based methods have also been employed [45] for distilling the knowledge of a teacher to a student.
Over the years, several methods have been proposed in the literature [38, 31, 26, 14, 32, 55, 52, 12] to enhance the efficiency of distillation. However, most of these methods rely on the availability of the original or similar training datasets, which can be limited due to security and privacy concerns. Consequently, traditional data-dependent distillation methods become impractical. To address this challenge, researchers have introduced data-free knowledge distillation (DFKD), which does not rely on the original or real training data. DFKD aims to develop a distillation strategy using a synthesis-based approach. These approaches utilize either a whitebox teacher model [33, 13, 50] or data augmentation techniques [2] to generate synthetic samples. These synthetic samples act as substitute training datasets for distillation. By training on such synthetic data, the student model can effectively learn from the teacher model without needing access to real training data, making it privacy-friendly. Along the same lines, Boutros et al. [8] proposed an unsupervised face recognition model based on unlabeled synthetic data. They used contrastive learning to maximize the similarity between two augmented images (using geometric and color transformations) of the same synthetic image. However, since the data augmentation cannot provide enough inter-class variations, it affects the performance of the trained face recognition model when evaluated on benchmark datasets.
## 3 Proposed Framework
In this section, we describe our proposed framework for training a lightweight face recognition model on synthetic data using knowledge distillation. We describe the architecture of the lightweight face recognition model in Section 3.1 and explain our knowledge distillation framework using synthetic data in Section 3.2.
### Lightweight Network Architecture
As discussed in Section 2, lightweight face recognition models in the literature usually adapt lightweight neural network models for face recognition tasks. However, our knowledge distillation framework can be applied to any lightweight model with only the condition that the output of the lightweight network should have the same dimensions as the embedding of the teacher model. To eliminate this condition so that the proposed framework can be used for any lightweight network with different output sizes, we use a fully connected layer at the output of the lightweight network to have output with the same size as the teacher model.
In this paper, we use TinyNet [20] as the backbone for the lightweight FR model. TinyNet is an optimized version of EfficientNet [43], which uses a structure that simultaneously enlarges the resolution, depth, and width in a Rubik's cube for neural networks and finds networks with high efficiency by changing these three dimensions. However, the authors in [20] show that resolution and depth are more important than width for small networks, and propose smaller models derived from EfficientNet-B0 as different variations of TinyNet, which are efficient and achieve high accuracy in recognition tasks. The feature layer of TinyNet has 1280 dimensions and the embedding of our teacher network has 512 dimensions. Therefore, we add a fully connected layer to generate a 512-length feature at the output of TinyNet, and call our lightweight face recognition network based on TinyNet _TinyFaR_. We should note that, to our knowledge, the TinyNet lightweight network structure has not been used before for face recognition in the literature.

Figure 2: Schematic showing the proposed approach (SynthDistill). In step 1, the \(\mathcal{Z}\) space of the StyleGAN is sampled to generate face images. In step 2, the \(\mathcal{W}\) space is re-sampled based on the teacher-student agreement to generate more challenging samples. The student model is updated based on the distillation loss \(\mathcal{L}_{\text{KD}}\); all the other network blocks remain frozen.
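A minimal sketch of this construction (ours; `tinynet_backbone` is a placeholder for any TinyNet variant that outputs 1280-dimensional features):

```python
import torch.nn as nn

class TinyFaR(nn.Module):
    """TinyNet backbone followed by a fully connected head that maps the
    1280-dimensional feature to a 512-dimensional embedding, matching the
    teacher's embedding size."""

    def __init__(self, tinynet_backbone):
        super().__init__()
        self.backbone = tinynet_backbone
        self.head = nn.Linear(1280, 512)

    def forward(self, x):
        return self.head(self.backbone(x))
```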
### Knowledge Distillation with Synthetic Data
Let \(F_{\text{T}}\) and \(F_{\text{S}}\) denote the teacher1 and student (lightweight) face recognition models, respectively. In this paper, we consider StyleGAN [25] as a pretrained face generator model, which consists of a mapping network \(M\) and a generator network \(G\). The mapping network takes a noise \(\mathbf{z}\in\mathcal{Z}\sim N(0,\mathbb{I})\) from the input latent space \(\mathcal{Z}\) with Gaussian distribution and generates an intermediate latent code \(\mathbf{w}\in\mathcal{W}\). Then, the intermediate latent code \(\mathbf{w}\) is used by the generator network to generate a face image \(I=G(\mathbf{w})\). In our knowledge distillation framework, we first generate a batch of synthetic face images and extract the teacher's embeddings \(\mathbf{e}_{\text{T}}=F_{\text{T}}(I)\). Then, we train the student network by minimizing the mean squared error (MSE) of the teacher's and student's embeddings as follows:
Footnote 1: Note that the teacher model can be blackbox and we do not use teacher’s gradients in our method.
\[\mathcal{L}_{\text{KD}}=\left\|\mathbf{e}_{\text{T}}-F_{\text{S}}(I)\right\|_{2}^{2}. \tag{1}\]
Minimizing the MSE of embeddings helps the student network to extract embeddings similar to the teacher's embeddings from a given face image.
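A minimal sketch of one distillation step under these definitions (ours; `generator`, `teacher`, and `student` stand for \(G\), \(F_{\text{T}}\), and \(F_{\text{S}}\)):

```python
import torch
import torch.nn.functional as F

def distill_step(generator, teacher, student, optimizer, w):
    # Synthesize a batch of face images from latent codes w and extract the
    # teacher's embeddings; neither the generator nor the teacher is updated.
    with torch.no_grad():
        images = generator(w)
        e_t = teacher(images)
    # Eq. 1: MSE between teacher and student embeddings, student-only update.
    loss = F.mse_loss(student(images), e_t)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```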
After updating the weights of the student network with our knowledge distillation loss \(\mathcal{L}_{\text{KD}}\) (as in Eq. 1), we sample around the intermediate latent codes based on the similarity of the embeddings extracted by the student \(\mathbf{e}_{\text{S}}\) and teacher \(\mathbf{e}_{\text{T}}\) networks in our batch. To this end, we use the cosine similarity and normalize it to the (0,1) interval as follows:
\[\text{SIM}(\mathbf{e}_{\text{T}},\mathbf{e}_{\text{S}})=0.5\times(1+\frac{\mathbf{e}_{ \text{S}}\cdot\mathbf{e}_{\text{T}}}{\left\|\mathbf{e}_{\text{S}}\right\|_{2}\cdot \left\|\mathbf{e}_{\text{T}}\right\|_{2}}). \tag{2}\]
Given the normalized similarity score \(\mathbf{s}_{\text{sim}}=\text{SIM}(\mathbf{e}_{\text{T}},\mathbf{e}_{\text{S}})\), we re-sample around each latent code:
\[\mathbf{w}_{\text{resample}}=\mathbf{w}+c\times\mathbf{s}_{\text{sim}}\times\mathbf{n}, \tag{3}\]
where \(\mathbf{n}\sim\mathcal{N}(0,\mathbb{I})\) is a random noise with Gaussian distribution and \(c\) is a constant coefficient. In our re-sampling based on the similarity score \(\mathbf{s}_{\text{sim}}\) as in Eq. 3, we sample with higher standard deviation values around the latent codes which achieved higher similarity in our initial sampling, thus allowing more variation in the re-sampling. Conversely, for lower similarity between the embeddings extracted by the student and teacher networks, the standard deviation values for re-sampling are smaller, so that during re-sampling we sample around the same latent codes. Therefore, our dynamic re-sampling approach helps us further sample difficult images while exploring the latent space. Fig. 3 illustrates our re-sampling strategy. After re-sampling new latent codes, we generate synthetic face images and optimize our student network with our knowledge distillation loss \(\mathcal{L}_{\text{KD}}\) (as in Eq. 1). Our knowledge distillation framework using synthetic data (named _SynthDistill_) is depicted in Fig. 2 and summarized in Algorithm 1.

Figure 3: Schematic showing the re-sampling strategy in the proposed approach. When teacher-student agreement is high, the re-sampling method generates diverse images. Conversely, when the similarity is low, i.e., when the given sample is challenging, re-sampling generates similar (challenging) samples, facilitating the learning.
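A minimal sketch of the re-sampling step in Eqs. 2-3 (ours; `w` is assumed to be a batch of intermediate latent codes of shape `(B, dim)`):

```python
import torch

def resample_latents(w, e_teacher, e_student, c=1.0):
    # Eq. 2: cosine similarity between teacher and student embeddings,
    # normalized to the (0, 1) interval.
    cos = torch.nn.functional.cosine_similarity(e_teacher, e_student, dim=1)
    s_sim = 0.5 * (1.0 + cos)
    # Eq. 3: Gaussian noise scaled by the similarity, so easy samples (high
    # agreement) get wide exploration while hard samples are revisited closely.
    n = torch.randn_like(w)
    return w + c * s_sim.unsqueeze(1) * n
```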
## 4 Experiments
In this section, we report our experiments and discuss our results. First, in Section 4.1 we describe our evaluation datasets, and in Section 4.2 we explain our training details. In Section 4.3, we compare our method with previous methods based on synthetic data for face recognition in the literature. Then, we report different ablation studies and discuss the effect of each part of our proposed framework in Section 4.4.
### Datasets
We evaluate our trained student models using five different benchmarking datasets. The datasets chosen for evaluation comprise Labeled Faces in the Wild (LFW) [23], Cross-age LFW (CA-LFW) [54], Cross-Pose LFW (CP-LFW) [53], Celebrities in Frontal-Profile in the Wild (CFP-FP) [40], and AgeDB-30 [36]. To maintain consistency with previous work, we present recognition accuracy values on these datasets.
### Training Details
For the teacher network, we use the pretrained ArcFace model2 with the IResNet100 backbone from InsightFace [16], trained on the MS-Celeb dataset [19]. The embedding of our teacher network has 512 dimensions, but the feature layer of TinyNet has 1280 dimensions. Therefore, as discussed in Section 3, we use a fully connected layer at the output of our TinyNet model so that it generates embeddings with the same dimension as the teacher's embeddings, and call the resulting model _TinyFaR_. In our experiments, we use different variations of TinyNet [20] and build the corresponding versions of TinyFaR with 512-dimensional features as our student (lightweight) networks. Table 1 compares IResNet100 with different variations of TinyFaR in terms of computational complexity and number of parameters. We use the StyleGAN2-ADA model [25] to generate synthetic face images with \(256\times 256\) resolution, and crop and resize the images to obtain \(112\times 112\) face images for our knowledge distillation. We train our student networks for 17 epochs, where in each epoch we sample one million images in step 1 of Algorithm 1 and re-sample the same number of images with a re-sampling coefficient of \(c=1\). We train our student networks using the Adam optimizer [29] on a system equipped with a single NVIDIA GeForce RTX 3090. For training face recognition from scratch in our experiments, we use the CosFace [46] loss function. The source codes of our experiments are publicly available3.
Footnote 2: The performance of our teacher network on our benchmarking datasets in terms of recognition accuracy is as follows: LFW (\(99.77\pm 0.28\)), CA-LFW (\(96.10\pm 1.10\)), CP-LFW (\(92.88\pm 1.52\)), CFP-FP (\(96.27\pm 1.10\)), and AgeDB-30 (\(98.25\pm 0.71\)).
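As an illustration of the student construction, a 512-dimensional head can be attached to a TinyNet backbone as sketched below; the use of the `timm` implementation of TinyNet is our assumption for this sketch and not necessarily the paper's exact code.

```python
import timm
import torch.nn as nn

class TinyFaRSketch(nn.Module):
    """TinyNet backbone followed by a linear head producing 512-d face embeddings."""
    def __init__(self, variant="tinynet_a", embedding_dim=512):
        super().__init__()
        # num_classes=0 turns the backbone into a pooled feature extractor
        self.backbone = timm.create_model(variant, pretrained=False, num_classes=0)
        self.head = nn.Linear(self.backbone.num_features, embedding_dim)

    def forward(self, x):
        return self.head(self.backbone(x))
```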
### Comparison
We compare the performance of our proposed knowledge distillation framework with training the same network using synthetic datasets from the literature, including DigiFace [3], SFace [7], and DCFace [28]. In addition, we also consider training with real data using WebFace-4M [56] as our baseline. Table 2 compares these datasets in terms of the number of images and subjects and their generation method. All these datasets are generated to have inter-class and intra-class variation, and thus have identity labels. Therefore, these datasets can be used for training lightweight face recognition from scratch using classification-based training. In contrast, our proposed framework based on a dynamic sampling approach does not provide identity labels and can be used within a knowledge distillation training. Table 3 reports the recognition performance of different variations of TinyFaR when trained with these datasets. As the results in this table show, our knowledge distillation approach with synthetic data (and no identity labels) far outperforms training from scratch using synthetic data and has comparable performance with training using real data.
### Ablation studies
Effect of dynamic sampling: To evaluate the effect of dynamic sampling in our proposed framework, we compare the performance of the network trained with knowledge distillation using our dynamic sampling (sampling + re-sampling) with static sampling (no re-sampling). Table 4 compares the performance of TinyFaR-A trained with knowledge distillation using our dynamic sampling (sampling + re-sampling in \(\mathcal{W}\) space), with one million samples plus one million re-samplings (1M+1M) per epoch, against static sampling with one million and two million samples per epoch. As the results show, knowledge distillation using our dynamic sampling with one million iterations per epoch outperforms static sampling with the same number of iterations or the same total number of samples. The table also compares our dynamic re-sampling in \(\mathcal{W}\) space to dynamic re-sampling in \(\mathcal{Z}\) space. The results show that dynamic re-sampling in both spaces achieves better performance than static sampling; moreover, comparing the two re-sampling spaces, re-sampling in \(\mathcal{W}\) leads to superior performance.

\begin{table}
\begin{tabular}{l|l|c|c} \hline \hline Role in our KD & Network & M FLOPS & M Params \\ \hline \hline Teacher & IResNet100 & 24,179.2 & 65.2 \\ \hline \hline \multirow{3}{*}{Student} & TinyFaR-A & 254.3 & 5.6 \\ & TinyFaR-B & 151.3 & 3.1 \\ \cline{1-1} & TinyFaR-C & 76.8 & 1.8 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Complexity of different network structures

\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline Dataset & \#Images & \#Subjects & Data & Method \\ \hline WebFace-4M [56] & 4,235,242 & 205,990 & Real & Web-collected \\ SFace [7] (IJCB 2022) & 1,885,877 & 10,572 & Synthetic & StyleGAN model \\ DigiFace [3] (WACV 2023) & 1,219,995 & 109,999 & Synthetic & Rendering \\ DCFace [28] (CVPR 2023) & 1,300,000 & 60,000 & Synthetic & Diffusion model \\ \hline \hline \end{tabular}
\end{table}
Table 2: Synthetic and real face datasets
Effect of number of sampled images: To evaluate the effect of the number of sampled images in our dynamic sampling, we train TinyFaR-A with different numbers of iterations (sampling and re-sampling) per epoch in our knowledge distillation approach. Table 5 reports the performance of the trained model for different numbers of iterations. As the results in this table show, more iterations improve our knowledge distillation at the cost of more training computation. To limit computation, we use one million iterations (1M sampling + 1M re-sampling) in our experiments.
Effect of re-sampling coefficient: As another ablation study, we evaluate the effect of the re-sampling coefficient \(c\) in our dynamic sampling. Table 6 reports the performance of TinyFaR-A trained with our knowledge distillation using different re-sampling coefficient values. As the results in this table show, with a higher re-sampling coefficient our dynamic re-sampling can generate more diverse images and achieve higher recognition performance. However, a very high re-sampling coefficient can also cause \(\mathbf{w}_{\text{resample}}\) to fall outside the distribution of \(\mathcal{W}\), and thus degrade the performance.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline coef. (\(c\)) & LFW & CA-LFW & CP-LFW & CFP-FP & AgeDB-30 \\ \hline
0.8 & 99.43 \(\pm\) 0.32 & 94.58 \(\pm\) 0.95 & 86.10 \(\pm\) 2.23 & 90.23 \(\pm\) 1.68 & 94.82 \(\pm\) 1.15 \\
0.9 & 99.40 \(\pm\) 0.41 & 94.90 \(\pm\) 1.09 & 87.23 \(\pm\) 2.02 & 90.36 \(\pm\) 1.45 & 94.72 \(\pm\) 1.07 \\
1 & 99.52 \(\pm\) 0.31 & 94.57 \(\pm\) 1.01 & 87.00 \(\pm\) 1.64 & 90.89 \(\pm\) 1.54 & 94.93 \(\pm\) 1.35 \\
1.1 & 99.47 \(\pm\) 0.44 & 94.95 \(\pm\) 0.84 & 87.53 \(\pm\) 1.78 & 90.81 \(\pm\) 1.61 & 95.13 \(\pm\) 1.08 \\
1.2 & 99.53 \(\pm\) 0.32 & 94.95 \(\pm\) 0.90 & 87.47 \(\pm\) 1.27 & 90.94 \(\pm\) 1.63 & 94.52 \(\pm\) 1.47 \\
1.3 & 99.52 \(\pm\) 0.31 & 94.50 \(\pm\) 0.97 & 87.58 \(\pm\) 1.84 & 91.17 \(\pm\) 1.50 & 95.05 \(\pm\) 1.28 \\
1.4 & 99.48 \(\pm\) 0.32 & 94.77 \(\pm\) 0.97 & 87.40 \(\pm\) 1.74 & 90.56 \(\pm\) 1.49 & 94.78 \(\pm\) 1.29 \\
1.5 & 99.47 \(\pm\) 0.32 & 94.58 \(\pm\) 1.00 & 88.17 \(\pm\) 1.64 & 90.84 \(\pm\) 1.24 & 94.80 \(\pm\) 1.07 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Ablation study on the effect of re-sampling coefficient
## 5 Discussions
The results in Table 3 show that our proposed knowledge distillation framework outperforms training using synthetic datasets from the literature and achieves comparable performance with training using real face images. Comparing the performance of networks trained with previous synthetic datasets to networks trained with real data, we observe a considerable gap in the performance of the resulting face recognition models. Meanwhile, our proposed knowledge distillation method achieves performance that is lower than, but very close to, that of training with real data.
Unlike previous synthetic face datasets, our method does not require identity labels, and thus avoids many of the issues in generating synthetic datasets with inter-class and intra-class variations. Instead, our knowledge distillation approach with dynamic sampling leverages the full capacity of StyleGAN to generate training samples, which helps achieve comparable performance to training with real data. Our proposed framework avoids the requirement of hard identity labels for the generated images, which further assists the generator network in producing challenging samples through a feedback mechanism during our knowledge distillation, thus enabling the training of more robust models. We should also note that, for the generation of synthetic face datasets in the literature, a pretrained face recognition model (which has been trained on a large-scale real face recognition dataset) is used in the generation process. Therefore, training with synthetic face datasets from the literature indirectly benefits from the information and knowledge of the pretrained face recognition model (trained on real images) used for generating the synthetic dataset. In our proposed framework, we also use a pretrained face recognition model, but instead of following the common two-step approach (generating a dataset and then training on it), we use the pretrained face recognition model as a teacher in our knowledge distillation approach and generate the synthetic face images used in our training with no identity labels.
Our ablation studies show the effect of each part of our knowledge distillation framework. In particular, the results demonstrate that our dynamic sampling improves our knowledge distillation compared to static sampling. In addition, with our dynamic sampling, a larger number of iterations or a higher re-sampling coefficient can improve the knowledge distillation, as it lets the student learn the teacher's embeddings for more face images.
## 6 Conclusions
In this paper, we proposed a data-free framework (named _SynthDistill_) to train lightweight face recognition models based on knowledge distillation using synthetic data. We combined the two steps of data generation and lightweight-network training into an online generation-and-training loop within a distillation framework. We dynamically generated synthetic face images during training and distilled the knowledge of a pretrained and blackbox face recognition model. Our dynamic sampling helps the student network see more difficult samples while exploring new samples, leading to more robust training. Our knowledge distillation framework does not require identity-labeled training data, and thus mitigates the challenges of generating intra-class variations in synthesized datasets. We adapted the TinyNet architecture for use in our knowledge distillation framework and trained lightweight face recognition models (called _TinyFaR_). We reported an extensive experimental evaluation on five different face recognition benchmarking datasets: LFW, CA-LFW, CP-LFW, CFP-FP, and AgeDB-30. The experimental results demonstrate the superiority of our proposed knowledge distillation approach compared to training with previous synthetic datasets.
Our experimental results also showed that, while there is a considerable gap between training with existing synthetic datasets and training with real data, our knowledge distillation framework based on synthetic data achieves performance comparable to training with real data, significantly reducing this gap. Achieving such an improvement within our proposed framework shows the potential of training with synthetic data and motivates further research in this direction. Furthermore, our results for lightweight student networks pave the way for developing privacy-aware and resource-efficient face recognition models.
## Acknowledgments
This research is based upon work supported by the H2020 TReSPAsS-ETN Marie Sklodowska-Curie early training network (grant agreement 860813).
This research is also based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via [2022-21102100007]. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
|
2310.01973 | Federated Wasserstein Distance | We introduce a principled way of computing the Wasserstein distance between
two distributions in a federated manner. Namely, we show how to estimate the
Wasserstein distance between two samples stored and kept on different
devices/clients whilst a central entity/server orchestrates the computations
(again, without having access to the samples). To achieve this feat, we take
advantage of the geometric properties of the Wasserstein distance -- in
particular, the triangle inequality -- and that of the associated {\em
geodesics}: our algorithm, FedWad (for Federated Wasserstein Distance),
iteratively approximates the Wasserstein distance by manipulating and
exchanging distributions from the space of geodesics in lieu of the input
samples. In addition to establishing the convergence properties of FedWad, we
provide empirical results on federated coresets and federated optimal transport
dataset distance, that we respectively exploit for building a novel federated
model and for boosting performance of popular federated learning algorithms. | Alain Rakotomamonjy, Kimia Nadjahi, Liva Ralaivola | 2023-10-03T11:30:50Z | http://arxiv.org/abs/2310.01973v1 | # Federated Wasserstein Distance
###### Abstract
We introduce a principled way of computing the Wasserstein distance between two distributions in a federated manner. Namely, we show how to estimate the Wasserstein distance between two samples stored and kept on different devices/clients whilst a central entity/server orchestrates the computations (again, without having access to the samples). To achieve this feat, we take advantage of the geometric properties of the Wasserstein distance - in particular, the triangle inequality - and that of the associated _geodesics_: our algorithm, FedWaD (for Federated Wasserstein Distance), iteratively approximates the Wasserstein distance by manipulating and exchanging distributions from the space of geodesics in lieu of the input samples. In addition to establishing the convergence properties of FedWaD, we provide empirical results on federated coresets and federated optimal transport dataset distances, which we respectively exploit for building a novel federated model and for boosting the performance of popular federated learning algorithms.
## 1 Introduction
**Context.** Federated Learning (FL) is a form of distributed machine learning (ML) dedicated to train a global model from data stored on local devices/clients, while ensuring these clients never share their data (Kairouz et al., 2021; Wang et al., 2021). FL provides elegant and convenient solutions to concerns in data privacy, computational and storage costs of centralized training, and makes it possible to take advantage of large amounts of data stored on local devices. A typical FL approach to learn a parameterized global model is to alternate between the two following steps: i) update local versions of the global model using local data, and ii) send and aggregate the parameters of the local models on a central server (McMahan et al., 2017) to update the global model.
**Problem.** In some practical situations, the goal is not to learn a prediction model, but rather to compute a certain quantity from the data stored on the clients. For instance, one's goal may be to compute, in a federated way, some prototypes of clients' data, which can be leveraged for federated clustering or for classification models (Gribonval et al., 2021; Phillips, 2016; Munteanu et al., 2018; Agarwal et al., 2005). In other learning scenarios where data are scarce, one may want to measure the similarity between datasets in order to evaluate dataset heterogeneity over clients and leverage this information to improve the performance of federated learning algorithms. In this work, we address the problem of computing, in a federated way, the Wasserstein distance between two distributions \(\mu\) and \(\nu\) when samples from each distribution are stored on local devices. A solution to this problem will be useful in the aforementioned situations, where the Wasserstein distance is used as a similarity measure between two datasets and is the key tool for computing coresets of the data distribution or cluster prototypes. We provide a solution to this problem which hinges on the geometry of the Wasserstein distance and, more specifically, its geodesics. We leverage the property that for any element \(\xi^{*}\) of the geodesic between two distributions \(\mu\) and \(\nu\), the following equality holds: \(\mathcal{W}_{p}(\mu,\nu)=\mathcal{W}_{p}(\mu,\xi^{*})+\mathcal{W}_{p}(\xi^{*},\nu)\), where \(\mathcal{W}_{p}\) denotes the \(p\)-Wasserstein distance.
This property is especially useful to compute \(\mathcal{W}_{p}(\mu,\nu)\) in a federated manner, leading to a novel theoretically-justified procedure coined FedWaD, for **Fed**erated **Wa**sserstein **D**istance.
Contribution: FedWaD. The principle of FedWaD is to iteratively approximate \(\xi^{*}\), which, in terms of traditional FL, can be interpreted as the global model. At iteration \(k\), our procedure consists in i) computing, on the clients, distributions \(\xi^{k}_{\mu}\) and \(\xi^{k}_{\nu}\) from the geodesics between the current approximation of \(\xi^{*}\) and the two secluded distributions \(\mu\) and \(\nu\) (with \(\xi^{k}_{\mu}\) and \(\xi^{k}_{\nu}\) playing the role of the local versions of the global model), and ii) aggregating them on the server to update \(\xi^{*}\).
Organization of the paper. Section 2 formalizes the problem we address and provides the necessary technical background to devise our algorithm FedWaD. Section 3 is devoted to the depiction of FedWaD, to pathways for speeding up its execution, and to a theoretical justification that FedWaD is guaranteed to converge to the desired quantity. In Section 4, we conduct an empirical analysis of FedWaD on different use-cases (Wasserstein coresets and Optimal Transport Dataset distance) which rely on the computation of the Wasserstein distance. We unveil how these problems can be solved in our FL setting and demonstrate the remarkable versatility of our approach. In particular, we expose the impact of federated coresets. By learning a single global model on the server based on the coreset, our method can outperform personalized FL models. In addition, our ability to compute inter-device dataset distances significantly helps amplify the performance of popular federated learning algorithms, such as FedAvg, FedRep, and FedPer. We achieve this by clustering clients and harnessing the power of reduced dataset heterogeneity.
## 2 Related Works and Background
### Wasserstein Distance and Geodesics
Throughout, we denote by \(\mathscr{P}(X)\) the set of probability measures in \(X\). Let \(p\geq 1\) and define \(\mathscr{P}_{p}(X)\) the subset of measures in \(\mathscr{P}(X)\) with finite \(p\)-moment, _i.e.,_\(\mathscr{P}_{p}(X)\doteq\big{\{}\eta\in\mathscr{P}(X):M_{p}(\eta)<\infty\big{\}}\), where \(M_{p}(\eta)\doteq\int_{X}d^{p}_{X}(x,0)d\eta(x)\) and \(d_{X}\) is a metric on \(X\) often referred to as the _ground cost_. For \(\mu\in\mathscr{P}_{p}(X)\) and \(\nu\in\mathscr{P}_{p}(Y)\), \(\Pi(\mu,\nu)\subset\mathscr{P}(X\times Y)\) is the collection of probability measures or _couplings_ on \(X\times Y\) defined as
\[\Pi(\mu,\nu)\doteq\big{\{}\pi\in\mathscr{P}(X\times Y):\forall A\subset X,B \subset Y,\pi(A\times Y)=\mu(A)\text{ and }\pi(X\times B)=\nu(B)\big{\}}.\]
The \(p\)-Wasserstein distance \(\mathcal{W}_{p}(\mu,\nu)\) between the measures \(\mu\) and \(\nu\) --assumed to be defined over the same ground space, i.e. \(X=Y\)-- is defined as
\[\mathcal{W}_{p}(\mu,\nu)\doteq\left(\inf_{\pi\in\Pi(\mu,\nu)}\int_{X\times X}d ^{p}_{X}(x,x^{\prime})d\pi(x,x^{\prime})\right)^{1/p}. \tag{1}\]
It is proven that the infimum in (1) is attained (Peyre et al., 2019) and any probability \(\pi\) which realizes the minimum is an _optimal transport plan_. In the discrete case, we denote the two marginal measures as \(\mu=\sum_{i=1}^{n}a_{i}\delta_{x_{i}}\) and \(\nu=\sum_{i=1}^{m}b_{i}\delta_{x^{\prime}_{i}}\), with \(a_{i},b_{i}\geq 0\) and \(\sum_{i=1}^{n}a_{i}=\sum_{i=1}^{m}b_{i}=1\). The _Kantorovich relaxation_ of (1) seeks for a transportation coupling \(\mathbf{P}\) that solves the problem
\[\mathcal{W}_{p}(\mu,\nu)\doteq\left(\min_{\mathbf{P}\in\Pi(\mathbf{a}, \mathbf{b})}\langle\mathbf{C},\mathbf{P}\rangle\right)^{1/p} \tag{2}\]
where \(\mathbf{C}\doteq(d^{p}_{X}(x_{i},x^{\prime}_{j}))\in\mathbb{R}^{n\times m}\) is the matrix of all pairwise costs, and \(\Pi(\mathbf{a},\mathbf{b})\doteq\{\mathbf{P}\in\mathbb{R}^{n\times m}_{+}| \mathbf{P}\mathbf{1}=\mathbf{a},\mathbf{P}^{\top}\mathbf{1}=\mathbf{b}\}\) is the _transportation polytope_ (i.e. the set of all transportation plans) between the distributions \(\mathbf{a}\) and \(\mathbf{b}\).
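For reference, the discrete problem (2) can be solved with the POT library as in the following snippet; this is illustrative only (not the authors' implementation), and the helper is reused in later sketches.

```python
import numpy as np
import ot  # Python Optimal Transport (POT)

def wasserstein_p(X, Y, a=None, b=None, p=2):
    """p-Wasserstein distance and optimal plan between discrete measures on rows of X, Y."""
    n, m = X.shape[0], Y.shape[0]
    a = np.full(n, 1.0 / n) if a is None else a
    b = np.full(m, 1.0 / m) if b is None else b
    C = ot.dist(X, Y, metric="euclidean") ** p      # pairwise ground costs d_X^p
    P = ot.emd(a, b, C)                             # optimal transport plan of Eq. (2)
    return float((P * C).sum()) ** (1.0 / p), P
```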
**Property 1** (Peyre et al. (2019)).: _For any \(p\geq 1\), \(\mathcal{W}_{p}\) is a metric on \(\mathscr{P}_{p}(X)\). As such it satisfies the triangle inequality:_
\[\forall\mu,\nu,\xi\in\mathscr{P}_{p}(X),\quad\mathcal{W}_{p}(\mu,\nu)\leq \mathcal{W}_{p}(\mu,\xi)+\mathcal{W}_{p}(\xi,\nu) \tag{3}\]
It might be convenient to consider _geodesics_ as structuring tools of metric spaces.
**Definition 1** (Geodesics, (Ambrosio et al., 2005)).: _Let \((\mathcal{X},\text{d})\) be a metric space. A constant speed geodesic \(x:[0,1]\rightarrow\mathcal{X}\) between \(x_{0},x_{1}\in\mathcal{X}\) is a continuous curve such that \(\forall s,t\in[0,1]\), \(d(x(s),x(t))=|s-t|\cdot d(x_{0},x_{1}).\)_
**Property 2** (Interpolating point, (Ambrosio et al., 2005)).: _Any point \(x_{t}\) from a constant speed geodesic \((x(t))_{t\in[0,1]}\) is an interpolating point and verifies, \(d(x_{0},x_{1})=d(x_{0},x_{t})+d(x_{t},x_{1}),\) i.e. the triangle inequality becomes an equality._
These definitions and properties carry over to the case of the Wasserstein distance:
**Definition 2** (Wasserstein Geodesics, Interpolating measure, (Ambrosio et al., 2005; Kolouri et al., 2017)).: _Let \(\mu_{0}\), \(\mu_{1}\in\mathscr{P}_{p}(X)\) with \(X\subseteq\mathbb{R}^{d}\) compact, convex and equipped with \(\mathcal{W}_{p}\). Let \(\gamma\in\Pi(\mu_{0},\mu_{1})\) be an optimal transport plan. For \(t\in[0,1],\) let \(\mu_{t}\doteq(\pi_{t})_{\#}\gamma\) where \(\pi_{t}(x,y)\doteq(1-t)x+ty\), i.e. \(\mu_{t}\) is the push-forward measure of \(\gamma\) under the map \(\pi_{t}\). Then, the curve \((\mu_{t})_{t\in[0,1]}\) is a constant speed geodesic between \(\mu_{0}\) and \(\mu_{1}\); we call it a Wasserstein geodesic between \(\mu_{0}\) and \(\mu_{1}\). Any point \(\mu_{t}\) of the geodesic is an interpolating measure between \(\mu_{0}\) and \(\mu_{1}\) and, as expected:_
\[\mathcal{W}_{p}(\mu_{0},\mu_{1})=\mathcal{W}_{p}(\mu_{0},\mu_{t})+\mathcal{W} _{p}(\mu_{t},\mu_{1}). \tag{4}\]
In the discrete case, and for a fixed \(t\), one can obtain such interpolating measure \(\mu_{t}\) given the optimal transport map \(\mathbf{P}^{*}\) solution of Equation (2) as follows (Peyre et al., 2019, Remark 7.1):
\[\mu_{t}=\sum_{i,j}^{n,m}\mathbf{P}^{*}_{i,j}\delta_{(1-t)x_{i}+tx_{j}^{\prime}} \tag{5}\]
where \(\mathbf{P}^{*}_{i,j}\) is the \((i,j)\)-th entry of \(\mathbf{P}^{*}\); as an interpolating measure, \(\mu_{t}\) obviously complies with (4).
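A direct sketch of Eq. (5) for a fixed \(t\), given an optimal plan computed as above (the function name is ours):

```python
import numpy as np

def interpolating_measure(X, Y, P, t=0.5):
    """Support and weights of the interpolating measure mu_t of Eq. (5)."""
    idx_i, idx_j = np.nonzero(P)                    # support of the optimal plan
    support = (1.0 - t) * X[idx_i] + t * Y[idx_j]   # displaced atoms
    weights = P[idx_i, idx_j]                       # masses P*_{ij}
    return support, weights
```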
### Problem Statement
Our goal is to compute the Wasserstein distance between two data distributions \(\mu\) and \(\nu\) on a global server, with the constraint that \(\mu\) and \(\nu\) are stored on two different clients which do not share any data samples with the server. From a mathematical point of view, our objective is to estimate an element \(\xi^{\star}\) on the geodesic between \(\mu\) and \(\nu\), without having access to them, by leveraging two other elements \(\xi_{\mu}\) and \(\xi_{\nu}\) that lie on the geodesics between \(\mu\) and \(\xi^{\star}\) and between \(\nu\) and \(\xi^{\star}\), respectively.
### Related Works
Our work touches the specific question of learning/approximating a distance between distributions whose samples are secluded on isolated clients. As far as we are aware, this is a problem that has never been investigated before, and there are only a few works that we see as closely connected to ours. Some works have addressed the objective of retrieving the nearest neighbours of a vector in a federated manner. For instance, Liu et al. (2021) consider exchanging encrypted versions of the client datasets with the central server, and Schoppmann et al. (2018) consider the exchange of differentially private statistics about the client datasets. Zhang et al. (2023) propose a federated approximate \(k\)-nearest-neighbour approach based on a specific spatial data federation. Compared to these works that compute distances in a federated manner, we address the case of distances between distributions without any specific encryption of the data, and we exploit the properties of the Wasserstein distance and its geodesics, which have been overlooked in the mentioned works. While these properties have been relied upon as a key tool in some computer vision applications (Bauer et al., 2015; Maas et al., 2017) and in trajectory inference (Huguet et al., 2022), they have not been employed as a privacy-preserving tool.

Figure 1: The Wasserstein distance between \(\mu\) and \(\nu\), which are on their respective clients, can be computed as \(\mathcal{W}_{p}(\mu,\nu)=\mathcal{W}_{p}(\mu,\xi^{\star})+\mathcal{W}_{p}(\nu, \xi^{\star})\), where \(\xi^{\star}\) is an element on the geodesic between \(\mu\) and \(\nu\). FedWaD seeks to estimate \(\xi^{\star}\) with \(\xi^{K}\) using an iterative algorithm and plugs this estimate in to obtain \(\mathcal{W}_{p}(\mu,\nu)\). Iterates \(\xi_{i}\) are computed on the server and sent to the clients in order to compute the measures \(\xi_{\mu}^{i}\) and \(\xi_{\nu}^{i}\), which lie on the geodesics between \(\mu\) and \(\xi_{i}\) and between \(\nu\) and \(\xi_{i}\), respectively.
## 3 Computing the Federated Wasserstein distance
In this section, we develop a methodology to compute, on a global server, the Wasserstein distance between two distributions \(\mu\) and \(\nu\) stored on two different clients which do not share this information with the server. Our approach leverages the topology induced by the Wasserstein distance in the space of probability measures, and more precisely, its geodesics.
**Outline of our methodology.** A key property is that \(\mathcal{W}_{p}\) is a metric, thus satisfies the triangle inequality: for any \(\mu,\nu,\xi\in\mathscr{P}_{p}(X)\),
\[\mathcal{W}_{p}(\mu,\nu)\leq\mathcal{W}_{p}(\mu,\xi)+\mathcal{W}_{p}(\xi,\nu )\,, \tag{6}\]
with equality if and only if \(\xi=\xi^{\star}\), where \(\xi^{\star}\) is an interpolating measure. Consequently, one can compute \(\mathcal{W}_{p}(\mu,\nu)\) by computing \(\mathcal{W}_{p}(\mu,\xi^{\star})\) and \(\mathcal{W}_{p}(\xi^{\star},\nu)\) and adding these two terms. This result is useful in the federated setting and inspires our methodology, as described hereafter. The global server computes \(\xi^{\star}\) and communicates it to the two clients. The clients respectively compute \(\mathcal{W}_{p}(\mu,\xi^{\star})\) and \(\mathcal{W}_{p}(\xi^{\star},\nu)\), then send these to the global server. Finally, the global server adds the two received terms to return \(\mathcal{W}_{p}(\mu,\nu)\).
The main bottleneck of this procedure is that the global server needs to compute \(\xi^{\star}\) (which by definition, depends on \(\mu,\nu\)) while not having access to \(\mu,\nu\) (which are stored on two clients). We then propose a simple workaround to overcome this challenge, based on an additional application of the triangle inequality: for any \(\xi\in\mathscr{P}_{p}(X)\),
\[\mathcal{W}_{p}(\mu,\nu)\leq\mathcal{W}_{p}(\mu,\xi)+\mathcal{W}_{p}(\xi,\nu )=\mathcal{W}_{p}(\mu,\xi_{\mu})+\mathcal{W}_{p}(\xi_{\mu},\xi)+\mathcal{W}_{ p}(\xi,\xi_{\nu})+\mathcal{W}_{p}(\xi_{\nu},\nu)\,, \tag{7}\]
where \(\xi_{\mu}\) and \(\xi_{\nu}\) are interpolating measures respectively between \(\mu\) and \(\xi\) and \(\xi\) and \(\nu\). Hence, computing \(\xi^{\star}\) can be done through intermediate measures \(\xi_{\mu}\) and \(\xi_{\nu}\), to ensure that \(\mu,\nu\) stay on their respective clients. To this end, we develop an optimization procedure which essentially consists in iteratively estimating an interpolating measure \(\xi^{(k)}\) between \(\mu\) and \(\nu\) on the server, by using \(\xi^{(k)}_{\mu}\) and \(\xi^{(k)}_{\nu}\) which were computed and communicated by the clients. More precisely, the objective is to minimize (7) over \(\xi\) as follows: at iteration \(k\), the clients receive current iterate \(\xi^{(k-1)}\) and compute \(\xi^{(k)}_{\mu}\) and \(\xi^{(k)}_{\nu}\) (as interpolating measures between \(\mu\) and \(\xi^{(k-1)}\), and between \(\xi^{(k-1)}\) and \(\nu\) respectively). By the triangle inequality,
\[\mathcal{W}_{p}(\mu,\nu)\leq\mathcal{W}_{p}(\mu,\xi^{(k)}_{\mu})+\mathcal{W}_{ p}(\xi^{(k)}_{\mu},\xi^{(k-1)})+\mathcal{W}_{p}(\xi^{(k-1)},\xi^{(k)}_{\nu})+ \mathcal{W}_{p}(\xi^{(k)}_{\nu},\nu)\,, \tag{8}\]
therefore, the clients then send \(\xi^{(k)}_{\mu}\) and \(\xi^{(k)}_{\nu}\) to the server, which, in turn, computes the next iterate \(\xi^{(k)}\) by minimizing the corresponding terms of the right-hand side of (8), _i.e.,_
\[\xi^{(k)}\in\operatorname*{arg\,min}_{\xi}\mathcal{W}_{p}(\xi^{(k)}_{\mu},\xi) +\mathcal{W}_{p}(\xi,\xi^{(k)}_{\nu})\,. \tag{9}\]
Our methodology is illustrated in Figure 1 and summarized in Algorithm 1. Besides computing the Wasserstein distance in a federated manner, we point out several methods can easily be incorporated in our algorithm to further reduce the risk of privacy leak. Since the triangle inequality reaches equality on a particular geodesic, \(\xi^{(k)}\), \(\xi^{(k)}_{\mu}\) or \(\xi^{(k)}_{\nu}\) are not unique, thus clients can compute these interpolating measures based on a _random_ value of \(t\). Besides, since communicating the distance
may reveal information about the data, the distances can be shared with the server only at the last iteration. More effectively, one can incorporate an (adapted) differentially private version of the Wasserstein distance (Le Tien et al., 2019). Regarding communication, each iteration involves the transfer between the server and the clients of four interpolating measures: \(\xi^{(k-1)}\) (twice), \(\xi^{(k)}_{\mu}\), and \(\xi^{(k)}_{\nu}\). Hence, if the support size of \(\xi^{(k-1)}\) is \(S\), the communication cost is in \(\mathcal{O}(4SKd)\), with \(d\) the data dimension and \(K\) the number of iterations.
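A high-level, single-process simulation of Algorithm 1, built on the two helpers sketched above, is given below; the client/server split is only indicated by comments, and the initialization of \(\xi^{(0)}\) is our choice.

```python
import numpy as np

def fedwad(X_mu, X_nu, n_iters=10, support_size=10, t=0.5, seed=0):
    """Simulated FedWaD: estimate W_p(mu, nu) without co-locating X_mu and X_nu."""
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal((support_size, X_mu.shape[1]))   # xi^(0)
    w_xi = np.full(support_size, 1.0 / support_size)
    for _ in range(n_iters):
        _, P = wasserstein_p(X_mu, xi, b=w_xi)                # client 1
        xi_mu, wm = interpolating_measure(X_mu, xi, P, t)
        _, P = wasserstein_p(X_nu, xi, b=w_xi)                # client 2
        xi_nu, wn = interpolating_measure(X_nu, xi, P, t)
        _, P = wasserstein_p(xi_mu, xi_nu, a=wm, b=wn)        # server, Eq. (9)
        xi, w_xi = interpolating_measure(xi_mu, xi_nu, P, t)  # support may grow here
    d_mu, _ = wasserstein_p(X_mu, xi, b=w_xi)
    d_nu, _ = wasserstein_p(X_nu, xi, b=w_xi)
    return d_mu + d_nu
```

Note that the support of \(\xi^{(k)}\) grows across iterations in this naive version; the approximation discussed next keeps it fixed.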
Reducing the computational complexity. In terms of computational complexity, we need to compute three OT plans per iteration, whose individual cost, based on the network simplex, is \(O((n+m)nm\log(n+m))\). More importantly, consider that \(\mu\) and \(\nu\) are discrete measures; then, any interpolating measure between \(\mu\) and \(\nu\) is supported on at most \(n+m+1\) points. Hence, even if the size of the support of \(\xi^{(0)}\) is small, when \(n\) is large the support of the next interpolating measures may get larger and larger, and this can yield a significant computational overhead when computing \(\mathcal{W}_{p}(\mu,\xi^{(k)})\) and \(\mathcal{W}_{p}(\xi^{(k)},\nu)\).
To reduce this complexity, we resort to approximations of the interpolating measures, whose goal is to fix the support size of the interpolating measures to a small number \(S\). The solution we consider is to approximate McCann's interpolation equation, which formalizes the geodesic \(\xi_{t}\) given an optimal transport map \(T\) between two distributions, say \(\xi\) and \(\xi^{\prime}\), through the equation \(\xi_{t}=((1-t)Id+tT)_{\#}\xi\) (Peyre et al., 2019). Using the barycentric mapping approximation of the map \(T\) (Courty et al., 2018), we propose to approximate the interpolating measure \(\xi_{t}\) as
\[\xi_{t}=\frac{1}{n}\sum_{i=1}^{n}\delta_{(1-t)x_{i}+tn(\mathbf{P}^{*}\mathbf{ X}^{\prime})_{i}} \tag{10}\]
where \(\mathbf{P}^{*}\) is the optimal transportation plan between \(\xi\) and \(\xi^{\prime}\), \(x_{i}\) and \(x^{\prime}_{j}\) are the samples from these distributions, and \(\mathbf{X}^{\prime}\) is the matrix of samples from \(\xi^{\prime}\). Note that, by choosing the appropriate formulation of the equation, the support size of this interpolating measure can be set to that of \(\xi\) or \(\xi^{\prime}\). In practice, we always opt for the choice that leads to the smallest support of the interpolating measure. Hence, if the support size of \(\xi^{(0)}\) is \(S\), we have the guarantee that the support of \(\xi^{(k)}\) is \(S\) for all \(k\). Then, computing \(\mathcal{W}_{p}(\mu,\xi^{(k)})\) using approximated interpolating measures costs \(O(3(Sn^{2}+S^{2}n)\log(n+S))\) at each iteration, and if \(S\) and the number of iterations \(K\) are small enough, the approach we propose is even competitive compared to exact OT. Our experiments, reported later, show that for larger numbers of samples (\(\geq 5000\)), our approach is as fast as exact optimal transport and less prone to numerical errors.
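A sketch of the approximation (10) for uniform weights follows; it returns a measure whose support size equals that of \(X\). In the simulated loop above, replacing `interpolating_measure` with this function keeps the support size fixed to \(S\) across iterations.

```python
def approx_interpolating_measure(X, Y, P, t=0.5):
    """Barycentric approximation of the interpolation, Eq. (10); uniform weights 1/n."""
    n = X.shape[0]
    T_X = n * (P @ Y)                 # barycentric mapping of each atom x_i
    return (1.0 - t) * X + t * T_X    # support of xi_t
```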
Theoretical guarantees. We discuss in this section some theoretical properties of the components of FedWaD. First, we show that the approximated interpolating measure is tight, in the sense that there exist situations where the resulting approximation is exact.
**Theorem 1**.: _Consider two discrete distributions \(\mu\) and \(\nu\) with the same number of samples \(n\) and uniform weights. Then, for any \(t\), the approximated interpolating measure between \(\mu\) and \(\nu\) given by Equation (10) is equal to the exact one given by Equation (5)._

The proof is given in Appendix A. In practice, this property does not have much impact, but it confirms the soundness of the approach. In the next theorem, we prove that Algorithm 1 is theoretically justified, in the sense that its output converges to \(\mathcal{W}_{p}(\mu,\nu)\).
**Theorem 2**.: _Let \(\mu\) and \(\nu\) be two measures in \(\mathcal{P}_{p}(X)\), \(\xi^{(k)}_{\mu}\), \(\xi^{(k)}_{\nu}\) and \(\xi^{(k)}\) be the interpolating measures computed at iteration \(k\) as defined in Algorithm 1. Denote as_
\[A^{(k)}=\mathcal{W}_{p}(\mu,\xi^{(k)}_{\mu})+\mathcal{W}_{p}(\xi^{(k)}_{\mu}, \xi^{(k)})+\mathcal{W}_{p}(\xi^{(k)},\xi^{(k)}_{\nu})+\mathcal{W}_{p}(\xi^{(k) }_{\nu},\nu)\]
_Then the sequence \((A^{(k)})_{k}\) is non-increasing and converges to \(\mathcal{W}_{p}(\mu,\nu)\)._
We provide hereafter a sketch of the proof, and refer to Appendix B for full details. First, we show that the sequence \((A^{(k)})_{k}\) is non-increasing, as we iteratively update \(\xi^{(k+1)}_{\mu}\), \(\xi^{(k+1)}_{\nu}\) and \(\xi^{(k+1)}\) based on geodesics (a minimizer of the triangle inequality). Then, we show that the sequence \((A^{(k)})_{k}\) is bounded below by \(\mathcal{W}_{p}(\mu,\nu)\). We conclude the proof by proving that the sequence \((A^{(k)})_{k}\) converges to \(\mathcal{W}_{p}(\mu,\nu)\).
In the next theorem, we show that when \(\mu\) and \(\nu\) are Gaussian, we can recover nicer properties of our algorithm and provide a convergence rate (proof in Appendix C).
**Theorem 3**.: _Assume that \(\mu\), \(\nu\) and \(\xi^{(0)}\) are three Gaussian distributions with the same covariance matrix \(\Sigma\), i.e., \(\mu\sim\mathcal{N}(\mathbf{m}_{\mu},\Sigma)\), \(\nu\sim\mathcal{N}(\mathbf{m}_{\nu},\Sigma)\) and \(\xi^{(0)}\sim\mathcal{N}(\mathbf{m}_{\xi^{(0)}},\Sigma)\). Further assume that we are not in the trivial case where \(\mathbf{m}_{\mu}\), \(\mathbf{m}_{\nu}\), and \(\mathbf{m}_{\xi^{(0)}}\) are aligned. Applying our Algorithm 1 with \(t=0.5\) and the squared Euclidean cost, we have the following properties:_
1. _all interpolating measures_ \(\xi^{(k)}_{\mu}\)_,_\(\xi^{(k)}_{\nu}\)_,_ \(\xi^{(k)}\) _are Gaussian distributions with the same covariance matrix_ \(\Sigma\)_,_
2. _for any_ \(k\geq 1\)_,_ \(\mathcal{W}_{2}(\mu,\nu)=\|\mathbf{m}_{\mu}-\mathbf{m}_{\nu}\|_{2}=2\|\mathbf{ m}_{\xi^{(k)}_{\mu}}-\mathbf{m}_{\xi^{(k)}_{\nu}}\|_{2}=2\mathcal{W}_{2}(\xi^{(k) }_{\mu},\xi^{(k)}_{\nu})\)__
3. \(\mathcal{W}_{2}(\xi^{(k)},\xi^{\star})=\frac{1}{2}\mathcal{W}_{2}(\xi^{(k-1) },\xi^{\star})\)__
4. \(\mathcal{W}_{2}(\mu,\xi^{(k)})+\mathcal{W}_{2}(\xi^{(k)},\nu)-\mathcal{W}_{2} (\mu,\nu)\leq\frac{1}{2^{k-1}}\mathcal{W}_{2}(\xi^{(0)},\xi^{(\star)})\)__
Interestingly, this theorem also says that, in this specific case, only one iteration is needed to recover \(\mathcal{W}_{2}(\mu,\nu)\).
Figure 2: Analysis of the different Wasserstein distance computation methods (leftmost panels) for a varying support size of the approximated FedWaD and (rightmost panels) for a varying sample ratio in the two distributions and a fixed support size. For each couple of panels, for an increasing number of samples, we report the running time and the relative error of the Wasserstein distance (WD), our exact FedWaD (FedWaD-e) and our approximate FedWaD (FedWaD-a) with a support size of \(2\), \(10\) and \(100\). For the rightmost panels, we have set the support size of the interpolating measure to \(10\). For a sample ratio (1:3), the first distribution has \(N\) samples and the second one \(N/3\).
## 4 Experiments
This section presents numerical applications, where FedWaD can successfully be used and show how it can boost performances of federated learning algorithms. Full details are provided in Appendix D.
Toy analysis. We illustrate the evolution of interpolating measures obtained with FedWaD when computing the Wasserstein distance between two Gaussian distributions. We sample 200 points from two 2D Gaussian distributions with different means and the same covariance matrix. We compute the interpolating measure at \(t=0.5\) using both the analytical formula (5) and the approximation (10). Figure 3 (left panel) shows how the interpolating measure evolves across iterations. We also observe, in Figure 3 (right panel), that the error on the true Wasserstein distance for the approximated interpolating measure reaches \(10^{-3}\), while for the exact interpolating measure, it drops to a minimum of \(10^{-4}\) before increasing. This discrepancy occurs because the support size of the interpolating measure expands across iterations, leading to numerical errors when computing the optimal transport plan between \(\xi^{(k)}\) and \(\xi^{(k)}_{\mu}\) or \(\xi^{(k)}_{\nu}\). Hence, using the approximation of Equation (10) is a more robust alternative to the exact computation of Equation (5).
We also examine the computational complexity and approximation errors of both methods as we increase the sample sizes of the distributions, as displayed in Figure 2. Key findings include: the approximated interpolating measure significantly improves computational efficiency, being at least 10 times faster for sample sizes exceeding 100, especially with smaller support sizes. It also achieves a relative approximation error similar to that of FedWaD with the exact interpolating measure and to the true non-federated Wasserstein distance. Importantly, it demonstrates greater robustness for larger sample sizes compared to the true Wasserstein distance for such a low-dimensional problem.
Figure 3: (left) Evolution of the interpolating measure \(\xi^{(k)}\) (in blue); (right) the estimated Wasserstein distance between two Gaussian distributions \(\mu\) and \(\nu\).

Figure 4: Examples of the \(10\) coresets we obtained, with for each panel _(top row)_ the exact Wasserstein distance and _(bottom row)_ FedWaD, for the MNIST dataset. Different panels correspond to different numbers of classes \(K\) on each client: _(top)_ \(K=8\), _(middle)_ \(K=2\), _(bottom)_ support of the interpolating measure varying from \(10\) to \(100\).

Wasserstein coreset and application to federated learning. In many ML applications, summarizing data into fewer representative samples is routinely done to deal with large datasets. The notion of _coreset_ is relevant for extracting such samples and admits several formulations (Phillips, 2016; Munteanu et al., 2018). In this experiment, we show that the Wasserstein coreset (Claici et al., 2018) can be computed in a federated way via FedWaD. Formally, given a dataset described by the distribution \(\mu\), the Wasserstein coreset aims at finding the empirical distribution that solves \(\min_{x^{\prime}_{1},\cdots,x^{\prime}_{K}}\mathcal{W}_{p}\left(\frac{1}{K}\sum_{i=1}^{K}\delta_{x^{\prime}_{i}},\mu\right)\). We solve this problem in the following federated setting: we assume that the samples drawn from \(\mu\) are either stored on a unique client or distributed across different clients, and the objective is to learn the coreset samples \(\{x^{\prime}_{i}\}\) on the server. In our setting, we can compute the federated Wasserstein distances between the current coreset and some subsamples of all active client datasets, then update the coreset given the aggregated gradients of these distances with respect to the coreset support (a sketch of this update is given below). We sampled \(20000\) examples randomly from the MNIST dataset and dispatched them at random over \(100\) clients. We compare the results obtained with FedWaD to those obtained with the exact non-federated Wasserstein distance. The results are shown in Figure 4. We note that when classes are almost equally spread across clients (with \(K=8\) different classes per client), FedWaD is able to capture the \(10\) modes of the dataset. However, as the diversity in classes between clients increases, FedWaD has more difficulty capturing all the modes of the dataset. Nonetheless, we also observe that the exact Wasserstein distance is not able to recover those modes either. We can thus conjecture that this failure is likely due to the coreset approach itself, rather than to the approximated distance returned by FedWaD. We also note that the support size of the interpolating measure has little impact on the coreset. We believe this is a very interesting result, as it shows that FedWaD can provide useful gradients to the problem even with a poorer estimation of the distance.
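As an illustration of the coreset update, the following centralized sketch performs one aggregated gradient step on the coreset support, using the squared-Euclidean gradient with a fixed plan; in the federated setting, each per-client distance and gradient would be obtained through FedWaD, and all names are ours.

```python
import numpy as np
import ot

def coreset_round(coreset, client_batches, lr=0.5):
    """One aggregated update of the coreset support from subsamples of active clients."""
    K = coreset.shape[0]
    a = np.full(K, 1.0 / K)
    grad = np.zeros_like(coreset)
    for Y in client_batches:
        b = np.full(Y.shape[0], 1.0 / Y.shape[0])
        P = ot.emd(a, b, ot.dist(coreset, Y))            # squared-Euclidean costs
        grad += 2.0 * (a[:, None] * coreset - P @ Y)     # d W_2^2 / d x'_i, fixed plan
    return coreset - lr * grad / len(client_batches)
```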
Federated coreset classification model. Those federated coresets can also be used for classification tasks. As such, we learned coresets for each client and used the coresets from all clients as the examples of a one-nearest-neighbor global classifier shared with all clients. Note that, since coreset computation is an unsupervised task, we assigned to each element of a coreset the label of the closest element in the client dataset. For this task, we used the MNIST dataset, autoencoded in order to reduce its dimensionality. Half of the training samples were used for learning the autoencoder and the other half for the classification task. These samples and the test samples of the dataset were distributed across clients while ensuring that each client has samples from only \(2\) classes. We then computed the accuracy of this federated classifier for varying numbers of clients and coresets, and compared its performance to that of _FedRep_ (Collins et al., 2021) and _FedPer_ (Arivazhagan et al., 2019). Results are reported in Figure 5. We can see that our simple approach is highly competitive with these personalized FL approaches, and even outperforms them when the number of users becomes large.
Geometric dataset distances via federated Wasserstein distance. Our goal is to improve on the seminal algorithm of Alvarez-Melis and Fusi (2020), which seeks to compute a distance between two datasets \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\) using optimal transport; we want to make it federated. This extension paves the way to better federated learning algorithms for transfer learning and domain adaptation, or it can simply be used for boosting federated learning algorithms, as we illustrate next. Alvarez-Melis and Fusi (2020) consider a Wasserstein distance with a ground metric that mixes distances between features and a tractable distance between class-conditional distributions. For our extension, we use the same ground metric, but we compute the Wasserstein distance using FedWaD. Details are provided in Appendix D.3.
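The label-aware ground cost can be sketched as follows, using the usual Gaussian approximation of the class-conditional feature distributions (a simplification on our part; see Alvarez-Melis and Fusi (2020) and Appendix D.3 for the exact construction):

```python
import numpy as np
from scipy.linalg import sqrtm

def w2_gaussian_sq(m1, S1, m2, S2):
    """Closed-form squared 2-Wasserstein distance between N(m1, S1) and N(m2, S2)."""
    r1 = sqrtm(S1).real
    cross = sqrtm(r1 @ S2 @ r1).real
    return float(np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2.0 * cross))

def otdd_ground_cost(x, y, lx, ly, moments):
    """Mixed feature/label cost; moments[l] = (mean, cov) of the features of class l."""
    return np.sum((x - y) ** 2) + w2_gaussian_sq(*moments[lx], *moments[ly])
```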
We replicated the experiments of Alvarez-Melis and Fusi (2020) on dataset selection for transfer learning: given a source dataset, the goal is to find the target dataset that is most similar to the source. We considered four real datasets, namely MNIST, KMNIST, USPS and FashionMNIST, and we computed all the pairwise distances between \(5000\) randomly selected examples from each dataset using the original OTDD of Alvarez-Melis and Fusi (2020) and our FedWaD approach. For FedWaD, we chose the support size of the interpolating measure to be \(1000\) and \(5000\) and the number of epochs to be \(20\) and \(500\). Results, averaged over \(5\) random draws of the samples, are depicted in Figure 6. We can see that the distance matrices produced by FedWaD are semantically similar to the ones obtained with the OTDD, which means that order relations are well-preserved for most pairwise distances (except for two pairs of datasets in the USPS row). More importantly, running more epochs leads to a slightly better approximation of the OTDD distance, but the exact order relations are already uncovered using only \(20\) epochs in FedWaD. Detailed ablation studies on these parameters are provided in Appendix D.4.
Boosting FL methods. One of the challenges in FL is the heterogeneity of the data distribution among clients. This heterogeneity is usually due to a shift in class-conditional distributions or to a label shift (some classes being absent on a client). As such, we propose to investigate a simple approach that addresses dataset heterogeneity (in terms of distributions) among clients, by leveraging our ability to compute distances between datasets in a federated way.
Figure 5: Nearest neighbor classifier based on the coresets learnt from each client for varying number of clients and number of coresets per clients We have compared to the performance of two personalized FL algorithms.
Our proposal involves computing pairwise dataset distances between clients, clustering the clients based on their (di)similarities using a spectral clustering algorithm (Von Luxburg, 2007) (as sketched below), and using this clustering knowledge to enhance existing federated learning algorithms. In our approach, we run the FL algorithm on each of the \(K\) clusters of clients instead of on all clients, to avoid information exchange between clients with diverse datasets. For example, for FedAvg, this means learning a global model for each cluster of clients, resulting in \(K\) global models. For personalized models like FedRep (Collins et al., 2021) or FedPer (Arivazhagan et al., 2019), we run the personalized algorithm on each cluster of clients. By running FL algorithms on clustered clients, we ensure information exchange only between similar clients, which improves the overall performance of federated learning algorithms by reducing the statistical dataset heterogeneity among clients.
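A sketch of this clustering step, converting the matrix of pairwise federated dataset distances into affinities for scikit-learn's spectral clustering (the Gaussian kernel and the bandwidth choice are our assumptions):

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_clients(D, n_clusters, sigma=None):
    """Cluster clients from a symmetric matrix D of pairwise dataset distances."""
    sigma = np.median(D) if sigma is None else sigma
    A = np.exp(-D ** 2 / (2.0 * sigma ** 2))       # distances -> affinities
    model = SpectralClustering(n_clusters=n_clusters, affinity="precomputed")
    return model.fit_predict(A)                    # cluster label per client
```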
We ran experiments on MNIST and CIFAR10 in which the client datasets hold a clear cluster structure. We also ran experiments without a cluster structure, in which clients are randomly assigned a pair of classes. In practice, we used the code of FedRep (Collins et al., 2021) for _FedAvg_, _FedRep_ and _FedPer_, and the spectral clustering method of scikit-learn (Pedregosa et al., 2011) (details are in Appendix D.5). Results are reported in Table 1 (with details in Appendix D.5). We can see that when there is a clear clustering structure among the clients, FedWaD is able to recover it and always improves the performance of the original federated learning algorithms. Depending on the algorithm, the improvement can be highly significant. For instance, for _FedRep_, the performance can be improved by \(9\) points for CIFAR10 and up to \(29\) for MNIST. Interestingly, even without a clear clustering structure, FedWaD is able to almost always improve the performance of all federated learning algorithms (except for some specific cases of _FedPer_). Again for _FedRep_, the performance uplift can reach \(19\) points for CIFAR10 and \(36\) for MNIST. In terms of clustering, the "affinity" parameter of the spectral clustering algorithm appears to be the most efficient and robust choice.
## 5 Conclusion
In this paper, we presented a principled approach for computing the Wasserstein distance between two distributions in a federated manner. Our proposed algorithm, called FedWaD, leverages the geometric properties of the Wasserstein distance and associated geodesics to estimate the distance while respecting the privacy of the samples stored on different devices. We established the convergence properties of FedWaD and provided empirical evidence of its practical effectiveness through simulations on various problems, including dataset distance and coreset computation. Our approach shows potential applications in the fields of machine learning and privacy-preserving data analysis, where computing distances for distributed data is a fundamental task.
|
2301.06785 | Hadronic molecular states with the quark contents $bc\bar{s}\bar{q}$,
$b\bar{c}s\bar{q}$, and $b\bar{c}\bar{s}q$ | We study the hadronic molecular states with the quark content
$bc\bar{s}\bar{q}$ by investigating the interactions of the $\bar{B}_s D$,
$\bar{B} D_s$, $\bar{B}_s^* D$, $\bar{B}^* D_s$, $\bar{B}_s D^*$, $\bar{B}
D_s^*$, $\bar{B}_s^* D^*$, and $\bar{B}^* D_s^*$ systems. By solving the
Bethe-Salpeter equation within the extended local hidden gauge formalism, we
find altogether six poles qualifying as possible hadronic molecular states: one
pole of $J^P=0^+$ below the $\bar{B}_s D$-$\bar{B}D_s$ threshold, one pole of
$J^P=1^+$ below the $\bar{B}_s^* D$-$\bar{B}^* D_s$ threshold, one pole of
$J^P=1^+$ below the $\bar{B}_s D^*$-$\bar{B}D_s^*$ threshold, and three poles
of $J^P=0^+/1^+/2^+$ below the $\bar{B}_s^* D^*$-$\bar{B}^* D_s^*$ threshold.
Their binding energies are calculated to be about 10-20 MeV with the cut-off
momentum $q_\textrm{max}=600\textrm{ MeV}$. Similarly, we study the hadronic
molecular states with $bs\bar{c}\bar{q}$ by investigating the interactions of
the $\bar{B}\bar{D}_s$, $\bar{B}_c\bar{K}$, $\bar{B}^*\bar{D}_s$,
$\bar{B}_c^*\bar{K}$, $\bar{B}\bar{D}_s^*$, $\bar{B}_c\bar{K}^*$,
$\bar{B}^*\bar{D}_s^*$, $\bar{B}_c^*\bar{K}^*$ systems, and the states with
$bq\bar{c}\bar{s}$ by investigating the interactions of the $\bar{B}_s\bar{D}$,
$\bar{B}_cK$, $\bar{B}_s^*\bar{D}$, $\bar{B}_c^*K$, $\bar{B}_s\bar{D}^*$,
$\bar{B}_cK^*$, $\bar{B}_s^*\bar{D}^*$, $\bar{B}_c^*K^*$ systems. However, no
deeply-bound poles are found in these systems. | Wen-Ying Liu, Hua-Xing Chen, En Wang | 2023-01-17T10:21:53Z | http://arxiv.org/abs/2301.06785v2 | # Hadronic molecular states with the quark contents \(bc\bar{s}\bar{q}\), \(b\bar{c}s\bar{q}\), and \(b\bar{c}\bar{s}q\)
###### Abstract
We study the hadronic molecular states with the quark content \(bc\bar{s}\bar{q}\) by investigating the interactions of the \(\bar{B}_{s}D\), \(\bar{B}D_{s}\), \(\bar{B}_{s}^{*}D\), \(\bar{B}^{*}D_{s}\), \(\bar{B}_{s}D^{*}\), \(\bar{B}D_{s}^{*}\), \(\bar{B}_{s}^{*}D^{*}\), and \(\bar{B}^{*}D_{s}^{*}\) systems. By solving the Bethe-Salpeter equation within the extended local hidden gauge formalism, we find altogether six poles qualifying as possible hadronic molecular states: one pole of \(J^{P}=0^{+}\) below the \(\bar{B}_{s}D\)-\(\bar{B}D_{s}\) threshold, one pole of \(J^{P}=1^{+}\) below the \(\bar{B}_{s}^{*}D\)-\(\bar{B}^{*}D_{s}\) threshold, one pole of \(J^{P}=1^{+}\) below the \(\bar{B}_{s}D^{*}\)-\(\bar{B}D_{s}^{*}\) threshold, and three poles of \(J^{P}=0^{+}/1^{+}/2^{+}\) below the \(\bar{B}_{s}^{*}D^{*}\)-\(\bar{B}^{*}D_{s}^{*}\) threshold. Their binding energies are calculated to be about 10-20 MeV with the cut-off momentum \(q_{\rm max}=600\) MeV. Similarly, we study the hadronic molecular states with \(bs\bar{c}\bar{q}\) by investigating the interactions of the \(\bar{B}\bar{D}_{s}\), \(\bar{B}_{c}\bar{K}\), \(\bar{B}^{*}\bar{D}_{s}\), \(\bar{B}_{c}^{*}\bar{K}\), \(\bar{B}\bar{D}_{s}^{*}\), \(\bar{B}_{c}\bar{K}^{*}\), \(\bar{B}^{*}\bar{D}_{s}^{*}\), and \(\bar{B}_{c}^{*}\bar{K}^{*}\) systems, and the states with \(bq\bar{c}\bar{s}\) by investigating the interactions of the \(\bar{B}_{s}\bar{D}\), \(\bar{B}_{c}K\), \(\bar{B}_{s}^{*}\bar{D}\), \(\bar{B}_{c}^{*}K\), \(\bar{B}_{s}\bar{D}^{*}\), \(\bar{B}_{c}K^{*}\), \(\bar{B}_{s}^{*}\bar{D}^{*}\), and \(\bar{B}_{c}^{*}K^{*}\) systems. However, no deeply-bound poles are found in these systems.
hadronic molecule, Bethe-Salpeter equation, coupled-channel analysis
## I Introduction
Recently, the LHCb Collaboration reported their observation of the first doubly charmed tetraquark state \(T_{cc}(3875)\) in the \(D^{0}D^{0}\pi^{+}\) mass spectrum just below the \(D^{*+}D^{0}\) mass threshold [1; 2]. This state has the quark content \(cc\bar{u}\bar{d}\). Its spin-parity quantum numbers were determined to be \(J^{P}=1^{+}\), and the LHCb experiment favors it to be an isoscalar state. Based on the Breit-Wigner parametrisation, its mass and width were measured to be:
\[M_{\rm BW} =M_{D^{*+}}+M_{D^{0}}-\left(273\pm 61\pm 5^{+11}_{-14}\right)\ {\rm keV}\,, \tag{1}\] \[\Gamma_{\rm BW} =410\pm 165\pm 43^{+18}_{-38}\ {\rm keV}\,.\]
An LHCb analysis of the data with a unitary amplitude and considering the experimental resolution produces the resonance pole at \(\sqrt{s}=m_{\rm pole}-\frac{i}{2}\Gamma_{\rm pole}\), where [2]
\[m_{\rm pole} =M_{D^{*+}}+M_{D^{0}}-\left(360\pm 40^{+0}_{-4}\right)\ {\rm keV}\,, \tag{2}\] \[\Gamma_{\rm pole} =48\pm 2^{+~{}0}_{-14}\ {\rm keV}\,.\]
The closeness of the \(T_{cc}(3875)\) to the \(D^{*+}D^{0}\) threshold makes it a good candidate for the \(DD^{*}\) hadronic molecular state of \(I(J^{P})=0(1^{+})\), whose existence had been predicted in Refs. [3; 4; 5; 6; 7; 8; 9; 10] before the LHCb experiment.
Besides, the BESIII Collaboration observed an excess of events near the \(D_{s}^{-}D^{*0}\)-\(D_{s}^{*-}D^{0}\) mass thresholds in the \(K^{+}\) recoil-mass spectrum of the \(e^{+}e^{-}\to K^{+}(D_{s}^{-}D^{*0}+D_{s}^{*-}D^{0})\) process [11]. This structure, denoted as \(Z_{cs}(3985)\), is expected to be the strange partner of the \(Z_{c}(3900)\)[12; 13]. Its pole mass and width were measured to be \(3982.5^{+1.8}_{-2.6}\pm 2.1\) MeV and \(12.8^{+5.3}_{-4.4}\pm 3.0\) MeV, respectively. It is the first candidate for the hidden-charm tetraquark state with strangeness.
Later the LHCb Collaboration reported their observation of two exotic structures in the \(J/\psi K^{+}\) mass distribution of the \(B^{+}\to J/\psi\phi K^{+}\) decay [14]. The mass and width of the lower-lying state, denoted as \(Z_{cs}(4000)\), were measured to be \(4003\pm 6^{+44}_{-14}\) MeV and \(131\pm 15\pm 26\) MeV, respectively. Its spin-parity quantum numbers were determined to be \(J^{P}=1^{+}\). The mass and width of the higher-lying state, denoted as \(Z_{cs}(4220)\), were measured to be \(4216\pm 24^{+43}_{-30}\) MeV and \(233\pm 52^{+97}_{-73}\) MeV, respectively. Its spin-parity quantum numbers were determined to be either \(J^{P}=1^{+}\) or \(1^{-}\).
The above \(Z_{cs}\) states have the quark content \(c\bar{c}s\bar{q}\) or \(c\bar{c}\bar{s}q\) (\(q=u/d\)). There have been extensive theoretical studies, and their existence had been predicted in various theoretical models before the BESIII and LHCb experiments, based on the \(D\bar{D}_{s}^{*}/D^{*}\bar{D}_{s}\) hadronic molecular picture [15], the compact tetraquark picture [16; 17], the hadro-quarkonium picture [18; 19], and the initial-single-chiral-particle-emission mechanism [20]. We refer to Refs. [21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46] for their detailed discussions, and the studies on the \(T_{cc}(3875)\) can also be found in these reviews.
The observation of the \(T_{cc}(3875)\) with the quark content \(cc\bar{u}\bar{d}\) motivates us to investigate the hadronic molecular states with the quark content \(bc\bar{q}\bar{q}\), and the observations of the three \(Z_{cs}\) states with the quark contents \(c\bar{c}s\bar{q}/c\bar{c}\bar{s}q\) motivate us to further investigate the hadronic molecular states with the quark contents \(bc\bar{s}\bar{q}\), \(b\bar{c}s\bar{q}\), and \(b\bar{c}\bar{s}q\). Accordingly, in this paper we shall study the possibly-existing hadronic molecular states with these quark contents. We shall use the extended local hidden gauge symmetry approach [47; 48; 49], which has been widely applied in the study of the meson-meson and meson-baryon interactions [50; 51; 52; 53; 54; 55; 56; 57]. We also refer to Refs. [58; 59; 60; 61; 62] for more discussions.
For the possible hadronic molecular states with the quark content \(bc\bar{s}\bar{q}\), we shall investigate the interactions of the \(\bar{B}_{s}D\), \(\bar{B}D_{s}\), \(\bar{B}_{s}^{*}D\), \(\bar{B}^{*}D_{s}\), \(\bar{B}_{s}D^{*}\), \(\bar{B}D_{s}^{*}\), \(\bar{B}_{s}^{*}D^{*}\), and \(\bar{B}^{*}D_{s}^{*}\) systems. We shall find six poles in these systems,
which may qualify as hadronic molecular states. Besides, we shall study the hadronic molecular states with \(b\bar{c}s\bar{q}\) by investigating the interactions of the \(\bar{B}\bar{D}_{s}\), \(B_{c}\bar{K}\), \(\bar{B}^{*}\bar{D}_{s}\), \(B_{c}^{*}\bar{K}\), \(\bar{B}\bar{D}_{s}^{*}\), \(B_{c}\bar{K}^{*}\), \(\bar{B}^{*}\bar{D}_{s}^{*}\), and \(B_{c}^{*}\bar{K}^{*}\) systems, and the states with \(b\bar{c}\bar{s}q\) by investigating the interactions of the \(\bar{B}_{s}\bar{D}\), \(B_{c}K\), \(\bar{B}_{s}^{*}\bar{D}\), \(B_{c}^{*}K\), \(\bar{B}_{s}\bar{D}^{*}\), \(B_{c}K^{*}\), \(\bar{B}_{s}^{*}\bar{D}^{*}\), and \(B_{c}^{*}K^{*}\) systems. However, we shall find no deeply-bound pole in these systems.
This paper is organized as follows. In Sec. II we apply the local hidden gauge formalism to derive the potentials for the interactions between charmed(-strange) mesons and bottom(-strange) mesons. Based on the obtained potentials, we solve the coupled-channel Bethe-Salpeter equation in Sec. III to extract the poles, some of which can qualify as hadronic molecular states. A brief summary is given in Sec. IV.
## II Local hidden gauge formalism
By using the unitary coupled-channel approach within the local hidden gauge formalism, the interactions of the \(B^{(*)}D^{(*)}\) and \(B^{(*)}\bar{D}^{(*)}\) systems have been systematically studied in Ref. [63], and the interactions of the \(B^{(*)}_{(s)}B^{(*)}_{(s)}\) and \(B^{(*)}K^{(*)}\) systems have been systematically studied in Refs. [64; 65]. In this section we shall extend these formalisms to the \(\bar{B}^{(*)}_{(s)}D^{(*)}_{(s)}\) and \(\bar{B}^{(*)}_{(s)}\bar{D}^{(*)}_{(s)}\) systems:
* We shall investigate the \(\bar{B}^{0}_{s}D^{+}\), \(\bar{B}^{0}D^{+}_{s}\), \(\bar{B}^{*0}_{s}D^{+}\), \(\bar{B}^{*0}D^{+}_{s}\), \(\bar{B}^{0}_{s}D^{*+}\), \(\bar{B}^{0}D^{*+}_{s}\), \(\bar{B}^{*0}_{s}D^{*+}\), and \(\bar{B}^{*0}D^{*+}_{s}\) channels to study the hadronic molecular states with the quark content \(bc\bar{s}\bar{d}\).
* We shall investigate the \(\bar{B}^{0}D^{-}_{s}\), \(B^{-}_{c}\bar{K}^{0}\), \(\bar{B}^{*0}D^{-}_{s}\), \(B^{*-}_{c}\bar{K}^{0}\), \(\bar{B}^{0}D^{*-}_{s}\), \(B^{-}_{c}\bar{K}^{*0}\), \(\bar{B}^{*0}D^{*-}_{s}\), and \(B^{*-}_{c}\bar{K}^{*0}\) channels to study those with \(b\bar{c}s\bar{d}\).
* We shall investigate the \(\bar{B}^{0}_{s}D^{-}\), \(B^{-}_{c}K^{0}\), \(\bar{B}^{*0}_{s}D^{-}\), \(B^{*-}_{c}K^{0}\), \(\bar{B}^{0}_{s}D^{*-}\), \(B^{-}_{c}K^{*0}\), \(\bar{B}^{*0}_{s}D^{*-}\), and \(B^{*-}_{c}K^{*0}\) channels to study those with \(b\bar{c}\bar{s}d\).
The threshold masses of the above channels are tabulated in Table 1. Besides, we shall also study the hadronic molecular states with the quark contents \(bc\bar{s}\bar{u}\), \(b\bar{c}s\bar{u}\), and \(b\bar{c}\bar{s}u\). It is only the third component of isospin that changes, and the threshold masses of these channels are also tabulated in Table 1.
Within the extended local hidden gauge symmetry approach, the interactions between charmed(-strange) mesons and bottom(-strange) mesons mainly proceed through the exchange of the vector meson, as depicted in Figs. 1(a,b,c). Together with the contact term depicted in Fig. 1(d), their corresponding Lagrangians can be written as:
\[\mathcal{L}_{VPP} = -ig\left\langle[P,\partial_{\mu}P]V^{\mu}\right\rangle, \tag{3}\] \[\mathcal{L}_{VVV} = ig\left\langle(V^{\mu}\partial_{\nu}V_{\mu}-\partial_{\nu}V^{\mu }V_{\mu})V^{\nu}\right\rangle,\] (4) \[\mathcal{L}_{VVVV} = \frac{g^{2}}{2}\langle V_{\mu}V_{\nu}V^{\mu}V^{\nu}-V_{\nu}V_{\mu} V^{\mu}V^{\nu}\rangle\,. \tag{5}\]
The coupling constant is defined as \(g=M_{V}/(2f_{\pi})\), where \(M_{V}\) is the mass of the exchanged vector meson and \(f_{\pi}=93\) MeV is the pion decay constant. In particular, we shall take \(M_{V}=800\) MeV for the mass of the exchanged light vector meson.
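For later numerical estimates it is convenient to record this value explicitly:
\[g=\frac{M_{V}}{2f_{\pi}}=\frac{800\ {\rm MeV}}{2\times 93\ {\rm MeV}}\approx 4.30\,.\]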
Taking into account the standard \(\eta\)-\(\eta^{\prime}\) mixing, we can write the matrices of the flavor \(SU(5)\) pseudoscalar and vector mesons as follows,
\[P=\left(\begin{array}{ccccc}\frac{\eta}{\sqrt{3}}+\frac{\eta^{\prime}}{ \sqrt{6}}+\frac{\pi^{0}}{\sqrt{2}}&\pi^{+}&K^{+}&\bar{D}^{0}&B^{+}\\ \pi^{-}&\frac{\eta}{\sqrt{3}}+\frac{\eta^{\prime}}{\sqrt{6}}-\frac{\pi^{0}}{ \sqrt{2}}&K^{0}&D^{-}&B^{0}\\ K^{-}&\bar{K}^{0}&-\frac{\eta}{\sqrt{3}}+\sqrt{\frac{2}{3}}\eta^{\prime}&D^{-} _{s}&B^{0}_{s}\\ D^{0}&D^{+}&D^{+}_{s}&\eta_{c}&B^{+}_{c}\\ B^{-}&\bar{B}^{0}&\bar{B}^{0}_{s}&B^{-}_{c}&\eta_{b}\end{array}\right)\,, \tag{6}\]
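The companion flavor \(SU(5)\) vector-meson matrix is not reproduced above; for completeness we assume the standard ideally-mixed form,
\[V=\left(\begin{array}{ccccc}\frac{\rho^{0}}{\sqrt{2}}+\frac{\omega}{\sqrt{2}}&\rho^{+}&K^{*+}&\bar{D}^{*0}&B^{*+}\\ \rho^{-}&-\frac{\rho^{0}}{\sqrt{2}}+\frac{\omega}{\sqrt{2}}&K^{*0}&D^{*-}&B^{*0}\\ K^{*-}&\bar{K}^{*0}&\phi&D_{s}^{*-}&B_{s}^{*0}\\ D^{*0}&D^{*+}&D_{s}^{*+}&J/\psi&B_{c}^{*+}\\ B^{*-}&\bar{B}^{*0}&\bar{B}_{s}^{*0}&B_{c}^{*-}&\Upsilon\end{array}\right)\,.\]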
Figure 1: Feynman diagrams for the interactions between charmed(-strange) mesons and bottom(-strange) mesons: (a) the vector meson exchange between two pseudoscalar mesons, (b) the vector meson exchange between vector and pseudoscalar mesons, (c) the vector meson exchange between two vector mesons, and (d) the contact term connecting four vector mesons.
Although the flavor \(SU(5)\) symmetry has been used here, for the vector exchange between mesons, which is the dominant part in the large quark mass counting, one is only using the \(\bar{q}q\) character of the mesons [63]. Furthermore, in this work the heavy quarks in the coupled channels are bystanders according to the heavy quark symmetry, and the light quarks are the participants in the reactions, so the exchange of light vector mesons contributes dominantly to the \(bc\bar{s}\bar{q}\) system. As discussed in Ref. [66], the exchange of light vector mesons can be well described by the \(SU(3)\) flavor symmetry. We shall further investigate the contribution from the exchange of heavy vector mesons in Appendix A for the \(b\bar{c}s\bar{q}\) and \(b\bar{c}\bar{s}q\) systems, where the exchange of light vector mesons is not allowed.
We shall study the interaction of the \(bc\bar{s}\bar{d}\) system as an example, and separately investigate the vector meson exchange between two pseudoscalar mesons, the vector meson exchange between vector and pseudoscalar mesons, and the vector meson exchange between two vector mesons in the following subsections. Studies on the \(b\bar{c}s\bar{q}\) and \(b\bar{c}\bar{s}q\) systems can be found in Appendix A.
### \(P\)-\(P\) interaction in the \(bc\bar{s}\bar{d}\) system
In this subsection we study the interaction due to the vector meson exchange between two pseudoscalar mesons in the \(bc\bar{s}\bar{d}\) system, and the \(bc\bar{s}\bar{u}\) system can be similarly investigated. There are only two coupled channels:
\[\bar{B}^{0}_{s}D^{+}\,,\,\bar{B}^{0}D^{+}_{s}\,. \tag{7}\]
As shown in Fig. 2, the exchanged vector meson can be either the \(K^{*0}\) or \(B^{*-}_{c}\) meson. Since the latter \(B^{*-}_{c}\) meson is too massive, we do not take it into account in this work.
\begin{table}
\begin{tabular}{c|cccccccc} \hline \hline Channels & \(\bar{B}^{0}_{s}D^{+}\) & \(\bar{B}^{0}D^{+}_{s}\) & \(\bar{B}^{*0}_{s}D^{+}\) & \(\bar{B}^{*0}D^{+}_{s}\) & \(\bar{B}^{0}_{s}D^{*+}\) & \(\bar{B}^{0}D^{*+}_{s}\) & \(\bar{B}^{*0}_{s}D^{*+}\) & \(\bar{B}^{*0}D^{*+}_{s}\) \\ \hline Threshold & 7236.6 & 7248.0 & 7285.1 & 7293.1 & 7377.2 & 7391.9 & 7425.7 & 7436.9 \\ \hline Channels & \(\bar{B}^{0}_{s}D^{0}\) & \(B^{-}D^{+}_{s}\) & \(\bar{B}^{*0}_{s}D^{0}\) & \(B^{*-}D^{+}_{s}\) & \(\bar{B}^{0}_{s}D^{*0}\) & \(B^{-}D^{*+}_{s}\) & \(\bar{B}^{*0}_{s}D^{*0}\) & \(B^{*-}D^{*+}_{s}\) \\ \hline Threshold & 7231.8 & 7247.7 & 7280.2 & 7293.1 & 7373.8 & 7391.5 & 7422.3 & 7436.9 \\ \hline Channels & \(\bar{B}^{0}D^{-}_{s}\) & \(B^{-}_{c}\bar{K}^{0}\) & \(\bar{B}^{*0}D^{-}_{s}\) & \(B^{*-}_{c}\bar{K}^{0}\) & \(\bar{B}^{0}D^{*-}_{s}\) & \(B^{-}_{c}\bar{K}^{*0}\) & \(\bar{B}^{*0}D^{*-}_{s}\) & \(B^{*-}_{c}\bar{K}^{*0}\) \\ \hline Threshold & 7247.7 & 6772.1 & 7293.1 & 6828.6 & 7391.9 & 7170.0 & 7436.9 & 7226.6 \\ \hline Channels & \(B^{-}D^{-}_{s}\) & \(B^{-}_{c}K^{-}\) & \(B^{*-}D^{-}_{s}\) & \(B^{*-}_{c}K^{-}\) & \(B^{-}D^{*-}_{s}\) & \(B^{-}_{c}K^{*-}\) & \(B^{*-}D^{*-}_{s}\) & \(B^{*-}_{c}K^{*-}\) \\ \hline Threshold & 7247.7 & 6768.2 & 7293.1 & 6824.7 & 7391.5 & 7166.1 & 7436.9 & 7222.7 \\ \hline Channels & \(\bar{B}^{0}_{s}D^{-}\) & \(B^{-}_{c}K^{0}\) & \(\bar{B}^{*0}_{s}D^{-}\) & \(B^{*-}_{c}K^{0}\) & \(\bar{B}^{0}_{s}D^{*-}\) & \(B^{-}_{c}K^{*0}\) & \(\bar{B}^{*0}_{s}D^{*-}\) & \(B^{*-}_{c}K^{*0}\) \\ \hline Threshold & 7236.6 & 6772.1 & 7285.1 & 6828.6 & 7377.2 & 7170.0 & 7425.7 & 7226.6 \\ \hline Channels & \(\bar{B}^{0}_{s}\bar{D}^{0}\) & \(B^{-}_{c}K^{+}\) & \(\bar{B}^{*0}_{s}\bar{D}^{0}\) & \(B^{*-}_{c}K^{+}\) & \(\bar{B}^{0}_{s}\bar{D}^{*0}\) & \(B^{-}_{c}K^{*+}\) & \(\bar{B}^{*0}_{s}\bar{D}^{*0}\) & \(B^{*-}_{c}K^{*+}\) \\ \hline Threshold & 7231.8 & 6768.2 & 7280.2 & 6824.7 & 7373.8 & 7166.1 & 7422.3 & 7222.7 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Threshold masses of the 48 channels considered in the present study, in units of MeV.
Figure 2: The vector meson exchange between two pseudoscalar mesons in the \(bc\bar{s}\bar{d}\) system.
Based on Eq. (3), the transition potential \(V(s)\) due to the vector meson exchange between two pseudoscalar mesons can be written as
\[V_{PP}(s)=C_{PP}\times g^{2}(p_{1}+p_{3})(p_{2}+p_{4})\,, \tag{8}\]
with \(p_{1}(p_{3})\) the four-momentum of the \(\bar{B}^{0}_{s}(\bar{B}^{0})\) meson and \(p_{2}(p_{4})\) the four-momentum of the \(D^{+}(D^{+}_{s})\) meson. The matrix \(C_{PP}\) for the \(bc\bar{s}\bar{d}\) system is a \(2\times 2\) matrix:
\[C_{PP}=\left(\begin{array}{c|cc}J=0&\bar{B}^{0}_{s}D^{+}&\bar{B}^{0}D^{+}_{s }\\ \hline\bar{B}^{0}_{s}D^{+}&0&\frac{1}{m^{2}_{K^{*}}}\\ \bar{B}^{0}D^{+}_{s}&\frac{1}{m^{2}_{K^{*}}}&0\end{array}\right)\,, \tag{9}\]
which does not contain any diagonal term.
As shown in Table 1, the threshold masses of the \(\bar{B}^{0}_{s}D^{+}\) and \(\bar{B}^{0}D^{+}_{s}\) channels are quite close to each other. This motivates us to consider their mixing:
\[|(\bar{B}D)^{+}_{s};J=0\rangle =\frac{1}{\sqrt{2}}\left(|\bar{B}^{0}_{s}D^{+}\rangle_{J=0}+| \bar{B}^{0}D^{+}_{s}\rangle_{J=0}\right), \tag{10}\] \[|(\bar{B}D)^{-}_{s};J=0\rangle =\frac{1}{\sqrt{2}}\left(|\bar{B}^{0}_{s}D^{+}\rangle_{J=0}-| \bar{B}^{0}D^{+}_{s}\rangle_{J=0}\right), \tag{11}\]
and the matrix \(C_{PP}\) in this basis transforms to be
\[C^{\prime}_{PP}=\left(\begin{array}{c|cc}J=0&(\bar{B}D)^{+}_{s}&(\bar{B}D)^ {-}_{s}\\ \hline(\bar{B}D)^{+}_{s}&\frac{1}{m^{2}_{K^{*}}}&0\\ (\bar{B}D)^{-}_{s}&0&-\frac{1}{m^{2}_{K^{*}}}\end{array}\right)\,. \tag{12}\]
Accordingly, the attractive combination \(|(\bar{B}D)^{-}_{s};J=0\rangle\) may produce a bound state, while the repulsive combination \(|(\bar{B}D)^{+}_{s};J=0\rangle\) cannot.
### \(V\)-\(P\) interaction in the \(bc\bar{s}\bar{d}\) system
In this subsection we study the interaction due to the vector meson exchange between vector and pseudoscalar mesons in the \(bc\bar{s}\bar{d}\) system. There are four coupled channels:
\[\bar{B}^{*0}_{s}D^{+}\,,\,\bar{B}^{*0}D^{+}_{s}\,,\,\bar{B}^{0}_{s}D^{*+}\,,\,\bar{B}^{0}D^{*+}_{s}\,.\]
As shown in Fig. 3, the exchanged vector meson can still be either the \(K^{*0}\) or \(B^{*-}_{c}\) meson, and again we neglect the latter due to its large mass.
Based on Eq. (3) and Eq. (4), the transition potential \(V(s)\) due to the vector meson exchange between vector and pseudoscalar mesons can be written as
\[V_{VP}(s)=C_{VP}\times g^{2}(p_{1}+p_{3})(p_{2}+p_{4})\,\,\vec{\epsilon}\cdot \vec{\epsilon}^{\prime}\,, \tag{13}\]
with the \(4\times 4\) matrix
\[C_{VP}= \tag{14}\] \[\left(\begin{array}{c|cc}J=1&\bar{B}^{*0}_{s}D^{+}&\bar{B}^{*0}D ^{+}_{s}&\bar{B}^{0}_{s}D^{*+}&\bar{B}^{0}D^{*+}_{s}\\ \hline\bar{B}^{*0}_{s}D^{+}&0&\frac{1}{m^{2}_{K^{*}}}&0&0\\ \bar{B}^{*0}D^{+}_{s}&\frac{1}{m^{2}_{K^{*}}}&0&0&0\\ \bar{B}^{0}_{s}D^{*+}&0&0&0&\frac{1}{m^{2}_{K^{*}}}\\ \bar{B}^{0}D^{*+}_{s}&0&0&\frac{1}{m^{2}_{K^{*}}}&0\end{array}\right)\,.\]
Since the three-momenta of the external vector mesons can be ignored compared to their masses when working at the threshold, we approximate \(\epsilon^{0}\approx 0\) for these external vector mesons. This makes the \(VP\to VP\) transition potential similar to the \(PP\to PP\) transition potential, and we just need to add an extra factor \(\vec{\epsilon}\cdot\vec{\epsilon}^{\prime}\), with \(\vec{\epsilon}(\vec{\epsilon}^{\prime})\) the polarization vector of the initial(final) vector meson.
As shown in Table 1, the threshold masses of the \(\bar{B}^{*0}_{s}D^{+}\) and \(\bar{B}^{*0}D^{+}_{s}\) channels are close to each other, and the threshold masses of the \(\bar{B}^{0}_{s}D^{*+}\) and \(\bar{B}^{0}D^{*+}_{s}\) channels are also close to each other. Accordingly, we consider the four mixed channels:
\[|(\bar{B}^{*}D)^{+}_{s};J=1\rangle =\frac{1}{\sqrt{2}}\left(|\bar{B}^{*0}_{s}D^{+}\rangle_{J=1}+|\bar{B}^{*0}D^{+}_{s}\rangle_{J=1}\right), \tag{15}\] \[|(\bar{B}^{*}D)^{-}_{s};J=1\rangle =\frac{1}{\sqrt{2}}\left(|\bar{B}^{*0}_{s}D^{+}\rangle_{J=1}-|\bar{B}^{*0}D^{+}_{s}\rangle_{J=1}\right),\] (16) \[|(\bar{B}D^{*})^{+}_{s};J=1\rangle =\frac{1}{\sqrt{2}}\left(|\bar{B}^{0}_{s}D^{*+}\rangle_{J=1}+|\bar{B}^{0}D^{*+}_{s}\rangle_{J=1}\right),\] (17) \[|(\bar{B}D^{*})^{-}_{s};J=1\rangle =\frac{1}{\sqrt{2}}\left(|\bar{B}^{0}_{s}D^{*+}\rangle_{J=1}-|\bar{B}^{0}D^{*+}_{s}\rangle_{J=1}\right).\] (18)
and the matrix \(C_{VP}\) in this basis transforms to be
\[C^{\prime}_{VP}=\left(\begin{array}{c|cccc}J=1&(\bar{B}^{*}D)^{+}_{s}&(\bar{B}^{*}D)^{-}_{s}&(\bar{B}D^{*})^{+}_{s}&(\bar{B}D^{*})^{-}_{s}\\ \hline(\bar{B}^{*}D)^{+}_{s}&\frac{1}{m^{2}_{K^{*}}}&0&0&0\\ (\bar{B}^{*}D)^{-}_{s}&0&-\frac{1}{m^{2}_{K^{*}}}&0&0\\ (\bar{B}D^{*})^{+}_{s}&0&0&\frac{1}{m^{2}_{K^{*}}}&0\\ (\bar{B}D^{*})^{-}_{s}&0&0&0&-\frac{1}{m^{2}_{K^{*}}}\end{array}\right)\,. \tag{19}\]
Accordingly, the attractive combinations \(|(\bar{B}^{*}D)^{-}_{s};J=1\rangle\) and \(|(\bar{B}D^{*})^{-}_{s};J=1\rangle\) may produce bound states, while the repulsive combinations \(|(\bar{B}^{*}D)^{+}_{s};J=1\rangle\) and \(|(\bar{B}D^{*})^{+}_{s};J=1\rangle\) cannot.
Figure 3: The vector meson exchange between vector and pseudoscalar mesons in the \(bc\bar{s}\bar{d}\) system.
### \(V\)-\(V\) interaction in the \(bc\bar{s}\bar{d}\) system
In this subsection we study the interaction due to the vector meson exchange between two vector mesons in the \(bc\bar{s}\bar{d}\) system. There are two coupled channels:
\[\bar{B}_{s}^{*0}D^{*+}\,,\,\bar{B}^{*0}D_{s}^{*+}\,.\]
As shown in Fig. 4, the exchanged vector meson can be either the \(K^{*0}\) or \(B_{c}^{*-}\) meson, and we again neglect the contribution from the \(B_{c}^{*-}\) meson exchange due to its large mass. Besides, we also need to take into account the contact term connecting four vector mesons, so the complete \(VV\) transition potential consists of two terms:
\[V_{VV}(s)=V_{VV}(s)^{ex}+V_{VV}(s)^{co}\,. \tag{20}\]
Based on Eq. (4), the transition potential \(V_{VV}(s)^{ex}\) due to the vector meson exchange between two vector mesons can be written as
\[V_{VV}(s)^{ex}=C_{VV}\times g^{2}(p_{1}+p_{3})(p_{2}+p_{4})\epsilon_{1}\cdot \epsilon_{3}\epsilon_{2}\cdot\epsilon_{4}\,, \tag{21}\]
where \(p_{1}(p_{3})\) is the four-momentum of the \(\bar{B}_{s}^{*0}(\bar{B}^{*0})\) meson, \(p_{2}(p_{4})\) is the four-momentum of the \(D^{*+}(D_{s}^{*+})\) meson, \(\epsilon_{1}(\epsilon_{3})\) is the polarization vector of the \(\bar{B}_{s}^{*0}(\bar{B}^{*0})\) meson, and \(\epsilon_{2}(\epsilon_{4})\) is the polarization vector of the \(D^{*+}(D_{s}^{*+})\) meson. The matrix \(C_{VV}\) for the \(bc\bar{s}\bar{d}\) system is a \(2\times 2\) matrix:
\[C_{VV}=\left(\begin{array}{c|cc}J=0,1,2&\bar{B}_{s}^{*0}D^{*+}&\bar{B}^{*0}D _{s}^{*+}\\ \hline\bar{B}_{s}^{*0}D^{*+}&0&\frac{1}{m_{K^{*}}^{2}}\\ \bar{B}^{*0}D_{s}^{*+}&\frac{1}{m_{K^{*}}^{2}}&0\end{array}\right)\,, \tag{22}\]
which does not contain any diagonal term.
In addition, the transition potential \(V_{VV}(s)^{co}\) can be extracted from Eq. (5) to be
\[V_{VV}(s)^{co} =m_{K^{*}}^{2}\cdot C_{VV}\] \[\times g^{2}(-2\epsilon_{\mu}\epsilon^{\mu}\epsilon_{\nu}\epsilon ^{\nu}+\epsilon_{\mu}\epsilon_{\nu}\epsilon^{\mu}\epsilon^{\nu}+\epsilon_{\mu }\epsilon_{\nu}\epsilon^{\nu}\epsilon^{\mu})\,. \tag{23}\]
By using the spin projection operators,
\[\mathcal{P}^{(0)} =\frac{1}{3}\epsilon_{\mu}\epsilon^{\mu}\epsilon_{\nu}\epsilon^{\nu}\] \[\mathcal{P}^{(1)} =\frac{1}{2}(\epsilon_{\mu}\epsilon_{\nu}\epsilon^{\mu}\epsilon ^{\nu}-\epsilon_{\mu}\epsilon_{\nu}\epsilon^{\nu}\epsilon^{\mu})\] \[\mathcal{P}^{(2)} =\frac{1}{2}(\epsilon_{\mu}\epsilon_{\nu}\epsilon^{\mu}\epsilon ^{\nu}+\epsilon_{\mu}\epsilon_{\nu}\epsilon^{\nu}\epsilon^{\mu})-\frac{1}{3} \epsilon_{\mu}\epsilon^{\mu}\epsilon_{\nu}\epsilon^{\nu}\, \tag{24}\]
Eq. (23) can be written separately for the spin \(J=0/1/2\) channels as:
\[V_{VV}(s)^{co}=m_{K^{*}}^{2}\cdot C_{VV}\times\left\{\begin{array}{cc}-4g^ {2}&\text{for}\ J=0,\\ 0&\text{for}\ J=1,\\ 2g^{2}&\text{for}\ J=2.\end{array}\right. \tag{25}\]
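One can verify Eq. (25) directly from the projectors in Eq. (24). Writing \(A=\epsilon_{\mu}\epsilon^{\mu}\epsilon_{\nu}\epsilon^{\nu}\), \(B=\epsilon_{\mu}\epsilon_{\nu}\epsilon^{\mu}\epsilon^{\nu}\) and \(C=\epsilon_{\mu}\epsilon_{\nu}\epsilon^{\nu}\epsilon^{\mu}\), the projectors give \(A=3\mathcal{P}^{(0)}\), \(B-C=2\mathcal{P}^{(1)}\) and \(B+C=2\mathcal{P}^{(2)}+2\mathcal{P}^{(0)}\), so the spin structure of Eq. (23) decomposes as
\[-2A+B+C=-6\mathcal{P}^{(0)}+2\mathcal{P}^{(2)}+2\mathcal{P}^{(0)}=-4\mathcal{P}^{(0)}+2\mathcal{P}^{(2)}\,,\]
which reproduces the coefficients \(-4g^{2}\), \(0\), and \(2g^{2}\) for \(J=0\), \(1\), and \(2\) in Eq. (25).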
As shown in Table 1, the threshold masses of \(\bar{B}_{s}^{*0}D^{*+}\) and \(\bar{B}^{*0}D_{s}^{*+}\) channels are close to each other, so we consider the two mixed channels as we did in the previous subsections:
\[|(\bar{B}^{*}D^{*})_{s}^{+};J=0,1,2\rangle \tag{26}\] \[=\frac{1}{\sqrt{2}}\left(|\bar{B}_{s}^{*0}D^{*+}\rangle_{J=0,1,2} +|\bar{B}^{*0}D_{s}^{*+}\rangle_{J=0,1,2}\right),\] \[|(\bar{B}^{*}D^{*})_{s}^{-};J=0,1,2\rangle\] (27) \[=\frac{1}{\sqrt{2}}\left(|\bar{B}_{s}^{*0}D^{*+}\rangle_{J=0,1,2} -|\bar{B}^{*0}D_{s}^{*+}\rangle_{J=0,1,2}\right).\]
The matrix \(C_{VV}\) in this basis transforms to be
\[C_{VV}^{\prime}=\left(\begin{array}{c|cc}J=0,1,2&(\bar{B}^{*}D^{*})_{s}^{+}& (\bar{B}^{*}D^{*})_{s}^{-}\\ \hline(\bar{B}^{*}D^{*})_{s}^{+}&\frac{1}{m_{K^{*}}^{2}}&0\\ (\bar{B}^{*}D^{*})_{s}^{-}&0&-\frac{1}{m_{K^{*}}^{2}}\end{array}\right)\,. \tag{28}\]
It is worth mentioning that the contribution of the contact term \(V_{VV}(s)^{co}\), subleading in the heavy quark mass counting, is much smaller than the vector meson exchange term \(V_{VV}(s)^{ex}\), so the combinations \(|(\bar{B}^{*}D^{*})_{s}^{-};J=0,1,2\rangle\) are still attractive, while the combinations \(|(\bar{B}^{*}D^{*})_{s}^{+};J=0,1,2\rangle\) are always repulsive.
Figure 4: The vector meson exchange between two vector mesons in the \(bc\bar{s}\bar{d}\) system as well as the contact term connecting four vector mesons.
## III Numerical results
In this section we perform numerical analyses to study the hadronic molecular states with the quark contents \(bc\bar{s}\bar{q}\), \(b\bar{c}s\bar{q}\), and \(b\bar{c}\bar{s}q\). Again we use the \(bc\bar{s}\bar{d}\) system as an example. Based on Eq. (8), Eq. (13), Eq. (20), Eq. (21), and Eq. (25), we can solve the Bethe-Salpeter equation to obtain the scattering amplitude
\[T_{PP/VP/VV}(s)=\frac{V_{PP/VP/VV}(s)}{1-V_{PP/VP/VV}(s)G(s)}\,, \tag{29}\]
where \(G(s)\) is the diagonal loop function
\[G_{ii}(s)=i\int\frac{d^{4}q}{(2\pi)^{4}}\frac{1}{q^{2}-m_{1}^{2}+i\epsilon} \frac{1}{(p-q)^{2}-m_{2}^{2}+i\epsilon}\,. \tag{30}\]
Here \(s=p^{2}\) with \(p\) the total four-momentum; \(m_{1}\) and \(m_{2}\) are the masses of the two mesons involved in the present channel.
We regularize Eq. (30) through the cut-off method as
\[G_{ii}(s)=\int_{0}^{q_{\rm max}}\frac{d^{3}q}{(2\pi)^{3}}\frac{\omega_{1}+ \omega_{2}}{2\omega_{1}\omega_{2}}\frac{1}{s-(\omega_{1}+\omega_{2})^{2}+i \epsilon}\,, \tag{31}\]
with \(\omega_{1}=\sqrt{m_{1}^{2}+\vec{q}\,^{2}}\) and \(\omega_{2}=\sqrt{m_{2}^{2}+\vec{q}\,^{2}}\). In this work we take two values for the cut-off momentum, \(q_{\rm max}=400\) MeV and \(q_{\rm max}=600\) MeV; similar cut-off momenta have been used to study meson-meson interactions with heavy flavors in Refs. [63; 64; 67].
Equation (31) holds on the physical sheet, _i.e._, the first Riemann sheet. Sometimes we also need to search for poles on the second Riemann sheet. In the latter case we define \(G_{ii}^{II}(s)\) as
\[G_{ii}^{II}(s)=G_{ii}(s)+i\frac{k}{4\pi\sqrt{s}}\,, \tag{32}\]
with \(k(s)=\sqrt{(s-(m_{1}+m_{2})^{2})(s-(m_{1}-m_{2})^{2})}/(2\sqrt{s})\) for \({\rm Im}(k)>0\).
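Note that below the threshold \(m_{1}+m_{2}\) the radicand in \(k(s)\) is negative, so \(k\) is purely imaginary and the correction term \(i\,k/(4\pi\sqrt{s})\) in Eq. (32) is real. Hence \(G_{ii}^{II}(s)\) stays real there, which is why the second-sheet virtual-state poles reported later in Table 3 can lie on the real axis below their thresholds.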
In order to express the coupling strength of the pole to different channels, we introduce the coupling \(g_{i}\) and define it in the vicinity of the pole as
\[T_{ij}(s)=\frac{g_{i}g_{j}}{s-s_{p}^{2}}\,. \tag{33}\]
Here \(s_{p}\) is the position of the pole on the \(\sqrt{s}\) complex plane, and \(g_{i}\) is the coupling constant between the pole and the channel \(i\). We can also write it in the residue form as
\[g_{i}^{2}=\lim_{\sqrt{s}\to s_{p}}(s-s_{p}^{2})\ T_{ii}(s)\,. \tag{34}\]
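The pole search can be illustrated numerically. The following Python sketch is our own illustration: the masses, the reduction to the single attractive eigenchannel of Eq. (12), and the threshold approximation \((p_{1}+p_{3})\cdot(p_{2}+p_{4})\approx 4m_{1}m_{2}\) are assumptions of the sketch, not the paper's full coupled-channel setup. It solves \(1-V(s)G(s)=0\) below the \(\bar{B}_{s}^{0}D^{+}\) threshold:

```python
# Illustrative single-channel pole search for the attractive J=0 combination,
# using the cut-off loop function of Eq. (31); a sketch, not the full analysis.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

m1, m2 = 5366.9, 1869.7          # assumed masses of Bbar_s^0 and D^+ (MeV)
m_Kstar = 800.0                  # light vector mass used in the text (MeV)
g = m_Kstar / (2.0 * 93.0)       # g = M_V / (2 f_pi)
q_max = 600.0                    # cut-off momentum (MeV)

def G(sqrt_s):
    """Loop function of Eq. (31), real for sqrt_s below threshold."""
    s = sqrt_s ** 2
    def integrand(q):
        w1, w2 = np.hypot(m1, q), np.hypot(m2, q)
        return q**2 / (2 * np.pi**2) * (w1 + w2) / (2 * w1 * w2) / (s - (w1 + w2)**2)
    return quad(integrand, 0.0, q_max)[0]

# Attractive eigenvalue -1/m_K*^2 of Eq. (12) with threshold kinematics.
V = -4.0 * g**2 * m1 * m2 / m_Kstar**2

threshold = m1 + m2
pole = brentq(lambda e: 1.0 - V * G(e), threshold - 150.0, threshold - 1e-3)
print(f"pole at {pole:.1f} MeV, binding {threshold - pole:.1f} MeV")
```

This crude estimate overbinds compared to the \(\approx 16\) MeV obtained below in the coupled-channel analysis, but it exhibits the mechanism: the attractive eigenchannel together with the cut-off regularized loop generates a pole on the first sheet below threshold.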
Firstly, we use the cut-off momentum \(q_{\rm max}=600\) MeV to perform numerical analyses. We find altogether six bound states in the \(bc\bar{s}\bar{d}\) system with binding energies of about 10-20 MeV: one state of \(J^{P}=0^{+}\) below the \(\bar{B}_{s}^{0}D^{+}\)-\(\bar{B}^{0}D_{s}^{+}\) threshold, one state of \(J^{P}=1^{+}\) below the \(\bar{B}_{s}^{*0}D^{+}\)-\(\bar{B}^{*0}D_{s}^{+}\) threshold, one state of \(J^{P}=1^{+}\) below the \(\bar{B}_{s}^{0}D^{*+}\)-\(\bar{B}^{0}D_{s}^{*+}\) threshold, and three states of \(J^{P}=0^{+}/1^{+}/2^{+}\) below the \(\bar{B}_{s}^{*0}D^{*+}\)-\(\bar{B}^{*0}D_{s}^{*+}\) threshold. We summarize their results in Table 2. Besides, we have investigated the \(bc\bar{s}\bar{u}\) system, where we also find six bound states. Similar to Eq. (10), we denote them as \(|(\bar{B}^{(*)}D^{(*)})^{-}_{s};J\rangle^{\prime}\), and summarize their results also in Table 2. The coupling \(|g_{i}|\) is about 20 GeV for all the channels, indicating that the mixing is roughly balanced, _e.g._, the couplings of the mixing state \(|(\bar{B}D)_{s}^{-};J=0\rangle\) to both the \(\bar{B}^{0}_{s}D^{+}\) and \(\bar{B}^{0}D_{s}^{+}\) channels are roughly equivalent.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline Content: \(bc\bar{s}\bar{d}\) & \(I(J^{P})\) & \(E_{B}\) (MeV) & Channel & \(|g_{i}|\) (GeV) \\ \hline \hline \(|(\bar{B}D)_{s}^{-};J=0\rangle\) & \(\frac{1}{2}(0^{+})\) & 15.7 & \(\bar{B}_{s}^{0}D^{+}\), \(\bar{B}^{0}D_{s}^{+}\) & 19, 21 \\ \hline \(|(\bar{B}^{*}D)_{s}^{-};J=1\rangle\) & \(\frac{1}{2}(1^{+})\) & 17.3 & \(\bar{B}_{s}^{*0}D^{+}\), \(\bar{B}^{*0}D_{s}^{+}\) & 20, 21 \\ \hline \(|(\bar{B}D^{*})_{s}^{-};J=1\rangle\) & \(\frac{1}{2}(1^{+})\) & 16.4 & \(\bar{B}_{s}^{0}D^{*+}\), \(\bar{B}^{0}D_{s}^{*+}\) & 20, 23 \\ \hline \(|(\bar{B}^{*}D^{*})_{s}^{-};J=0\rangle\) & \(\frac{1}{2}(0^{+})\) & 13.6 & \(\bar{B}_{s}^{*0}D^{*+}\), \(\bar{B}^{*0}D_{s}^{*+}\) & 19, 21 \\ \hline \(|(\bar{B}^{*}D^{*})_{s}^{-};J=1\rangle\) & \(\frac{1}{2}(1^{+})\) & 18.2 & \(\bar{B}_{s}^{*0}D^{*+}\), \(\bar{B}^{*0}D_{s}^{*+}\) & 21, 23 \\ \hline \(|(\bar{B}^{*}D^{*})_{s}^{-};J=2\rangle\) & \(\frac{1}{2}(2^{+})\) & 20.5 & \(\bar{B}_{s}^{*0}D^{*+}\), \(\bar{B}^{*0}D_{s}^{*+}\) & 22, 24 \\ \hline \hline Content: \(bc\bar{s}\bar{u}\) & \(I(J^{P})\) & \(E_{B}\) (MeV) & Channel & \(|g_{i}|\) (GeV) \\ \hline \hline \(|(\bar{B}D)_{s}^{-};J=0\rangle^{\prime}\) & \(\frac{1}{2}(0^{+})\) & 14.3 & \(\bar{B}_{s}^{0}D^{0}\), \(B^{-}D_{s}^{+}\) & 19, 22 \\ \hline \(|(\bar{B}^{*}D)_{s}^{-};J=1\rangle^{\prime}\) & \(\frac{1}{2}(1^{+})\) & & \(\bar{B}_{s}^{*0}D^{0}\), \(B^{*-}D_{s}^{+}\) & 19, 22 \\ \hline \(|(\bar{B}D^{*})_{s}^{-};J=1\rangle^{\prime}\) & \(\frac{1}{2}(1^{+})\) & 15.7 & \(\bar{B}_{s}^{0}D^{*0}\), \(B^{-}D_{s}^{*+}\) & 19, 21 \\ \hline \(|(\bar{B}^{*}D^{*})_{s}^{-};J=0,1,2\rangle^{\prime}\) & \(\frac{1}{2}(0^{+}/1^{+}/2^{+})\) & & \(\bar{B}_{s}^{*0}D^{*0}\), \(B^{*-}D_{s}^{*+}\) & \\ \hline \hline \end{tabular}
\end{table}
Table 2: The bound states obtained with the cut-off momentum \(q_{\rm max}=600\) MeV.
Note that all these bound states have zero width. This is partly because: a) we do not consider the widths of the initial and final states, and b) we do not consider the box diagrams with pion exchanges; see Refs. [64; 65] for discussions on these diagrams. Besides, we have neglected the exchange of the \(B_{s}^{*}\) meson, so the bound state \(|(\bar{B}D^{*})^{-}_{s};J=1\rangle\) located at 7361 MeV only couples to the \(\bar{B}^{0}_{s}D^{*+}\) and \(\bar{B}^{0}D_{s}^{*+}\) channels, while it does not couple to the \(\bar{B}^{*0}_{s}D^{+}\) and \(\bar{B}^{*0}D_{s}^{+}\) channels, even though it lies above the thresholds of the latter.
Secondly, we use the cut-off momentum \(q_{\rm max}=400\) MeV to perform numerical analyses. In this case we do not find any bound state, _i.e._, we do not find any pole on the first Riemann sheet below the corresponding thresholds. However, on the second Riemann sheet we find six poles for the \(bc\bar{s}\bar{d}\) system, and another six poles for the \(bc\bar{s}\bar{u}\) system. As summarized in Table 3, all these poles are below their corresponding thresholds, indicating their nature as near-threshold virtual states.
We use the combination \(|(\bar{B}^{*}D^{*})^{-}_{s};J=2\rangle\) as an example, and show its pole position in Fig. 5 as a function of the cut-off momentum \(q_{\rm max}\). We find that this pole becomes a bound state when \(q_{\rm max}>410\) MeV, while it becomes a virtual state when \(q_{\rm max}<410\) MeV. We also show its transition amplitude
\[t(s)\equiv T^{|(\bar{B}^{*}D^{*})^{-}_{s};J=2\rangle\to|(\bar{B}^{*}D^{*})^{-} _{s};J=2\rangle}_{VV}(s)\,, \tag{35}\]
in Fig. 6 for the cut-off momenta \(q_{\rm max}=700\), 600, 500, 400, and 300 MeV. This pole is identified as a bound state in Fig. 6(a,b,c), where it appears as a singularity below the threshold. By contrast, it is identified as a virtual state in Fig. 6(d,e), where it can significantly enhance the near-threshold cusp effect and produce a sharp peak at the threshold. More discussions on near-threshold virtual states can be found in Refs. [68; 69].
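The bound-to-virtual transition can also be probed, under the same simplifying assumptions as in the previous sketch, through the threshold condition \(V\,G(s_{\rm th})=1\): since \(|G|\) is maximal at threshold and decreases below it, a first-sheet pole exists exactly when \(V\,G(s_{\rm th})\geq 1\). The transition point found by this crude criterion need not coincide with the 410 MeV of the full coupled-channel analysis:

```python
# Sketch: scan q_max and classify the pole of the attractive (B*D*) channel
# as bound or virtual via the threshold condition V * G(threshold) >= 1.
import numpy as np
from scipy.integrate import quad

m1, m2 = 5415.4, 2010.3          # assumed masses of Bbar_s^{*0} and D^{*+} (MeV)
m_Kstar = 800.0
g = m_Kstar / (2.0 * 93.0)
V = -4.0 * g**2 * m1 * m2 / m_Kstar**2   # attractive eigenchannel at threshold

def G_threshold(q_max):
    s = (m1 + m2) ** 2
    def integrand(q):
        w1, w2 = np.hypot(m1, q), np.hypot(m2, q)
        return q**2 / (2 * np.pi**2) * (w1 + w2) / (2 * w1 * w2) / (s - (w1 + w2)**2)
    return quad(integrand, 0.0, q_max)[0]

for q_max in (100, 200, 300, 400, 500, 600, 700):
    vg = V * G_threshold(q_max)
    print(f"q_max = {q_max} MeV: V*G = {vg:.2f} ->",
          "bound state" if vg >= 1 else "virtual state")
```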
## IV Summary
In this paper we systematically study the possible hadronic molecular states with the quark contents \(bc\bar{s}\bar{q}\), \(b\bar{c}s\bar{q}\), and \(b\bar{c}\bar{s}q\) (\(q=u/d\)) within the extended local hidden gauge formalism. We solve the Bethe-Salpeter equation to search for poles on both the physical (first Riemann) sheet and the second Riemann sheet.
We study the \(bc\bar{s}\bar{d}\) system by investigating the interactions of the \(\bar{B}^{0}_{s}D^{+}\), \(\bar{B}^{0}D^{+}_{s}\), \(\bar{B}^{*0}_{s}D^{+}\), \(\bar{B}^{*0}D^{+}_{s}\), \(\bar{B}^{0}_{s}D^{*+}\), \(\bar{B}^{0}D^{*+}_{s}\), \(\bar{B}^{*0}_{s}D^{*+}\), and \(\bar{B}^{*0}D^{*+}_{s}\) channels.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline Content: \(bc\bar{s}\bar{d}\) & \(I(J^{P})\) & Pole & Channel & Threshold \\ \hline \hline \(|(\bar{B}D)^{-}_{s};J=0\rangle\) & \(\frac{1}{2}(0^{+})\) & 7235.2+i0 & \(\bar{B}^{0}_{s}D^{+}\), \(\bar{B}^{0}D^{+}_{s}\) & 7236.6, 7248.0 \\ \hline \(|(\bar{B}^{*}D)^{-}_{s};J=1\rangle\) & \(\frac{1}{2}(1^{+})\) & 7284.9+i0 & \(\bar{B}^{*0}_{s}D^{+}\), \(\bar{B}^{*0}D^{+}_{s}\) & 7285.1, 7293.1 \\ \hline \(|(\bar{B}D^{*})^{-}_{s};J=1\rangle\) & \(\frac{1}{2}(1^{+})\) & 7375.7+i0 & \(\bar{B}^{0}_{s}D^{*+}\), \(\bar{B}^{0}D^{*+}_{s}\) & 7377.2, 7391.9 \\ \hline \(|(\bar{B}^{*}D^{*})^{-}_{s};J=0\rangle\) & \(\frac{1}{2}(0^{+})\) & 7423.0+i0 & \(\bar{B}^{*0}_{s}D^{*+}\), \(\bar{B}^{*0}D^{*+}_{s}\) & 7425.7, 7436.9 \\ \hline \(|(\bar{B}^{*}D^{*})^{-}_{s};J=1\rangle\) & \(\frac{1}{2}(1^{+})\) & 7425.4+i0 & \(\bar{B}^{*0}_{s}D^{*+}\), \(\bar{B}^{*0}D^{*+}_{s}\) & 7425.7, 7436.9 \\ \hline \(|(\bar{B}^{*}D^{*})^{-}_{s};J=2\rangle\) & \(\frac{1}{2}(2^{+})\) & 7425.6+i0 & \(\bar{B}^{*0}_{s}D^{*+}\), \(\bar{B}^{*0}D^{*+}_{s}\) & 7425.7, 7436.9 \\ \hline \hline Content: \(bc\bar{s}\bar{u}\) & \(I(J^{P})\) & Pole & Channel & Threshold \\ \hline \(|(\bar{B}D)^{-}_{s};J=0\rangle^{\prime}\) & \(\frac{1}{2}(0^{+})\) & 7226.7+i0 & \(\bar{B}^{0}_{s}D^{0}\), \(B^{-}D^{+}_{s}\) & 7231.8, 7247.7 \\ \hline \(|(\bar{B}^{*}D)^{-}_{s};J=1\rangle^{\prime}\) & \(\frac{1}{2}(1^{+})\) & 7278.4+i0 & \(\bar{B}^{*0}_{s}D^{0}\), \(B^{*-}D^{+}_{s}\) & 7280.2, 7293.1 \\ \hline \(|(\bar{B}D^{*})^{-}_{s};J=1\rangle^{\prime}\) & \(\frac{1}{2}(1^{+})\) & 7370.8+i0 & \(\bar{B}^{0}_{s}D^{*0}\), \(B^{-}D^{*+}_{s}\) & 7373.8, 7391.5 \\ \hline \(|(\bar{B}^{*}D^{*})^{-}_{s};J=0\rangle^{\prime}\) & \(\frac{1}{2}(0^{+})\) & 7415.6+i0 & \(\bar{B}^{*0}_{s}D^{*0}\), \(B^{*-}D^{*+}_{s}\) & 7422.3, 7436.9 \\ \hline \(|(\bar{B}^{*}D^{*})^{-}_{s};J=1\rangle^{\prime}\) & \(\frac{1}{2}(1^{+})\) & 7421.2+i0 & \(\bar{B}^{*0}_{s}D^{*0}\), \(B^{*-}D^{*+}_{s}\) & 7422.3, 7436.9 \\ \hline \(|(\bar{B}^{*}D^{*})^{-}_{s};J=2\rangle^{\prime}\) & \(\frac{1}{2}(2^{+})\) & 7421.9+i0 & \(\bar{B}^{*0}_{s}D^{*0}\), \(B^{*-}D^{*+}_{s}\) & 7422.3, 7436.9 \\ \hline \hline \end{tabular}
\end{table}
Table 3: The near-threshold virtual-state poles obtained with the cut-off momentum \(q_{\rm max}=400\) MeV, in units of MeV.
Since the threshold masses of the \(\bar{B}^{0}_{s}D^{+}\) and \(\bar{B}^{0}D^{+}_{s}\) channels are quite close to each other, we take into account their mixing as
\[|(\bar{B}D)^{+}_{s};J=0\rangle =\frac{1}{\sqrt{2}}\left(|\bar{B}^{0}_{s}D^{+}\rangle_{J=0}+|\bar{B} ^{0}D^{+}_{s}\rangle_{J=0}\right),\] \[|(\bar{B}D)^{-}_{s};J=0\rangle =\frac{1}{\sqrt{2}}\left(|\bar{B}^{0}_{s}D^{+}\rangle_{J=0}-|\bar{ B}^{0}D^{+}_{s}\rangle_{J=0}\right),\]
Similar combinations are considered for all the other channels. With the cut-off momentum \(q_{\rm max}=600\) MeV, we find six bound states in this system with binding energies of about 10-20 MeV, as summarized in Table 2. These six bound states turn into six near-threshold virtual poles when using the cut-off momentum \(q_{\rm max}=400\) MeV, as summarized in Table 3. Similar results are obtained for the \(bc\bar{s}\bar{u}\) system, which are also summarized in Tables 2 and 3.
In the above \(bc\bar{s}\bar{q}\) system the interactions are mainly caused by the exchange of the light vector meson \(K^{*}\). However, in the \(b\bar{c}s\bar{q}\) and \(b\bar{c}\bar{s}q\) systems one cannot exchange light vector mesons, and only the exchanges of the heavy vector mesons \(D^{*}\), \(D^{*}_{s}\), \(B^{*}\), and \(B^{*}_{s}\) are allowed there. Consequently, the interactions of the \(b\bar{c}s\bar{q}\) and \(b\bar{c}\bar{s}q\) systems are expected to be significantly smaller, and we do not find any deeply-bound pole in these systems.
To end this paper, we would like to emphasize that the \(bc\bar{s}\bar{d}\) and \(bc\bar{s}\bar{u}\) systems are rather "clean" since there are only limited numbers of coupled channels. We propose to search for these possibly-existing hadronic molecular states in the \(\Upsilon\) decays. Besides, we propose to search for \(|(\bar{B}^{*}D^{*})^{-}_{s};J=2\rangle\) through its \(D\)-wave two-body decay patterns \(|(\bar{B}^{*}D^{*})^{-}_{s};J=2\rangle\rightarrow\bar{B}_{s}D/\bar{B}D_{s}\), \(|(\bar{B}^{*}D^{*})^{-}_{s};J=1\rangle\) through its \(P\)-wave three-body decay patterns \(|(\bar{B}^{*}D^{*})^{-}_{s};J=1\rangle\rightarrow\bar{B}_{s}D\pi/\bar{B}D_{s}\pi\), and \(|(\bar{B}^{*}D)^{-}_{s};J=1\rangle\) through its weak decay patterns \(|(\bar{B}^{*}D)^{-}_{s};J=1\rangle\to DD_{s}\pi\) and its semileptonic decay patterns \(|(\bar{B}^{*}D)^{-}_{s};J=1\rangle\rightarrow(D^{*}D_{s}/DD^{*}_{s})^{\ell} \bar{\nu}_{\ell}\).
## Appendix A Interactions in the \(b\bar{c}s\bar{q}/b\bar{c}\bar{s}q\) systems
In this appendix, we study the interactions of the \(b\bar{c}s\bar{q}\) and \(b\bar{c}\bar{s}q\) systems. As shown in Fig. 7 and Fig. 8, only the exchange of the heavy vector mesons \(D^{*}\), \(D^{*}_{s}\), \(B^{*}\), and \(B^{*}_{s}\) can occur, while the exchange of the light vector meson \(K^{*}\) is not possible. Therefore, the interactions of the \(b\bar{c}s\bar{q}\) and \(b\bar{c}\bar{s}q\) systems are expected to be much smaller than the interaction of the \(bc\bar{s}\bar{q}\) system.
Figure 8: The exchange of the vector mesons \(D^{*}\) and \(\bar{B}^{*}_{s}\) between two pseudoscalar mesons in the \(b\bar{c}s\bar{q}\) system.
When investigating the exchange of the light vector meson \(K^{*}\) in Sec. II, we have taken its momentum to be \(q^{2}\to 0\), so that its propagator reduces to
\[\frac{1}{q^{2}-m_{V}^{2}+i\epsilon}\approx-\frac{1}{m_{V}^{2}}\,. \tag{30}\]
However, this reduction does not work well when investigating the exchange of heavy mesons, due to the large mass difference between the initial and final mesons. Similar to Ref. [70], in this paper we adopt the following modification to account for the impact of this effect
\[\frac{1}{(q^{0})^{2}-m_{V}^{2}+i\epsilon}\simeq-\lambda\frac{1}{m_{V}^{2}}, \tag{31}\]
where \((q^{0})^{2}=(\Delta M)^{2}=(M_{i}-M_{j})^{2}\) and \(\Delta M\) is the mass difference between the two external mesons. Since the \(B_{c}^{-}\) meson is quite massive, we have neglected its three-momentum and approximated the transferred momentum as \(q^{2}\approx(q^{0})^{2}=(\Delta M)^{2}\). Taking the processes depicted in Fig. 7 as examples, in the left panel \(M_{i}\) and \(M_{j}\) are the masses of the \(\bar{B}_{s}^{0}\) and \(B_{c}^{-}\) mesons, and in the right panel \(M_{i}\) and \(M_{j}\) are the masses of the \(\bar{D}\) and \(B_{c}^{-}\) mesons. Numerically, we obtain
\[\lambda^{t}_{\bar{B}_{s}\bar{D}\to B_{c}^{-}K} = \frac{-m_{D_{s}^{*}}^{2}}{(m_{B_{c}}-m_{B_{s}})^{2}-m_{D_{s}^{*}}^{2}}=1.23, \tag{32}\] \[\lambda^{u}_{\bar{B}_{s}\bar{D}\to B_{c}^{-}K} = \frac{-m_{B^{*}}^{2}}{(m_{B_{c}}-m_{D})^{2}-m_{B^{*}}^{2}}=3.17, \tag{33}\]
where the superscripts \(t\) and \(u\) describe the \(t\) and \(u\) channels, respectively. We note that the contribution from the heavy meson exchange is still quite small compared to the light meson exchange, after the above modifications.
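As a quick numerical cross-check (with typical meson masses which we assume here, \(m_{B_{c}}\approx 6274.5\) MeV, \(m_{B_{s}}\approx 5366.9\) MeV, \(m_{D}\approx 1869.7\) MeV, \(m_{D_{s}^{*}}\approx 2112.2\) MeV, and \(m_{B^{*}}\approx 5324.7\) MeV),
\[\lambda^{t}=\frac{-m_{D_{s}^{*}}^{2}}{(907.6)^{2}-m_{D_{s}^{*}}^{2}}\approx 1.23\,,\qquad\lambda^{u}=\frac{-m_{B^{*}}^{2}}{(4404.8)^{2}-m_{B^{*}}^{2}}\approx 3.17\,,\]
in agreement with Eqs. (32) and (33).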
In the calculations we have taken into account the width of the \(K^{*}\) meson, _i.e._, we have taken into account the \(K^{*}\to K\pi\) decay in the \(B_{c}K^{*}\) loop, as depicted in Fig. 9. This provides an imaginary part to the unitarized \(B_{c}K^{*}\) scattering amplitude, so it can provide a non-zero width to the possibly-existing states generated from the \(B_{c}K^{*}\) interaction. To take into account this effect in the \(B_{c}K^{*}\) and \(B_{c}^{*}K^{*}\) channels, we write Eq. (31) for the loop function \(G(s)\) as
\[G(s) = \int_{0}^{q_{\rm max}}\frac{q^{2}dq}{4\pi^{2}}\,\frac{\omega_{B_{c}^{(*)}}+\omega_{K^{*}}}{\omega_{B_{c}^{(*)}}\omega_{K^{*}}}\,\frac{1}{\sqrt{s}+\omega_{B_{c}^{(*)}}+\omega_{K^{*}}} \tag{34}\] \[\times\frac{1}{\sqrt{s}-\omega_{K^{*}}-\omega_{B_{c}^{(*)}}+i\frac{\sqrt{s}}{2\omega_{K^{*}}}\Gamma_{K^{*}}(s^{\prime})}\,,\]
where \(s^{\prime}=(\sqrt{s}-\omega_{B_{c}^{(*)}})^{2}-\vec{q}^{\,2}\) and
\[\Gamma_{K^{*}}(s^{\prime}) = \Gamma_{K^{*}}(m_{K^{*}}^{2})\frac{m_{K^{*}}^{2}}{s^{\prime}} \left(\frac{p_{\pi}(s^{\prime})}{p_{\pi}(m_{K^{*}}^{2})}\right)^{3} \tag{35}\] \[\times\Theta(\sqrt{s^{\prime}}-m_{K}-m_{\pi})\,,\]
with \(\Gamma_{K^{*}}\) the width of the \(K^{*}\) meson.
The rest of the calculations are the same as those used to study the interaction of the \(bc\bar{s}\bar{q}\) system. However, for the \(b\bar{c}s\bar{q}\) and \(b\bar{c}\bar{s}q\) systems, we do not find any pole on either the first or the second Riemann sheet, indicating that there do not exist deeply-bound states in these systems due to the weak attraction from the heavy meson exchange.
###### Acknowledgements.
We are grateful to Eulogio Oset for the very helpful discussion. This project is supported by the National Natural Science Foundation of China under Grants No. 12075019 and No. 12192263, the Jiangsu Provincial Double-Innovation Program under Grant No. JSS-CRC2021488, the Natural Science Foundation of Henan under Grant No. 222300420554, the Project of Youth Backbone Teachers of Colleges and Universities of Henan Province (2020GGJS017), the Youth Talent Support Project of Henan (2021HYTP002), the Open Project of Guangxi Key Laboratory of Nuclear Physics and Nuclear Technology, No. NLK2021-08, and the Fundamental Research Funds for the Central Universities.
|
2307.05102 | Rational Solutions of Parametric First-Order Algebraic Differential
Equations | In this paper we give a procedure for finding rational solutions of a given
first-order ODE with functional and constant coefficients which occur in a
rational way. We derive an associated system with the same solvability, and
sufficient and necessary conditions for the existence of rational solutions are
given. In the case where all parametric coefficients are constant, we give an
algorithm to compute the rational solutions. In the case where one functional
coefficient appears, we algorithmically find rational general solutions which
rationally depend on the appearing transcendental constant. In the other cases,
the presented procedure is not completely algorithmic. | Sebastian Falkensteiner, Rafael Sendra | 2023-07-11T08:24:25Z | http://arxiv.org/abs/2307.05102v1 | # Rational solutions of parametric first-order algebraic differential equations
###### Abstract.
In this paper we give a procedure for finding rational solutions of a given first-order ODE with functional and constant coefficients which occur in a rational way. We derive an associated system with the same solvability, and sufficient and necessary conditions for the existence of rational solutions are given. In the case where all parametric coefficients are constant, we give an algorithm to compute the rational solutions. In the case where one functional coefficient appears, we algorithmically find rational general solutions which rationally depend on the appearing transcendental constant. In the other cases, the presented procedure is not completely algorithmic.
Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany. Universidad de Alcala, Dpto. Fisica y Matematicas, Alcala de Henares, Madrid, Spain
_E-mail addresses_: [email protected], [email protected] _Date:_ July 12, 2023.
algebraic closure of an algebraic or transcendental extension of \(\mathbb{K}\), and allow \((b_{1},\ldots,b_{m},f_{1}(x),\ldots,f_{l}(x))\in\mathcal{S}\).
First-order AODEs have been studied extensively and there are several solution methods for special classes of them. Most of them, however, do not work with differential equations involving functional coefficients or a set of unknown parameters.
In the case of polynomial coefficients in \(x\), Eremenko [5] provides a degree bound for rational solutions and hence a method for determining them. A more efficient method has been introduced in [8, 9] for autonomous first-order AODEs by associating an algebraic set to the given AODE. Then the well-known theory on algebraic curves can be used for finding properties of the rational solutions which help to simplify the differential problem and actually find the solutions. The extension to rational general solutions of first-order non-autonomous AODEs can be found in [14, 10, 16]. For algebraic solutions, we refer to [1, 18]. Local solutions of first-order autonomous AODEs are treated in [2]. For a wide panoramic vision on this algebraic-geometry approach we refer to the summary paper [6].
We follow the algebraic-geometric approach. We consider the algebraic curve implicitly defined by the given first-order differential equation by viewing \(y\) and \(y^{\prime}\) as independent variables. Algebraic curves involving parameters are treated in [7] and the results therein play a crucial role as theoretical and algorithmic tools. Then, by considering the differential relation again, an associated differential equation can be derived. In the case of constant parameters the associated differential equation can be further simplified leading to a necessary and sufficient condition for the existence of rational solutions. In the case of functional coefficients of the given differential equation, the solvability of the associated system is in general not completely characterised. If the number of different functional coefficients is just one, however, by applying [12], we give a complete algorithm for computing its rational general solutions.
The structure of the paper is as follows. In Section 2 we fix notation and summarize the relevant results on parametric rational curves. In particular, the specialization of the parameters at certain values is treated (see Subsections 2.2 and 2.3). The analysis of surjectivity depending on the specialization of the parameters (Theorem 2.7) is new. In Section 3 we generalize previous results on first-order autonomous differential equations to the parametric case. The solvability behavior depends on the exact values of the constant parameters; we give a finite decomposition of the parameter space on which the solvability is unchanged and rational solutions can be computed whenever they exist (Theorems 3.6, 3.9, and 3.10). Section 4 studies first-order differential equations with functional and possibly parametric coefficients. In the case of one functional coefficient, the solvability in terms of rational general solutions depending rationally on the appearing transcendental constant can be decided (Theorem 4.5) and, in the affirmative case, such solutions can be
computed (Algorithm 2). We illustrate the algorithmic methods and some possible generalizations by examples.
## 2. Parametric rational curves
Let us first fix notations and recall some results on rational curves; for further details see [3, 15]. In the two remaining subsections we analyze the behavior, under specializations, of parametric rational curves.
### Preliminaries
For a field \(\mathbb{L}\), we denote by \(\overline{\mathbb{L}}\) its algebraic closure. We will express tuples with bold face letters. We will work with expressions in functional coefficients \(\mathbf{f}=(f_{1}(x),\ldots,f_{l}(x))\) and constant (possibly unevaluated or unknown) coefficients \(\mathbf{b}=(b_{1},\ldots,b_{m})\), and the corresponding set of independent parameters \(\mathbf{a}=(a_{1},\ldots,a_{n})\) where \(n=m+l\). Then we will usually set \(\mathbb{L}=\mathbb{K}(\mathbf{a})\) and \(\mathbb{F}=\mathbb{L}(\delta)\) where \(\delta\) is an algebraic element over \(\mathbb{L}\).
Let \(\mathbb{L}\supseteq\mathbb{K}\) be a field and let \(F(y,y^{\prime})\in\mathbb{L}[y,y^{\prime}]\) be an irreducible polynomial (over \(\overline{\mathbb{L}}\)) and depending on \(y^{\prime}\). Then we define the _associated curve to \(F\)_ as the zero-set of \(F\) over \(\overline{\mathbb{L}}\), i.e.
\[\mathcal{C}(F)=\{(p,q)\in\overline{\mathbb{L}}^{2}\mid F(p,q)=0\}.\]
A _(rational) parametrization_ of \(\mathcal{C}(F)\) is a pair \(\mathcal{P}(t)=(P_{1}(t),P_{2}(t))\in\overline{\mathbb{L}}(t)^{2}\) such that \(F(\mathcal{P}(t))=0\) holds and not both components are in \(\overline{\mathbb{L}}\). A rational parametrization of \(\mathcal{C}(F)\) exists if and only if the genus of the curve is equal to zero [15, Theorem 4.63]. If \(\mathcal{P}\) is birational, then \(\mathcal{P}\) is called a _proper_ or _birational_ parametrization. If \(\mathcal{C}(F)\) admits a parametrization, we say that \(\mathcal{C}(F)\) is a rational curve.
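To make the definition concrete, here is a quick computational check on the illustrative curve \(y^{2}+z^{2}-1=0\) (our own example, not one from the paper):

```python
# Sketch: verifying that P(t) is a rational parametrization of C(F)
# for the illustrative curve F = y^2 + z^2 - 1.
from sympy import symbols, simplify

t, y, z = symbols('t y z')
F = y**2 + z**2 - 1
P = ((1 - t**2) / (1 + t**2), 2 * t / (1 + t**2))
# A parametrization must satisfy F(P(t)) = 0 with a non-constant component.
print(simplify(F.subs({y: P[0], z: P[1]})))   # -> 0
```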
Let us note that for the differential part of the problem it would not be necessary to require that \(F\) is irreducible. However, only in the irreducible case can \(\mathcal{C}(F)\) admit a rational parametrization [15, Theorem 4.4].
In general, if one computes a parametrization \(\mathcal{P}(t)\) of \(\mathcal{C}(F)\), the ground field \(\mathbb{L}\) has to be extended. The coefficient field of \(\mathcal{P}(t)\) is called the _field of definition_. Moreover, a subfield \(\mathbb{F}\) of \(\overline{\mathbb{L}}\) is called a _field of parametrization_ of \(\mathcal{C}(F)\) if there exists a parametrization with \(\mathbb{F}\) as field of definition.
One can achieve a field of parametrization \(\mathbb{F}=\mathbb{L}(\delta)\) of \(\mathcal{C}(F)\), for some \(\delta\) with \(\delta^{2}\in\mathbb{L}\), as highlighted in the following theorem [11].
**Theorem 2.1**.:
1. _If_ \(\deg(\mathcal{C}(F))\) _is odd then_ \(\mathbb{L}\) _is a field of parametrization._
2. _If_ \(\deg(\mathcal{C}(F))\) _is even then either_ \(\mathbb{L}\) _is a field of parametrization or there exists_ \(\delta\in\overline{\mathbb{L}}\) _algebraic over_ \(\mathbb{L}\)_, with minimal polynomial_ \(t^{2}-\alpha\in\mathbb{L}[t]\)_, such that_ \(\mathbb{L}(\delta)\) _is a field of parametrization of_ \(\mathcal{C}(F)\)_._
For the equation (1.2) under study, we will use \(\mathbb{L}=\mathbb{K}(\mathbf{a})\) where \(\mathbf{a}\) denotes a tuple of unspecified parameters. The notation for the evaluation of the parameters \(\mathbf{a}\) will simply consist in replacing the parameters by \(\mathbf{a}^{0}=(\mathbf{b},\mathbf{f})\) or, if the dependencies on \(\mathbf{a}\) are not explicitly stated, by prepending \(\mathbf{a}^{0}\) in the argument. If we evaluate just \(\mathbf{b}\) or \(\mathbf{f}\), respectively, we will use the same principle for
the corresponding coordinates of \(\mathbf{a}\). For example, \(\mathcal{C}(\mathbf{f};F)\) denotes the curve defined by the partly evaluated polynomial \(F((a_{m+1},\ldots,a_{n})=\mathbf{f},y,z)\) over \(\overline{\mathbb{K}(a_{1},\ldots,a_{m})}\). At some steps, it might be necessary to work with field extensions by elements \(\gamma(\mathbf{a})\) in the parameters \(\mathbf{a}\). Also in this case, we will simply write the dependencies on \(\mathbf{a}\) and not explicitly state \(\gamma(\mathbf{a})\) in the argument. For given \(f,g\in\mathbb{K}(\mathbf{a})[\mathbf{z}]\), we denote by \(\operatorname{res}_{z_{0}}(f,g)\) the resultant of \(f\) and \(g\) with respect to the variable \(z_{0}\) among \(\mathbf{z}\).
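For instance, the resultant notation can be exercised with SymPy (the polynomials below are chosen only for illustration):

```python
# Sketch illustrating res_{z0}(f, g): it eliminates z0 and vanishes
# exactly on the specializations where f and g share a root in z0.
from sympy import symbols, resultant

a, z0, z1 = symbols('a z0 z1')
f = z0**2 + a * z1
g = z0 - z1
print(resultant(f, g, z0))   # -> z1**2 + a*z1
```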
In addition, throughout the paper we use some Zariski open subsets, to be considered when specializing the parameters, that have been defined in [7]. More precisely, we use
1. For a polynomial \(R\), we consider the sets \(\Omega_{\operatorname{def}(R)}\) and \(\Omega_{\operatorname{nonZ}(R)}\) on which \(R\) is defined under specialization and is nonzero, respectively (see [7, Definition 3.3]).
2. For two polynomials \(f,g\), with coefficients in a unique factorization domain, we consider the set \(\Omega_{\operatorname{gcd}(f,g)}\) (see [7, Definition 3.6]). Specializing in this open subset, the gcd behaves properly under evaluation (see the sketch after this list).
3. For a squarefree polynomial \(f\), we use \(\Omega_{\operatorname{sqrfree}(f)}\) as in [7, Definition 3.12] such that for \(\mathbf{a}^{0}\in\Omega_{\operatorname{sqrfree}(f)}\) it holds that \(f(\mathbf{a}^{0};\mathbf{z})\) is square-free.
4. For a given proper rational parametrization in reduced form \(\mathcal{P}=(p_{1}/q_{1},p_{2}/q_{2})\) of \(\mathcal{C}(F)\), we define \(\Omega_{\operatorname{def}(\mathcal{P})}=\Omega_{\operatorname{def}(q_{1})} \cap\Omega_{\operatorname{def}(q_{2})}\). Moreover, we consider the open set \(\Omega_{\operatorname{proper}(\mathcal{P})}\subseteq\mathcal{S}\) as in [7, Definition 5.1 and Definition 5.4] such that every specialization \(\mathbf{a}^{0}\in\Omega_{\operatorname{proper}(\mathcal{P})}\) satisfies that \(\mathcal{P}(\mathbf{a}^{0};t)\) is a proper parametrization of the specialized curve \(\mathcal{C}(\mathbf{a}^{0};F)\).
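The role of these open subsets can be seen in a small example (ours, for illustration): the gcd of two specialized polynomials may be larger than the specialization of the generic gcd, and \(\Omega_{\operatorname{gcd}(f,g)}\) excludes exactly such bad parameter values.

```python
# Sketch: the gcd can jump under specialization of the parameter a.
from sympy import symbols, gcd

a, y = symbols('a y')
f = y**2 - a          # generically coprime with g over K(a)
g = y - 1
print(gcd(f, g))              # 1   (generic gcd)
print(gcd(f.subs(a, 1), g))   # y - 1   (gcd jumps at the specialization a = 1)
```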
### Decomposition w.r.t. parametrizations
Let us now give further details on rational parametrizations of families of rational curves depending on several unspecified parameters. We present an algorithmic computation of such parametrizations. For details on parametric rational curves we refer to [7].
It can be the case that for some specializations \(\mathbf{a}^{0}=(\mathbf{b},\mathbf{f})\) the genus of \(\mathcal{C}(\mathbf{a}^{0};F)\) decreases or the curve becomes reducible. Let us say that a specialization _degenerates_ if either \(F(\mathbf{a}^{0};y,z)\) is not well-defined or \(F(\mathbf{a}^{0};y,z)\in\overline{\mathbb{K}(\mathbf{a}^{0})}\). As a result of the process described in [7], we find a unique disjoint decomposition of \(\mathcal{S}\)
\[\mathcal{S}=\mathcal{S}_{1}\dot{\cup}\mathcal{S}_{2}\dot{\cup}\mathcal{S}_{3} \tag{2.1}\]
such that, for every specialization \(\mathbf{a}^{0}\in\mathcal{S}_{i}\), case (i) holds:
1. the specialization degenerates;
2. the genus is positive and preserved, or \(F\) is reducible (over \(\overline{\mathbb{K}(\mathbf{a}^{0})}\));
3. the genus is zero and a proper parametrization of \(\mathcal{C}(\mathbf{a}^{0};F)\) is provided.
Moreover, \(\mathcal{S}_{3}\) can be further decomposed into finitely many parameter sets, say \(\mathcal{S}_{3}=\bigcup_{i\in I}\mathcal{S}_{3,i}\), such that for each \(i\in I\) there is a corresponding rational
parametrization \(\mathcal{P}_{i}\) where \(\mathcal{P}_{i}(\mathbf{a}^{0};t)\) is well-defined for every \(\mathbf{a}^{0}\in\mathcal{S}_{3,i}\) and provides a proper rational parametrization of \(\mathcal{C}(\mathbf{a}^{0};F)\) (see [7, Remark 6.1]). We will call such a decomposition of \(\mathcal{S}\) a _decomposition w.r.t. (rational) parametrizations_. Let us note that the decomposition of \(\mathcal{S}_{3}\) is in general not unique since it depends on the chosen rational parametrizations.
**Corollary 2.2**.: _Let \(F\in\mathbb{K}(\boldsymbol{a})[y,z]\) be irreducible and let \(\mathcal{S}=\mathcal{S}_{1}\dot{\cup}\mathcal{S}_{2}\dot{\cup}\mathcal{S}_{3}\), \(\mathcal{S}_{3}=\dot{\bigcup}_{i\in I}\mathcal{S}_{3,i}\) be a decomposition w.r.t. parametrizations. Let \(\boldsymbol{a}^{0}\in\mathcal{S}\) be such that \(\mathcal{C}(\boldsymbol{a}^{0};F)\) is rational. Let \(\tilde{\mathcal{P}}\in\overline{\mathbb{K}(\boldsymbol{a}^{0})}(t)^{2}\) be a rational parametrization of \(\mathcal{C}(\boldsymbol{a}^{0};F)\). Then, there exists a component \(\mathcal{S}_{3,i}\) with a corresponding parametrization \(\mathcal{P}_{i}(t)\) such that \(\tilde{\mathcal{P}}\) is a reparametrization of \(\mathcal{P}_{i}\), i.e. \(\mathcal{P}_{i}(\boldsymbol{a}^{0};s)=\tilde{\mathcal{P}}(t)\) for some non-constant \(s\in\overline{\mathbb{K}(\boldsymbol{a}^{0})}(t)\)._
Proof.: By assumption, \(\mathcal{C}(\mathbf{a}^{0};F)\) is rational. Thus, by construction, \(\mathbf{a}^{0}\in\mathcal{S}_{3}\). Hence, there exists \(\mathcal{S}_{3,i}\) with a corresponding parametrization \(\mathcal{P}\) such that \(\mathcal{P}(\mathbf{a}^{0};t)\) is a proper parametrization of \(\mathcal{C}(\mathbf{a}^{0};F)\). By [15, Lemma 4.17], \(\tilde{\mathcal{P}}\) is a reparametrization of \(\mathcal{P}(\mathbf{a}^{0};t)\).
Finally, let us note that in the case of a single parameter, the field of definition can be chosen as the base field:
_Remark 2.3_.: Based on Tsen's theorem [4], in the case of \(n=1\) and \(F\in\mathbb{K}(a)[y,z]\) defining the rational curve \(\mathcal{C}(F)\), \(\mathbb{K}(a)\) is a field of parametrization [7, Corollary 2.3].
### Decomposition w.r.t. surjectivity
In this part of the section we refine the decomposition of \(\mathcal{S}_{3}\) in order to guarantee a surjective covering.
By [15, Theorem 6.22], for any proper parametrization \(\mathcal{P}(t)\) of \(\mathcal{C}(F)\) it holds that \(\mathcal{C}(F)\setminus\mathcal{P}(\overline{\mathbb{L}})\) contains at most one point. If this point exists, we call it the _critical point_ of \(\mathcal{P}(t)\). When \(\mathcal{P}(t)\) is not surjective, we may work with a finite collection of parametrizations. The following lemma ensures that the coefficients of these parametrizations can still be assumed to be in the field of parametrization.
**Lemma 2.4**.: _Let \(\mathbb{L}\) be a field, \(F(y,z)\in\mathbb{L}[y,z]\), \(\mathcal{C}(F)\) be a rational curve and let \(\mathbb{F}\) be a parametrizing field of \(\mathcal{C}(F)\). Then, there exists a set of proper parametrizations \(\{\mathcal{P}_{i}(t)\}_{i\in I}\subset\mathbb{F}(t)^{2}\), with \(\#(I)\leq 2\), such that \(\mathcal{C}(F)=\bigcup_{i\in I}\mathcal{P}_{i}(\overline{\mathbb{L}})\)._
Proof.: We assume w.l.o.g. that \(\mathcal{C}(F)\) is neither a vertical nor a horizontal line. So, in the following, no component of the parametrizations is constant. Since \(\mathbb{F}\) is a parametrizing field, let \(\mathcal{P}_{1}(t)\in\mathbb{F}(t)^{2}\) be a proper parametrization. Let
\[\mathcal{P}_{1}=\left(\frac{p_{1}}{q_{1}},\frac{p_{2}}{q_{2}}\right),\]
with \(\gcd(p_{i},q_{i})=1\). If there exists \(i\in\{1,2\}\) such that \(\deg(p_{i})>\deg(q_{i})\) then \(\mathcal{P}_{1}(\overline{\mathbb{L}})=\mathcal{C}(F)\) (see [15, Corollary 6.20]), and the statement follows with \(I=\{1\}\).
Let us assume that \(\deg(p_{i})\leq\deg(q_{i})\) for \(i\in\{1,2\}\). Moreover, let us also assume that none of the polynomials \(p_{1},p_{2},q_{1},q_{2}\) has zero as a root; if this were the case, we could apply the change \(\mathcal{P}_{1}(t+a)\) with \(a\in\mathbb{L}\).
Let us express the polynomials \(p_{i},q_{j}\) as
\[p_{1}=\sum_{i=0}^{r}a_{i}t^{i},\,q_{1}=\sum_{i=0}^{n}b_{i}t^{i},\,p_{2}=\sum_{i =0}^{s}c_{i}t^{i},\,q_{2}=\sum_{i=0}^{m}d_{i}t^{i},\]
where \(a_{r}a_{0}b_{n}b_{0}c_{s}c_{0}d_{m}d_{0}\neq 0\). Then, by [15, Theorem 6.22],
\[\mathcal{C}(F)\setminus\{(a_{r}/b_{n},c_{s}/d_{m})\}\subset\mathcal{P}_{1}( \overline{\mathbb{L}}).\]
Now, let \(\mu\in\mathbb{F}\) be such that \(p_{1}(\mu)b_{n}-q_{1}(\mu)a_{r}\neq 0\) and \(p_{1}(\mu)q_{1}(\mu)p_{2}(\mu)q_{2}(\mu)\neq 0\); this is possible because \(b_{n},a_{r},p_{1},q_{1},p_{2},q_{2}\) are not zero. We consider the parametrization \(\mathcal{P}_{2}(t)=\mathcal{P}_{1}(1/t+\mu)\). That is
\[\mathcal{P}_{2}(t) = \left(\frac{(a_{r}+\tilde{a}_{r-1}t+\cdots+\tilde{a}_{1}t^{r-1}+p_{1}(\mu)t^{r})t^{n-r}}{b_{n}+\tilde{b}_{n-1}t+\cdots+\tilde{b}_{1}t^{n-1}+q_{1}(\mu)t^{n}},\right.\] \[\left.\frac{(c_{s}+\tilde{c}_{s-1}t+\cdots+\tilde{c}_{1}t^{s-1}+p_{2}(\mu)t^{s})t^{m-s}}{d_{m}+\tilde{d}_{m-1}t+\cdots+\tilde{d}_{1}t^{m-1}+q_{2}(\mu)t^{m}}\right),\]
for some \(\tilde{a}_{i},\tilde{b}_{i},\tilde{c}_{i},\tilde{d}_{i}\in\mathbb{F}\). Now, \(\mathcal{C}(F)\setminus\{(p_{1}(\mu)/q_{1}(\mu),p_{2}(\mu)/q_{2}(\mu))\}\subset\mathcal{P}_{2}(\overline{\mathbb{L}})\). Since \(p_{1}(\mu)/q_{1}(\mu)\neq a_{r}/b_{n}\), the statement follows for \(I=\{1,2\}\).
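The construction in the proof can be replayed computationally on the unit circle (an illustrative example of ours); the second parametrization \(\mathcal{P}_{2}(t)=\mathcal{P}_{1}(1/t+\mu)\) reaches the critical point of \(\mathcal{P}_{1}\):

```python
# Sketch of the surjective covering of Lemma 2.4 on y^2 + z^2 = 1.
from sympy import symbols, cancel, limit, oo

t = symbols('t')
P1 = ((1 - t**2) / (1 + t**2), 2 * t / (1 + t**2))   # proper parametrization
# Critical point of P1: the limit point as t -> oo, namely (-1, 0).
print(limit(P1[0], t, oo), limit(P1[1], t, oo))       # -1 0

mu = 1                                                # admissible choice of mu
P2 = tuple(cancel(c.subs(t, 1 / t + mu)) for c in P1)
print([c.subs(t, 0) for c in P2])                     # [-1, 0]: point recovered
```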
_Remark 2.5_.: Throughout this paper, when we speak about a surjective rational covering, we will mean the covering provided by the proof of Lemma 2.4.
For a given proper rational parametrization \(\mathcal{P}\) of \(\mathcal{C}(F)\), we consider the open set \(\Omega_{\mathrm{proper}(\mathcal{P})}\subseteq\mathcal{S}\). In addition, let \(\{\mathcal{P}_{i}(t)\}_{i\in I}\subset\mathbb{F}(t)^{2}\), with \(\#(I)\leq 2\) be a surjective rational covering of a given curve \(\mathcal{C}(F)\). Then, we introduce a new open subset in the following way. Let \(C_{i}:=(A_{i,1},A_{i,2})\in\mathbb{F}^{2}\) be the critical point of \(\mathcal{P}_{i}(t)\); and let \(\mathcal{P}_{i}\) be expressed in reduced form as \((p_{i,1}/q_{i,1},p_{i,2}/q_{i,2})\). We consider the polynomials
\[g_{i,j}:=A_{i,j}q_{i,j}-p_{i,j},\,\,\,i\in I\,\,\text{and}\,\,j\in\{1,2\},\]
and
\[g_{i}:=\gcd(g_{i,1},g_{i,2}),\quad R_{i}:=\operatorname{res}_{t}(g_{i},q_{i,1}q_{i,2}),\,\,\,i\in I.\]
Then, we define the open subset
\[\Omega_{\mathrm{surj}}(\{\mathcal{P}_{i}(t)\}_{i\in I}):=\bigcap_{i\in I} \left(\Omega_{\gcd(g_{i,1},g_{i,2})}\cap\Omega_{\mathrm{nonZ}(R_{i})}\right).\]
**Lemma 2.6**.: _Let \(\{\mathcal{P}_{i}(t)\}_{i\in I}\subset\mathbb{F}(t)^{2}\), with \(\#(I)\leq 2\), be a surjective rational covering of a given curve \(\mathcal{C}(F)\) such that \(\mathcal{C}(F)=\bigcup_{i\in I}\mathcal{P}_{i}(\overline{\mathbb{L}})\). Let \(\boldsymbol{a}^{0}\in\bigcap_{i\in I}\Omega_{\mathrm{proper}(\mathcal{P}_{i})} \cap\Omega_{\mathrm{surj}}(\{\mathcal{P}_{i}(t)\}_{i\in I})\). Then_
\[\mathcal{C}(F,\boldsymbol{a}^{0})=\bigcup_{i\in I}\mathcal{P}_{i}(\overline{ \mathbb{K}(\boldsymbol{a}^{0})}).\]
Proof.: Since \(\mathbf{a}^{0}\in\Omega_{\mathrm{proper}(\mathcal{P}_{i})}\), by [7, Theorem 5.5], \(\mathcal{P}_{i}(\mathbf{a}^{0};t)\) properly parametrizes \(\mathcal{C}(\mathbf{a}^{0};F)\). Moreover, the numerators and denominators of \(\mathcal{P}_{i}(t)\) stay coprime (see the proof of Theorem 5.5 in [7]). Furthermore, the degrees of the numerators and denominators are also preserved. So, the critical point of \(\mathcal{P}_{i}(\mathbf{a}^{0};t)\) is the specialization of the critical point of \(\mathcal{P}_{i}(t)\), namely \(C_{i}(\mathbf{a}^{0})\). It remains to prove that \(C_{i}(\mathbf{a}^{0})\) is reachable by \(\mathcal{P}_{j}(\mathbf{a}^{0};t)\) for some \(j\in I\). By hypothesis, there exists \(t_{0}\in\overline{\mathbb{L}}\), and \(j\in I\), such that \(\mathcal{P}_{j}(t_{0})=C_{i}\). In particular, \(g_{j}(t_{0})=0\) and hence \(\deg_{t}(g_{j})>0\). On the other hand, since \(\mathbf{a}^{0}\in\Omega_{\mathrm{gcd}(g_{j,1},g_{j,2})}\), by [7, Corollary 3.8], \(g_{j}(\mathbf{a}^{0};t)=\gcd(g_{j,1}(\mathbf{a}^{0};t),g_{j,2}(\mathbf{a}^{0};t))\) and \(\deg(\gcd(g_{j,1}(\mathbf{a}^{0};t),g_{j,2}(\mathbf{a}^{0};t)))=\deg(g_{j}(t))>0\). Let \(t_{1}\in\overline{\mathbb{K}(\mathbf{a}^{0})}\) be a root of \(\gcd(g_{j,1}(\mathbf{a}^{0};t),g_{j,2}(\mathbf{a}^{0};t))\). Since \(\mathbf{a}^{0}\in\Omega_{\mathrm{nonZ}(R_{j})}\), it holds that \(q_{j,1}(\mathbf{a}^{0};t_{1})q_{j,2}(\mathbf{a}^{0};t_{1})\neq 0\). Thus, \(\mathcal{P}_{j}(\mathbf{a}^{0};t_{1})=C_{i}(\mathbf{a}^{0})\).
Using the previous result, we further decompose the parameter space under the surjectivity criterion. For each parametrization \(\mathcal{P}_{i}\) associated to \(\mathcal{S}_{3,i}\) in the decomposition w.r.t. parametrizations, we apply Lemma 2.4 to get \(\{\mathcal{P}_{i,j}\}_{j\in J_{i}}\) and we replace, in the construction in [7, Section 6], \(\Omega_{\mathrm{proper}(\mathcal{P}_{i})}\) by \(\Omega_{i}:=\bigcap_{j\in J_{i}}\Omega_{\mathrm{proper}(\mathcal{P}_{i,j})} \cap\Omega_{\mathrm{surj}}(\{\mathcal{P}_{i,j}(t)\}_{j\in J_{i}})\). Then we eventually obtain again a finite number of constructible sets \(\mathcal{S}_{3,i}\). Let us call such a decomposition of \(\mathcal{S}\) a _decomposition w.r.t. surjective (rational) parametrizations_ and the \(\{\mathcal{P}_{i,j}\}_{j\in J_{i}}\) the corresponding parametrization sets.
**Theorem 2.7**.: _Let \(F\in\mathbb{K}(\textbf{a})[y,z]\) be irreducible and let \(\mathcal{S}=\mathcal{S}_{1}\dot{\cup}\mathcal{S}_{2}\dot{\cup}\mathcal{S}_{3}\), \(\mathcal{S}_{3}=\dot{\bigcup}_{i\in I}\mathcal{S}_{3,i}\) be a decomposition w.r.t. surjective parametrizations. Then, there exists a set \(\{\mathcal{P}_{i,j}(t)\}_{i\in I,j\in J_{i}}\) with \(\#(I)\leq n\), \(\#(J_{i})\leq 2\), such that for every \(\textbf{a}^{0}\in\mathcal{S}_{3,i}\)_
1. \(\mathcal{P}_{i,j}(\textbf{a}^{0};t)\)_,_ \(j\in J_{i}\)_, are proper parametrizations of_ \(\mathcal{C}(\textbf{a}^{0};F)\)_; and_
2. \(\mathcal{C}(\textbf{a}^{0};F)=\bigcup_{j\in J_{i}}\mathcal{P}_{i,j}(\overline{ \mathbb{K}(\textbf{a}^{0})})\)_._
Proof.: From [7, Remark 6.1] we obtain the decomposition of the parameter space \(\mathcal{S}=\mathcal{S}_{1}\dot{\cup}\mathcal{S}_{2}\dot{\cup}\mathcal{S}_{3}\) and proper parametrizations \(\mathcal{P}_{i}\) such that \(\mathcal{P}_{i}(\mathbf{a}^{0};t)\) is a proper parametrization of \(\mathcal{C}(\mathbf{a}^{0};F)\) for every \(\textbf{a}^{0}\in\Omega_{\mathrm{proper}(\mathcal{P}_{i})}\). For every \(\mathcal{P}_{i}\), by Lemma 2.4, there are \(\mathcal{P}_{i,j}\), \(j\in J_{i}\) with \(\#(J_{i})\leq 2\), such that \(\mathcal{C}(F)=\bigcup_{j\in J_{i}}\mathcal{P}_{i,j}(\overline{\mathbb{L}})\). Then the result follows from Lemma 2.6.
The main difference between a decomposition w.r.t. parametrizations and w.r.t. surjective parametrizations is that in the first case, for each \(\mathcal{S}_{3,i}\) there is a rational parametrization that specializes properly, while in the second case, for each \(\mathcal{S}_{3,i}\) there is a finite set of parametrizations whose union of images covers the whole curve, and this property is preserved under specializations.
By an iterative construction, for both decompositions, we adjoin to every \(\mathcal{S}_{i}\) and \(\mathcal{S}_{3,i}\) a computable field \(\mathbb{F}_{\mathrm{J}}\), where \(\mathrm{J}\) denotes an ideal represented by a Gröbner basis, such that every specialization \(\mathbf{a}^{0}\in\mathcal{S}_{1},\mathbf{a}^{0}\in\mathcal{S}_{2}\) or \(\mathbf{a}^{0}\in\mathcal{S}_{3,i}\), respectively, can be treated simultaneously and leads to an algorithmic treatment (see also [7, Section 6]).
## 3. Differential equations with constant parameters
In this section, we will use some of the notation introduced previously. Let \(\mathbb{L}=\mathbb{K}(\mathbf{a})\), where \(\mathbb{K}\) is a computable field of characteristic zero, and, for a given irreducible \(F\in\mathbb{L}[y,y^{\prime}]\), we denote by \(\mathcal{C}(F)\) the corresponding algebraic curve over \(\overline{\mathbb{L}}\). The field \(\mathbb{L}\) may also be chosen as \(\mathbb{F}_{\mathrm{J}}\) for some prime ideal \(\mathrm{J}\) as explained at the end of Section 2.
Throughout this section we assume that \(F\) in (1.2) has only constant coefficients, i.e. \(\mathbf{f}\) does not appear (\(l=0,m=n\)) and the parameters \(\mathbf{a}\) are only specialized at constants \(\mathbf{b}=(b_{1},\ldots,b_{m})\) with \(\frac{d\,b_{i}}{dx}=b_{i}^{\prime}=0\) for every \(i\in\{1,\ldots,m\}\).
The following two statements, Lemma 3.1 and Theorem 3.2, follow by the same proof as in [8] by replacing the coefficient field \(\mathbb{Q}\) with \(\mathbb{L}\).
**Lemma 3.1**.: _Let \(y(x)\in\overline{\mathbb{L}}(x)\) be a solution of \(F(y,y^{\prime})=0\) where \(F\in\mathbb{L}[y,y^{\prime}]\) is irreducible. Then \((y(t),y^{\prime}(t))\) is a proper rational parametrization of \(\mathcal{C}(F)\)._
Since all proper rational parametrizations of \(\mathcal{C}(F)\) are related by a Möbius transformation, after a careful analysis including the derivative, the following can be shown.
**Theorem 3.2**.: _Let \(F\in\mathbb{L}[y,y^{\prime}]\) be irreducible and let \(\mathcal{P}(t)=(P_{1}(t),P_{2}(t))\in\mathbb{L}(t)^{2}\) be a proper parametrization of \(\mathcal{C}(F)\). Then there is a rational solution of \(F(y,y^{\prime})=0\) if and only if either_
\[\alpha\,P_{1}^{\prime}(t)=P_{2}(t)\ \ \text{or}\ \ \alpha\,(t-\beta)^{2}\,P_{1}^{ \prime}(t)=P_{2}(t) \tag{3.1}\]
_for some \(\alpha,\beta\in\mathbb{L}\) with \(\alpha\neq 0\). In the affirmative case, \(y(x)=P_{1}(\alpha\cdot(x+c))\) (or \(y(x)=P_{1}(\beta-\frac{1}{\alpha\cdot(x+c)})\)), where \(c\in\overline{\mathbb{L}}\) is an arbitrary constant, defines all rational solutions of \(F(y,y^{\prime})=0\)._
_Remark 3.3_.: Let us note that if \(\alpha=0\) in the first case of Theorem 3.2, we obtain a constant solution given as \(y(x)=P_{1}(0)\). However, not all constant solutions of \(F(y,y^{\prime})=0\) can be found in this way.
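Condition (3.1) of Theorem 3.2 can be tested mechanically. The following sketch is a minimal illustration under my own naming (it is not taken from the text, and it anticipates the data of Examples 3.12 and 3.13 below): it checks whether \(P_{2}/P_{1}^{\prime}\) is a nonzero constant \(\alpha\) or of the form \(\alpha(t-\beta)^{2}\).

```python
import sympy as sp

t = sp.Symbol('t')

def condition_31(P1, P2):
    """Test condition (3.1): is P2/P1' a nonzero constant alpha, or of
    the form alpha*(t - beta)**2? Return the data found, else None."""
    ratio = sp.cancel(P2 / sp.diff(P1, t))
    if ratio == 0 or not ratio.is_polynomial(t):
        return None
    poly = sp.Poly(ratio, t)
    if poly.degree() == 0:                      # first case of (3.1)
        return ('alpha', ratio)
    if poly.degree() == 2:                      # second case of (3.1)
        c2, c1, c0 = poly.all_coeffs()
        beta = sp.simplify(-c1 / (2 * c2))
        if sp.simplify(c2 * (t - beta)**2 - ratio) == 0:
            return ('alpha,beta', (c2, beta))
    return None

a1, a2 = sp.symbols('a1 a2')
print(condition_31(t**2/2 - a1*t, t - a2))   # None for generic a1, a2
print(condition_31(t**2/2 - a2*t, t - a2))   # ('alpha', 1) when a1 = a2
print(condition_31(t, a2*t**2))              # ('alpha,beta', (a2, 0))
```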
The following is an adapted version of [8, Theorem 6] to our setting, justifying the restriction to solutions without field extensions involving the parameters.
**Theorem 3.4**.: _Let \(F\in\mathbb{L}[y,y^{\prime}]\) be irreducible. If there exists a solution \(y(x)\in\overline{\mathbb{L}}(x)\) of \(F(y,y^{\prime})=0\), then there is another solution \(z(x)\in\mathbb{L}(x)\)._
### Specializations of the parameters
Let \(F(y,y^{\prime})\) be a differential polynomial as in (1.2) with only constant coefficients. We now study rational solutions of \(F(\mathbf{b};y,y^{\prime})\).
**Proposition 3.5**.: _Let \(F\in\mathbb{K}(\textbf{a})[y,y^{\prime}]\) be as in (1.2), \(y(x)\in\overline{\mathbb{K}}(\textbf{a})(x)\) be a non-constant rational solution of \(F(y,y^{\prime})=0\), \(\mathcal{P}=(y(t),y^{\prime}(t))\) and let \(\textbf{b}\in\Omega_{\mathrm{def}(F)}\cap\Omega_{\mathrm{def}(\mathcal{P})}\). Then \(y(\textbf{b};x)\) is a well-defined rational solution of \(F(\textbf{b};y,y^{\prime})=0\)._
Proof.: By assumption, the specialization of \(\mathcal{P}(\mathbf{b};t)\) remains a zero of \(F(\mathbf{b};y,y^{\prime})\). So if \(F(\mathbf{b};y,y^{\prime})\) is a constant, then it is identically zero and the statement trivially holds. Let the specialization \(F(\mathbf{b};y,y^{\prime})\) be non-degenerate. Since the second component of \(\mathcal{P}(\mathbf{b};t)\) remains the derivative of the first and \(\mathcal{P}(\mathbf{b};t)=(y(\mathbf{b};t),y^{\prime}(\mathbf{b};t))\) is well-defined, \(F(\mathbf{b};y(\mathbf{b};x),y^{\prime}(\mathbf{b};x))=0\).
Proposition 3.5 treats the case where a rational solution of \(F\) exists. In the following, we analyze the cases where \(y(\mathbf{b};x)\) is not well-defined or \(F(y,y^{\prime})\) itself does not admit a rational solution. We show how, for those specializations where \(F(\mathbf{b};y,z)\) remains irreducible, all solutions can be found, and how the solution set can be represented in a finite way. Observe that the problem of algorithmically finding the parameters \(\mathbf{b}\in\mathcal{S}\) such that \(F(\mathbf{b};y,z)\) is reducible is an open problem, but the decomposition (2.1) provides an isolation of such specializations (they are in \(\mathcal{S}_{2}\)).
As in [7], let us decompose the parameter space such that the behavior for every specialization in a component is the same.
**Theorem 3.6**.: _Let \(F\in\mathbb{K}(\textbf{a})[y,y^{\prime}]\) be as in (1.2) and let \(\mathcal{P}=(P_{1},P_{2})\in\overline{\mathbb{K}(\textbf{a})}(t)^{2}\) be a proper parametrization of \(\mathcal{C}(F)\). Then the following holds._
1. _If_ \(\mathcal{P}\) _fulfills (_3.1_) for some_ \(\alpha,\beta\in\overline{\mathbb{K}(\textbf{a})}\) _with_ \(\alpha\neq 0\) _leading to the rational solution_ \(y(x)\) _of_ \(F(y,y^{\prime})=0\)_, then for every_ \(\textbf{b}\in\Omega_{\mathrm{def}(\mathcal{P})}\) _it holds that_ \(y(\textbf{b};x)\) _is a rational solution of_ \(F(\textbf{b};y,y^{\prime})=0\)_._
2. _If_ \(\mathcal{P}\) _does not fulfill (_3.1_), let_ \(A/B=P_{2}/P_{1}^{\prime}\) _be such that_ \(A,B\) _are coprime. Then for every_ \(\textbf{b}\in\Omega_{FG}\) _with_ \[\Omega_{FG}:=\Omega_{\mathrm{nonZ}(\mathrm{res}_{t}(A,B))}\cap\Omega_{ \mathrm{nonZ}(\mathrm{lc}(A))}\cap\Omega_{\mathrm{nonZ}(\mathrm{lc}(B))}\cap \Omega_{\mathrm{sqfree}(A/B)}\] _also_ \(\mathcal{P}(\textbf{b};t)\) _does not fulfill (_3.1_)._
Proof.: Item (1) holds due to Proposition 3.5. For item (2), note that for every \(\textbf{b}\in\Omega_{FG}\) it holds that \(A,B\) have the same degrees as \(A(\textbf{b}),B(\textbf{b})\), respectively, and \(A(\textbf{b}),B(\textbf{b})\) are again coprime. Thus, if \(A/B\) is not a polynomial of degree zero or two, then neither is \(A(\textbf{b})/B(\textbf{b})\). So it remains to consider those two cases. If \(A(\textbf{b})/B(\textbf{b})\) is constant, then this was already the case for \(A/B\), in contradiction to the assumption that \(A/B=P_{2}/P_{1}^{\prime}\) does not fulfill the first condition in (3.1). If \(A/B\) (and \(A(\textbf{b})/B(\textbf{b})\)) is a polynomial of degree two, then it has to be square-free; otherwise it would fulfill the second condition in (3.1). Since in this case we assume that \(\textbf{b}\in\Omega_{\mathrm{sqfree}(A/B)}\), also \(A(\textbf{b})/B(\textbf{b})\) is square-free and hence does not fulfill (3.1).
Based on Theorem 3.6, we refine the decomposition w.r.t. parametrizations as follows. For every component \(\mathcal{S}_{3,i}\), we check (3.1) and if it is not fulfilled, the component given by \(\Omega_{FG}\) is added to \(\mathcal{S}_{2}\). The complement, which is a closed set, is represented by an intersection of prime ideals and can be further investigated by considering the resulting quotient rings as new base fields (cf. Section 6 in [7]). After this process, those components generated through \(\mathcal{S}_{3,i}\) and fulfilling (3.1) are kept in \(\mathcal{S}_{3}\); the others are included in \(\mathcal{S}_{2}\). We call the resulting decomposition \(\mathcal{S}=\mathcal{S}_{1}^{*}\cup\mathcal{S}_{2}^{*}\cup\mathcal{S}_{3}^{*}\) a _decomposition w.r.t. rational solutions_ of \(F(y,y^{\prime})=0\). Moreover, the solutions \(y_{i}(x)\), depending on the parameters \(\mathbf{b}\), coming from the components in \(\mathcal{S}_{3,i}^{*}\) are called the corresponding rational solutions.
_Remark 3.7_.: Let \(\mathcal{S}_{1}\cup\mathcal{S}_{2}\cup\mathcal{S}_{3}\) be a decomposition w.r.t. parametrizations and let \(\mathcal{S}_{1}^{*}\cup\mathcal{S}_{2}^{*}\cup\mathcal{S}_{3}^{*}\) be a decomposition w.r.t. rational solutions. Then \(\mathcal{S}_{1}=\mathcal{S}_{1}^{*}\), \(\mathcal{S}_{2}\subseteq\mathcal{S}_{2}^{*}\) and \(\mathcal{S}_{3}\supseteq\mathcal{S}_{3}^{*}\). Moreover, \(\mathcal{S}_{3}^{*}\) can consist of fewer or more components \(\mathcal{S}_{3,i}^{*}\) than the components \(\mathcal{S}_{3,i}\) of \(\mathcal{S}_{3}\).
**Corollary 3.8**.: _Let \(F\in\mathbb{K}(\boldsymbol{a})[y,y^{\prime}]\) be as in (1.2) and let \(\mathcal{S}_{i}^{*},\mathcal{S}_{3,i}^{*}\) be a decomposition w.r.t. rational solutions. Then_
1. _for every_ \(\boldsymbol{b}\in\mathcal{S}_{2}^{*}\) _it holds that_ \(F(\boldsymbol{b};y,y^{\prime})\) _is reducible or does not admit a non-constant rational solution;_
2. _for every_ \(\mathcal{S}_{3,i}^{*}\) _the corresponding parametrization_ \(\mathcal{P}\) _fulfills (_3.1_) such that the evaluation of the corresponding rational solution_ \(y(\boldsymbol{b};x)\) _at every_ \(\boldsymbol{b}\in\mathcal{S}_{3,i}^{*}\) _is a rational solution of_ \(F(\boldsymbol{b};y,y^{\prime})=0\)_._
Proof.: Item (1) follows from the definition of the decomposition w.r.t. parametrizations together with Lemma 3.1 and Remark 3.7. Let \(\mathcal{P}\) be a rational parametrization, corresponding to a component \(\mathcal{S}_{3,i}\) of a decomposition w.r.t. parametrizations, fulfilling (3.1). Then \(\mathcal{S}_{3,i}\) is also a component of \(\mathcal{S}_{3}^{*}\) and every specialization leads to a solution by Proposition 3.5. If \(\mathcal{P}\) does not fulfill (3.1), then components where for every \(\boldsymbol{b}\in\mathcal{S}_{3,i}\) the specialization \(\mathcal{P}(\boldsymbol{b};t)\) does not fulfill (3.1) are in \(\mathcal{S}_{2}^{*}\), see Theorem 3.6.
**Theorem 3.9**.: _Let \(F\in\mathbb{K}(\boldsymbol{a})[y,y^{\prime}]\) be as in (1.2), let \(\tilde{F}=F(\boldsymbol{b};y,y^{\prime})\) be well-defined and irreducible with a non-constant rational solution \(\tilde{y}(x)\). Let \(\mathcal{S}_{i}^{*},\mathcal{S}_{3,i}^{*}\) be a decomposition w.r.t. rational solutions. Then there exists \(i\in I\) such that \(\boldsymbol{b}\in\mathcal{S}_{3,i}^{*}\) and \(\tilde{y}(x)=y(\boldsymbol{b};x+c)\) for some \(c\in\overline{\mathbb{K}}\) where \(y(x)\) is the solution corresponding to \(\mathcal{S}_{3,i}^{*}\)._
Proof.: Since \(\tilde{F}\) is well-defined and \(\tilde{Y}=(\tilde{y}(x),\tilde{y}^{\prime}(x))\) is a rational parametrization of \(\mathcal{C}(\tilde{F})\), by Corollary 2.2, there exist \(i\in I\) and \(j\in J_{i}\) such that for the corresponding parametrization \(\mathcal{P}\) it holds that \(\tilde{Y}(x)=\mathcal{P}(\boldsymbol{b};s)\) for some \(s\in\overline{\mathbb{K}(\boldsymbol{b})}\). By Corollary 3.8, \(\mathcal{P}\) fulfills (3.1) with some \(\alpha,\beta\in\overline{\mathbb{K}}\). Moreover, \(\tilde{y}(x)\) is reached by the solution family given as \(y(x+c)=\tilde{P}_{1}(\tilde{\alpha}\,(x+c))\) (or \(\tilde{P}_{1}(\tilde{\beta}-\frac{1}{\tilde{\alpha}\,(x+c)})\)), where the tilde denotes specialization at \(\boldsymbol{b}\).
Let us now apply the above results combined with Section 2 to the differential equation (1.2) involving only constant parameters \(\mathbf{b}\). For this purpose, we need the computation of a decomposition w.r.t. parametrizations as in (2.1) and call this auxiliary algorithm **RationalCovering**.
**Theorem 3.10**.: _Algorithm 1 is correct._
Proof.: The termination of the algorithm follows from the termination of Algorithm **RationalCovering** and the finite number of computations necessary in every step of the algorithm.
By Theorem 3.9, for every \(\mathbf{b}\) where \(\tilde{y}(x)\) is a rational solution of \(F(\mathbf{b};y,y^{\prime})=0\), there is a solution \(y_{i}(x)\) in the output of Algorithm 1 such that \(\tilde{y}(x)=y_{i}(\mathbf{b};x+c)\) for some \(c\in\overline{\mathbb{K}}\). Thus, the output indeed covers all possible solutions.
_Remark 3.11_.: Let us note that the works [1, 2] on algebraic and local solutions of \(F(y,y^{\prime})=0\) can be generalized to equations of the type (1.2) with constant parameters \(\mathbf{a}\) as well. In these cases, algebraic conditions on the parameter values \(\mathbf{a}\) may also have to be introduced where a generic solution over \(\mathbb{K}(\mathbf{a})\) cannot be specified or does not exist at all. For this purpose the decomposition w.r.t. surjective parametrizations could be applied, see Subsection 2.3. We do not elaborate further on this here and give just a short illustration in Example 3.12.
**Example 3.12**.: Let \(F=2y-y^{\prime 2}+2a_{1}y^{\prime}-a_{2}^{2}+a_{2}(2a_{1}-2y^{\prime})\). Then the decomposition w.r.t. parametrization is \(\mathcal{S}_{3}=\mathbb{C}^{2}\) and consists of only one component with corresponding parametrization \(\mathcal{P}=(t^{2}/2-a_{1}t,t-a_{2})\). Equation (3.1) is generically not fulfilled because
\[A/B:=P_{2}/P_{1}^{\prime}=(t-a_{2})/(t-a_{1}).\]
The leading coefficients are one and \(\operatorname{res}_{t}(t-a_{2},t-a_{1})=a_{2}-a_{1}\). Then the exceptional cases lie exactly in the complement of \(\Omega_{\operatorname{nonZ}(\operatorname{res}_{t}(A,B))}\), namely in \(\{(a,a)\mid a\in\mathbb{C}\}\). In the case where \(\mathbf{b}=(a_{1},a_{1})\), (3.1) is fulfilled with \(\alpha=1\), leading to the solution
\[\tilde{y}(x)=x^{2}/2-a_{1}x\]
of \(F(a_{1}=a_{2};y,y^{\prime})=2y-y^{\prime 2}+a_{1}^{2}\). Thus, a decomposition w.r.t. rational solutions is \(\mathcal{S}_{2}^{*}=\mathbb{C}^{2}\setminus\{(a,a)\ |\ a\in\mathbb{C}\}\), \(\mathcal{S}_{3}^{*}=\{(a,a)\ |\ a\in\mathbb{C}\}\) with corresponding rational solution \(\tilde{y}(x)\).
Let us demonstrate with this example how additional local solutions of \(F\) can be found. Let us follow [2] and choose the critical initial values \(\mathbf{v}:=(-a_{1}^{2}/2,a_{1}-a_{2})\). The parametrization \(\mathcal{P}\) expanded at \(\mathbf{v}\) fulfills [2, Theorem 10] with
\[n=\mathrm{ord}_{t}(P_{1}-P_{1}(0))-\mathrm{ord}_{t}(P_{2})=2.\]
Therefore, there exist the two formal Puiseux series solutions of \(F(y,y^{\prime})=0\) given as
\[\tilde{y}_{\mathbf{v}}(x)=-a_{1}^{2}/2+(a_{1}-a_{2})x+1440\gamma(a_{1}-a_{2})x ^{3/2}+x^{2}/3+48\gamma x^{5/2}+\mathcal{O}(x^{3})\]
where \(2^{7}3^{6}5^{2}(a_{1}-a_{2})\gamma^{2}=1\). For all \(\mathbf{b}\in\mathbb{C}^{2}\setminus\{a_{1}=a_{2}\}\) we see that \(\tilde{y}_{\mathbf{v}}(\mathbf{b};x)\) is a solution of \(F(\mathbf{b};y,y^{\prime})=0\).
**Example 3.13**.: Let us consider
\[F=4a_{1}a_{2}^{2}y^{4}-4a_{1}a_{2}y^{2}y^{\prime}+a_{1}y^{\prime 2}+a_{2}y^{2}- y^{\prime}=0\]
and the parameter space \(\mathcal{S}=\mathbb{C}^{2}\). A decomposition w.r.t. parametrization yields \(\mathcal{S}_{1}=\emptyset,\mathcal{S}_{2}=\{(a_{1},0)\ |\ a_{1}\in\mathbb{C},a_{1}\neq 0\}\) and
\[\mathcal{S}_{3,1}=\mathbb{C}^{2}\setminus\{(a_{1},a_{2})\ |\ a_{1}a_{2}=0\}, \mathcal{S}_{3,2}=\{(0,a_{2})\ |\ a_{2}\in\mathbb{C},a_{2}\neq 0\},\mathcal{S}_{3,3}=\{(0,0)\}.\]
The corresponding parametrizations are
\[\mathcal{P}_{1}=\left(\frac{a_{1}a_{2}t}{a_{1}^{3}t^{2}-a_{2}^{3}},\frac{(a_{1 }^{3}t^{2}+a_{2}^{3})a_{1}^{2}t^{2}}{(a_{1}^{3}t^{2}-a_{2}^{3})^{2}}\right)\]
for the generic case \(\mathcal{S}_{3,1}\), \(\mathcal{P}_{2}=(t,a_{2}t^{2})\) for \(\mathcal{S}_{3,2}\), and \(\mathcal{P}_{3}=(t,0)\) for \(\mathcal{S}_{3,3}\), respectively; note that \(\mathcal{P}_{1}(0,a_{2};t)=(0,0)\) and \(F(0,a_{2};y,y^{\prime})=a_{2}y^{2}-y^{\prime}\). In the case of a specialization \(\mathbf{b}:=(a_{1},0)\in\mathcal{S}_{2}\), the curve factors into lines since \(F(\mathbf{b};y,y^{\prime})=y^{\prime}(a_{1}y^{\prime}-1)\).
Using (3.1) for \(\mathcal{P}_{1}=(P_{1},P_{2})\), we see that \(P_{2}/P_{1}^{\prime}=\frac{-a_{1}t^{2}}{a_{2}}\) leads to the rational solution \(y_{1}(x)=(x+c)/(a_{1}-a_{2}(x+c)^{2})\). For \(\mathcal{P}_{2}\) we obtain that (3.1) is fulfilled with \(\alpha=a_{2},\beta=0\), leading to the rational solution \(y_{2}(x)=-1/(a_{2}(x+c))\). For \(\mathbf{b}=(0,0)\in\mathcal{S}_{3,3}\), the specialization \(F(\mathbf{b};y,y^{\prime})=-y^{\prime}\) defines a line. Verifying (3.1) for \(\mathcal{P}_{3}\) leads to \(\alpha=0\) and the solutions are the constants (cf. Remark 3.3).
We thus obtain the decomposition w.r.t. rational solutions
\[\mathcal{S}_{1}^{*}=\emptyset,\mathcal{S}_{2}^{*}=\mathcal{S}_{2}\cup\mathcal{ S}_{3,3},\mathcal{S}_{3}^{*}=\mathcal{S}_{3,1}\cup\mathcal{S}_{3,2}\]
with the corresponding rational solutions \(y_{1}(x),y_{2}(x)\) for \(\mathcal{S}_{3,1}\) and \(\mathcal{S}_{3,2}\), respectively.
Let us note that in the case of \(\mathbf{b}\in\mathcal{S}_{2}\), the specialization \(\mathcal{P}_{1}(\mathbf{b};t)=(0,1/a_{1})\) is not a rational parametrization of a component of \(F(\mathbf{b};y,y^{\prime})=y^{\prime}(a_{1}y^{\prime}-1)\) anymore. For \(\mathcal{P}_{2}(\mathbf{b};t)=(t/a_{1},1/a_{1})\), however, we find the zero \(y(\mathbf{b};x)=(x+c)/a_{1}\) of \(F(\mathbf{b};y,y^{\prime})\) (cf. Proposition 3.5).
## 4. Differential equations with functional coefficients
In this section we study rational solutions of differential equations \(F=0\) with functional coefficients as in (1.1). Let \(F\) involve functional coefficients \(f_{1},\ldots,f_{n}\) and possibly constant coefficients \(b_{1},\ldots,b_{m}\). As in the previous sections, we will first replace \(\mathbf{f},\mathbf{b}\) by \(\mathbf{a}=(\mathbf{a}_{c},\mathbf{a}_{f})\) and then consider its specializations. Let us set \(\mathbb{L}=\overline{\mathbb{K}(\mathbf{a}_{c})}\).
By a rational solution of \(F(y,y^{\prime})=0\) we mean a solution \(\hat{y}\in\overline{\mathbb{K}(\mathbf{b})}(\mathbf{f})\) of the differential equation which rationally depends on \(f_{1},\ldots,f_{n}\). Note that if one is interested in solutions rationally depending on a new function \(f_{n+1}(x)\), for instance on \(x\), an additional parameter \(f_{n+1}\) can be set accordingly.
We assume that \(\mathbb{L}(\mathbf{f})\) is closed under derivation. If it is not, i.e. if \(f_{j}^{\prime}\notin\mathbb{L}(\mathbf{f})\) for some \(j\), one can add \(f_{j}^{\prime}\) as a new element. In this way, in principle, infinitely many elements might be iteratively added such that \(\mathbb{L}(\mathbf{f})\) gets closed under derivation. However, here, we only work with finitely many functions in \(\mathbf{f}\). So, we need to ensure that only finitely many derivatives of the functions \(f_{j}\) are needed to be adjoined so that \(\mathbb{L}(\mathbf{f})\) is closed under derivation. Note that this is the case if \(\mathbb{L}(\mathbf{f})\) is, for instance, a Liouvillian field extension of \(\mathbb{L}\). In case that new \(f_{n+1},\ldots,f_{n+r}\) are added, \(F\) is independent of them but we look for solutions that might rationally depend also on the \(f_{n+1},\ldots,f_{n+r}\). In the particular case where an \(f_{i}\) is algebraic over \(\mathbb{L}\), just the first derivatives have to be added. In order to see this, let \(Q(z)\in\mathbb{L}[x,z]\) be its minimal polynomial. Then \(P:=\operatorname{res}_{x}(Q,\frac{d\,Q}{dx})\in\mathbb{L}[z,z^{\prime}]\setminus \mathbb{L}\) and \(\frac{d\,P}{dx}=\frac{\partial\,P}{\partial z^{\prime}}\cdot z^{\prime\prime}+ R(z,z^{\prime})\) for some polynomial \(R\in\mathbb{L}[z,z^{\prime}]\) and \(\frac{\partial\,P}{\partial z^{\prime}}\neq 0\), which is equivalent to \(z^{\prime\prime}=-R(z,z^{\prime})/\frac{\partial\,P}{\partial z^{\prime}}\) and thus, \(f_{i}^{\prime\prime}\in\mathbb{L}(f_{i},f_{i}^{\prime})\); similarly for higher derivatives of \(f_{i}\).
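For the algebraic case just described, the resultant argument can be reproduced explicitly; the following sympy sketch is a toy instance of my own with \(f=\sqrt{x}\), i.e. \(Q(z)=z^{2}-x\).

```python
import sympy as sp

x, z, zp, zpp = sp.symbols('x z zp zpp')   # zp, zpp stand for z', z''

Q = z**2 - x                               # minimal polynomial of sqrt(x)
dQ = sp.diff(Q, x) + sp.diff(Q, z) * zp    # total derivative dQ/dx
P = sp.resultant(Q, dQ, x)                 # here 2*z*zp - 1 (up to sign)
dP = sp.diff(P, z) * zp + sp.diff(P, zp) * zpp
print(sp.solve(sp.Eq(dP, 0), zpp))         # [-zp**2/z]: f'' in L(f, f')
```

Indeed, for \(f=\sqrt{x}\) one checks directly that \(f^{\prime\prime}=-(f^{\prime})^{2}/f\).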
_Remark 4.1_.: In general, if there are algebraic conditions among the elements in \(\mathbf{f}\), then we might compute the resultant of these conditions and (4.2) in order to eliminate some of the \(f_{i}\) and obtain an associated system involving less parameters. Such algebraic relations exist in particular if some of the \(f_{i}\)'s represent derivatives of other \(f_{j}\)'s. In this way, however, some new solutions of the associated equation might be introduced. We can detect these problematic solutions by checking whether the given differential equation is fulfilled or not for these solution candidates.
Let \(F\in\mathbb{L}(\mathbf{a}_{f})[y,y^{\prime}]\) be as in (1.1) where \(\mathbf{f}\) is replaced by independent parameters \(\mathbf{a}\). A solution \(\hat{y}\in\mathbb{L}(\mathbf{f})\) of \(F(\mathbf{f};y,y^{\prime})=0\) defines a point \(\hat{Y}(x):=(\hat{y},\frac{d}{dx}\hat{y})\) on the corresponding curve \(\mathcal{C}(\mathbf{f};F)\). We replace every functional coefficient \(\mathbf{f}\) in \(\hat{Y}\) by \(\mathbf{a}_{f}\). The main objective of this section is the case when \(\hat{Y}(x)\) is a rational general solution (see Definition 4.4 below). In this case, \(\hat{Y}(x)\) will also define a rational parametrization. Hence, we first look for rational parametrizations of \(\mathcal{C}(F)\). The case of finding particular solutions will not be fully algorithmic, which is in line with the fact that finding all rational solutions \(\hat{y}\in\mathbb{C}(x)\) of \(F(y,y^{\prime})=0\) with \(F\in\mathbb{C}[x,y,y^{\prime}]\) is in general an open problem (see also [17]).
We would like to find from a proper parametrization \(\mathcal{P}(t)=(P_{1}(t),P_{2}(t))\in\mathbb{L}(\mathbf{a}_{f})(t)^{2}\) of \(\mathcal{C}(F)\) a rational solution \(\hat{y}=P_{1}(\hat{w})\in\overline{\mathbb{L}}(\mathbf{f})\) where \(\mathbf{a}_{f}\) and \(t\) in \(\mathcal{P}(t)\) are replaced by \(\mathbf{f}\) and \(\hat{w}\in\overline{\mathbb{L}}(\mathbf{f})\), respectively. Since \(\frac{d}{dx}\hat{y}=P_{2}(\hat{w})\), we obtain
\[\frac{d}{dx}(P_{1}(w))=\sum_{i=1}^{n}\left(\frac{\partial\,P_{1}}{\partial f_{i}}+\frac{\partial\,P_{1}}{\partial t}\cdot\frac{\partial\,w}{\partial f_{i}}\right)f_{i}^{\prime}=P_{2}(w). \tag{4.1}\]
Since we assume that \(\mathbb{L}(\mathbf{f})\) is closed under derivation, we may write \(f_{i}^{\prime}=q_{i}(\mathbf{f})\) for some \(q_{i}\in\mathbb{L}(\mathbf{f})\). If \(\mathcal{P}(\mathbf{f},t)\) is well-defined, we obtain the _associated differential equation_
\[\sum_{i=1}^{n}\left(\frac{\partial\,P_{1}}{\partial f_{i}}(\mathbf{f},w)+\frac{\partial\,P_{1}}{\partial t}(\mathbf{f},w)\cdot\frac{\partial\,w}{\partial f_{i}}\right)q_{i}(\mathbf{f})=P_{2}(\mathbf{f},w) \tag{4.2}\]
which has coefficients in \(\mathbb{L}(\mathbf{f})\) and is quasi-linear. We aim to solve (4.2) for \(w\in\overline{\mathbb{L}}(\mathbf{f})\).
### Associated System
We investigate the associated differential equation (4.2), which is a quasi-linear ordinary differential equation, and particularly look for its rational solutions. We may assume that \(\mathbf{f}\) are algebraically independent over \(\mathbb{L}\) (cf. Remark 4.1) such that we can switch with abuse of notation between \(\mathbb{L}(\mathbf{a}_{f})\) and \(\mathbb{L}(\mathbf{f})\) for reasonings independent of the differential structure. Throughout this section, let us denote by \(\mathbb{F}=\mathbb{L}(\mathbf{f})\).
**Lemma 4.2**.: _With notations introduced above, let \(\hat{y}\in\overline{\mathbb{F}}\) be a solution of (1.1) and let \(\mathcal{P}(t)\in\overline{\mathbb{F}}(t)^{2}\) be a proper parametrization of \(\mathcal{C}(F)\). Then one of the following holds:_
1. \((\hat{y},\frac{d}{dx}\hat{y})\) _is in_ \(\mathcal{C}(F)\setminus\mathrm{Im}(\mathcal{P})\)_;_
2. \(\hat{y}=\mathcal{P}(\hat{w})\) _for some solution_ \(\hat{w}\in\overline{\mathbb{F}}\) _of (_4.2_)._
_Moreover, if \(\mathcal{P}(t)\in\tilde{\mathbb{F}}(t)^{2}\), for some field extension \(\mathbb{F}\subseteq\tilde{\mathbb{F}}\subseteq\overline{\mathbb{F}}\), closed under derivation, and \(\hat{y}=\mathcal{P}(\hat{w})\), it holds that \(\hat{y}\in\tilde{\mathbb{F}}\) if and only if \(\hat{w}\in\tilde{\mathbb{F}}\)._
Proof.: By definition, \((\hat{y},\frac{d}{dx}\hat{y})\) is a point on \(\mathcal{C}(F)\). If there exists \(\hat{w}\in\overline{\mathbb{F}}\) such that \(\mathcal{P}(\hat{w})=(\hat{y},\frac{d}{dx}\hat{y})\), by construction of the associated differential equation, it holds that \(\hat{w}\) is a solution of (4.2) (case (2)). Otherwise, \((\hat{y},\frac{d}{dx}\hat{y})\) is not in the image of \(\mathcal{P}(t)\) and case (1) is fulfilled.
Now let \(\mathcal{P}\in\tilde{\mathbb{F}}(t)^{2}\). Assume that \(\hat{w}\in\tilde{\mathbb{F}}\). Then \(\hat{y}=\mathcal{P}(\hat{w})\in\tilde{\mathbb{F}}\). Conversely, if \(\hat{y}\in\tilde{\mathbb{F}}\), then also \(\frac{d}{dx}\hat{y}\in\tilde{\mathbb{F}}\) and \(\hat{w}=\mathcal{P}^{-1}(\hat{y},\frac{d}{dx}\hat{y})\in\tilde{\mathbb{F}}\).
_Remark 4.3_.: In Lemma 4.2, \(\mathcal{C}(F)\setminus\mathrm{Im}(\mathcal{P})\) in case (1) is just a finite number of points. Moreover, it could be that \(\mathcal{P}\) is surjective, such that \(\mathcal{C}(F)\setminus\mathrm{Im}(\mathcal{P})=\emptyset\), and if not, we can find another parametrization \(\mathcal{Q}\) such that \(\mathcal{C}(F)=\mathrm{Im}(\mathcal{P})\cup\mathrm{Im}(\mathcal{Q})\) (see Lemma 2.4).
The differential equation (4.2) can be treated by the method of characteristics as follows: a _characteristic curve_ \((f_{1}(s),\ldots,f_{n}(s),w(s))\) is defined by the dynamical system
\[\left\{\begin{aligned} &\frac{d\,f_{i}}{ds}=\frac{\partial\,P_{1}}{ \partial t}\cdot q_{i},\quad\text{ for }i\in\{1,\ldots,n\},\\ &\frac{d\,w}{ds}=P_{2}(w)-\sum_{i=1}^{n}\frac{\partial\,P_{1}}{ \partial f_{i}}\cdot q_{i}.\end{aligned}\right. \tag{4.3}\]
A solution of (4.3) is of the form \((\mathbf{f}(s,c_{2},\ldots,c_{n}),w(s,c_{2},\ldots,c_{n}))\) where \(c_{i}\) are arbitrary constants. Then, if it is possible to solve the system of equations \(\mathbf{f}=\mathbf{f}(s,\mathbf{c})\) for \(c_{2},\ldots,c_{n}\) (in terms of \(s,\mathbf{f}\)), we obtain an explicit solution \(w(s,\mathbf{c}(s,\mathbf{f}))\) of (4.2). Note that in this procedure we might fail to compute a solution of (4.3) or solve for \(c_{2},\ldots,c_{n}\). If it is possible, however, it is guaranteed that \(P_{1}(w(s,\mathbf{c}(s,\mathbf{f})))\) is a (not necessarily rational) solution of the original differential equation (1.2).
The local solvability of the dynamical system in the class of real analytic functions, which depends on the eigenvalues of the Jacobian of the right hand side, is well-studied by classical results such as the Hartman-Grobman theorem. We, however, are primarily interested in global and particularly in rational solutions.
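As an illustration of the characteristic system (4.3), the following sketch (my own, relying on sympy's dsolve and anticipating the data of Example 4.6 below, where \(P_{1}=t\), \(P_{2}=2t-a\) and \(q=f\)) solves the system and recovers the rational solution by eliminating \(s\).

```python
import sympy as sp

s = sp.Symbol('s')
f, w = sp.Function('f'), sp.Function('w')

# df/ds = (dP1/dt)*q = f,   dw/ds = P2(w) - (dP1/df)*q = 2w - f
sols = sp.dsolve([sp.Eq(f(s).diff(s), f(s)),
                  sp.Eq(w(s).diff(s), 2 * w(s) - f(s))])
print(sols)  # f(s) = C1*exp(s), w(s) = C1*exp(s) + C2*exp(2*s), up to naming
# Eliminating s via exp(s) = f/C1 gives w = f + c*f**2, cf. Example 4.6.
```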
### One functional coefficient
Let us analyze the special case where only one of the coefficients \(f\) in (1.2) is non-constant. We may allow other coefficients, say \(a_{1},\ldots,a_{n}\), that are constant. Let us note that by assumption \(\mathbb{L}(f)\) is closed under derivation which is fulfilled e.g. for \(f=\exp(x),f=x,f=\sqrt{x}\), etc. Then the associated system (4.2) can be written as
\[G:=\left(\frac{\partial\,P_{1}}{\partial f}+\frac{\partial\,P_{1}}{\partial t }\cdot\frac{d\,w}{df}\right)q-P_{2}(w)=0 \tag{4.4}\]
where \(q=f^{\prime}\in\mathbb{L}(f)\) and thus \(G\in\mathbb{L}(\mathbf{a})[f,w,\frac{d}{df}w]\). Equation (4.4) is in general still not fully solvable, but its _strong rational general solutions_ (SRGS) can be computed algorithmically. We follow the ideas summarized in [17, Section 4].
**Definition 4.4**.: Let \(G\in\mathbb{L}(\mathbf{a})[f,w,\frac{d}{df}w]\) be irreducible. Then a solution \(\hat{w}\in\overline{\mathbb{L}(c)}(f)\setminus\overline{\mathbb{L}}(f)\) of \(G(f;w,\frac{d}{df}w)=0\) is called a _rational general solution_ where \(c\) is a transcendental constant. If \(\hat{w}\) depends rationally on \(c\), then we call \(\hat{w}\) a _strong rational general solution_ (SRGS).
For a SRGS \(\hat{w}(c)\in\overline{\mathbb{L}}(f,c)\) of \(G(f;w,\frac{d}{df}w)=0\), the pair \((\hat{w}(t),\frac{d}{df}\hat{w}(t))\) defines a proper rational parametrization of \(\mathcal{C}(G)\) over \(\overline{\mathbb{L}}(f)\). A SRGS \(\hat{y}\) of the original differential equation \(F(f;y,y^{\prime})=0\) corresponds to a SRGS \(\hat{w}\) of the associated system via \(\hat{y}=P_{1}(f,\hat{w})\). They can be found as follows.
**Theorem 4.5**.: _Let \(F\in\mathbb{L}(a_{f})[y,y^{\prime}]\) be as in (1.2). Then the following statements hold:_
1. _If_ \(F(f;y,y^{\prime})=0\) _has a SRGS, then the corresponding associated differential equation (_4.4_) is of the form_ (4.5) \[\frac{d\,w}{df}=g_{0}(f)+g_{1}(f)\cdot w+g_{2}(f)\cdot w^{2}\] _for some_ \(g_{0},g_{1},g_{2}\in\overline{\mathbb{L}}(f)\)_._
2. _If_ \(\mathcal{C}(F)\) _is rational and_ \(F(f;y,y^{\prime})=0\) _has a rational general solution, then the associated differential equation is of the form (_4.5_) and_ \(F(f;y,y^{\prime})=0\) _admits a SRGS._
Proof.: The proof of the theorem is essentially as in [17, Theorem 4.3.4].
Equation (4.5) is a Riccati equation or a linear differential equation (in the case where \(g_{2}=0\)). The existence of rational general solutions of such differential equations can be decided and, in the affirmative case, they can be computed algorithmically, see e.g. [12, 7].
The previous reasonings lead to a procedure for computing rational solutions which turns out to be algorithmic in the case of one functional coefficient (\(l=1\)) and when there is a SRGS of the associated system, or equivalently, of \(F(f;y,y^{\prime})=0\) itself. In the general case where \(l>1\) or no SRGS exists, we might not be able to find the solution \(\hat{w}\) of the associated system corresponding to a solution \(\hat{y}=P_{1}(\hat{w})\).
```
Input: A first-order AODE \(F(y,y^{\prime})=0\) as in (1.1) depending on one functional coefficient \(f\) such that the coefficient field \(\mathbb{Q}(f)\) is closed under derivation.
Output: A strong rational general solution of \(F(y,y^{\prime})=0\), if it exists.
1: If the genus of \(\mathcal{C}(F)\) is zero, compute a proper rational parametrization \(\mathcal{P}(t)\in\overline{\mathbb{Q}}(a_{f},t)^{2}\).
2: Compute its associated differential equation (4.4).
3: If it is a Riccati equation or a linear differential equation, compute, if it exists, a rational general solution \(\hat{w}\in\overline{\mathbb{Q}}(f,c)\).
4: return \(\hat{y}=P_{1}(\hat{w})\).
```
**Algorithm 2** FunctionalCoefficientSolve
Let us note that in step (1), if the genus is zero, then a proper rational parametrization without field extensions involving the parameter \(a_{f}\) can always be computed [18, Theorem 4.2.3].
**Example 4.6**.: Let
\[F(y,y^{\prime})=y^{\prime}-2y+\exp(x)\]
and \(f=\exp(x)\). The corresponding parametric differential equation is \(G(y,y^{\prime})=y^{\prime}-2y+a.\) A parametrization of \(\mathcal{C}(G)\) is given by \(\mathcal{P}(t)=(t,2t-a)\).
The associated differential equation is \(f\,w^{\prime}+f-2w=0\) with the SRGS \(\hat{w}=c\,f^{2}+f\). Thus,
\[\hat{y}=P_{1}(\hat{w})=c\,\exp(2x)+\exp(x)\]
is a rational general solution of \(F(y,y^{\prime})=0\).
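The associated equation of this example can also be solved mechanically. The following sketch (an illustration with sympy, not the original computation) reproduces the SRGS and the solution of \(F(y,y^{\prime})=0\).

```python
import sympy as sp

fs = sp.Symbol('f')
w = sp.Function('w')

# Associated equation f*w' + f - 2*w = 0, with w = w(f)
sol = sp.dsolve(sp.Eq(fs * w(fs).diff(fs) + fs - 2 * w(fs), 0), w(fs))
print(sol)                              # w(f) = C1*f**2 + f

x = sp.Symbol('x')
y_hat = sol.rhs.subs(fs, sp.exp(x))     # y = P1(w) = w, with f = exp(x)
print(sp.expand(y_hat))                 # C1*exp(2*x) + exp(x)
```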
**Example 4.7**.: The parametric curve defined by the slightly different equation
\[F(y,y^{\prime})=y^{\prime}-2y+\exp(x)-\exp(2x)\]
with \(f_{1}=\exp(x)\) has the parametrization \(P(t)=(t,2t-a_{1}+a_{1}^{2})\). Its associated differential equation does not have a SRGS in \(\mathbb{C}(f_{1})\). If we allow the additional coefficient \(f_{2}=x\), however, we obtain the associated system
\[f_{1}\,\frac{\partial\,w}{\partial f_{1}}+\frac{\partial\,w}{\partial f_{2}}+f_{1}-f_{1}^{2}-2w=0.\]
It has the rational solution \(\hat{w}=(f_{2}+c)\,f_{1}^{2}+f_{1}\), where \(c\) is an arbitrary constant, leading to the rational solution \(\hat{y}=(x+c)\,\exp(2x)+\exp(x)\) of \(F(y,y^{\prime})=0\).
**Example 4.8**.: Let \(F(y,y^{\prime})=-y^{\prime 2}+4y^{2}+2y-\exp(2x)\) and \(f=\exp(x)\). The corresponding parametric differential equation is
\[G(y,y^{\prime})=-y^{\prime 2}+4y^{2}+2y-a^{2}.\]
A parametrization of \(\mathcal{C}(G)\) is given by
\[\mathcal{P}(t)=\left(\frac{t^{2}+a^{2}}{2(2t+1)},\frac{t^{2}+t-a^{2}}{2t+1} \right).\]
The associated differential equation
\[-2w^{3}+(f\,w^{\prime}-3)\,w^{2}+(f\,w^{\prime}+4f^{2}-1)\,w-f^{3}\,w^{\prime}+2f^{2}=0\]
is not a Riccati equation and hence, there is no strong rational general solution. It admits the rational solution \(w=2f^{2}\), however, which leads to the solution \(\hat{y}=\exp(2x)/2\) of \(F(y,y^{\prime})=0\).
### Specialization of the constant parameters
The specialization of the constant parameters \(\mathbf{a}_{c}\) in the solutions of \(F(y,y^{\prime})=0\), where \(F\in\mathbb{K}(\mathbf{b},\mathbf{f})[y,y^{\prime}]\) is as in (1.1), can be studied similarly to the previous sections. We note, however, that the degree bounds in [12] are in general not uniform in the parameters, and thus neither the computation of the solutions of the associated differential equation nor the case distinction for the specializations is fully algorithmic. An example of a non-uniform degree bound is given by the solution \(\hat{w}=cx^{a_{c}}\) of the linear differential equation \(w^{\prime}=a_{c}w/x\).
Let us give an illustration by an example to show that the method is often still algorithmic.
**Example 4.9**.: Let \(F=fy^{\prime 2}-by-1\) and \(f=x\). The corresponding parametric differential equation is
\[G(y,y^{\prime})=a_{f}y^{\prime 2}-a_{c}y-1.\]
A parametrization of \(\mathcal{C}(G)\), for \(a_{f}a_{c}\neq 0\), is given by
\[\mathcal{P}_{1}(t)=\left(\frac{a_{f}t^{2}-1}{a_{c}},t\right).\]
The associated differential equation
\[2fw^{\prime}-b+w=0\]
is a Riccati equation. Its general solution \(\hat{w}=b+c/\sqrt{f}\) is algebraic. We thus find the (algebraic) general solution
\[\hat{y}=(2bc\sqrt{x}+b^{2}x+c^{2}-1)/b\]
of \(F(y,y^{\prime})=0\).
In the case where \(a_{c}=0\), the curve given by
\[G(a_{c}=0;y,y^{\prime})=a_{f}y^{\prime 2}-1\]
has the rational parametrization \(\mathcal{P}_{2}(t)=(t,1/\sqrt{a_{f}})\), which leads to the associated differential equation \(\sqrt{f}\,w^{\prime}-1=0\) with the again algebraic general solution \(\hat{w}_{2}=c+2\sqrt{f}\), such that \(\hat{y}_{2}=c+2\sqrt{x}\) is an (algebraic) general solution of \(F(b=0;y,y^{\prime})=0\).
The case where \(a_{f}=0\) remains unstudied, because we are only interested in solutions depending on the variable \(f=x\).
## Acknowledgements
Authors partially supported by the grant PID2020-113192GB-I00 (Mathematical Visualization: Foundations, Algorithms and Applications) from the Spanish MICINN. Second author also supported by the OeAD project FR 09/2022.
|
2308.15594 | Learning the greatest common divisor: explaining transformer predictions | The predictions of small transformers, trained to calculate the greatest
common divisor (GCD) of two positive integers, can be fully characterized by
looking at model inputs and outputs. As training proceeds, the model learns a
list $\mathcal D$ of integers, products of divisors of the base used to
represent integers and small primes, and predicts the largest element of
$\mathcal D$ that divides both inputs. Training distributions impact
performance. Models trained from uniform operands only learn a handful of GCD
(up to $38$ GCD $\leq100$). Log-uniform operands boost performance to $73$ GCD
$\leq 100$, and a log-uniform distribution of outcomes (i.e. GCD) to $91$.
However, training from uniform (balanced) GCD breaks explainability. | François Charton | 2023-08-29T19:38:41Z | http://arxiv.org/abs/2308.15594v2 | # Can transformers learn the greatest common divisor?
###### Abstract
I investigate the capability of small transformers to compute the greatest common divisor (GCD) of two positive integers. When the training distribution and the representation base are carefully chosen, models achieve \(98\%\) accuracy and correctly predict \(91\) of the \(100\) first GCD. Model predictions are deterministic and fully interpretable. During training, the models learn to cluster input pairs with the same GCD, and classify them by their divisors. Basic models, trained from uniform operands encoded on small bases, only compute a handful of GCD (up to \(38\) out of \(100\)): the products of divisors of the base. Longer training and larger bases allow some models to "grok" small prime GCD. Training from log-uniform operands boosts performance to \(73\) correct GCD, and balancing the training distribution of GCD, from inverse square to log-uniform, to \(91\) GCD. Training models from a uniform distribution of GCD breaks the deterministic model behavior.
## 1 Introduction
Transformers [30] have been successfully applied to many problems of symbolic [14; 3; 27] and numerical mathematics [2]. Yet, they struggle with basic arithmetic [15; 19]. Large transformers can memorize addition and multiplication tables for small integers (with \(3\) or \(4\) digits), but they fail to scale to larger operands. Recent fine-tuning techniques, such as scratchpad [20], chain-of-thought [31] or algorithmic prompting [34], allow large language models to generalize beyond their training range when performing addition or multiplication by a small prefactor (e.g. \(3\times n\)). But these techniques require bespoke training data, only apply to large pre-trained models, and struggle on complex tasks [8]. Interestingly, approximate arithmetic on floating point numbers proves easier to learn. Transformers can learn basic linear algebra [2], perform symbolic regression [6] and learn advanced computations such as eigen-decomposition [4] or finding the roots of polynomials [5].
There is, of course, little practical interest in replacing existing arithmetic algorithms, which are efficient and reliable, by transformer-based models. Nevertheless, investigating and better understanding the capabilities and limitations of transformers in elementary mathematics is an important line of research. As transformers are increasingly considered for scientific research [7; 23; 25], any limitation in their mathematical abilities will restrict their applicability to mathematics and science. Such limitations have already been postulated [19; 32], and even sometimes demonstrated [26].
This paper focuses on computing the greatest common divisor (GCD) of two positive integers, a key operation for arithmetic on rational numbers, and a common fixture of number theory. Initial experiments with rational arithmetic (Appendix A) indicate that transformers have no difficulty learning to compare fractions. Given four positive integers \(a\), \(b\), \(c\) and \(d\), a one-layer transformer can predict whether \(\frac{a}{b}<\frac{c}{d}\) with \(99.9\%\) accuracy. Transformers can also learn integer division, i.e. calculate \(\lfloor\frac{a}{b}\rfloor\) (albeit with lower accuracy), but they cannot learn to add two fractions, or even reduce one in its lowest terms. A likely explanation for these results is that the models fail to compute GCD, which are needed for fraction addition and simplification, but not for comparison and integer division. If such a limitation of transformers was confirmed, it would compromise their use for problems involving integers and fractions. Without GCD, very little arithmetic on rational numbers is possible.
In this work, I investigate how 4-layer sequence-to-sequence transformers learn to compute the GCD of two positive integers in a range (\(1\) to \(10^{6}\)), and make the following **contributions**:
1. Transformers trained on uniform random operands can achieve \(95\%\) accuracy when predicting GCD, provided the base \(B\) used to represent integers is carefully chosen. In other bases, it can be as low as \(61\%\). These models leverage **representation shortcuts** to learn divisibility rules, and **predict up to 38 (and as low as 1) GCD** under \(100\).
2. **Model predictions are deterministic and fully interpretable.** For any two integers with GCD \(k\), the model always predicts the largest product of primes divisors of \(B\) dividing \(k\).
3. Models using large composite bases sometimes exhibit a phenomenon related to **grokking**[22], which allows them to learn multiples of small primes not dividing \(B\).
4. **Training models from log-uniform operands** significantly improves performance, by providing the model with many simple instances. **73 GCD are correctly predicted**.
5. **Balancing the training set distribution of GCD**, from inverse square to log-uniform, brings an additional boost to performance: **91 GCD are correctly predicted**.
6. **An unbalanced training distribution of outcomes is needed.** Without it, model predictions are no longer deterministic (but some models can still learn \(95\) GCD out of \(100\)).
#### Related work
**Neural networks for arithmetic** have been proposed since the 1990s [28], and recurrent models (RNNs, LSTMs and related architectures) since 2015 [12; 33; 11]. Most recent research focuses on fine-tuning large pre-trained transformers on various arithmetic tasks, in order to solve math word problems [17; 9]. See Lee [15] for a summary of current capabilities and techniques. Neural Arithmetic Logical Units were introduced by Trask [29] as an alternative architecture for arithmetic. They learn exact computations, which generalize to any input, by constraining the weights of a linear network to be close to \(0\), \(1\) or \(-1\) at the end of training. See Mistry [18] for a recent overview.
Several authors have noted the **difficulty of training transformers on certain arithmetic tasks**. Saxton [24] benchmarks many mathematical tasks, and observes that number theoretic operations, such as factorization, are hard. Palamas [21] further investigates the hardness of modular arithmetic. Dziri [8] notes the difficulty of extending the promising results obtained by transformers on the four operations [15] to complex mathematical algorithms (e.g. Euclid's algorithm for the GCD).
The importance of **number representation** was discussed by Nogueira [19] in the case of arithmetic, and Charton [2] for linear algebra. **Grokking** was described by Power [22] for integer arithmetic. Liu [16] proposes interpretations and metrics to characterize grokking. Gromov [10] provides an insightful analysis of grokking in feed-forward networks trained to perform modular arithmetic.
## 2 Experimental settings
Through this work, GCD calculations are set up as a supervised translation task. Pairs of problems and solutions, \(((a,b),\gcd(a,b))\), with \(a,b\in\mathbb{N}^{*}\), are randomly generated. Problems and solutions are encoded into sequences of tokens, and a sequence-to-sequence transformer [30] is trained to translate the problem into its solution, by minimizing the cross-entropy between model predictions and correct solutions. Integers are encoded as sequences of digits in base \(B\), preceded by a sign token which also serves as a separator (Table 1). For instance, with \(B=10\), the model learns to translate \((8,12)\), encoded as the sequence '+ 8 + 1 2', into its GCD, \(4\), encoded as '+ 4'.
The choice of \(B\) is a trade-off. Small bases result in longer sequences that are harder to learn, but use a small vocabulary that is easier to memorize. Composite bases provide simple tests for divisibility. In base \(10\), divisibility by \(2\), \(5\) and \(10\) is decided by the rightmost token in the sequence.
\begin{table}
\begin{tabular}{c c c} \hline \hline Base & Encoded input & Encoded output \\ \hline
2 & [+, 1, 0, 1, 0, 0, 0, 0, 0, +, 1, 1, 1, 1, 0, 0, 0] & [+, 1, 0, 1, 0, 0, 0] \\
6 & [+, 4, 2, 4, +, 3, 2, 0] & [+, 1, 0, 4] \\
10 & [+, 1, 6, 0, +, 1, 2, 0] & [+, 4, 0] \\
30 & [+, 5, 10, +, 4, 0] & [+, 1, 10] \\ \hline \hline \end{tabular}
\end{table}
Table 1: Encoding gcd(160,120) = 40 in base 2, 6, 10 and 30
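A minimal sketch of this tokenization (the function names are mine; the paper's actual code is not shown) reads as follows.

```python
def encode(n: int, base: int) -> list:
    """'+' followed by the base-B digits of n, most significant first."""
    digits = []
    while n > 0:
        digits.append(str(n % base))
        n //= base
    return ['+'] + digits[::-1]

# gcd(160, 120) = 40 in base 30, as in the last row of Table 1:
src = encode(160, 30) + encode(120, 30)  # ['+','5','10','+','4','0']
tgt = encode(40, 30)                     # ['+','1','10']
```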
In all experiments, sequence-to-sequence transformers, with \(4\) layers, \(512\) dimensions and \(8\) attention heads, are trained on batches of \(256\) examples. The optimizer is Adam [13], with a constant learning rate of \(10^{-5}\). All experiments are run on one NVIDIA V100 GPU with \(32\) GB of memory.
Training examples are generated by uniformly sampling integers \(a\) and \(b\) between \(1\) and \(M=10^{6}\), and computing their GCD. After each epoch (300,000 examples), trained models are tested on sets of 100,000 random examples. New test data is generated after each epoch. The size of the problem space (\(M^{2}=10^{12}\)) guarantees minimal duplication between train and test set.
Models are tested on two sets. In the _natural test set_, input pairs \((a,b)\) are uniformly distributed, and the distribution of GCD verifies \(P(\text{gcd}(a,b)=k)\propto\frac{1}{k^{2}}\)[1], i.e. small GCD are more common. In the _stratified test set_, GCD are uniformly distributed between \(1\) and \(100\). It is generated as follows:
* Sample \(k\), uniformly between \(1\) and \(100\).
* Sample \(a\) and \(b\), uniformly between \(1\) and \(\frac{M}{k}\), such that \(\text{gcd}(a,b)=1\) (using rejection sampling, since \(P(\text{gcd}(a,b)=1)=0.608\)).
* Add \((ka,kb)\) to the stratified test set.
For every \(1\leq k\leq 100\), the stratified test set contains about \(1000\) examples \((a,b)\), such that \(\text{gcd}(a,b)=k\). These two test sets are used in all experiments, for all training distributions, and provide two measures of accuracy. **Model accuracy**, measured on the natural set, is the probability that the GCD of two random integers from \(1\) to \(M\) is correctly predicted. Model accuracy on the stratified test set is the **number of GCD correctly predicted** between \(1\) and \(100\).
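A minimal sketch of this generation procedure (my own naming; the paper's code is not shown):

```python
import math
import random

M = 10**6

def stratified_example(max_gcd: int = 100):
    """One entry of the stratified test set: GCD uniform in 1..max_gcd."""
    k = random.randint(1, max_gcd)
    while True:                    # rejection sampling, P(coprime) ~ 0.61
        a = random.randint(1, M // k)
        b = random.randint(1, M // k)
        if math.gcd(a, b) == 1:
            return k * a, k * b    # gcd(k*a, k*b) = k by construction
```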
## 3 Learning greatest common divisors
In this section, models are trained on pairs of integers uniformly sampled between \(1\) and \(10^{6}\), and encoded in \(20\) bases, ranging from \(2\) to \(1024\). Accuracy (measured on the natural test set) is high for large composite bases: up to \(96.8\%\) for base \(420\), and \(94.7\%\) for base \(30\). On the other hand, it drops to \(61.3\%\) for base \(31\) and \(84.7\%\) for base \(10\) (Table 2).
Learning is very fast: for base \(30\), models achieve \(90\%\) accuracy after \(2\) epochs (600,000 examples), and \(93\%\) after \(6\). Model size has little impact on performance (Tables 13 and 14 in Appendix B). For base \(30\), \(1\)-layer transformers with \(32\) dimensions (less than 300,000 trainable parameters), achieve \(93.3\%\) accuracy. \(24\)-layer models with \(1024\) dimensions (\(714\) million parameters) achieve \(93.4\%\). For base \(31\), performance is unchanged, at \(61\%\), across all model sizes.
The influence of integer representation (base \(B\)) on model performance was observed in prior works [19; 2]. Still, the large variations in model accuracy for different bases are puzzling. Table 3 summarizes model predictions for bases \(2\) and \(10\), for GCD up to \(36\) (Tables 18 and 19 in Appendix D.2 cover GCD up to \(100\) and bases \(2\), \(4\), \(10\), \(30\), \(31\) and \(420\)). For each GCD, it indicates the most frequent model prediction (Pred), and its frequency in the stratified test set (\(\%\)). All frequencies are close to \(100\%\): the model predicts a unique value for every test pair with GCD \(k\). This suggests that the model can tell that two pairs have the same GCD. Also, products of divisors of the base are always correctly predicted. In fact, all model predictions can be summarized by **the three rules** below (illustrated by the code sketch after the list):
* **Predictions are deterministic.** In over \(99.9\%\) of test cases, for any pair of integers with GCD \(k\), the model predicts a unique value, \(f(k)\), correct when \(f(k)=k\).
* **Correct predictions are products of primes dividing B.** For base \(2\), they are \(1\), \(2\), \(4\), \(8\), \(16\), \(32\) and \(64\). For base \(31\), \(1\) and \(31\). For base \(10\), all products of elements from \(\{1,2,4,8,16\}\) and \(\{1,5,25\}\). For base \(30\), all products of \(\{1,2,4,8,\}\), \(\{1,3,9,27\}\). and \(\{1,5,25\}\).
* **f(k) is the largest correct prediction that divides k.** For instance, \(f(15)=5\) for base \(10\). \(f(6)=2\) and \(f(12)=4\) for bases \(2\), \(4\) and \(10\), but \(f(6)=6\) and \(f(12)=12\) for base \(30\).
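The three rules translate directly into code. The sketch below (illustrative, with hypothetical helper names; the prime-power groups are those reported for the base-10 model) reproduces the base-10 column of Table 3.

```python
from itertools import product
from math import prod

def correct_predictions(groups):
    """Rule R2: the correctly predicted GCD are all products of one
    element per group, e.g. [[1,2,4,8,16],[1,5,25]] for base 10."""
    return sorted({prod(c) for c in product(*groups)})

def f(k, preds):
    """Rule R3: the prediction for any pair with GCD k."""
    return max(d for d in preds if k % d == 0)

base10 = correct_predictions([[1, 2, 4, 8, 16], [1, 5, 25]])
assert [f(k, base10) for k in (12, 15, 30, 32)] == [4, 5, 10, 16]
```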
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline Base & 2 & 3 & 4 & 5 & 6 & 7 & 10 & 11 & 12 & 15 \\ Accuracy & 81.6 & 68.9 & 81.4 & 64.0 & 91.5 & 62.5 & 84.7 & 61.8 & 91.5 & 71.7 \\ \hline Base & 30 & 31 & 60 & 100 & 210 & 211 & 420 & 997 & 1000 & 1024 \\ Accuracy & 94.7 & 61.3 & 95.0 & 84.7 & 95.5 & 61.3 & 96.8 & 61.3 & 84.7 & 81.5 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Accuracy as a function of encoding base. Best of three models.**
For prime bases, model predictions and the three rules suggest that the model learns to count the rightmost zeros in its input. The representation of an integer divisible by \(B^{n}\) ends with \(n\) zeros. By counting the rightmost zeros, \(z_{a}\) and \(z_{b}\), of its operands, taking the minimum \(\text{min}(z_{a},z_{b})=z\) and predicting \(B^{z}\), a model accounts for all observed predictions, and satisfies the three rules. For instance, in base 2, the model will correctly predict the GCD of \(8=1000_{2}\) and \(12=1100_{2}\) (3 and \(2\) rightmost zeros), as \(2^{2}=4\). On the other hand, it will wrongly predict the GCD of \(7=111_{2}\) and \(14=1110_{2}\) as \(1\). For composite bases, divisibility by divisors of \(B\) is also reflected in the rightmost digits of numbers. Once models learn these divisibility rules, their predictions satisfy the three rules.
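A sketch of this conjectured mechanism (hypothetical helper names, not the model itself):

```python
def trailing_zeros(n: int, base: int) -> int:
    """Number of rightmost zeros of n written in the given base."""
    z = 0
    while n % base == 0:
        n //= base
        z += 1
    return z

def prime_base_prediction(a: int, b: int, base: int) -> int:
    """For a prime base B: predict B ** min(z_a, z_b)."""
    return base ** min(trailing_zeros(a, base), trailing_zeros(b, base))

assert prime_base_prediction(8, 12, 2) == 4   # correct: gcd(8, 12) = 4
assert prime_base_prediction(7, 14, 2) == 1   # wrong: true gcd is 7
```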
The three rules also explain the variations in accuracy in Table 2. The probability that two random numbers have GCD \(k\) is \(\frac{6}{\pi^{2}k^{2}}\)[1]. Assuming that all products of prime divisors of \(B\) are correctly predicted, we can compute a theoretical model accuracy for any base. If \(B=p^{k}\), with \(p\) prime, the maximal accuracy is
\[\mathcal{A}(p^{k})=\mathcal{A}(p)=\frac{6}{\pi^{2}}\sum_{i=0}^{\infty}\frac{1 }{p^{2i}}=\frac{6}{\pi^{2}}\frac{p^{2}}{p^{2}-1},\]
if \(B=p^{k}q^{l}\), \(\mathcal{A}(B)=1-\frac{\pi^{2}}{6}(1-\mathcal{A}(p))(1-\mathcal{A}(q)),\)
if \(B=p^{k}q^{l}r^{m}\), \(\mathcal{A}(B)=1-\frac{\pi^{4}}{36}(1-\mathcal{A}(p))(1-\mathcal{A}(q))(1- \mathcal{A}(r))\), and so on.
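The formulas for one, two and three prime divisors fit the pattern \(\mathcal{A}(B)=1-\left(\frac{\pi^{2}}{6}\right)^{m-1}\prod_{i=1}^{m}(1-\mathcal{A}(p_{i}))\) for \(m\) distinct prime divisors \(p_{i}\). A sketch implementing it (my own; sympy is only used for factoring) reproduces the "Theory" row of Table 4.

```python
from math import pi
from sympy import primefactors

def theoretical_accuracy(B: int) -> float:
    """Maximal accuracy if all products of prime divisors of B are
    correctly predicted, following the formulas above."""
    A = lambda p: 6 / pi**2 * p**2 / (p**2 - 1)
    ps = primefactors(B)
    miss = 1.0
    for p in ps:
        miss *= 1 - A(p)
    return 1 - (pi**2 / 6) ** (len(ps) - 1) * miss

for B in (2, 6, 30, 31, 420):
    print(B, round(100 * theoretical_accuracy(B), 1))
# 2: 81.1, 6: 90.2, 30: 94.1, 31: 60.9, 420: 96.3
```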
Table 4 compares theoretical accuracies with empirical observations. Best model performances may be higher than theory, because of sampling errors in the test set, or lower than theory when some powers of prime divisors of \(B\) have not been learned.
This analysis demonstrates a shortcoming of naive benchmarks. Accuracies measured on the natural test set, which over-represents small GCD, are very misleading. Base \(12\) models achieve \(91\%\) accuracy, yet only predict \(19\) of the first \(100\) GCD. Table 4 reports the number of correctly predicted GCD (accuracy on the stratified test set), which will be our main performance metric from now on.
Even our best models have not learned to compute GCD in the general case. Instead, they leverage representation shortcuts to predict a few easy but common instances. On the other hand, all models have learned to classify pairs of integers according to their GCD: for any pair of integers with GCD \(k\), they always make the same prediction \(f(k)\). This is an important result and a significant achievement.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline Base & 2 & 3 & 4 & 5 & 6 & 7 & 10 & 11 & 12 & 15 \\ \hline Best model & 81.6 & 68.9 & 81.4 & 64.0 & 91.5 & 62.5 & 84.7 & 61.8 & 91.5 & 71.7 \\ Theory & 81.1 & 68.4 & 81.1 & 63.3 & 90.2 & 62.1 & 88.6 & 61.3 & 90.2 & 80.3 \\ Correct GCD & 7 & 5 & 7 & 3 & 19 & 3 & 13 & 2 & 19 & 9 \\ \hline Base & 30 & 31 & 60 & 100 & 210 & 211 & 420 & 997 & 1000 & 1024 \\ \hline Best model & 94.7 & 61.3 & 95.0 & 84.7 & 95.5 & 61.3 & 96.8 & 61.3 & 84.7 & 81.5 \\ Theory & 94.1 & 60.9 & 94.1 & 88.6 & 96.3 & 60.8 & 96.3 & 60.8 & 88.6 & 81.1 \\ Correct GCD & 27 & 2 & 28 & 13 & 32 & 1 & 38 & 1 & 14 & 7 \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Best model accuracy, theoretical accuracy and number of GCD<100 correctly predicted.**
\begin{table}
\begin{tabular}{c c c c c|c c c c|c c c c c} \hline \hline & \multicolumn{2}{c|}{Base 2} & \multicolumn{2}{c|}{Base 10} & \multicolumn{4}{c|}{Base 2} & \multicolumn{2}{c|}{Base 10} & \multicolumn{4}{c}{Base 2} & \multicolumn{2}{c}{Base 10} \\ GCD & Pred & \% & Pred & \% & GCD & Pred & \% & Pred & \% & GCD & Pred & \% & Pred & \% \\ \hline
1 & **1** & 100 & **1** & 100 & 13 & 1 & 100 & 1 & 100 & 25 & 1 & 100 & **25** & 100 \\
2 & **2** & 100 & **2** & 100 & 14 & 2 & 100 & 2 & 100 & 26 & 2 & 100 & 2 & 100 \\
3 & 1 & 100 & 1 & 100 & 15 & 1 & 100 & 5 & 100 & 27 & 1 & 100 & 1 & 100 \\
4 & **4** & 100 & **4** & 100 & 16 & **16** & 100 & **16** & 99.7 & 28 & 4 & 100 & 4 & 100 \\
5 & 1 & 100 & **5** & 100 & 17 & 1 & 100 & 1 & 100 & 29 & 1 & 100 & 1 & 100 \\
6 & 2 & 100 & 2 & 100 & 18 & 2 & 100 & 2 & 100 & 30 & 2 & 100 & 10 & 100 \\
7 & 1 & 100 & 1 & 100 & 19 & 1 & 100 & 1 & 100 & 31 & 1 & 100 & 1 & 100 \\
8 & **8** & 100 & **8** & 100 & 20 & 4 & 100 & **20** & 100 & 32 & **32** & 99.9 & 16 & 99.9 \\
9 & 1 & 100 & 1 & 100 & 21 & 1 & 100 & 1 & 100 & 33 & 1 & 100 & 1 & 100 \\
10 & 2 & 100 & **10** & 100 & 22 & 2 & 100 & 2 & 100 & 34 & 2 & 100 & 2 & 100 \\
11 & 1 & 100 & 1 & 100 & 23 & 1 & 100 & 1 & 100 & 35 & 1 & 100 & 5 & 100 \\
12 & 4 & 100 & 4 & 100 & 24 & 8 & 100 & 8 & 100 & 36 & 4 & 100 & 4 & 100 \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Model predictions and their frequencies, for GCD 1 to 36.** Correct predictions in bold face.
**Learning GCD one prime power at a time.** Learning curves for the number of correct GCD (accuracy on the stratified test set), for base \(30\), \(21\) and \(420\) (Figure 2) exhibit a step-like shape, which suggests that GCD are learned in batches, over small periods of training time. The learning curve for base \(30\) has four steps. First, the model predicts \(\{1,2,4\}\), \(\{1,3,9\}\), \(\{1,5\}\) and their products: \(17\) correct GCD under \(100\). Around epoch \(50\), the model learns to predict \(25\) and the three associated multiples: \(50\), \(75\) and \(100\) (\(21\) GCD). At epoch \(220\), it learns \(8\), and the multiples \(24\), \(40\) and \(72\), and at epoch \(660\), it learns \(27\) and \(54\), for a grand total of \(27\) correct GCD under \(100\).
For base \(210\), the model first learns the \(20\) products \(\{1,2,4\}\), \(\{1,3\}\), \(\{1,5\}\) and \(\{1,7\}\). At epoch \(30\), it learns to predict \(9\) and the associated multiples \(18\), \(36\), \(45\), \(63\) and \(90\). \(25\) and three more multiples are learned at epoch \(400\), and \(49\) and \(98\) at epoch \(500\), for a total of \(32\) correct GCD. The three rules hold throughout training: all GCD \(k\) are predicted as the largest predicted GCD dividing \(k\).
**Accelerating learning by balancing the distribution of GCD.** While products of small primes are learned in a few epochs, large powers of primes require a lot of training: the length of flat steps in the learning curves increases over time. This is due to the distribution of GCD in the training set, which contains \(10\) times more examples with GCD \(1\) than GCD \(10\), and \(1000\) times more than GCD \(32\). This can be mitigated by adding a small proportion (\(5\%\)) of uniformly sampled GCD to the training set, using the same generation technique as for the stratified test set. This adjustment increases the proportion of large GCD in the training set and has a major impact on learning speed (Figure 2). For base \(30\), the model learns \(25\) GCD in \(30\) epochs, instead of \(250\), and \(27\) GCD in \(175\) (vs \(650\)).
## 4 Large composite bases \(B\) - grokking small primes
So far, the only GCD correctly predicted are products of prime divisors of the base. Non-divisors of \(B\) are learned in a small number of cases, always involving large bases (\(1000\) and \(1024\)), and after extensive training. In one experiment with base \(1000\), the model correctly predicts \(13\) GCD after \(84\) epochs: all the products of \(\{1,2,4,8,16\}\) and \(\{1,5,25\}\). For the next \(100\) epochs, training and test losses are flat, and it seems that the model is not learning anymore. Yet, at epoch \(188\), the model begins to predict GCD \(3\), with accuracy \(0.2\%\) at epoch \(188\) and \(93\%\) at epoch \(193\) (despite only seeing 100,000 examples with GCD \(3\) during these \(5\) epochs). Multiples of \(3\) are then learned, and by epoch \(220\), the model predicts \(22\) GCD: products of \(\{1,2,4,8,16\}\), \(\{1,5,25\}\) and \(\{1,3\}\).
This phenomenon is related to grokking, first described by Power et al. [22] for modular arithmetic. Table 5 presents model predictions for base \(1000\), which continue to respect rules R1 and R3. In fact, we can update the three rules into **the three rules with grokking**:
1. **Prediction is deterministic.** All pairs with the same GCD are predicted the same, as \(f(k)\).
2. **Correct predictions are products of prime divisors of \(B\), and small primes.** Small primes are learned roughly in order, as grokking sets in.
3. **f(k) is the largest correct prediction that divides k.**
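These rules make the model's behavior easy to emulate. Below is a minimal Python sketch of the prediction rule (an assumed illustration, not the paper's code; the `learned` set would in practice be read off a trained model, and the base-1000 set used here is the one reported above for epoch 220):

```python
import math

def rule_based_prediction(a: int, b: int, learned: set) -> int:
    """Rules G1-G3: predict f(k), the largest correctly
    predicted GCD that divides the true GCD k = gcd(a, b)."""
    k = math.gcd(a, b)
    return max(d for d in learned if k % d == 0)

# Base 1000 after grokking GCD 3 (~epoch 220): products of
# {1, 2, 4, 8, 16}, {1, 5, 25} and {1, 3}.
learned = {p * q * r for p in (1, 2, 4, 8, 16)
           for q in (1, 5, 25) for r in (1, 3)}
print(rule_based_prediction(42, 70, learned))   # true GCD 14 -> predicts 2
print(rule_based_prediction(90, 150, learned))  # true GCD 30 -> predicts 30
```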
Grokking almost never happens in our initial experiments, but it is common with large bases when models are trained long enough. Table 6 presents results for models trained on \(16\) large bases, most of them composite, for up to \(1300\) epochs (\(390\) million examples). Grokking happens in all models, but can take a long time to set in. For instance, for bases \(625\) and \(4000\), products of prime divisors of \(B\) are learned in \(5\) and \(15\) epochs, but grokking only begins after \(600\) epochs. In all experiments, primes and powers of primes are grokked in order, with a few exceptions (e.g. in two different models with base \(2401\), \(2\) and \(3\) are grokked in different orders).
Learning curves for base \(2023\) are presented in Figure 6 of Appendix D.1. Learning proceeds in steps: long periods of stagnation followed by sudden drops in the loss and rises in accuracy, as new GCD and their multiples are learned. Whereas all models grok the same GCD in the same order, the number of epochs needed varies a lot with model initialization (from \(200\) to \(600\) for base \(2023\)). Because it helps models learn small primes, grokking provides a large boost in model accuracy. For base \(2023\), accuracy increases from \(63\%\) to \(91\%\) as \(2\), \(3\) and \(4\) are learned. On the other hand, in all experiments, the number of correct GCD remains under \(30\) after grokking.
**Balancing outcomes.** Grokking requires a lot of examples. Adding a small proportion of uniformly distributed GCD to the training set, as in section 3, brings no clear benefit (Table 17 in Appendix D.2). Instead, I change the training distribution of GCD to log-uniform, so that it scales as \(\frac{1}{k}\) instead of \(\frac{1}{k^{2}}\). This can be achieved by sampling operands as follows:
* Sample \(k\) between \(1\) and \(100\), with probability \(P(k)=\frac{1}{Ck}\), with \(C=\sum_{i=1}^{100}\frac{1}{i}\).
* Sample two integers \(a\) and \(b\), uniformly from \(1\) to \(M/k\), such that \(\text{gcd}(a,b)=1\).
* Add \((ak,bk)\) to the training set.
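A minimal Python sketch of this sampling procedure (assuming \(M=10^{6}\) and GCD up to \(100\), as elsewhere in the paper; coprimality is enforced by rejection, which accepts about \(61\%\) of draws):

```python
import math
import random

M = 10**6
K_MAX = 100
C = sum(1 / i for i in range(1, K_MAX + 1))           # normalization constant
WEIGHTS = [1 / (C * k) for k in range(1, K_MAX + 1)]  # P(k) = 1 / (C k)

def sample_log_uniform_gcd_pair():
    """Return (ak, bk) with gcd(a, b) = 1, hence gcd(ak, bk) = k."""
    k = random.choices(range(1, K_MAX + 1), weights=WEIGHTS)[0]
    while True:                       # rejection sampling for coprimality
        a = random.randint(1, M // k)
        b = random.randint(1, M // k)
        if math.gcd(a, b) == 1:
            return a * k, b * k
```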
For \(9\) bases out of \(35\), a log-uniform distribution of GCD in the training set helps models learn non-divisors of \(B\) (Table 7). For \(B=211\), primes up to \(7\) are learned. For \(B=10000\), \(7\), \(9\), \(13\) and \(27\) are learned, bringing the number of correct GCD to \(62\), our best result so far. For \(B=30\), a counter-intuitive situation prevails: instead of small primes, the model learns \(B-1\) and \(B+1\).
\begin{table}
\begin{tabular}{l c c c} \hline \hline Base & GCD predicted & Divisors predicted & Non-divisors (epoch learned) \\ \hline \(625=5^{4}\) & 6 & {1,5,25} & 2 (634) \\ \(2017\) & 4 & {1} & 2 (142), 3 (392) \\ \(2021=43.47\) & 10 & {1,43}, {1,47} & 2 (125), 3 (228) \\ \(2023=7.17^{2}\) & 16 & {1,7}, {1,17} & 3 (101), 2 (205), 4 (599) \\ \(2025=3^{4}.5^{2}\) & 28 & {1,3,9,27,81}, {1,5,25} & 2 (217), 4 (493), 8 (832) \\ \(2187=3^{7}\) & 20 & {1,3,9,27,81} & 2 (86), 4 (315), 5 (650) \\ \(2197=13^{3}\) & 11 & {1,13} & 2 (62), 3 (170), 4 (799) \\ \(2209=47^{2}\) & 8 & {1,47} & 2 (111), 3 (260), 9 (937) \\ \(2401=7^{4}\) & 10 & {1,7,49} & 2 (39), 3 (346) \\ \(2401=7^{4}\) & 14 & {1,7,49} & 3 (117), 2 (399), 4 (642) \\ \(2744=2^{3}.7^{3}\) & 30 & {1,2,4,8,16,32}, {1,7,49} & 3 (543), 5 (1315) \\ \(3125=5^{5}\) & 16 & {1,5,25} & 2 (46), 3 (130), 4 (556) \\ \(3375=3^{3}.5^{3}\) & 23 & {1,3,9,27}, {1,5,25} & 2 (236), 4 (319) \\ \(4000=2^{5}.5^{3}\) & 24 & {1,2,4,8,16,32}, {1,5,25} & 3 (599) \\ \(4913=17^{3}\) & 17 & {1,17} & 2 (54), 3 (138), 4 (648), 5 (873) \\ \(5000=2^{3}.5^{4}\) & 28 & {1,2,4,8,16,32}, {1,5,25} & 3 (205), 9 (886) \\ \(10000=2^{4}.5^{4}\) & 22 & {1,2,4,8,16}, {1,5,25} & 3 (211) \\ \hline \hline \end{tabular}
\end{table}
Table 6: **Predicted GCD, divisors and non-divisors of \(\mathbf{B}\). Best model of 3. For non-divisors, the epoch learned is the first epoch where the model achieves \(90\%\) accuracy for this GCD.**
\begin{table}
\begin{tabular}{c c|c c|c c|c c|c c} \hline \hline GCD & Prediction & GCD & Prediction & GCD & Prediction & GCD & Prediction & GCD & Prediction \\ \hline
1 & 1 & 11 & 1 & 21 & 3 & 31 & 1 & 41 & 1 \\
2 & 2 & 12 & 12 & 22 & 2 & 32 & 16/ 32 & 42 & 6 \\
3 & 3 & 13 & 1 & 23 & 1 & 33 & 3 & 43 & 1 \\
4 & 4 & 14 & 2 & 24 & 24 & 34 & 2 & 44 & 4 \\
5 & 5 & 15 & 15 & 25 & 25 & 35 & 5 & 45 & 15 \\
6 & 6 & 16 & 16 & 26 & 2 & 36 & 12 & 46 & 2 \\
7 & 1 & 17 & 1 & 27 & 3 & 37 & 1 & 47 & 1 \\
8 & 8 & 18 & 6 & 28 & 4 & 38 & 2 & 48 & 48 \\
9 & 3 & 19 & 1 & 29 & 1 & 39 & 3 & 49 & 1 \\
10 & 10 & 20 & 20 & 30 & 30 & 40 & 40 & 50 & 50 \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Grokked model predictions. \(B=1000\), after \(220\) epochs. \(32\) is being learned.**
## 5 Learning from log-uniform operands
In all experiments so far, the input pairs \((a,b)\) in the training set are uniformly sampled between \(1\) and \(M=10^{6}\). As a result, models are mostly trained from examples with large operands. \(90\%\) of operands are \(6\)-digit integers, and small examples (e.g. \(\gcd(6,9)\)) are almost absent from the training set. In contrast, when teaching arithmetic, we usually insist that examples with small operands must be learned, and sometimes memorized, before students can generalize to larger instances.
In this section, we sample input pairs in the training set from a log-uniform distribution, uniformly sampling real numbers \(x\) between \(0\) and \(\log M\), computing \(e^{x}\) and rounding to the nearest integer. In this setting, the training set has as many \(1\)-digit operands as \(6\)-digit operands, and the frequency of an operand \(a\) scales as \(\frac{1}{a}\). In \(3\%\) of training examples, both operands are smaller than \(10\); in \(11\%\), both are smaller than \(100\). This presents the model with many GCD of small integers that it can memorize, just as children rote-learn multiplication and addition tables. This is different from curriculum learning: the training distribution does not change over time. Note that log-uniform sampling only applies to the training set (test sets are unchanged), and has no impact on the distribution of outcomes.
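A minimal Python sketch of this operand sampler (an assumed illustration, with \(M=10^{6}\) as in the paper):

```python
import math
import random

M = 10**6

def log_uniform_operand() -> int:
    """Sample x uniformly in [0, log M] and round e^x to the nearest
    integer, so the frequency of operand a scales roughly as 1/a."""
    return round(math.exp(random.uniform(0.0, math.log(M))))

pair = (log_uniform_operand(), log_uniform_operand())
```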
Training from log-uniform operands greatly improves performance (Table 8). Accuracy for all bases is between \(94\) and \(99\%\), vs \(61\) and \(97\%\) with uniform operands. **For base 2401, the number of correct GCD is 73, our best result so far.** For base \(10\), the number of correct GCD is \(48\) (vs \(13\)). For base \(1000\), it is \(71\) (vs \(22\) with grokking). As before, large bases perform best: for all models with \(B\leq 420\), accuracy is higher than \(98\%\), and more than \(55\) GCD are correctly predicted.
\begin{table}
\begin{tabular}{l c|c c||c c|c c} \hline \hline & Natural & \multicolumn{2}{c||}{Log-uniform outcomes} & \multicolumn{2}{c|}{Natural} & \multicolumn{2}{c}{Log-uniform outcomes} \\ Base & \# GCD & \# GCD & Additional divisors & Base & \# GCD & \# GCD & Additional divisors \\ \hline
2 & 7 & 7 & - & 997 & 1 & 1 & - \\
3 & 5 & 5 & - & 1000 & 22 & 31 & **9**, 32, 64 \\
4 & 7 & 7 & - & 2017 & 4 & 6 & **9** \\
5 & 3 & 3 & - & 2021 & 10 & 10 & - \\
6 & 19 & 20 & 64 & 2023 & 16 & 11 & - \\
7 & 3 & 3 & - & 2025 & 28 & 28 & - \\
10 & 13 & 14 & 32 & 2187 & 20 & 20 & - \\
11 & 2 & 2 & - & 2197 & 11 & 11 & - \\
12 & 19 & 20 & 81 & 2209 & 8 & 8 & - \\
15 & 9 & 10 & 81 & 2401 & 14 & 16 & **5** \\
30 & 25 & 36 & 16, **29, 31** & 2744 & 29 & 21 & - \\
31 & 2 & 2 & - & 3125 & 16 & 16 & - \\
60 & 28 & 33 & 27, 32, 64 & 3375 & 23 & 21 & - \\
100 & 13 & 15 & 32, 64 & 4000 & 25 & 31 & **9**, 64 \\
210 & 32 & 32 & - & 4913 & 17 & 9 & - \\
211 & 1 & 18 & **2,3,4,5,7** & 5000 & 28 & 30 & 64 \\
420 & 38 & 47 & **13, 49** & 10000 & 22 & 40 & **7, 9**, 32 \\
625 & 6 & 9 & **4** & 10000 & 22 & 62 & **7, 9, 13, 27**, 32, 64 \\ \hline \hline \end{tabular}
\end{table}
Table 7: **Log-uniform outcome distributions.** Best model of 3, trained for 700 epochs. Non-divisors in bold.
\begin{table}
\begin{tabular}{l c c|c c c|c c} \hline \hline Base & Accuracy & Correct GCD & Base & Accuracy & GCD & Base & Accuracy & GCD \\ \hline
2 & 94.4 & 25 & 60 & 98.4 & 60 & 2025 & 99.0 & 70 \\
3 & 96.5 & 36 & 100 & 98.4 & 60 & 2187 & 98.7 & 66 \\
4 & 98.4 & 58 & 210 & 98.5 & 60 & 2197 & 98.8 & 68 \\
5 & 97.0 & 42 & 211 & 96.9 & 41 & 2209 & 98.6 & 65 \\
6 & 96.9 & 39 & 420 & 98.1 & 59 & **2401** & **99.1** & **73** \\
7 & 96.8 & 40 & 625 & 98.2 & 57 & 2744 & 98.9 & 72 \\
10 & 97.6 & 48 & 997 & 98.3 & 64 & 3125 & 98.6 & 65 \\
11 & 97.4 & 43 & 1000 & 99.1 & 71 & 3375 & 98.8 & 67 \\
12 & 98.2 & 55 & 1024 & 99.0 & 71 & 4000 & 98.7 & 66 \\
15 & 97.8 & 52 & 2017 & 98.6 & 63 & 4913 & 98.2 & 57 \\
30 & 98.2 & 56 & 2021 & 98.6 & 66 & 5000 & 98.6 & 64 \\
31 & 97.2 & 44 & 2023 & 98.7 & 65 & 10000 & 98.0 & 56 \\ \hline \hline \end{tabular}
\end{table}
Table 8: **Accuracy and correct GCD (up to 100), log-uniform operands.** Best of three models, trained for 1000 epochs (300M examples). All models are tested on 100,000 pairs, uniformly distributed between 1 and \(10^{6}\).
The learning process is the same as in previous experiments. Models first learn the products of small powers of primes dividing \(B\), then small powers of prime non-divisors are "grokked", in sequence. For base \(2\), the best model predicts \(25\) GCD: products of powers of \(2\) up to \(64\), \(3\) and \(5\). For base \(10\), the best model predicts powers of \(2\) up to \(16\) and powers of \(5\) up to \(25\), as in previous experiments, but also \(3\), \(7\), \(9\) and \(11\). For base \(100\), models learn \(\{1,2,4,8,16\}\), \(\{1,5,25\}\) and \(3\), \(7\), \(9\), \(11\), \(13\) and \(17\).
Learning curves retain their step-like shape (Figure 3), but they are noisy, and their steps are less steep. For base \(2\), all small powers of \(2\) are learned by epoch \(25\), then \(3\) by epoch \(50\) and \(5\) by epoch \(300\). At epoch \(925\), GCD \(7\) is predicted with accuracy \(40\%\). For base \(10\), GCD \(1,2,4\) and \(5\) are learned as early as epoch \(3\), \(3\) and \(8\) by epoch \(25\), \(7\) and \(9\) by epoch \(220\) and \(11\) by epoch \(750\).
For bases \(1024\), \(2401\) and \(2744\), only \(27\) GCD are incorrectly predicted (the three rules are respected):
* the \(16\) primes between \(29\) and \(97\), all predicted as \(1\),
* small multiples of these primes: products of \(2\) and \(29,31,37,41,43\) and \(47\), predicted as \(2\), and products of \(3\) and \(29\) and \(31\), predicted as \(3\),
* powers of small primes: \(49=7^{2}\), predicted as \(7\), and \(81=3^{4}\), predicted as \(27\),
* small multiples of these: \(98=49*2\), predicted as \(14\).
Overall, training from log-uniform operands improves model performance by accelerating the grokking process. After training, models have learned to predict all primes up to a certain value (\(23\) for the best models), some of their small powers, and all associated products. This brings model accuracy on random pairs to \(99\%\), and the number of correct GCD under \(100\) to \(73\). The three rules with grokking (G1 to G3) still apply. Models make deterministic predictions, and for a pair \((a,b)\) with GCD \(k\), they predict a unique number \(f(k)\): the largest correctly predicted GCD that divides \(k\).
During training, rules G1 and G3 can be temporarily violated while the model learns a new factor. During a few epochs, model predictions are split between the old and the new value (e.g. between \(7\) and \(49\) while the model learns \(49\)). Such split predictions were rarely observed in previous experiments, when transitions were faster, but they become common with log-uniform operands.
**Log-uniform outcomes.** Balancing the distribution of outcomes in the training set to make it log-uniform (as in section 4) brings a large improvement in performance (Table 9). After \(1000\) epochs, models with bases larger than \(1000\) predict \(87\) to \(91\) GCD: all primes up to \(53\) and all composite numbers up to \(100\). These are our best results. For large bases, they can be improved further by using an inverse square root distribution of outcomes (Table 15 in Appendix C.1). For small bases, log-uniform outcomes sometimes degrade performance. For base \(2\), accuracy drops to \(16.5\%\), as the model struggles to learn to predict GCD \(1\), for lack of examples.
## 6 Learning from uniform outcomes
Log-uniform distributions of outcomes improve model performance by reducing the imbalance between small and large GCD in the training set. In this section, I push this idea further and eliminate all imbalance by training models on a uniform distribution of outcomes and operands, using the same sampling procedure for the training set as for the stratified test set (see Section 2).
Figure 4 presents test losses and accuracies for three models trained on uniform operands and outcomes, with \(B=10\). Model accuracy seems to vary randomly during training, and the loss is not reduced. Yet, the number of correct GCD is stable, and increases in steps, from \(10\) to \(13\) and \(17\) GCD, in line with the results from section 3 (\(13\) GCD). Something is being learned, despite the flat loss.
At first glance, model predictions seem chaotic. At epoch 266, one model achieves \(81\%\) accuracy, and predicts 14 GCD: \(1\), \(2\), \(5\), \(8\), \(20\), \(32\), \(40\), \(44\), \(48\), \(50\), \(64\), \(75\), \(80\) and \(100\). One epoch later, accuracy is down to \(6\%\); the model still predicts \(14\) GCD: \(4\), \(8\), \(10\), \(16\), \(40\), \(50\), \(55\), \(60\), \(64\), \(66\), \(75\), \(80\), \(95\) and \(100\), but half of the correct GCD have changed! After another epoch, accuracy is \(4\%\) and the model predicts \(4\), \(20\), \(25\), \(26\), \(30\), \(32\), \(40\), \(48\), \(50\), \(55\), \(64\), \(73\), \(80\), \(88\) and \(100\). Again, half of the GCD changed.
Further analysis reveals regular patterns. Table 10 presents the most common model predictions for all GCD up to \(20\), and their frequencies. First, note that model predictions remain deterministic:
Table 10: **Base 10 - uniform operands and outcomes.** Most common prediction for GCD 1 to 20, and frequency, for successive epochs (266 to 270, 580 and 581). Correct predictions are in bold.
Figure 4: **Learning curves for B=10. Uniform outcomes and operands.** 3 different seeds.
most predictions have a frequency close to \(100\%\) (i.e. all pairs with this GCD are predicted the same), with the exception of epoch \(267\), when predictions for \(1\), \(3\), \(7\), \(11\), \(13\), \(17\) and \(19\) are equally split between \(11\) and \(19\). Second, groups of GCD are predicted the same. All elements in class \(C_{1}=\{1,3,7,9,11,13,17,19\}\) are predicted as \(1\) at epoch \(266\), \(19\) at epoch \(267\), \(73\) at epoch \(268\), and so on. Similar patterns occur for classes \(C_{2}=\{2,6,14,18\},C_{4}=\{4,12\}\) and \(C_{5}=\{5,15\}\).
These classes correspond to the multiples of \(1\), \(2\), \(4\), and \(5\), and would have been predicted as \(1\), \(2\), \(4\), and \(5\) by a base \(10\) model following the three rules from section 3. In other words, the model classifies GCD exactly like a model trained on a natural distribution of outcomes, but its predictions for each class vary over time. Finally, note that model predictions for each class are elements of the class: elements of \(C_{1}\) are predicted as \(1\), \(7\), \(11\), \(19\), elements of \(C_{2}\) as \(2\), \(22\), \(62\), \(66\). In fact, all model predictions are accounted for by **the three rules with uniform outcomes**:
1. **Predictions are mostly deterministic.** At a given epoch, the model usually predicts a unique value \(f(k)\) for a given GCD \(k\). In rare cases, the model makes \(2\) or \(3\) predictions.
2. **Classes of multiples of products of prime divisors of B are predicted the same.** For base \(10\), classes are \(\{1,3,7,9,11,13,17,19\ldots\}\), \(\{2,6,14,18,22,26,34,38\ldots\}\), \(\{4,12,28,36,44,52,\ldots\}\), \(\{5,15,35,55\ldots\}\)...
3. **For each class, at every epoch, the (unique) model prediction is an element of the class.** Predictions vary from one epoch to the next. However, the number of correctly predicted GCD remains stable: it equals the number of classes, and only increases when the model learns new products of prime divisors of \(B\).
The three rules account for both the chaotic shape of the accuracy curve and the step-like shape of the number of correct GCD. Since \(61\%\) of test examples in the natural test set (used to compute accuracy) have GCD \(1\), accuracy jumps by \(61\%\) every time class \(C_{1}\) is predicted as \(1\). On the other hand, rule U3 implies that the number of correct GCD equals the number of classes, because at any given epoch exactly one element of each class is correctly predicted. This accounts for the step-shaped learning curve of correct GCD.
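For reference, the \(61\%\) figure is the classical density of coprime pairs: under uniform sampling of the operands,

\[\Pr[\gcd(a,b)=k]\approx\frac{6}{\pi^{2}k^{2}},\qquad\text{hence}\quad\Pr[\gcd(a,b)=1]\approx\frac{6}{\pi^{2}}\approx 0.61.\]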
These results shed light on the learning process and the role of the distribution of outcomes. During training, all models, no matter their outcome distribution, learn to partition input pairs into classes. Each class is associated with a product \(p\) of prime divisors of the base (e.g. \(1\), \(2\), \(4\), \(5\), \(10\)...), and contains all pairs \((a,b)\) with GCD \(k\), such that \(k\) is a multiple of \(p\). Model predictions are the same for all pairs in the class. When the distribution of outcomes is unbalanced, the model learns to predict the most common element in the class, i.e. \(f(p)=p\). When outcomes are uniformly distributed, one element of the class is selected, somewhat randomly, at every epoch.
For **base 1000**, learning curves (Figure 5) suggest that a similar situation occurs during the first \(400\) epochs, with grokking, characterized by steep drops in the loss and increases in the number of correct GCD, happening between epochs 200 and 400. Then, accuracy rises steadily, as does the number of correct GCD, which reaches \(95\) (out of \(100\)) after about \(800\) epochs.
Figure 5: **Learning curves for B=1000 - uniform operands and outcomes.**
More precisely, by epoch 180, the model has learned to classify all examples into \(14\) sets: multiples of \(1\), \(2\), \(4\), \(5\), \(8\), \(10\), \(16\), \(20\), \(25\), \(32\), \(40\), \(50\), \(80\) and \(100\). At each epoch, the model selects one element in each class, which is its unique prediction for all pairs of integers with GCD in the class.
Grokking sets in around epoch \(200\), and by epoch \(220\), \(5\) new classes have been learned: multiples of \(11\) (\(11\), \(33\), \(77\) and \(99\)), \(22\), \(44\), \(55\) and \(88\), created by "splitting away" the multiples of \(11\) from the classes of multiples of \(1\), \(2\), \(4\), \(5\) and \(8\). Note that small primes are no longer learned in increasing order (in fact, the order varies with model initialization). This is another effect of uniform outcomes. By epoch 260, multiples of \(3\) are learned and the model predicts \(31\) different outcomes (splitting the \(12\) classes from \(1\) to \(32\)). By epoch \(400\), multiples of \(7\) are learned, and \(41\) GCD are predicted.
At that point, a new phenomenon sets in: model predictions are no longer deterministic. Until then, all pairs with the same GCD would be predicted the same at every epoch (or split between two values in rare cases). As grokking develops, more classes are created, and the frequencies of most common predictions for each class go down. By epoch 400, for pairs with GCD \(1\), the model predicts \(18\) different values, with frequencies ranging from \(2\%\) to \(13\%\) (Table 11).
At this point, model predictions are neither deterministic nor interpretable, and the three rules are no longer respected. Classes have as many predictions as there are elements, and the model begins learning individual GCD, beginning with the largest ones (i.e. the smallest classes). By epoch 740, \(95\) of the \(100\) first GCD are correctly predicted, the worst performance being achieved on the smallest values (GCD \(1\), \(2\) and \(3\), correctly predicted \(43\), \(74\) and \(85\%\) of the time).
To summarize, for base \(10\), the model only learns products of divisors of the base. Model predictions remain deterministic, but they change from one epoch to the next. For base \(1000\), grokking sets in after \(200\) epochs, and eventually causes models trained on uniform outcomes to stop making deterministic predictions. However, in the long run, they manage to learn the greatest common divisor, and predict \(95\) GCD out of \(100\).
## 7 Discussion
**Can transformers learn the greatest common divisor?** With enough examples and appropriate adjustment of their training distribution, they can. Models leveraging large composite bases and trained on log-uniform operands and outcomes predict more than \(90\) of the first \(100\) GCD. Models trained on uniform outcomes can predict \(95\) GCD. However, the initial experiments from section 3 show the limits of naive, benchmark-based evaluations on arithmetic tasks: high accuracies can be achieved by models that only predict a handful of GCD.
Table 11: **Base 1000 - epoch 400 - predicted values and frequencies (GCD 1 to 5). Frequencies larger than 1%.**
**The impact of the training distribution on model performance** is an important finding, which may come as a surprise. Many authors observed that evaluating a model out of its training distribution has a negative impact on performance. In these experiments, all models are tested on uniformly distributed operands and outcomes, but the best results are achieved for models trained from log-uniform operands and outcomes. The existence of special training distributions, which allow for high performance across many test distributions, was observed for other numerical tasks [4].
The log-uniform distribution of operands strikes a balance between memorization and generalization. It provides the model with many easy examples that can be memorized early and that help it learn general solutions. This prevents the catastrophic forgetting observed in curriculum learning, where easy examples are presented at the beginning of training and progressively replaced by harder problems.
The log-uniform distribution of outcomes helps balance the training set by making large GCD more common, a classic recipe in machine learning. However, the counter-intuitive result is that making the GCD uniformly distributed in the training set, the best possible balancing act, actually hinders training by preventing the model from learning to predict small GCD. These results about log-uniform distributions probably apply to other arithmetic tasks, notably to the fine-tuning of large language models on arithmetic tasks.
**What are the models doing?** Model predictability is probably the most striking feature of these experiments. It is often repeated that neural networks, and especially transformers, are incomprehensible black boxes that sometimes confabulate and often fail in unpredictable ways. In most experiments in this paper, model predictions are deterministic and can be fully explained by a small number of rules. These rules suggest that the model learns GCD by applying a sieving algorithm.
Throughout training, the model learns divisibility rules, and uses them to partition its input pairs into classes of pairs with a common divisor. Early during training, this is limited to powers of divisors of the base, that the model can learn by counting the rightmost zeroes in the representation of integers. For base \(2\), the model will partition its input into classes of pairs divisible by \(1\), \(2\), \(4\), \(8\). For composite bases, it will partition its input into classes of pairs divisible by products of primes dividing \(B\). If training outcomes are unbalanced, each class will be predicted as its most common value, i.e. its minimum. At this point, all GCD corresponding to products of divisors of \(B\) have been learned.
As training proceeds, new divisors are learned, in order if training outcomes are unbalanced. They are all prime because multiples of previous divisors were learned already, i.e. the model implements the sieve of Eratosthenes. When a new divisor \(p\) is learned, new classes are created by splitting all existing classes between multiples and non-multiples of \(p\). In base \(2\), when the model learns divisibility by \(3\), six new classes are created: multiples of \(3\), \(6\), \(12\), \(24\), \(48\) and \(96\). All GCD will eventually be learned, once all their prime factors are learned. Note that this algorithm relies on unbalanced outcomes in the training distribution: they guarantee that the model predicts the smallest element of every class, and that primes are learned in increasing order.
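A minimal simulation of this class-splitting step, matching the base-2 example above (an assumed illustration; class labels stand in for model predictions):

```python
def learn_prime(classes: list, p: int, limit: int = 100) -> list:
    """Each class is labeled by its smallest element, which is also the
    model's prediction for it. Learning divisibility by prime p splits
    every class into multiples and non-multiples of p."""
    new_classes = [c * p for c in classes if c * p <= limit]
    return sorted(classes + new_classes)

powers_of_two = [1, 2, 4, 8, 16, 32, 64]
print(learn_prime(powers_of_two, 3))
# [1, 2, 3, 4, 6, 8, 12, 16, 24, 32, 48, 64, 96] -- six new classes
```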
**Is it really grokking?** Characterizing the phenomenon observed in section 4 as grokking is not entirely accurate. Power et al. [22] define grokking as "generalization far after overfitting." In these experiments, training and test data are generated on the fly from a very large problem space, so no overfitting can happen. As a result, the classical pattern of grokking, where train accuracy saturates for a long time before validation accuracy catches up, will not occur. The similarity with grokking lies in the sudden change in accuracy after a long stagnation of the training loss. |
2308.12210 | ULDP-FL: Federated Learning with Across Silo User-Level Differential
Privacy | Differentially Private Federated Learning (DP-FL) has garnered attention as a
collaborative machine learning approach that ensures formal privacy. Most DP-FL
approaches ensure DP at the record-level within each silo for cross-silo FL.
However, a single user's data may extend across multiple silos, and the desired
user-level DP guarantee for such a setting remains unknown. In this study, we
present Uldp-FL, a novel FL framework designed to guarantee user-level DP in
cross-silo FL where a single user's data may belong to multiple silos. Our
proposed algorithm directly ensures user-level DP through per-user weighted
clipping, departing from group-privacy approaches. We provide a theoretical
analysis of the algorithm's privacy and utility. Additionally, we enhance the
utility of the proposed algorithm with an enhanced weighting strategy based on
user record distribution and design a novel private protocol that ensures no
additional information is revealed to the silos and the server. Experiments on
real-world datasets show substantial improvements in our methods in
privacy-utility trade-offs under user-level DP compared to baseline methods. To
the best of our knowledge, our work is the first FL framework that effectively
provides user-level DP in the general cross-silo FL setting. | Fumiyuki Kato, Li Xiong, Shun Takagi, Yang Cao, Masatoshi Yoshikawa | 2023-08-23T15:50:51Z | http://arxiv.org/abs/2308.12210v3 | # Uldp-FL: Federated Learning with Across Silo User-Level Differential Privacy
###### Abstract
Differentially Private Federated Learning (DP-FL) has garnered attention as a collaborative machine learning approach that ensures formal privacy. Most DP-FL approaches ensure DP at the record-level within each silo for cross-silo FL. However, a single user's data may extend across multiple silos, and the desired user-level DP guarantee for such a setting remains unknown. In this study, we present Uldp-FL, a novel FL framework designed to guarantee user-level DP in cross-silo FL where a single user's data may belong to multiple silos. Our proposed algorithm directly ensures user-level DP through per-user weighted clipping, departing from group-privacy approaches. We provide a theoretical analysis of the algorithm's privacy and utility. Additionally, we enhance the algorithm's utility and showcase its private implementation using cryptographic building blocks. Empirical experiments on real-world datasets show substantial improvements in our methods in privacy-utility trade-offs under user-level DP compared to baseline methods. To the best of our knowledge, our work is the first FL framework that effectively provides user-level DP in the general cross-silo FL setting.
Federated Learning, Differential Privacy, User-level DP
## I Introduction
Federated Learning (FL) [1] is a collaborative machine learning (ML) scheme in which multiple parties train a single global model without sharing training data. FL has attracted industry attention [2, 3] as concerns about the privacy of training data have become more serious, as exemplified by GDPR [4]. It should be noted that FL itself does not provide privacy protection for the trained model [5, 6], which motivates _Differentially Private FL_ (DP-FL) [7, 8], which guarantees formal privacy for trained models based on differential privacy (DP) [9].
Although DP is the de facto standard in the field of statistical privacy protection, it has a theoretical limitation. The standard DP definition takes a single record as the unit of privacy. This easily breaks down in realistic settings where one user may provide multiple records, deteriorating the privacy loss bound of DP. To this end, the notion of _user-level DP_ has been studied [10, 11, 12, 13]. In user-level DP, instead of a single record, all records belonging to a single user are considered as the unit of privacy, which is a stricter definition than standard DP. Note that we distinguish user-level DP from _group-privacy_ [14], which considers any \(k\) records as privacy units. User-level DP has also been studied in the FL context [7, 8, 15, 16, 17]. However, these studies focus on the cross-device FL setting, where one user's data belongs to a single device only.
Cross-silo FL [18, 19, 20, 21] is a practical variant of FL in which a relatively small number of silos (e.g., hospitals or credit card companies) participate in the training rounds. In contrast to cross-device FL, in cross-silo FL, a single user can have multiple records across silos, as shown in Figure 1. Existing cross-silo DP-FL studies [19, 20, 21] have focused on record-level DP for each silo; user-level DP across silos has not been studied. Therefore, our research question is: _How do we design a FL framework guaranteeing user-level DP across silos in cross-silo FL?_
A naive design for an algorithm that guarantees user-level DP combines bounding user contributions (the number of records per user), as in [12, 13], with the group-privacy property of DP [14]. Group-privacy simply extends the indistinguishability of record-level DP to multiple records. We can convert any DP algorithm to a group-privacy version of DP (Lemma 1, 5), which we formally define as Group DP (GDP) later. However, this approach can be impractical due to the super-linear privacy bound degradation of the conversion to GDP and the need to appropriately limit the maximum number of user records (group size) in a distributed environment. In particular, the former problem is a fundamental issue for DP and highlights the need to develop algorithms that directly satisfy user-level DP without requiring a conversion to GDP.

Fig. 1: In cross-silo FL, in general, records belonging to the same user can exist across silos, e.g., a user can use several credit card companies. In this study, we investigate how to train models satisfying _user-level_ DP in this setting.
In this study, we present a novel cross-silo FL framework named Uldp-FL, designed to directly guarantee user-level DP through the incorporation of per-user weighted clipping. The contributions of our work are summarized as follows:
* We introduce a problem setting for cross-silo FL under user-level DP across silos, as illustrated in Figure 1.
* We propose the Uldp-FL framework and design baseline algorithms capable of achieving user-level DP across silos. The baseline algorithms combine limiting the maximum number of records per user with group-privacy applied to DP-SGD [22] in each silo.
* Our proposed algorithms ULDP-AVG/SGD directly satisfy user-level DP by implementing user-level weighted clipping within each silo, which effectively bounds user-level sensitivity even when a single user has an unlimited number of records across silos. We provide theoretical analysis of ULDP-AVG, showing a user-level DP bound and a convergence analysis.
* We evaluate our proposed method and baseline approaches through comprehensive experiments on various real-world datasets. The results underscore that our proposed method yields superior trade-offs between privacy and utility compared to the baseline approaches.
* We further design an effective method by refining the weighting strategy for user-level clipping bounds. Since this approach may lead to additional privacy leakage of the training data, we develop a private protocol employing cryptographic techniques. We evaluate the extra computational overhead of the proposed private protocol using real-world benchmark scenarios.
## II Background & Preliminaries
### _Cross-silo Federated learning_
In this work, we consider the following cross-silo FL scenario. We have a central aggregation server and silo set \(S\) participating in all rounds. In each round, the server aggregates models from all silos and then redistributes the aggregated models. Each silo \(s\in S\) optimizes a local model \(f_{s}\), which is the expectation of a loss function \(F(x;\xi)\) that may be non-convex, where \(x\in\mathbb{R}^{d}\) denotes the model parameters and \(\xi\) denotes the data sample, and the expectation is taken over local data distribution \(\mathcal{D}_{s}\). In cross-silo FL, we optimize this global model parameter cooperatively across all silos. Formally, the overarching goal in FL can be formulated as follows:
\[\min_{x}\,\left\{f(x):=\frac{1}{|S|}\sum_{s\in S}f_{s}(x)\right\},\,f_{s}(x): =\mathbb{E}_{\xi\sim\mathcal{D}_{s}}F(x;\xi). \tag{1}\]
In our work, we introduce additional notation. We consider a user set \(U\) spanning all datasets across silos, where each record belongs to one user \(u\in U\), and each user may have multiple records in one silo and across multiple silos. Each silo \(s\) has a local objective for each user \(u\), \(f_{s,u}(x):=\mathbb{E}_{\xi\sim\mathcal{D}_{s,u}}F(x;\xi)\), where \(\mathcal{D}_{s,u}\) is the data distribution of \(s\) and \(u\). In round \(t\in[T]\) of FL, the global model parameter is denoted as \(x_{t}\).
Note that this modeling is clearly different from cross-device FL in that there is no constraint that one user should belong to one device. Records from one user can belong to multiple silos. For example, the same customer may use several credit card companies, etc. Additionally, all silos participate in all training rounds, unlike the probabilistic participation in cross-device FL [17], and the number of silos \(|S|\) is small, around 2 to 100.
### _Differential Privacy_
DP [9] is a rigorous mathematical privacy definition that quantitatively evaluates the degree of privacy protection when publishing outputs.
**Definition 1** (\((\epsilon,\delta)\)-Dp).: _A randomized mechanism \(\mathcal{M}:\mathcal{D}\to\mathcal{Z}\) satisfies \((\epsilon,\delta)\)-DP if, for any two input databases \(D,D^{\prime}\in\mathcal{D}\) s.t. \(D^{\prime}\) differs from \(D\) in at most one record and any subset of outputs \(Z\subseteq\mathcal{Z}\), it holds that_
\[\Pr[\mathcal{M}(D)\in Z]\leq\exp(\epsilon)\Pr[\mathcal{M}(D^{\prime})\in Z]+\delta. \tag{2}\]
We call databases \(D\) and \(D^{\prime}\) as _neighboring_ databases. The maximum difference of the output for any neighboring database is referred to as _sensitivity_, as defined in Definition 4. We label the original definition as _record-level_ DP because the neighboring databases differ in only one record.
To extend privacy guarantees to multiple records, group-privacy [14] has been explored as a solution. We refer to the group-privacy version of DP as Group DP (GDP) and define it as follows:
**Definition 2** (\((k,\epsilon,\delta)\)-Gdp).: _A randomized mechanism \(\mathcal{M}:\mathcal{D}\to\mathcal{Z}\) satisfies \((k,\epsilon,\delta)\)-GDP if, for any two input databases \(D,D^{\prime}\in\mathcal{D}\), s.t. \(D^{\prime}\) differs from \(D\) in at most \(k\) records and any subset of outputs \(Z\subseteq\mathcal{Z}\), Eq. (2) holds._
GDP is a versatile privacy definition, as it can be applied to existing DP mechanisms without modification. To convert \((\epsilon,\delta)\)-DP to GDP, it is known that any \((\epsilon,0)\)-DP mechanism satisfies \((k,k\epsilon,0)\)-GDP [14]. However, for any \(\delta>0\), \(\delta\) increases super-linearly [23], leading to a much larger \(\epsilon\) (Lemma 1). Alternatively, we can compute GDP using the group-privacy property of Rényi DP (RDP) [24]: first, we calculate the RDP of the algorithm, then convert it to the group version of RDP, and subsequently to GDP (Lemma 5).
Figure 2 illustrates a numerical comparison of the group-privacy conversion from DP to GDP with normal DP (Lemma 1) and RDP (Lemma 5). We repeatedly execute the Gaussian mechanism and calculate the final GDP. We plot various group sizes, \(k\), on the x-axis and \(\epsilon\) of GDP at fixed \(\delta=10^{-5}\) on the y-axis. The conversion of normal DP with fixed \(\delta\) is complex and elaborated in Appendix D.1. Significantly, the result indicates that as the group size, \(k\), increases, \(\epsilon\) grows
rapidly, underscoring a considerable degradation in the privacy bound of GDP. For instance, with \(\epsilon=2.85\) at the record level (\(k=1\)), the value reaches 2100 for only \(k=32\), and 11400 at \(k=64\). While there might be some looseness in the group-privacy conversion of RDP compared to normal DP for some small group sizes, the difference is relatively minor (roughly three times at most). Moreover, the RDP-based conversion is easier to compute with a fixed \(\delta\); hence, we utilize it in our experiments.
### _Differentially Private FL_
DP has been applied to the FL paradigm, where the goal is to ensure that the trained model satisfies DP. A popular DP variant in the context of cross-device FL is user-level DP (also known as client-level DP) [7, 8, 25]. Informally, this definition ensures indistinguishability for device participation and has demonstrated a favorable privacy-utility trade-off even with large-scale models [17]. These studies often employ secure aggregation [26, 27] to mitigate the need for trust in other parties during FL model training. This is achieved by allowing the server and other silos to only access appropriately perturbed models after aggregation. This is often known as Distributed DP [8, 28]. In particular, _shuffling_-based variants have recently attracted a great deal of attention [28, 29, 30] and are being deployed in FL [31], which also provides user-level DP. All of these studies assume that a single device holds all records for a single user, i.e., cross-device FL. However, in a cross-silo setting, this definition does not extend meaningful privacy protection to individual users when they possess multiple records across silos.
Another DP definition in cross-silo FL is a variant offering record-level DP within each silo [19, 20, 21], which is referred to as _Silo-specific sample-level_ or _Inter-silo record-level DP_. These studies suggest that record-level DP can guarantee user-level DP through group-privacy [14]. However, they cannot account for settings where a single user may have records across multiple silos. To the best of our knowledge, there exists no method for training models that satisfy user-level DP in cross-silo FL where a single user's records may extend across multiple silos.
## III Uldp-FL Framework
### _Trust model and Assumptions_
We assume that all (two or more) silos and aggregation servers are _semi-honest_, meaning they observe the information but do not deviate from the protocol. This is a typical assumption in prior works [26, 32]. In our study, aggregation is performed using secure aggregation to ensure that the server only gains access to the model after aggregation [8]. All communications between the server and silos are encrypted with SSL/TLS, and third parties with the ability to snoop on communications cannot access any information except for the final trained model. We assume that there is no collusion, which is reasonable given that silos are socially separate institutions (such as different hospitals or companies). Additionally, in our scenario, we assume that record linkage [33] across silos has already been completed, resulting in shared common user IDs. Both the server and the silos are aware of the total number of users \(|U|\) with at least one record and the number of silos \(|S|\).
### _Privacy definition_
In contrast to GDP, which offers indistinguishability for any \(k\) records, user-level DP [7, 11] provides a more reasonable user-level indistinguishability. While [7] focuses solely on a cross-device FL context, we re-establish user-level DP (ULDP) in the cross-silo setting as follows:
**Definition 3** (\((\epsilon,\delta)\)-ULDP).: _A randomized mechanism \(\mathcal{M}:\mathcal{D}\rightarrow\mathcal{Z}\) satisfies \((\epsilon,\delta)\)-ULDP if, for any two input databases across silos \(D,D^{\prime}\in\mathcal{D}\), s.t. \(D^{\prime}\) differs from \(D\) in at most one user's records, and any \(Z\subseteq\mathcal{Z}\), Eq. (2) holds._
The fundamental difference from record-level DP lies in the definition of the neighboring databases, which inherently defines user-level sensitivity. Additionally, it is important to emphasize that the input database \(D\) represents the comprehensive database spanning across silos.
If the number of records per user in the database is less than or equal to \(k\), it is clear that GDP is a generalization of ULDP, and the following proposition holds.
**Proposition 1**.: _If a randomized mechanism \(\mathcal{M}\) is \((k,\epsilon,\delta)\)-GDP with input database \(D\) in which any user has at most \(k\) records, the mechanism \(\mathcal{M}\) with input database \(D\) also satisfies \((\epsilon,\delta)\)-ULDP._
One drawback of GDP is the challenge of determining the appropriate value for \(k\). Setting \(k\) to the maximum number of records associated with any individual user could lead to introducing excessive noise to achieve the desired privacy protection level. On the other hand, if a smaller \(k\) is chosen, the data of users with more than \(k\) records must be excluded from the dataset, potentially introducing bias and compromising model utility. In this context, while several studies have analyzed the theoretical utility for a given \(k\) [10, 11] and theoretical considerations for determining \(k\) have been partially explored in [13], it still remains an open problem. In contrast, ULDP does not necessitate the determination of \(k\). Instead, it requires designing a specific ULDP algorithm.

Fig. 2: Group-privacy conversion results.
### _Baseline methods: ULDP-NAIVE/GROUP-\(k\)_
We begin by describing two baseline methods. The first is ULDP-NAIVE (described in Algorithm 3), a straightforward approach using substantial noise. It extends DP-FedAVG [17]: each silo locally optimizes over multiple epochs, computes the model delta, clips it to norm \(C\), and adds Gaussian noise with variance \(\sigma^{2}C^{2}\). In contrast to DP-FedAVG, since a single user may contribute to the model delta of every silo, the sensitivity of the aggregated model delta across silos is \(C|S|\) (Line 15). Moreover, compared to DP-FedAVG, which targets cross-device FL, the number of model delta samples (the number of silos versus the number of devices) is very small, which also results in larger variance. Hence, ULDP-NAIVE satisfies ULDP at a significant sacrifice in utility. Note that all of the following algorithms use secure aggregation, and all proofs of the following theorems are shown in Appendix A5.
**Theorem 1**.: _For any \(0<\delta<1\) and \(\alpha>1\), given noise multiplier \(\sigma\), ULDP-NAIVE satisfies \((\epsilon=\frac{T\alpha}{2\sigma^{2}}+\log{((\alpha-1)/\alpha)}-(\log{\delta} +\log{\alpha})/(\alpha-1),\delta)\)-ULDP after \(T\) rounds._ (The actual value of \(\epsilon\) is numerically calculated by selecting the optimal \(\alpha\) so that \(\epsilon\) is minimized.)
Secondly, we introduce the baseline algorithm ULDP-GROUP-\(k\) (described in Algorithm 1), which combines the constraint of limiting each user's records to a given \(k\) while satisfying \((k,\epsilon,\delta)\)-GDP. As Proposition 1 implies, this ensures \((\epsilon,\delta)\)-ULDP. The algorithm achieves GDP by implementing DP-SGD [22] within each silo. The algorithm's core principle is akin to that of [19], except for the global setting of a single privacy budget across silos. Before executing DP-SGD, it is essential to constrain the number of records per user to \(k\) (Line 8). We accomplish this by employing flags, denoted as \(\mathbf{B}\), which indicate the records to be used for training (i.e., \(b^{s}_{u,i}=1\)), with a total of at most \(k\) records for each user across all silos (i.e., \(\forall u,\sum_{s,i}b^{s}_{u,i}\leq k\)). These flags must be the same for all rounds for the privacy guarantee to hold. We ignore the privacy cost of generating the flags because this is a baseline method. Then, we perform DP-SGD to satisfy record-level DP (Line 9), which is subsequently converted to GDP.
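For illustration, a minimal non-private sketch of one way to construct such flags (the record format and first-come-first-served rule are assumptions of this sketch, consistent with ignoring the privacy of flag generation for this baseline):

```python
from collections import defaultdict

def build_flags(records, k):
    """records: iterable of (silo_id, user_id, record_index) over all silos.
    Keeps the first k records of each user across silos (b = 1),
    drops the rest (b = 0)."""
    counts = defaultdict(int)  # records kept so far, per user
    flags = {}
    for s, u, i in records:
        keep = 1 if counts[u] < k else 0
        flags[(s, u, i)] = keep
        counts[u] += keep
    return flags

# Example: user "u1" has 3 records across two silos; k = 2 keeps the first two.
recs = [(1, "u1", 0), (1, "u1", 1), (2, "u1", 0), (2, "u2", 0)]
print(build_flags(recs, k=2))
```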
**Theorem 2**.: _For any \(0<\delta<1\), any integer \(k\) that is a power of \(2\) and \(\alpha>2^{k+1}\), ULDP-GROUP-\(k\) satisfies \((3^{k}\rho+\log{((\frac{\alpha}{2^{k}}-1)/\frac{\alpha}{2^{k}})}-(\log{\delta}+\log{\frac{\alpha}{2^{k}}})/(\frac{\alpha}{2^{k}}-1),\delta)\)-ULDP where \(\rho=\max_{s\in S}\rho_{s}\) s.t. for each silo \(s\in S\), the DP-SGD local subroutine satisfies \((\alpha,\rho_{s})\)-RDP._
While ULDP-GROUP shares algorithmic similarities with the existing record-level DP cross-silo FL framework [19], it presents weaknesses from several perspectives: (1) It suffers a significant degradation of privacy bounds due to the group-privacy conversion (DP to GDP). (2) Determining the appropriate group size \(k\) is a challenging task [13]. Moreover, this process demands substantial insight into the data distribution across silos and, like the determination of the flags \(\mathbf{B}\), might even breach the trust model. (3) Using group-privacy to guarantee ULDP requires removing records from the training dataset. This can introduce a bias and can cause a degradation in the utility [13, 34]. Our next proposed method aims to overcome these challenges.
### _Advanced methods: ULDP-AVG/SGD_
To directly satisfy ULDP without using group-privacy, we design ULDP-AVG (Algorithm 2) and ULDP-SGD (Algorithm 4). These can be seen as variants of DP-FedAVG and DP-FedSGD [17]. In most cases, DP-FedAVG is preferred in terms of privacy-utility trade-off and communication cost, while DP-FedSGD might be preferable only with fast networks [17]; the same holds for ULDP-AVG and ULDP-SGD. In the following analysis, we focus on ULDP-AVG.
Intuitively, ULDP-AVG limits each user's contribution to the model by training the model for each user in each silo and performing per-user, per-silo clipping across all silos with globally prepared clipping weights. In each round, ULDP-AVG computes parameter deltas using a per-user dataset in each silo to achieve ULDP: selecting a user (Line 8), training the local model for \(Q\) epochs using only the selected user's data (Lines 10-13), calculating the model delta (Line 14) and clipping the delta (Line 15). These clipped deltas \(\Delta_{t}^{s,u}\) are then weighted by \(w_{s,u}\) (Line 15) and summed over all users (Line 16). As long as the weights \(w_{s,u}\) satisfy the constraints \(\forall u\in U\), \(w_{s,u}>0\) and \(\sum_{s\in S}w_{s,u}=1\), each user's contribution, or _sensitivity_, with respect to the delta aggregation \(\sum_{s\in S}\Delta_{t}^{s}\) is limited to at most \(C\). This allows ULDP-AVG to provide user-level privacy. We will discuss better ways to determine \(\mathbf{W}\) later, but a simple choice is \(w_{s,u}=1/|S|\). Compared to DP-FedAVG, ULDP-AVG increases the computational cost due to the per-user local training iterations, but keeps communication costs the same, which is likely acceptable in the cross-silo FL setting.
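A minimal NumPy sketch of one ULDP-AVG round with uniform weights \(w_{s,u}=1/|S|\) (an assumption-laden illustration, not a transcription of Algorithm 2: the \(Q\)-epoch per-user local training is abstracted behind a hypothetical `local_delta`, secure aggregation is omitted, and the final server-side scaling by \(\eta_{g}/|U|\) is an assumed normalization):

```python
import numpy as np

def clip(v: np.ndarray, C: float) -> np.ndarray:
    """Rescale v so that its L2 norm is at most C."""
    return v * min(1.0, C / (np.linalg.norm(v) + 1e-12))

def uldp_avg_round(x, silos, users, local_delta, C, sigma, eta_g, rng):
    agg = np.zeros_like(x)
    for s in silos:
        for u in users:
            delta = local_delta(s, u, x)              # Q local epochs on D_{s,u}
            agg += (1.0 / len(silos)) * clip(delta, C)
    # Weights over silos sum to 1, so any single user's total contribution
    # to `agg` has norm at most C (user-level sensitivity).
    agg += rng.normal(0.0, sigma * C, size=x.shape)   # noise on the aggregate
    return x + eta_g * agg / len(users)               # assumed server scaling
```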
**Theorem 3**.: _For any \(0<\delta<1\) and \(\alpha>1\), given noise multiplier \(\sigma\), ULDP-AVG satisfies \((\epsilon=\frac{T\alpha}{2\sigma^{2}}+\log{((\alpha-1)/\alpha)}-(\log{\delta}+ \log{\alpha})/(\alpha-1),\delta)\)-ULDP after \(T\) rounds._
**Remark 1**.: For further privacy amplification, we introduce user-level sub-sampling, which can make the RDP smaller according to the sub-sampled amplification theorem [35]. User-level sub-sampling must be done globally across silos. It can be implemented at the central server by controlling the weights \(\mathbf{W}\) for each round, i.e., the weights of all users not sub-sampled are set to 0. This may violate privacy against the server but does not affect the DP when the final model is provided externally, as discussed in C.3 of [25]. We provide the detailed algorithm and experimental results showing the effectiveness of user-level sub-sampling in Appendix D.4.
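A minimal sketch of this server-side weight construction (the sampling rate \(q\) is a hypothetical parameter, and uniform weights \(1/|S|\) for kept users are assumed):

```python
import random

def subsampled_weights(silos, users, q: float):
    """Keep each user independently with probability q; dropped users get
    weight 0 in every silo, kept users uniform weight 1/|S|."""
    kept = {u: random.random() < q for u in users}
    return {(s, u): (1.0 / len(silos) if kept[u] else 0.0)
            for s in silos for u in users}
```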
**Theorem 4** (Convergence analysis of ULDP-AVG).: _For ULDP-AVG, with Assumptions 1, 2 and 3 and \(\min_{x}f(x)\geq f^{*}\), let the local / global learning rates \(\eta_{l}\) / \(\eta_{g}\) be chosen s.t. \(\eta_{g}\eta_{l}\leq\frac{1}{3QL\bar{\alpha}_{t}}\) and \(\eta_{l}<\frac{1}{\sqrt{30}QL}\); then we have_

\[\frac{1}{T}\sum_{t=0}^{T}\mathbb{E}\left[\left\|\nabla f(x_{t})\right\|^{2}\right]\leq\frac{\mathbb{E}\left[f(x_{0})\right]-f^{*}}{cT\eta_{g}\eta_{l}Q|S|}+\frac{5}{2c}L^{2}Q\eta_{l}^{2}(\sigma_{l}^{2}+6Q\sigma_{g}^{2})+\frac{3\bar{C}L\eta_{g}\eta_{l}\sigma_{l}^{2}}{2c|S|^{2}|U|}+\frac{L\eta_{g}\sigma^{2}C^{2}d}{2c\,\eta_{l}Q|U|^{2}}\]

\[+A_{1}\sum_{t=0}^{T-1}\mathbb{E}\left[\sum_{s\in S}\sum_{u\in U}\left(|\alpha_{t}^{s,u}-\tilde{\alpha}_{t}^{s,u}|+|\tilde{\alpha}_{t}^{s,u}-\bar{\alpha}_{t}|\right)\right]+A_{2}\sum_{t=0}^{T-1}\mathbb{E}\left[\sum_{s\in S}\sum_{u\in U}\left(|\alpha_{t}^{s,u}-\tilde{\alpha}_{t}^{s,u}|^{2}+|\tilde{\alpha}_{t}^{s,u}-\bar{\alpha}_{t}|^{2}\right)\right]\]

_where \(c>0\), \(A_{1}\) and \(A_{2}\) are constants, \(\bar{C}:=\max_{s,u,t}\frac{C}{\max\left(C,\;\eta_{l}\left\|\mathbb{E}\left[\sum_{q\in[Q]}g_{t,q}^{s,u}\right]\right\|\right)}\), \(\alpha_{t}^{s,u}\) denotes the realized per-user clipping weight, \(\tilde{\alpha}_{t}^{s,u}:=w_{s,u}C_{s,u}\) its expectation, and \(\bar{\alpha}_{t}:=\frac{1}{|S||U|}\sum_{s\in S}\sum_{u\in U}\tilde{\alpha}_{t}^{s,u}\) their average._

The last two terms measure how much the clipping weights, and hence the expected clipped updates, of all gradients deviate from the global mean gradient. We may be able to minimize these values by selecting appropriate weights \(\mathbf{W}\), guided by the following optimization problem:
\[\min_{\mathbf{W}}\sum_{s\in S}\sum_{u\in U}|\tilde{\alpha}_{t}^{s,u }-\bar{\alpha}_{t}|,\text{ s.t., }w_{s,u}>0,\forall u,\sum_{s\in S}w_{s,u}=1\] \[\left(=\sum_{s\in S}\sum_{u\in U}\left|w_{s,u}C_{s,u}-\frac{1}{|S ||U|}\sum_{s^{\prime}\in S}\sum_{u^{\prime}\in U}w_{s^{\prime},u^{\prime}}C_{s ^{\prime},u^{\prime}}\right|\right)\]
where \(C_{s,u}:=\frac{C}{\max\left(C,\,\eta_{l}\left\|\mathbb{E}\left[\sum_{i\in[Q]}g_{t,i}^{s,u}\right]\right\|\right)}\).
However, it is hard to determine the optimal weights because we cannot predict the gradient norms in advance, and estimating them could also cause another privacy issue.
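Still, the objective itself is easy to evaluate for any candidate weights. The following minimal sketch is our own illustration (the clipping factors are random stand-ins, not values from any experiment):

```python
import numpy as np

# Illustrative stand-ins: |S| silos, |U| users, random clipping factors C_{s,u}.
n_silos, n_users = 5, 20
rng = np.random.default_rng(0)
C_su = rng.uniform(0.2, 1.0, size=(n_silos, n_users))

def clipping_deviation(W, C_su):
    """sum_{s,u} |w_{s,u} C_{s,u} - mean over (s',u') of w_{s',u'} C_{s',u'}|."""
    wc = W * C_su
    return np.abs(wc - wc.mean()).sum()

# The uniform weights used by ULDP-AVG satisfy the constraint sum_s w_{s,u} = 1.
W_uniform = np.full_like(C_su, 1.0 / n_silos)
print(clipping_deviation(W_uniform, C_su))
```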
**Comparison to baselines.** Compared to ULDP-GROUP, ULDP-AVG satisfies ULDP without group-privacy, thus avoiding the large privacy bound caused by group-privacy conversion, the need to choose a group size \(k\), and the removal of records. ULDP-AVG can be used for an arbitrary number of records per user. It also differs from ULDP-NAIVE in the following respect. Fundamentally, per-user clipping can be viewed as cross-user FL (instead of cross-silo FL), which ensures that each user contributes only to the user-specific portion of the aggregated model updates (i.e., \(\sum_{s\in S}\tilde{\Delta}_{t}^{s,u}\)) instead of the entire aggregated update (i.e., \(\sum_{s\in S}\Delta_{t}^{s}\)), thereby reducing sensitivity (as illustrated in Figure 6). Each user contributes only \(1/|U|\) of the entire aggregated model update, which is especially effective when \(|U|\) is large, as in cross-silo FL (i.e., \(|S|\ll|U|\)). Moreover, computing the model delta at the user level leads to lower Gaussian noise variances because of the large \(|U|\), although it also introduces new biases.
### _Better weighting strategy with private protocol_
Here we consider the bias described in Remark 4. In our ULDP-AVG algorithm, we have employed uniform clipping weights, i.e., \(w_{s,u}=1/|S|\) for any \(s\in S\) and \(u\in U\), as a feasible solution to the problem without privacy violation. However, as a more sophisticated solution, we propose the following weighting strategy. We set a weight \(w_{s,u}^{opt}\) for \(C_{s,u}\) according to the number of records, following the heuristic that a gradient computed from a large number of records yields a better estimate that is closely aligned with the average. This results in a smaller bias. That is, letting \(n_{s,u}\) be the number of records for user \(u\) in silo \(s\), we set the weight as
\[w_{s,u}^{opt}:=\frac{n_{s,u}}{\sum_{s\in|S|}n_{s,u}}. \tag{4}\]
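Concretely, Eq. (4) is a per-user normalization over silos. A minimal sketch of ours (the record-count matrix is hypothetical):

```python
import numpy as np

# n_su[s, u]: number of records of user u held by silo s (hypothetical counts).
n_su = np.array([[3, 0, 5],
                 [1, 4, 5],
                 [2, 4, 0]])

# Eq. (4): normalize each user's counts over the silos, so that
# sum_s w_opt[s, u] = 1 holds for every user u.
w_opt = n_su / n_su.sum(axis=0, keepdims=True)
assert np.allclose(w_opt.sum(axis=0), 1.0)
```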
We empirically demonstrate the effectiveness of this strategy in the experiments. However, the crucial question arises: how can it be implemented without privacy violation?
For the aforementioned better weighting strategy, a central server could aggregate histograms of the user population (the number of records per user) within each silo's dataset. Subsequently, the server could compute the appropriate weights for each silo and distribute these weights back to the respective silos. However, this approach raises significant privacy concerns. It leads to a privacy breach because the silo histograms are directly shared with the server. Additionally, when the server broadcasts the weights back to the silos, it enables an estimation of the entire histogram of users across all the silos, posing a similar privacy risk with respect to the other silos. In essence, privacy protection necessitates preserving confidentiality in both of these directions. This is challenging because additive homomorphic encryption techniques such as the Paillier cryptosystem cannot handle the inverses needed to compute weights as in Eq. (4), and the raw weights are disclosed to the party holding the secret key when encrypting the weights.
To address this privacy issue, we design a novel private weighting protocol to securely aggregate the user histograms from silos, compute the per-user clipping weight for each user in each silo, and aggregate the weighted sum from all silos. Our protocol leverages well-established cryptographic techniques, including secure aggregation [26, 32], the Paillier cryptosystem [38], and multiplicative blinding [39]. Intuitively, our protocol employs multiplicative blinding to hide the user histograms from the server, while the server can still compute inverses of the blinded histograms to obtain the weights (Eq. (4)). Subsequently, the server employs Paillier encryption to conceal the inverses of the blinded histograms, because the silos know the blinding masks. This also enables the server and silos to compute a private weighted-sum aggregation via the additive homomorphic property. The complete protocol and its correctness and privacy are described in Appendix B.
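To convey the core blinding-plus-encryption idea, here is a toy sketch of ours using the open-source `phe` (python-paillier) library. It is a deliberate simplification, not the full Appendix B protocol, which additionally uses secure aggregation and applies the weights to model deltas:

```python
from phe import paillier  # pip install phe
import random

n_silos, n_users = 3, 4
# n[s][u]: records of user u held by silo s (private to silo s; toy values).
n = [[random.randint(1, 5) for _ in range(n_users)] for _ in range(n_silos)]

# All silos agree on a shared per-user random mask r[u] (e.g., derived from a
# pairwise key exchange); the server never learns it.
r = [random.uniform(1.0, 10.0) for _ in range(n_users)]

# Silos send blinded histograms; the server only ever sees n[s][u] * r[u].
blinded_totals = [sum(n[s][u] * r[u] for s in range(n_silos))
                  for u in range(n_users)]

# The server inverts the blinded totals and encrypts them under its own
# Paillier key, so the silos (who know r) cannot read them either.
pub, priv = paillier.generate_paillier_keypair(n_length=1024)
enc_inv = [pub.encrypt(1.0 / bt) for bt in blinded_totals]

# Silo s unblinds homomorphically:  Enc(1/(r_u * N_u)) * (r_u * n[s][u])
#   = Enc(n[s][u] / N_u) = Enc(w_opt[s][u]),  where N_u = sum_s n[s][u].
enc_w = [[enc_inv[u] * (r[u] * n[s][u]) for u in range(n_users)]
         for s in range(n_silos)]

# Sanity check; in the real protocol only securely aggregated weighted sums
# are ever decrypted, never individual weights.
w00 = priv.decrypt(enc_w[0][0])
assert abs(w00 - n[0][0] / sum(n[s][0] for s in range(n_silos))) < 1e-6
```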
## IV Experiments
In this section, we evaluate the privacy-utility trade-offs of the proposed methods (ULDP-AVG/SGD), along with the previously mentioned baselines (ULDP-NAIVE/GROUP-\(k\)) and a non-private baseline (FedAVG with two-sided learning rates [36], denoted by DEFAULT). In ULDP-AVG/SGD, we set the weights as \(w_{s,u}=1/|S|\) for all \(s\) and \(u\); the variant using \(w_{s,u}^{opt}\) is referred to as ULDP-AVG-w. Regarding ULDP-GROUP-\(k\), flags \(\mathbf{B}\) are generated for existing records to minimize waste on filtered-out records, despite the potential privacy concerns. Various values, including the maximum number of user records, the median, 2, and 8, are tested as the group size \(k\), and we report GDP using the group-privacy conversion of RDP. In cases where \(k\) is not a power of 2, the computed \(\epsilon\) is reported for the largest power of 2 below \(k\), showcasing a lower bound of GDP to underscore that \(\epsilon\) is large. The hyperparameters, including the global and local learning rates \(\eta_{g}\), \(\eta_{l}\), clipping bound \(C\), and local epoch \(Q\), are set individually for each method. Execution times are measured on macOS Monterey v12.1 (Apple M1 Max chip with 64 GB memory) with Python 3.9 and 3072-bit security. Most results are averaged over 5 runs, and the shaded area in each graph indicates the standard deviation. All implementations and experimental settings are available at [https://github.com/FumiyukiKato/uldp-fl](https://github.com/FumiyukiKato/uldp-fl).
**Datasets** used in our evaluation comprise real-world open datasets, including Creditcard [40] and two medical datasets, HeartDisease and TcgaBrca, from [18], a benchmark for cross-silo FL. We also use MNIST in the Appendix. Creditcard is a popular tabular dataset for credit card fraud detection from Kaggle. We undersample the dataset and use about 25K training records and a neural network with about 4K parameters. For HeartDisease and TcgaBrca, we use the same settings as the original paper, such as the number of silos (i.e., 4 and 6), the data assignments to the silos, the models, and the loss functions. These two datasets are quite small, and the models have fewer than 100 parameters. For all datasets, the allocation of records to users and silos is explained in Appendix D.2.
**Overhead of private weighting protocol.** We evaluate the execution-time overhead of the private weighting protocol. Figure 5 shows the execution times for HeartDisease and TcgaBrca with 10 and 100 users, respectively, under a skewed (Zipf) user distribution. The left figure shows the time required for local training in each silo, and the right figure shows the execution time for key exchange, preparation of the blinded histograms, and aggregation. As shown in the figure, the execution time of local training is dominant, and it increases with a larger number of users. Overall, the protocol shows realistic execution times under these benchmark scenarios [18]. However, it still incurs a non-negligible overhead, with room for improvement in efficiency. We provide further analysis with an artificial dataset in Appendix D.4.
## V Conclusion
In this paper, we proposed the first cross-silo user-level DP FL framework in which a user may have multiple records across silos, and designed an algorithm using per-user clipping to directly satisfy ULDP instead of relying on group-privacy. In addition, we developed a better weighting strategy that improves the utility of our proposed method and a novel protocol that performs it privately. Finally, we demonstrated the effectiveness of the proposed method on several real-world datasets and showed that it performs significantly better than existing methods. We also verified that our proposed private protocol runs in realistic time in existing cross-silo FL benchmark scenarios. Further improving the efficiency of the private protocol and the utility of ULDP algorithms to a level comparable to non-private methods remains future work.
|
2307.02686 | High-resolution electro-optically sampled broadband dual-comb
spectroscopy across mid-IR to terahertz at video rate | Ultrabroadband electro-optic sampling with few-cycle optical pulses is known
to be an extremely sensitive technique to detect electric field amplitudes. By
combining this method with dual-comb spectroscopy and with a new class of
ultrafast lasers, we perform high-resolution (<10 MHz, 0.0003 wavenumbers)
spectroscopic measurements across the whole frequency range of 1.5 to 45 THz
(6.6-200 microns) with an instantaneous spectral coverage exceeding an octave
(e.g., 9-22 microns). As a driving source, we use a pair of highly
mutually-coherent low-noise frequency combs centered at 2.35 microns produced
by mode-locked solid-state Cr:ZnS lasers. One of the two combs is frequency
downconverted via intrapulse difference frequency generation to produce a
molecular sensing comb, while the second comb is frequency doubled to produce a
near-IR comb for electro-optic sampling (EOS). An ultra-low intensity and phase
noise of our dual-comb system allows capturing a vast amount of longwave
spectral information (>200,000 comb-mode spectral lines) at up to a video rate
of 69 Hz and with the high dynamic range limited by the shot noise of the
near-IR EOS balanced detection. Our long-wavelength IR measurements with
low-pressure gases: ethanol, isoprene, and dimethyl sulfide reveal
spectroscopic features that had never been explored before. | Dmitrii Konnov, Andrey Muraviev, Sergey Vasilyev, Konstantin Vodopyanov | 2023-07-05T23:04:54Z | http://arxiv.org/abs/2307.02686v1 | High-resolution electro-optically sampled broadband dual-comb spectroscopy across mid-IR to terahertz at video rate
###### Abstract
Ultrabroadband electro-optic sampling with few-cycle optical pulses is known to be an extremely sensitive technique to detect electric field amplitudes. By combining this method with dual-comb spectroscopy and with a new class of ultrafast lasers, we perform high-resolution (\(<\) 10 MHz, 0.0003 cm-1) spectroscopic measurements across the whole frequency range of 1.5 to 45 THz (6.6-200 \(\upmu\)m) with an instantaneous spectral coverage exceeding an octave (e.g., 9-22 \(\upmu\)m). As a driving source, we use a pair of highly mutually-coherent low-noise frequency combs centered at \(\uplambda\)\(\approx\)2.35 \(\upmu\)m produced by mode-locked solid-state Cr:ZnS lasers. One of the two combs is frequency downconverted via intrapulse difference frequency generation to produce a molecular 'sensing' comb, while the second comb is frequency doubled to produce a near-IR comb for electro-optic sampling (EOS). The ultra-low intensity and phase noise of our dual-comb system allow capturing a vast amount of longwave spectral information (\(>\)200,000 comb-mode spectral lines) at up to a video rate of 69 Hz and with a high dynamic range limited by the shot noise of the near-IR EOS balanced detection. Our long-wavelength IR measurements with low-pressure gases (ethanol, isoprene, and dimethyl sulfide) reveal spectroscopic features that had never been explored before.
## I Introduction
The technique of dual-comb spectroscopy (DCS) [1, 2] has been rapidly expanding over the last two decades, starting from the proof-of-concept work [3] where the key advantages of this method over traditional Fourier transform infrared spectrometry were revealed, namely broadband coverage combined with high spectral resolution, high acquisition speed, high precision, and the absence of moving parts. The mid-infrared (mid-IR) spectral region (3-25 \(\upmu\)m) is of special interest for molecular spectroscopy and trace molecular detection since molecules have their strongest absorption bands across this range. Significant progress in generating extremely broadband mid-IR frequency combs became possible due to the development of mode-locked fiber [4] and solid-state [5, 6, 7, 8] laser combs and the efficient downconversion of their frequencies through optical parametric oscillation (OPO) [9, 10, 11, 12], difference-frequency generation (DFG) [13, 14], and intra-pulse DFG (IDFG) [15, 16] based on advanced \(\chi^{(2)}\) nonlinear crystals [17] (see also [18, 19] and references therein). Simultaneously, great effort has been made in developing chip-scale frequency combs based on microresonators and waveguides [20, 21, 22, 23], quantum cascade lasers [24, 25, 26], and interband cascade lasers [27].
More recently, a system has been reported that for the first time has demonstrated all the advantages of the dual-comb method _simultaneously_, namely: broadband instantaneous spectral coverage (6.6-11.4 \(\upmu\)m), superior resolution (\(<\)0.0027 cm-1) and high detection speed (10 Hz) based on efficient downconversion of phase-locked 2.4-\(\upmu\)m combs to the longwave IR (LWIR) domain via IDFG in zinc germanium phosphide (ZGP) crystals and acquisition of interferograms using a fast liquid-nitrogen-cooled HgCdTe photodetector [28]. However, reaching \(>\)12 \(\upmu\)m wavelengths remains a major challenge for photon detectors, as they suffer from higher noise and slower response at long infrared wavelengths, even when operating at cryogenic temperatures.
Kowligy et al. [29] presented a new approach to DCS that combines the IDFG method to create an LWIR 'sensing' comb and electro-optic detection using a near-infrared (NIR) 'sampling' comb and eliminates the need for cryogenic IR detectors. Essentially, electro-optic sampling (EOS) combines three attractive techniques for low-noise detection of LWIR radiation: up-conversion from LWIR to NIR frequencies, optical time gating that eliminates background noise, and heterodyning that potentially allows quantum-limited LWIR detection [30, 31, 32].
Here we report a novel approach to EOS-DCS using mode-locked Cr:ZnS lasers as the driving source. These lasers have emerged as longwave alternatives to Ti:Sapphire technology, offering several advantages including efficient
pumping schemes and the highest LWIR downconversion efficiency. With this approach, we were able to conduct spectroscopic measurements spanning the entire frequency range from 1.5 to 45 THz (corresponding to wavelengths of 6.6-200 \(\upmu\)m) with an instantaneous spectral coverage of up to an octave, absolute frequency referencing, and the capability to resolve hundreds of thousands of comb-mode lines at video rate.
## II Experimental Setup
### A. Driving laser combs at 2.35 \(\upmu\)m
The front end of our DCS system is a pair of Cr:ZnS laser frequency combs (Fig. 1, inset), each laser consisting of a polycrystalline Cr:ZnS master oscillator pumped by an Er-doped fiber laser (EDFL) at 1567 nm wavelength, and a single-pass Cr:ZnS power amplifier also pumped by an EDFL [6, 7]. The lasers operate at a repetition rate (\(f_{\mathrm{rep}}\)) of 80 MHz, a central wavelength of 2.35 \(\upmu\)m, and a FWHM bandwidth of 280 nm. Depending on the amplifier pump power, the average output power varies from 0.86 to 3.15 W with the pulse duration ranging respectively from 33 to 25 fs.
For the carrier envelope offset frequency (\(f_{\mathrm{ceo}}\)) stabilization, a portion of the oscillator power is deflected with a beam splitter and focused into a periodically poled lithium niobate (PPLN) crystal (Fig. 1, inset). A custom-design PPLN with three sections of different QPM periods generates 2\({}^{\mathrm{nd}}\), 3\({}^{\mathrm{rd}}\) and 4\({}^{\mathrm{th}}\) harmonics that are used for \(f_{\mathrm{ceo}}\) detection via 3\(f\)-to-4\(f\) nonlinear interferometry at \(\lambda\simeq\)0.7 \(\upmu\)m. The error signal is fed into a feedback loop that controls \(f_{\mathrm{ceo}}\) through changing the pump power of the Cr:ZnS oscillator. For the optical referencing of both combs, we utilize the second harmonic of the output, which is transmitted through a dichroic mirror of the oscillator cavity and heterodyned with a stable ultra-narrow linewidth 1064-nm laser. The beat signal \(f_{\mathrm{h}}\) is fed into a feedback loop to control the oscillator roundtrip length with a piezo transducer. The measured \(f_{\mathrm{ceo}}\) and \(f_{\mathrm{h}}\) offsets are phase-locked to synthesized radiofrequency (RF) signals referenced to a Rb clock. The integrated (10 Hz-10 MHz) phase noise of the \(f_{\mathrm{h}}\) and \(f_{\mathrm{ceo}}\) signals was \(<\)0.1 and \(<\)0.05 rad, respectively, which indicates robust phase locking. We consider the mutual coherence time between the two combs to be at least 100 s [28], and the absolute position of each comb tooth is given by the Rb clock accuracy (10\({}^{-10}\)).
### B. Mid-IR to THz combs produced by intrapulse difference frequency generation.
Generation of broadband transients via intra-pulse difference-frequency generation (IDFG) with few-optical-cycle pulses is a relatively simple but powerful technique for generating offset-free combs in the mid-IR to THz regions [31]. Depending on the application, we performed IDFG using two nonlinear crystals: ZnGeP\({}_{2}\) (ZGP) and GaSe. The ZGP crystal allows generation of LWIR transients with a conversion efficiency exceeding 10% (thanks to its high nonlinearity and an excellent group-velocity matching between the 2.35-\(\upmu\)m pump and IDFG output [16]). However,
Fig. 1: Schematic of the EOS DCS spectroscopy setup. Inset (left) shows the Cr:ZnS laser system and its stabilization setup. Inset (right) shows the central portion of the interferogram with the temporal axis in the laboratory frame. EDFL, Er-doped fiber laser; PLL, phase-locked loop; PZT, piezo transducer; OC, output coupler; BS, beam splitter; PD, InGaAs detector for \(f_{\mathrm{h}}\) detection, and Si avalanche photodetector for 3\(f\)-to-4\(f\) interferometry; Rb, Rubidium atomic clock; DP, dispersive plate; OAP, off-axis parabolic mirror; LPF, longpass filter; SPF, shortpass filter; \(\lambda\)/4, quarter-wave plate; WP, Wollaston prism; BPD, InGaAs balanced photodetector.
the spectral span of the output is limited to \(\lambda\)\(<\)12.5 \(\upmu\)m, given by the ZGP transmission cut-off. In contrast, GaSe crystal can produce outputs spanning well beyond 25 \(\upmu\)m, but with lower output power.
In the case of ZGP, the driving laser beam is pre-chirped with a 1-mm-thick sapphire plate (having the opposite sign of the group velocity dispersion) and focused into a 3 mm thick antireflection (AR) coated ZGP crystal using an \(f=75\) mm CaF\({}_{2}\) lens. The ZGP crystal is cut for type I phase matching with \(\theta\)=51\({}^{\circ}\) and \(\varphi\)=0\({}^{\circ}\). Since the IDFG process requires pump with two orthogonal polarizations ('\(o\)' and '\(e\)' waves), dictated by the phase matching, the crystal is first rotated by 45\({}^{\circ}\) around the beam direction so that the original (horizontal) laser polarization will have both '\(o\)' and '\(e\)' components. The final crystal orientation was fine-tuned to produce the highest output power in the spectral region of interest and the generated LWIR beam was collimated with an off-axis parabolic mirror (OAP). With 2.8 W of the driving laser power the IDFG output power reached 300 mW after a 6.7 \(\upmu\)m longpass filter (LPF). The integrated (10 Hz-10 MHz) LWIR intensity noise was measured to be 0.18%.
For IDFG in the GaSe crystal, the driving laser beam was also pre-chirped with a 1-mm sapphire plate and focused into a 1.3 mm thick GaSe crystal using an \(f=75\) mm CaF\({}_{2}\) lens. The crystal orientation is given by the type I phase matching with \(\theta\)=11.3\({}^{\circ}\) and \(\varphi\)=90\({}^{\circ}\) (inset to Fig. 2a). Using 3.15 W of the driving laser power allows generation of 5 mW of the LWIR power after a 7.4 \(\upmu\)m LPF. Fig. 2a represents the phase-matching function for IDFG, which shows that THz waves can be generated concurrently with LWIR waves at the same crystal orientation. Strictly speaking, thinner GaSe crystals are required to create broadband THz output, but this is outside the scope of this paper.
### C. Electro-optic sampling in the dual-comb configuration
In the electro-optic sampling (EOS) method, the electric field of the mid-IR/THz transient is detected via induced change of the polarization of the near-infrared probe pulse inside an electro-optic (EO) crystal, where the two beams overlap. This polarization change is detected via ellipsometry using a balanced NIR photodetector. The key advantage of the DCS-EOS modality is an ultrabroadband (mid-IR to THz) spectral coverage with a single NIR detector, without the need for cryogenically cooled photodetectors [29, 31].
The second 2.35-\(\upmu\)m comb in our setup (Fig. 1) is used to generate a few-cycle NIR probe pulse. The laser output is focused into a 32-\(\upmu\)m-thick GaSe crystal using an OAP mirror to produce the second harmonic (SH). Special care is taken to filter out the SH parasitically produced inside the laser cavity via random phase matching, to avoid distortion of the EOS signal. The SH GaSe crystal is oriented for the type I phase matching with \(\theta\)=19.8\({}^{\circ}\) and \(\varphi\)=90\({}^{\circ}\)
Figure 2: (a) Phase-matching function \(|sinc(\Delta kL/2)|\) of a 1.3-mm-thick GaSe crystal with respect to the generated IDFG output (x-scale) for the driving laser centered at 2.35-\(\upmu\)m wavelength and the internal GaSe phase-matching angle \(\theta\) (y-scale). The inset shows the GaSe crystal orientation. (b) Electro-optic phase-matching function \(|sinc(\Delta kL/2)|\) of a 150-\(\upmu\)m-thick GaSe with respect to the sampled IDFG wavelength (x-scale) for the EOS probe pulse centered at 1.15 \(\upmu\)m. The inset shows the GaSe crystal orientation. The arrows indicate polarizations of interacting waves.
; its thickness is chosen to utilize the full bandwidth of the driving laser in the SH generation process and to avoid the effect of the spatial walk-off. The generated probe has a central wavelength of 1.15 \(\upmu\)m and a 39 THz bandwidth at the -10 dB level. We did not measure the SH pulse duration, but based on the width of the spectrum, it is expected to be close to 20 fs. With 2 W of pump power, the SH output power was 70 mW with a vertically polarized beam and a high (\(>\)99%) degree of polarization.
Next, both the probe and LWIR beams are spatially overlapped using an OAP mirror with a through hole (Fig. 1) and focused into yet another (EOS) GaSe crystal with a thickness of 150 \(\upmu\)m, oriented at \(\theta\)=13\({}^{\circ}\) and \(\varphi\)=90\({}^{\circ}\) for type I phase-matching (inset to Fig. 2b), corresponding to the sum (LWIR+NIR) frequency generation (SFG). Since the LWIR beam has a 45\({}^{\circ}\) polarization with respect to the probe beam, only its vertical projection (\(o\)-wave) participates in the nonlinear interaction, while the horizontal component does not affect the SFG and thus the balanced detector signal. Fig. 2b shows the phase-matching function for the 150-\(\upmu\)m-thick EOS GaSe.
After the EOS GaSe crystal (Fig. 1), the probe pulse goes through a short-pass filter (SPF, \(\lambda\)\(<\)1100 nm) to improve the dynamic range and the signal-to-noise ratio (SNR) of EOS by increasing the share of the SFG signal that carries the spectral information with respect to the total power of the NIR probe. Next, the beam is sent to an ellipsometry setup consisting of a quarter-wave plate, a Wollaston prism, and an InGaAs balanced photodetector (Thorlabs PDB450C, bandwidth 45 MHz). An attenuation wheel is used to keep the total power in each detector just below saturation (\(\sim\)1 mW). The differential signal (interferogram) from the balanced detector is radiofrequency filtered, digitized with a 16-bit analog-to-digital converter, coherently averaged, Fourier transformed, and frequency up-scaled to obtain the LWIR spectrum.
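The essence of this processing chain can be conveyed by a short numpy sketch (our own illustration with stand-in data; real processing must also account for the comb offset frequencies, which we ignore here):

```python
import numpy as np

f_rep, df_rep = 80e6, 69.0        # repetition rate and comb offset [Hz]
n = 2**16                         # samples per interferogram (shortened here)
igms = np.random.randn(50, n)     # stand-in for the digitized interferograms

avg = igms.mean(axis=0)                       # coherent averaging
spec = np.abs(np.fft.rfft(avg)) ** 2          # RF power spectrum
f_rf = np.fft.rfftfreq(n, d=1.0 / f_rep)      # RF axis, one sample per pulse pair
f_lwir = f_rf * (f_rep / df_rep)              # up-scale RF -> optical axis [Hz]
wavenumber = f_lwir / 2.9979e10               # optical axis in cm^-1
```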
Shown in Fig. 3a is the spectrum obtained with GaSe as an IDFG crystal. One can see that the GaSe crystal produces a broad comb spanning 650 cm-1 (9-22 \(\upmu\)m) at the -40 dB level. The spectral feature at 667 cm-1 corresponds to the carbon dioxide absorption in the air. Interestingly, one can see a dip at \(\sim\)510 cm-1 (15.3 THz) related to the two-phonon lattice absorption of GaSe. Similarly, Fig. 3b depicts the spectrum obtained with ZGP as an IDFG crystal. Here we used two different phase-matching angles for the EOS GaSe crystal. The two different spectral contours indicate that the EOS detection bandwidth was not high enough to capture the whole IDFG spectrum. The absorption peaks at \(>\)1300 cm-1 are due to water absorption in the surrounding air, and the sharp peaks near 900-1000 cm-1 are due to isoprene absorption in the optical gas cell.
## III High-resolution field-resolved dual-comb spectroscopy of molecules
Figure 3: (a) The spectrum obtained with GaSe as an IDFG crystal. (b) Spectra obtained with ZGP as an IDFG crystal for two different phase-matching angles of the EOS GaSe.
In this section, we present our high-resolution spectroscopy study of several molecules that play an important role in exobiology and medical breath analysis. According to the DCS method, molecular vibrations are excited by few-cycle LWIR pulses and subsequently emit (in the same direction) a coherent electric field detected through EOS. The nominal spectral sampling step is determined in our setup by the comb-mode spacing (80 MHz). However, when the absorption linewidths (predominantly Doppler-broadened in our case) are less than this spacing, we use the method of spectral interleaving, i.e., we combine the spectra taken with progressively shifted combs, in which case the spectral resolution can be well below 80 MHz [33]. The absorbance spectra for molecules (defined as \(A\)=-ln(\(I\)/\(I_{0}\)), where \(I\) is the spectral intensity of a gas-filled cell, and \(I_{0}\) is that of an empty cell) were obtained by normalizing the 'sample' spectrum to the one taken with vacuum in the cell.
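As a minimal sketch of this normalization and interleaving step (our own illustration; the array names are hypothetical):

```python
import numpy as np

def interleaved_absorbance(freq_list, sample_list, vacuum_list):
    """Merge K comb-shifted measurements into one absorbance spectrum.

    freq_list[k]   : comb-mode frequencies of the k-th shifted comb
    sample_list[k] : intensities I measured with the gas-filled cell
    vacuum_list[k] : reference intensities I0 with the evacuated cell
    """
    f = np.concatenate(freq_list)
    A = -np.log(np.concatenate(sample_list) / np.concatenate(vacuum_list))
    order = np.argsort(f)                 # interleave by sorting in frequency
    return f[order], A[order]             # absorbance A = -ln(I/I0)
```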
### A. Mixture of CO\({}_{2}\) and C\({}_{2}\)H\({}_{2}\)
We started by taking a high-resolution EOS-DCS absorption spectrum of a low-pressure mixture of carbon dioxide (CO\({}_{2}\)) and acetylene (C\({}_{2}\)H\({}_{2}\)), as shown in Fig. 4. The two combs operated at \(f_{\rm rep}\)=80 MHz with a repetition frequency offset between the combs of \(\Delta f_{\rm rep}\)=91 Hz. A 20 cm-long absorption cell with antireflection (AR) coated (7-12 \(\upmu\)m) Ge windows was filled with the gas mixture with a concentration of around 1% for each molecule in N\({}_{2}\) buffer gas at 4 mbar total pressure. As a sensing comb, we used the IDFG comb produced in GaSe with a spectrum similar to the one shown in Fig. 3a. The absorption spectrum shown in Fig. 4a is combined from 8 interleaved comb-mode-resolved spectra, which allows us to fully resolve the narrow (66 MHz) absorption features. The expanded views of the CO\({}_{2}\) and C\({}_{2}\)H\({}_{2}\) spectra separately and their comparison with the HITRAN simulation (shown as inverted peaks) are depicted in Figs. 4b and 4c, respectively. Figure 4d illustrates the combination of eight distinct spectra obtained using shifted combs, each represented by points of varying colors. These spectra were combined to create a unified high-resolution spectrum, with an average spacing of 10 MHz.
### B. Methanol
Figure 4: (a) High-resolution spectrum of a mixture of CO\({}_{2}\) and C\({}_{2}\)H\({}_{2}\) molecules with N\({}_{2}\) as a buffer gas at 4 mbar pressure. (b) Expanded view of the CO\({}_{2}\) spectrum and its comparison with the HITRAN simulation. (c) Expanded view of the C\({}_{2}\)H\({}_{2}\) (acetylene) spectrum and its comparison with the HITRAN simulation. (d) Zoomed-in absorption line showing how spectral data points corresponding to shifted combs were combined in one spectrum.
A high-resolution spectrum of methanol with a predominantly Doppler-broadened linewidth (76 MHz) is shown in Fig. 5. In this and subsequent experiments, the two combs were operated with a repetition frequency offset of \(\Delta f_{\mathrm{rep}}\)=69 Hz. A 45 cm-long absorption cell with AR coated Ge windows was filled with methanol vapor at 0.81 mbar pressure. As the sensing comb, we used the IDFG output from the ZGP crystal. The absorption spectrum of Fig. 5 is a combination of 11 interleaved comb-mode-resolved spectra. The simulated (HITRAN) spectrum is shown in red and inverted for clarity.
It can be seen from Figs. 4-5 that there is excellent agreement with the HITRAN simulation for CO\({}_{2}\), C\({}_{2}\)H\({}_{2}\), and methanol in terms of line positions and line widths.
Figure 6: High-resolution LWIR spectrum of ethanol at 2 mbar pressure. (b) –(d) Zoomed-in portions of spectrum.
Figure 7: High-resolution LWIR spectrum of isoprene at 4 mbar pressure. (b) – (d) Zoomed-in portions of spectrum.
## IV Measurements at video rate
Thanks to the high SNR of our DCS-EOS system, we explored the capability of performing high-speed broadband LWIR spectroscopy. As the sensing comb, we used the emission spectrum produced via IDFG in ZGP consisting of approximately 200,000 comb modes, with a comb center frequency around 1000 cm-1 and a comb width of 530 cm-1 at the -20 dB level. A 45-cm-long single-pass optical gas cell was filled with methanol vapor at a partial pressure of \(\sim\) 1 mbar diluted in air at a total pressure of 20 mbar. Fig. 9 shows a portion of the spectrum of methanol at different acquisition times. Even at a 0.0145-s acquisition time (69 Hz rate, single interferogram), we were able to detect the fine structure of methanol with a signal-to-noise ratio of 22. We did not use time-domain signal apodization, hence the full 80-MHz (comb-mode-resolved) spectral resolution was preserved.
## V Spectroscopy at terahertz (2-15 THz) frequencies
To demonstrate our system's ability to measure spectra at wavelengths \(>\)20 \(\upmu\)m, we first adjusted the phase-matching angle of the EOS GaSe to detect a longwave portion (320 - 500 cm-1, 9.6-15 THz) of the IDFG spectrum produced in GaSe (Fig. 10a). Since this spectral range contains numerous prominent water absorption lines in the surrounding atmosphere, we did not use the optical gas cell. Figure 10c displays the absorbance spectrum in this region (obtained by taking the negative natural logarithm of the emission spectrum and subtracting the baseline). The spectrum was compared with the HITRAN simulation (displayed in red color and inverted), revealing a good agreement between the two.
Similarly, we tuned the EOS GaSe crystal (by reducing its phase-matching angle \(\theta\)) for the field detection in the terahertz region, below the GaSe crystal's strongly absorbing Reststrahlen band at about 5-10 THz (Fig. 10b). Despite the fact that the thicknesses of both the IDFG and EOS GaSe crystals were not optimized for terahertz generation and detection, we were able to observe a noticeable band at 1.5-5 THz. In order to verify that the observed band was not an artifact, we derived the absorbance spectrum from this band, as depicted in Figure 10d, and then compared this spectrum with the HITRAN simulation (displayed in red color and inverted) for absorption in ambient air predominantly contributed by water. We observed an agreement between the observed and simulated THz absorption peaks, with the exception that some of our measured peaks were saturated due to a lower signal-to-noise ratio in this particular region of the spectrum.
Figure 9: Portion of the absorption spectrum of methanol between 1008.5 and 1011 cm-1 corresponding to different averaging times ranging from 0.0145 s (single interferogram) to 10 s (687 interferograms). A 45-cm-long optical gas cell was filled with methanol vapor at a partial pressure \(\sim\) 1 mbar diluted in air at a total pressure of 20 mbar.
## VI Detection limits and the figure of merit
In the time domain, our EOS-DCS signal exhibits a signal-to-noise ratio (SNR) of typically 1.2\(\times\)10\({}^{3}\) for a single interferogram (we define SNR as the ratio of the peak signal to its standard deviation at the end of the interferogram, where we assume the field from the molecular free induction decay is negligible). The SNR scales as the square root of the number of averages, as verified in our experiments with up to 10\({}^{6}\) averages.
Based on our estimation, this SNR is determined by the shot noise of the NIR balanced photodetector. In fact, when operating near the nominal saturation power of 1 mW for each detector channel (corresponding to \(I_{0}\sim\) 0.8 mA detector current), we find that for the 45-MHz detector bandwidth and with the peak differential current \(\Delta I\)\(\sim\)0.25\(\times\)\(I_{0}\) observed in our experiment, the shot noise-limited SNR for a single interferogram is approximately 1.3\(\times\)10\({}^{3}\), which closely matches our experimental SNR. Considering the fact that SNR scales as the square root of the averaging time, for a comb span of \(\sim\) 500 cm-1 we can achieve an \(E\)-field dynamic range of 10\({}^{6}\), corresponding to an intensity dynamic range of 10\({}^{12}\), in \(\sim\)150 min of averaging.
In the frequency domain, by measuring the spectral power noise \(\sigma_{\rm s}\) vs. the averaging time \(\tau\) in our experiment (we define \(\sigma_{\rm s}\) as the fractional standard deviation of the spectral power density near the spectral maximum), we find that the spectral SNR (=1/\(\sigma_{\rm s}\)) scales as \(\sqrt{\tau}\), such that SNR/\(\sqrt{\tau}\)=62 Hz\({}^{1/2}\). For the number of modes M \(\approx\) 200,000 within the central (-20 dB level) 530 cm\({}^{-1}\)-wide portion of our typical LWIR comb (Fig. 3), this gives the DCS figure of merit (defined in [2]) of M\(\times\)SNR/\(\sqrt{\tau}\) = 1.2\(\times\)10\({}^{7}\) Hz\({}^{1/2}\). This figure of merit surpasses the best reported value in the LWIR range of 7.3\(\times\)10\({}^{6}\) Hz\({}^{1/2}\)[28] and provides a strong argument in favor of the EOS-DCS modality, especially when operated at longer wavelengths.
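The quoted numbers can be reproduced with a few lines of arithmetic (our own sanity check of the estimates above):

```python
import numpy as np

e, I0, B = 1.602e-19, 0.8e-3, 45e6   # electron charge [C], current [A], BW [Hz]
dI = 0.25 * I0                       # peak differential signal current [A]

# Shot noise of a balanced photodiode pair: variance 2 * (2 e I0 B).
sigma_shot = np.sqrt(2 * 2 * e * I0 * B)
print(dI / sigma_shot)               # ~1.3e3, the single-interferogram SNR

# Interferograms needed for an E-field dynamic range of 1e6 at 69 Hz.
n_igm = (1e6 / 1.2e3) ** 2
print(n_igm / 69 / 60)               # ~170 min, i.e., on the order of 150 min

# DCS figure of merit: M x SNR / sqrt(tau).
print(200_000 * 62)                  # ~1.2e7 Hz^(1/2)
```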
Figure 10: (a) EOS-DCS spectrum that features atmospheric absorption in the 300-500 cm-1 (9-15 THz) region. (b) EOS-DCS spectrum featuring atmospheric absorption lines below 5 THz. (c) Absorbance spectrum derived from the window 1 of the emission spectrum shown in (a). The simulated (HITRAN) spectrum for the ambient air is shown in red and inverted for clarity. (d) Absorbance spectrum derived from the window 2 of the emission spectrum shown in (b). The simulated (HITRAN) spectrum for the ambient air is shown in red and inverted.
## VII Conclusion
In summary, using the EOS-DCS modality we performed high-resolution spectroscopic measurements across an ultra-broadband, 1.5 to 45 THz (6.6-200 \(\upmu\)m), longwave frequency range with acquisition of octave-wide spectra with 200,000 comb-mode-resolved lines at video rate (69 Hz). This result was facilitated by utilizing, as the driving source, mode-locked 2.35-\(\upmu\)m Cr:ZnS lasers with the benefits of low noise and the ability to provide high (up to more than 10%) IDFG power conversion efficiency. With the nominal spectral resolution given by the comb-mode spacing (80 MHz, 0.0027 cm\({}^{-1}\)), we were able to perform measurements with better than 10 MHz resolution via spectral interleaving. Also, we demonstrated our system's ability to perform simultaneous measurements in the LWIR and THz domains. This opens up numerous possibilities for applications in fundamental spectroscopy, such as simultaneously studying absorption strengths of mid-IR and THz bands within the same experiment, allowing for cross-linking of molecular information. It also paves the way for creating highly accurate molecular spectroscopic databases and enables real-time medical diagnostics through multi-species exhaled breath analysis. Our next step will be compressing the driving 2.35-\(\upmu\)m pulses to sub-10 fs duration, which will allow extending the spectral coverage to the whole 1-100 THz range.
## Acknowledgements
We acknowledge support from the Defense Advanced Research Projects Agency (DARPA), grant number W31P4Q-15-1-0008; from the Office of Naval Research (ONR), grants numbers N00014-15-1-2659, N00014-18-1-2176, N00014-17-1-2705, and N68335-20-C-0251; and from the Department of Energy (DOE), grant number B&R #KA2601020; US Air Force Office of Scientific Research (AFOSR), grant number FA9550-23-1-0126.
|
2303.01678 | Nonlinear ill-posed problem in low-dose dental cone-beam computed
tomography | This paper describes the mathematical structure of the ill-posed nonlinear
inverse problem of low-dose dental cone-beam computed tomography (CBCT) and
explains the advantages of a deep learning-based approach to the reconstruction
of computed tomography images over conventional regularization methods. This
paper explains the underlying reasons why dental CBCT is more ill-posed than
standard computed tomography. Despite this severe ill-posedness, the demand for
dental CBCT systems is rapidly growing because of their cost competitiveness
and low radiation dose. We then describe the limitations of existing methods in
the accurate restoration of the morphological structures of teeth using dental
CBCT data severely damaged by metal implants. We further discuss the usefulness
of panoramic images generated from CBCT data for accurate tooth segmentation.
We also discuss the possibility of utilizing radiation-free intra-oral scan
data as prior information in CBCT image reconstruction to compensate for the
damage to data caused by metal implants. | Hyoung Suk Park, Chang Min Hyun, Jin Keun Seo | 2023-03-03T02:46:15Z | http://arxiv.org/abs/2303.01678v1 | # Nonlinear ill-posed problem in low-dose dental cone-beam computed tomography1
###### Abstract
This paper describes the mathematical structure of the ill-posed nonlinear inverse problem of low-dose dental cone-beam computed tomography (CBCT) and explains the advantages of a deep learning-based approach to the reconstruction of computed tomography images over conventional regularization methods. This paper explains the underlying reasons why dental CBCT is more ill-posed than standard computed tomography. Despite this severe ill-posedness, the demand for dental CBCT systems is rapidly growing because of their cost competitiveness and low radiation dose. We then describe the limitations of existing methods in the accurate restoration of the morphological structures of teeth using dental CBCT data severely damaged by metal implants. We further discuss the usefulness of panoramic images generated from CBCT data for accurate tooth segmentation. We also discuss the possibility of utilizing radiation-free intra-oral scan data as prior information in CBCT image reconstruction to compensate for the damage to data caused by metal implants.
cone-beam computed tomography; ill-posed inverse problem; deep learning; metal artifact reduction.
## 1 Introduction
Computed tomography (CT) is an established diagnostic imaging tool that produces cross-sectional images (i.e., slices of anatomy) using projection data (denoted by P), which consist of a series of X-ray images taken at various angles around the human body. It uses the X-ray ionizing radiation (electromagnetic waves with wavelengths ranging from \(10^{-8}\) to \(10^{-12}\) m) discovered by Wilhelm Röntgen, who was awarded the first Nobel Prize in Physics in 1901 [40]. In medical CT, polychromatic X-ray beams pass through the body onto a two-dimensional array of detectors that acquires the projection data (i.e., X-ray images).
The relationship between projection data P and CT image \(\mu\) (assigning an X-ray attenuation coefficient) is given by Lambert-Beer's law [4, 60], which, in a two-dimensional (2D) parallel beam CT model, is
\[\mathrm{P}^{\sharp}(\theta,s)=-\ln\left(\int\eta(E)\exp\big{\{}-[\mathcal{R}(\mu_{E})](\theta,s)\big{\}}dE\right), \tag{1}\]
where \(\mathrm{P}^{\sharp}(\theta,s)\) denotes the projection data of the 2D parallel beam CT at projection angle \(\theta\in[0,2\pi)\) and detector position \(s\), \(\mu_{E}\) denotes the attenuation coefficient distribution at photon energy level \(E\), \(\eta(E)\) represents the fractional energy at \(E\)[38, 81], and \(\mathcal{R}(\mu_{E})\) is the Radon transform of \(\mu_{E}\). Although CT theory began with this parallel-beam model, parallel CT is not used in clinical practice. However, parallel CT is very similar to fan-beam multi-detector CT (MDCT) from the mathematical point of view of CT image reconstruction, and MDCT is widely used in clinical practice.
Conventional CT image reconstruction is based on the ideal monochromatic assumption; \(\eta(E)=\delta(E-E_{0})\) in (1), where \(\delta\) is the Dirac delta function and \(E_{0}\) is a fixed energy. Under this monochromatic assumption, the inverse problem (1) becomes the well-posed linear inverse problem \(\mathrm{P}^{\sharp}=\mathcal{R}\mu\), where \(\mu=\mu_{E_{0}}\). In 1917, Johann Radon [91] found that a CT image \(\mu\) could be reconstructed from \(\mathrm{P}^{\sharp}\) (X-ray images in all directions). In the late 1960s, the first clinical CT scanner was developed by Hounsfield [36], and Hounsfield and Cormack [13] shared the Nobel Prize for Medicine in 1979. The CT image reconstruction algorithm currently used for clinical CT (or MDCT) is based on the filtered back-projection (FBP) algorithm [5], built under the ideal assumption that \(\mathrm{P}^{\sharp}\) is in the range of the Radon transform [93].
However, the actual model of CT is nonlinear due to the polychromatic nature of the incident X-ray beam. Hence, an inconsistency exists in the mathematical model \(\mathrm{P}^{\sharp}=\mathcal{R}\mu\) because of the interaction between the applied X-ray beam and tissues. FBP ignores the nonlinear structure of P arising from the change of the tissue property \(\mu\) with respect to \(E\), weighted by the fractional energy distribution \(\eta(E)\) in (1). The \(\mu_{E}\) of high-attenuation materials such as metals varies significantly with \(E\), and hence the presence of metals in the field-of-view (FOV) of clinical CT produces a serious discrepancy in the model \(\mathrm{P}^{\sharp}=\mathcal{R}\mu\), resulting in serious streaking artifacts associated with the metal geometry when the FBP algorithm is applied. Metal artifact reduction (MAR) in CT is becoming increasingly important as the number of older adults with artificial prostheses and metallic implants increases rapidly. Unfortunately, although numerous MAR methods have been suggested over the past 40 years [1, 59, 114], MAR remains a difficult problem because of the nonlinear structure of the inverse problem (1) associated with \(\eta(E)\)[85].
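The nonlinearity of (1) can be isolated in a one-ray toy simulation (our own sketch; the spectrum and attenuation curve below are invented for illustration, not taken from any real scanner):

```python
import numpy as np

E = np.linspace(20.0, 120.0, 101)          # photon energies [keV]
eta = np.exp(-((E - 70.0) / 25.0) ** 2)    # toy source spectrum
eta /= np.trapz(eta, E)                    # normalize so the integral is 1

mu = 0.3 * (70.0 / E)                      # toy energy-dependent mu_E [1/cm]
lengths = np.linspace(0.0, 40.0, 50)       # ray path lengths through tissue [cm]

# Polychromatic data of Eq. (1) vs the linear model at E0 = 70 keV.
P_poly = np.array([-np.log(np.trapz(eta * np.exp(-mu * L), E)) for L in lengths])
P_mono = 0.3 * lengths
# P_poly grows sub-linearly in the path length: this is exactly the
# beam-hardening mismatch that FBP ignores.
```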
Although MDCT is known as the most accurate and reliable imaging technique, it is rarely used in small dental clinics because of its high equipment cost and the large space required for its use. By contrast, dental cone-beam CT (CBCT), an alternative to MDCT, is increasingly being used in dental clinics. Dental CBCT is an important element of digital dentistry [26] and is used in various dental fields such as implant/prosthetics, oral and maxillofacial surgery, and orthodontic treatment. Most dental CBCT devices are designed to allow the patient to be scanned while sitting or standing, requiring less space in the dental office. Moreover, most dental CBCT devices are designed to reduce the radiation dose by limiting the scan's FOV [117].
Currently, dental CBCT is being developed in the direction of providing high-resolution images (like those of MDCT), while optimizing data collection in terms of low invasiveness and cost-effectiveness. This ambition leads to a highly ill-posed inverse problem in the sense of Hadamard [34] as follows:
* Non-existence: The existence of a solution is not guaranteed because most of the measured sinogram data do not match a linear model. Given the polychromatic nature of the incident X-ray beam, discrepancies may exist between the actual projection data and the mathematical model, as illustrated in Fig. 1.
* Non-uniqueness: Low-dose dental CBCT can be viewed as an underdetermined problem (fewer equations than unknowns) because of its intention to provide high-resolution images with as few data as possible, as illustrated in Fig. 2 (b).
* Instability: A local inconsistency in the projection data can generate severe global streaking artifacts, as shown in Fig. 3.
As the demands to reduce the X-ray dose and maintenance costs become increasingly stringent, the inverse problem of CBCT becomes increasingly ill-posed.
This paper describes the mathematical structure of the ill-posed inverse problem of low-dose dental CBCT and explains the advantages of a deep learning-based approach for CT image reconstruction over conventional regularization methods.
Solving the ill-posed problem of low-dose dental CBCT requires the use of an image prior for the expected images. Various regularization techniques for constraining the solution have been proposed, such as total variation (TV) [78, 103], sparse representations [14, 18, 52] and deep learning-based models [9, 45, 106]. Although these methods achieved remarkable performance in reducing noise, they still have a limited ability to handle metal-induced artifacts in the dental CBCT environment, where metal artifacts are common. Handling metal-induced artifacts in dental CBCT is difficult because its global structure is not only non-linearly influenced by local metal geometry, but is also entangled with complex factors associated with metal-bone and metal-teeth interactions, FOV truncation, offset detector acquisition, scattering, nonlinear partial volume effects, and other factors [6, 10, 41]. This paper describes recent complementary approaches to acquiring concrete image priors using generated panoramic images and a radiation-free oral scanner.
Figure 1: Nonlinear inverse problem in polychromatic X-ray CT image reconstruction. The polychromatic data P can be viewed as the spectrum-weighted sum of monochromatic data P\({}_{E}\), which leads to serious mismatch between the real projection data and mathematical model in the presence of metallic implants. Due to the inherent nature of FBP (i.e., the pseudo inverse of the Radon transform), the local inconsistency of P\((\varphi,u,0)\) generates severe global artifacts in the reconstructed image \(\mu(\mathbf{x},0)\) (the right middle image), which appear as streaking and shading artifacts.
## 2 Mathematical framework of low-dose dental CBCT
### 2.1 Inverse problem
The mathematical model of low-dose dental CBCT (planar detector) can be described using the following factors and notations:
* **CT image for reconstruction:** The 3D image \(\mu(\mathbf{x},z)\) at point \((\mathbf{x},z)=(x_{1},x_{2},z)\in\mathbb{R}^{3}\) (or tissue density) is reconstructed from measured projection data P.
* **X-ray beam for CT:** A cone-shaped X-ray beam is passed through a patient positioned between an X-ray source and flat-panel detector. This beam is transmitted in different directions by rotating the gantry that houses the X-ray source and detector.
* **Radiation exposure:** CBCT devices emit a radiation dose in the range of 36.9-50.3 microsieverts (\(\mu\)Sv). Collimation, which limits the cross-sectional area of the X-ray beam to the area of the image receptor, reduces radiation exposure.
* **CT projection data:** CBCT projection data \(\text{P}(\varphi,u,v)\) are acquired using a planar detector after emitting the X-ray beams in several cone-beam directions, where \(\Theta_{\varphi}=(\cos\varphi,\sin\varphi),\ \varphi\in[0,2\pi)\) is the projection angle, and \((u,v)\) is the scaled planar detector position with scaling factor \(\frac{r}{R}\). Here, \(r\) is the source-to-rotation axis distance and \(R\) is the source-to-detector plane distance, as illustrated in Fig. 4. We define the cone-beam transform \(\mathcal{C}\mu(\varphi,u,v)\) as \[\mathcal{C}\mu(\varphi,u,v)=\int_{0}^{\infty}\mu((r\Theta_{\varphi}^{\perp},0)+t\omega_{u,v})dt,\] (2.1) where \(\Theta_{\varphi}^{\perp}=(\sin\varphi,-\cos\varphi),\ (r\Theta_{\varphi}^{\perp},0)\) describes how the source position moves around a source circle of radius \(r\), and \(\omega_{u,v}\) denotes the cone-beam direction starting at \((r\Theta_{\varphi}^{\perp},0)\) and ending at detector position \((u,v)\). In addition, \(\mathcal{C}\mu(\varphi,u,v)\) is the line integral of \(\mu\) along the cone-beam line associated with \((\varphi,u,v)\). The relationship between data location \((\varphi,u,v)\) and image location \((\mathbf{x},z)\)
Figure 2: MDCT and CBCT geometry. (a) MDCT: Conventional CT acquires projection data using a fan-shape X-ray beam that moves in a spiral. (b) Dental CBCT uses a rectangular cone-shape beam that rotates once. (c) CBCT scan with a full detector. (d) CBCT scan with an offset detector. Most dental CBCT machines use an offset detector to acquire only half of the extended FOV with a single projection.
is as follows: \[u=r\frac{\mathbf{x}\cdot\Theta_{\varphi}}{U_{\varphi,\mathbf{x}}}\ \ \text{ and }\ v=z\frac{r}{U_{\varphi,\mathbf{x}}}\ \ \ \text{ where }U_{\varphi,\mathbf{x}}=r+\mathbf{x}\cdot\Theta_{\varphi}^{\perp}.\] (2.2) (A short code sketch of this mapping follows the list below.) The measured data P can be expressed as \[\text{P}(\varphi,u,v)=-\ln\int\eta(E)\exp(-\mathcal{C}\mu_{E}(\varphi,u,v))dE+\mathbf{noise},\] (2.3) where \(\eta(E)\) represents the fractional energy at photon energy \(E\) in the spectrum of the X-ray source with \(\int_{\mathbb{R}}\eta(E)dE=1\), as shown in Fig. 1.
* **Inverse Problem:** The goal of dental CBCT is to recover a proper tomography image \(\mu\) from P using the nonlinear relation (2.3). Our aim is to reconstruct \(\mu\) such that \(\mu=\mu_{E_{0}}\) for a fixed energy level \(E_{0}\).
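As referenced in the list above, the coordinate mapping (2.2) translates directly into code. The following is a minimal sketch of ours; only \(r\) is assumed given:

```python
import numpy as np

def detector_coords(x1, x2, z, phi, r):
    """Scaled detector position (u, v) of the cone-beam ray through (x1, x2, z)
    at gantry angle phi, per Eq. (2.2); r is the source-to-rotation-axis distance."""
    x_dot_theta = x1 * np.cos(phi) + x2 * np.sin(phi)   # x . Theta_phi
    x_dot_perp = x1 * np.sin(phi) - x2 * np.cos(phi)    # x . Theta_phi^perp
    U = r + x_dot_perp                                  # U_{phi, x}
    return r * x_dot_theta / U, z * r / U               # (u, v)
```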
Let us begin by reconstructing \(\mu(\mathbf{x},0)\) from the projection data P at level \(v=0\). For ease of explanation, we transform \(\text{P}(\varphi,u,0)\) into the corresponding parallel beam data \(\text{P}^{\sharp}(\theta,s)\) as follows:
\[\text{P}^{\sharp}(\underbrace{\gamma_{u}+\varphi}_{\theta},\underbrace{r\sin \gamma_{u}}_{s})=\text{P}(\varphi,u,0), \tag{2.4}\]
Figure 3: Metal-induced streaking artifacts. The beam hardening corrector \(\phi_{D,\lambda}\) is a function of \(D\) (metal geometry) and the control parameter \(\lambda\) associated with all energy-dependent factors, including the attenuation coefficients and the spectrum of the X-ray source. Here, \(\mathcal{I}^{-1}\) is the Riesz potential of degree -1.
where \(\gamma_{u}\) is the fan angle given by \(u=r\tan\gamma_{u}\) and \(\varphi+\frac{\pi}{2}\) is the beam-source angle, as shown in Fig. 4. The identity (2.4) is derived from the following relation between the fan-beam line \(\ell_{\varphi,u}^{\text{\tiny{fan}}}:=\{\mathbf{x}\in\mathbb{R}^{2}\;:\;\mathbf{x}\cdot\Theta_{\varphi}=r\sin\gamma_{u}\}\) and parallel-beam line \(\ell_{\theta,s}^{\sharp}:=\{\mathbf{x}\in\mathbb{R}^{2}\;:\;\mathbf{x}\cdot\Theta_{\theta}=s\}\) as follows:
\[\ell_{\theta,s}^{\sharp}=\ell_{\varphi,u}^{\text{\tiny{fan}}}\;\;\Longleftrightarrow\;\;s=r\sin\gamma_{u},\;\;\theta=\gamma_{u}+\varphi. \tag{2.5}\]
Under the ideal assumption of \(\eta(E)=\delta(E-E_{0})\) (a monochromatic beam) and no noise, we have
\[\mathrm{P}^{\sharp}=\mathcal{R}\mu_{E_{0}}, \tag{2.6}\]
where the Radon transform \(\mathcal{R}\mu_{E_{0}}\) is defined by
\[\mathcal{R}\mu_{E_{0}}(\theta,s)=\int_{\mathbb{R}^{2}}\mu_{E_{0}}(\mathbf{x},0)\delta(\mathbf{x}\cdot\Theta_{\theta}-s)d\mathbf{x}. \tag{2.7}\]
Under the above ideal assumption, we have the FBP algorithm, given by
\[\mu_{E_{0}}(\mathbf{x},0)=\mathcal{R}^{-1}\mathrm{P}^{\sharp}(\mathbf{x})= \frac{1}{8\pi^{2}}\int_{0}^{2\pi}\int_{\mathbb{R}}|\xi|\mathcal{F}_{s}[ \mathrm{P}^{\sharp}(\theta,\cdot)](\xi)\exp(i\xi\mathbf{x}\cdot\Theta_{\theta })d\xi d\theta, \tag{2.8}\]
where \(\mathcal{F}_{s}\) is the 1D Fourier transform with respect to variable \(s\). However, in practice, because of the polychromatic nature of the X-ray beam, \(\mathrm{P}^{\sharp}\) in (2.4) may not lie in the range space of the Radon transform, violating (2.6). Hence, in the presence of a metallic material whose attenuation coefficient varies significantly with \(E\), the FBP reconstruction \(\mathcal{R}^{-1}\mathrm{P}^{\sharp}\) may produce severe artifacts, as shown in Fig. 1.
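To make the reconstruction formula (2.8) concrete, here is a minimal discrete sketch of ours; the grids, interpolation, and overall normalization constant are our own illustrative choices, not the paper's implementation:

```python
import numpy as np

def fbp_parallel(sinogram, thetas, grid):
    """Approximate version of Eq. (2.8): ramp-filter each parallel projection,
    then back-project. sinogram[i, j] = P_sharp(thetas[i], s[j]) on a uniform,
    centered s-grid; grid is a 1D array of x/y coordinates in the same units."""
    n_theta, n_s = sinogram.shape
    ds = 1.0
    s = (np.arange(n_s) - n_s // 2) * ds
    xi = np.fft.fftfreq(n_s, d=ds)
    filtered = np.real(np.fft.ifft(np.abs(xi) * np.fft.fft(sinogram, axis=1),
                                   axis=1))             # ramp filter |xi|
    X, Y = np.meshgrid(grid, grid, indexing="ij")
    img = np.zeros_like(X, dtype=float)
    for theta, proj in zip(thetas, filtered):
        t = X * np.cos(theta) + Y * np.sin(theta)       # x . Theta_theta
        img += np.interp(t, s, proj)                    # back-projection
    return img * np.pi / n_theta                        # approximate scaling
```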
The standard CBCT reconstruction algorithm is the FDK method, which was developed by Feldkamp, Davis, and Kress [27], and can be regarded as an empirical 3D extension of the standard 2D
Figure 4: (Left) Cone-beam projection geometry and (Right) the relation between parallel-beam data \(\mathrm{P}^{\sharp}(\theta,s)\) and cone-beam data at the midplane (fan-beam data) \(\mathbf{P}(\varphi,u,0)\). When representing the projection data \(\mathbf{P}(\varphi,u,v)\), for the sake of simplicity, the detector plane is assumed to lie on the axis of rotation. The actual physical location corresponding to the position \((u,v)\) is \(\frac{R}{r}(u,v)\).
parallel-beam FBP in (2.8). The FDK algorithm consists of filtering and weighted back-projection. The FDK algorithm computes the attenuation value \(\mu(\mathbf{x},z)\) at an image location \((\mathbf{x},z)\) by suitably integrating \(\mathrm{P}(\varphi,u,v)\) over all data positions \((\varphi,u,v)\) that relate to the beam lines passing through \((\mathbf{x},z)\). For CBCT with a circular trajectory, as shown in Fig. 4, the FDK algorithm for reconstructing \(\mu\) can be expressed as
\[\mathcal{C}_{\text{FDK}}^{\dagger}[\mathrm{P}](\mathbf{x},z)=\frac{1}{4\pi}\int_{0}^{2\pi}\frac{r^{2}}{U_{\varphi,\mathbf{x}}^{2}}\int\mathrm{P}\Big{(}\varphi,u,\frac{zr}{U_{\varphi,\mathbf{x}}}\Big{)}\frac{r\,\hbar\big{(}r\frac{\mathbf{x}\cdot\Theta_{\varphi}}{U_{\varphi,\mathbf{x}}}-u\big{)}}{\sqrt{r^{2}+u^{2}+\big{(}\frac{zr}{U_{\varphi,\mathbf{x}}}\big{)}^{2}}}\,du\,d\varphi, \tag{2.9}\]
where \(\hbar(u)\) is a 1D ramp filter given by \(\hbar(u)=\frac{1}{2\pi}\int_{\mathbb{R}}|\xi|e^{i\xi u}d\xi\). For details of the CBCT reconstruction, we refer the reader to [29, 50].
The ill-posedness of the inverse problem (i.e., recover \(\mu\) from P) is related to the following commercial dental CBCT specifications: circular cone-beam scan, a scan time of 5-24s, FOV truncation, offset detector, low X-ray dose, and a cost of less than 100 thousand USD. Meanwhile, MDCT uses a helical fan-beam scan and has a scan time that is less than 1s, no FOV truncation, no offset detector, a high X-ray dose, and a cost of over 1 million USD.
The main parameters influencing the radiation dose in a given CBCT device are the tube current, tube voltage, and collimation. To reduce the radiation exposure in CBCT, it is recommended to use as small a field of view (FOV) as possible, the lowest tube current setting, and the shortest exposure time. CBCT devices generally emit a radiation dose of 36.9-50.3 \(\mu\)Sv. The FOV is truncated because its size is determined by the detector size and shape, the beam projection geometry, and the beam collimation function, as illustrated in Fig. 2. Because a significant portion of the manufacturing cost of a CBCT device is the cost of the X-ray detector, the smallest detector capable of producing the desired image is used.
**Remark 1**: _Many researchers mistakenly believe that current dental CBCTs have a higher spatial resolution than MDCT. This misconception is based on the fact that recent dental CBCTs use flat detectors with a pixel size of \(0.1mm\times 0.1mm\), whereas current MDCTs have a spatial resolution greater than 0.2 mm. We must distinguish between "nominal spatial resolution" in non-living objects and "actual spatial resolution" in living patients. The actual spatial resolution of real-world dental CBCTs is negatively affected by multiple factors, including patient motion owing to the long acquisition time, X-ray scattering, various sources of noise and artifacts caused by the low X-ray dose and FOV truncation, cone-beam reconstruction errors caused by violation of the Tuy condition [105], and others. We note that a smaller detector size tends to lower the signal-to-noise ratio. Brullmann et al [8] stated that "Voxel size is also commonly mistaken as spatial resolution. Technically, the spatial resolution of CBCT devices is related to the physical pixel size of the sensor, the grey-level resolution, the reconstruction technique applied and some other factors. Many additional parameters affect the image quality and exposure doses, such as exposure parameters, tube voltage, tube current, exposure time and rotation arc."_
### 2.2 Ill-posedness issues
#### 2.2.1 Underdetermined problem: Truncated FOV
Most dental CBCTs use an FOV that is smaller than the patient's body, resulting in significant truncation of the projection data. As shown in Fig. 2 (b), the effective FOV of dental CBCT does not cover the entire region of the object to be scanned, and an offset detector geometry is used. The dental
CBCT sinogram P can be expressed as
\[\text{P}=\mathcal{S}_{\text{ub}}(\text{P}_{\text{full}}), \tag{2.10}\]
where \(\text{P}_{\text{full}}\) is the corresponding sinogram acquired with a non-offset and wide-detector CBCT, providing the entire information of a sinogram, and \(\mathcal{S}_{\text{ub}}\) is a subsampling operator determined by the size and offset configuration of the detector. This missing information in P along the \(u\)-axis leads to the need to solve an undersampled problem (the so-called interior tomography problem [73]). This interior problem is known to have no unique solution in general [73].
To explain interior tomography, let us consider the simple 2D parallel CT model data \(\text{P}^{\sharp}\) in (2.4), which corresponds to the CBCT data P at detector position \(v=0\). We consider the problem of recovering \(\mu(\mathbf{x})=\mu(\mathbf{x},0)\) in the ROI area \(\Omega_{\text{ROI}}=\left\{\mathbf{x}\in\mathbb{R}^{2}:|\mathbf{x}|\leq d_{\text{ROI}}\right\}\) from the projection data \(\text{P}^{\sharp}\) given in the truncated area \(\Pi_{\text{ROI}}=[-\pi,\pi)\times[-l_{\text{ROI}},l_{\text{ROI}}]\). The following directional Hilbert transform is used to address the interior tomography problem [75, 111]:
\[\mathcal{H}_{\theta_{0}}\mu(\mathbf{x})=\frac{1}{\pi}\ p.v.\int_{-\infty}^{ \infty}\frac{\mu(\mathbf{x}-t\Theta_{\theta_{0}})}{t}dt. \tag{2.11}\]
where \(\theta_{0}\) is a fixed angle and \(p.v.\) denotes the Cauchy principal value. Note that if \(\mathcal{R}\mu\) is known for every \((\theta,s)\in(-\pi,\pi]\times\mathbb{R}\), then \(\mu(\mathbf{x})\) can be recovered using the following identity [75, 111]
\[\mu(\mathbf{x})=\frac{1}{2\pi}\mathcal{H}_{\theta_{0}}\mathcal{R}_{\theta_{0} }^{*}\left[\frac{\partial}{\partial s}\mathcal{R}\mu\right](\mathbf{x}), \tag{2.12}\]
where \(\mathcal{R}_{\theta_{0}}^{*}\) denotes the backprojection operator depending on \(\theta_{0}\), and is defined by
\[\mathcal{R}_{\theta_{0}}^{*}g(\mathbf{x})=\int_{|\theta-\theta_{0}|<\frac{ \pi}{2}}g(\theta,\mathbf{x}\cdot\Theta_{\theta})d\theta. \tag{2.13}\]
Under the ideal monochromatic assumption (2.6), \(\mathcal{R}\mu\) can be replaced by \(\text{P}_{\text{full}}^{\sharp}\), which is a full-sampled projection data corresponding to \(\text{P}^{\sharp}\). Then, (2.12) can be rewritten as
\[\mu(\mathbf{x})=\frac{1}{2\pi}\mathcal{H}_{\theta_{0}}\mathcal{R}_{\theta_{0} }^{*}\left[\frac{\partial}{\partial s}\text{P}_{\text{full}}^{\sharp}\right] (\mathbf{x}). \tag{2.14}\]
According to [73], there exists a non-zero \(\widetilde{\mu}\) in \(\Omega_{\text{ROI}}\) such that \(\mathcal{R}\widetilde{\mu}=0\) in \(\Pi_{\text{ROI}}\). To be precise, we can find a function \(h\in C^{\infty}(\mathbb{R})\) satisfying the Helgason-Ludwig condition (see Theorem 4.2 in [73]) such that \(h(s)=0\) for \(|s|<l_{\text{ROI}}\). Then, \(\widetilde{\mu}\) can be reconstructed by
\[\widetilde{\mu}(\mathbf{x})=\frac{1}{2\pi}\mathcal{H}_{\theta_{0}}\mathcal{R} _{\theta_{0}}^{*}\left[\frac{\partial}{\partial s}\widetilde{\text{P}}_{ \text{full}}^{\sharp}\right](\mathbf{x}), \tag{2.15}\]
where \(\widetilde{\text{P}}_{\text{full}}^{\sharp}(\theta,s)=h(s)\). This leads to the non-uniqueness of interior tomography, because
\[\mathcal{R}(\mu+\widetilde{\mu})=\mathcal{R}\mu=\text{P}^{\sharp}\ \ \ \text{in}\ \Pi_{\text{ROI}}. \tag{2.16}\]
Note that, according to (2.15), \(\widetilde{\mu}\) is analytic along the line segment \(\Omega_{\text{ROI}}\cap\left\{\mathbf{x}+t\Theta_{\theta_{0}}:t\in\mathbb{R}\right\}\) because \(\widetilde{\text{P}}_{\text{full}}^{\sharp}=0\) in \(\Pi_{\text{ROI}}\). Indeed, the reconstructed image \(\mu\) in \(\Omega_{\text{ROI}}\) is unique up to directional analytic images in \(\Omega_{\text{ROI}}\) [19, 75, 111].
#### 2.2.2 Nonlinear problem: X-ray energy-dependent attenuation
The projection data P in (2.3) are nonlinear with respect to \(\mu\) because the value of \(\mu_{E}\) varies with \(E\). The nonlinearity of P depends on the geometries of the objects (e.g., tissues, bones, and metallic implants) and the energy spectrum \(\eta\). Fig. 1 shows the energy spectrum of the X-ray beam and the attenuation coefficients of iron, bone, and tissue from the experimental results given in [39]. For example, \(\mu_{E}(\mathrm{material})\) is:
* \(\mu_{30KeV}(\mathrm{soft\ tissue})\approx 0.40(cm^{-1}),\ \mu_{80KeV}( \mathrm{soft\ tissue})\approx 0.19(cm^{-1})\)
* \(\mu_{30KeV}(\mathrm{bone})\approx 2.56(cm^{-1}),\ \mu_{80KeV}(\mathrm{bone}) \approx 0.43(cm^{-1})\)
* \(\mu_{30KeV}(\mathrm{Iron})\approx 64.38(cm^{-1}),\ \mu_{80KeV}(\mathrm{Iron}) \approx 4.69(cm^{-1})\)
The value of \(\mu\) for soft tissue varies little with \(E\), whereas the \(\mu\) of a metal object varies greatly. Hence, in the presence of high-attenuation materials (i.e., materials with a large \(\frac{\partial}{\partial E}\mu_{E}\)) such as metals in the FOV of a clinical CBCT, the FDK algorithm (2.9) produces serious streaking artifacts, which are mainly caused by the beam hardening factor (lower-energy photons tend to be absorbed more rapidly than higher-energy photons) [20, 25, 65, 79]. Recently, major CT companies have been developing photon-counting CTs that have the potential to overcome the above-mentioned nonlinear issues of current CTs [21, 28, 64, 102, 112]. However, it seems difficult to use a photon-counting detector for dental CBCT for the time being.
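The nonlinearity can be reproduced numerically. The sketch below evaluates a discrete version of (2.3) for a beam path crossing soft tissue and iron, using the attenuation values quoted above; the Gaussian spectrum \(\eta(E)\) is an assumed toy model, not a measured tube spectrum.

```python
import numpy as np

# Assumed toy X-ray spectrum eta(E) on a 30-80 keV grid (illustrative only).
E = np.linspace(30.0, 80.0, 101)                       # photon energy in keV
eta = np.exp(-0.5 * ((E - 60.0) / 12.0) ** 2)
eta /= np.trapz(eta, E)                                # normalize: int eta dE = 1

# Log-linear interpolation of mu_E (cm^-1) between the quoted 30/80 keV values.
mu_tissue = np.exp(np.interp(E, [30.0, 80.0], np.log([0.40, 0.19])))
mu_iron   = np.exp(np.interp(E, [30.0, 80.0], np.log([64.38, 4.69])))

def polychromatic_projection(t_tissue, t_iron=0.0):
    """Discrete version of (2.3) for a path crossing t_tissue cm of soft
    tissue and t_iron cm of iron."""
    transmission = eta * np.exp(-(mu_tissue * t_tissue + mu_iron * t_iron))
    return -np.log(np.trapz(transmission, E))

# Without metal, P is nearly additive in the path length; 1 mm of iron breaks this.
print(polychromatic_projection(10.0), 2 * polychromatic_projection(5.0))
print(polychromatic_projection(10.0, t_iron=0.1))
```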
For ease of explanation and simplicity, we limit our discussion to the parallel-beam model data P\({}^{\sharp}\) in (2.4), which corresponds to the CBCT data P at detector position \(v=0\). An unavoidable mismatch exists between the nonlinear projection data P\({}^{\sharp}\) and the linear mathematical model based on the ideal monochromatic assumption (2.6). Fig. 3 shows the violation of the linear assumption (2.6). The mismatch between the data P\({}^{\sharp}\) and the range space of \(\mathcal{R}\) can be explained by the polychromatic nature of the X-ray beams [73].
For a rigorous description of the geometry of beam hardening, consider a simple cross-sectional body (corresponding to \(z=0\)) composed of soft tissue and metal. Park _et al_[83, 85] found the nonlinear beam hardening factor of projection data P\({}^{\sharp}\) induced by metal. The reconstructed artifact image due to the metal-occupying region \(D\) is given by
\[\phi_{D,\lambda}(\mathbf{x})=-\frac{1}{8\pi^{2}}\int_{-\pi}^{\pi}\int_{- \infty}^{\infty}|\omega|\mathcal{F}\left[\ln\left(\frac{\sinh\left(\lambda \mathcal{R}\chi_{D}(\theta,\cdot)\right)}{\lambda\mathcal{R}\chi_{D}(\theta, \cdot)}\right)\right](\omega)e^{i\omega\mathbf{x}\cdot\Theta_{\theta}}d\omega d\theta, \tag{2.17}\]
where \(D\) is the metal region, \(\lambda\) is a constant that depends on the energy spectrum of the X-ray beam and absorption property of the subject, and \(\chi\) is a characteristic function. The projection data P\({}^{\sharp}\) can be decomposed into data consistent and inconsistent parts according to the metal region \(D\) as follows:
\[\mathrm{P}^{\sharp}(\theta,s)=\underbrace{\mathcal{R}\mu_{E_{0}}(\theta,s)}_ {\text{target}}+\underbrace{\ln\left(\frac{\sinh\left(\lambda\mathcal{R} \chi_{D}(\theta,s)\right)}{\lambda\mathcal{R}\chi_{D}(\theta,s)}\right)}_{ \text{model mismatch}}. \tag{2.18}\]
The beam hardening corrector \(\phi_{D,\lambda}\) enables us to handle the serious mismatch between projection data P\({}^{\sharp}\) and mathematical model \(\mathcal{R}\mu_{E_{0}}\) in the presence of metallic objects. The idea of the geometric corrector is the selective extraction of metal-induced streaking and shadow artifacts from uncorrupted CT images without affecting intact anatomical images.
The first mathematical analysis to characterize the structure of metal streaking artifacts was presented in [85]. On the basis of this mathematical analysis, the authors first found the mathematical formula (2.17) for beam-hardening metal artifacts, which was experimentally validated using industrial CT [83]. Metal artifacts are viewed as a singularity propagation in the image, which is closely related to the interrelation between the structure of the data \(\mathrm{P}^{\sharp}\) and \(\mu=\mathcal{R}^{-1}\mathrm{P}^{\sharp}\). This can be interpreted effectively using the Fourier integral operator and wavefront set [24, 35, 37, 90, 104].
The following theorem characterizes metal artifacts in terms of their geometry:
**Theorem 1**: _(Park et al. [85]) Let \(D_{1},D_{2},\ldots,D_{J}\) be strictly convex and disjoint bounded domains in \(\mathbb{R}^{2}\) with connected boundaries. Let \(D=\cup_{j=1}^{J}D_{j}\) be the metal region. Given P, assume that \(\mu\) is represented as_
\[\mu(\mathbf{x})=\mu_{E_{0}}(\mathbf{x})+\Upsilon_{\text{P}}(\mathbf{x}), \tag{2.19}\]
where
\[\Upsilon_{\text{P}}=\frac{1}{4\pi}\mathcal{R}^{*}\mathcal{I}^{-1}\left[\sum_{k=1}^{N}\frac{(-1)^{k}}{k}\left[\sum_{n=1}^{N}\frac{(\alpha\varepsilon)^{2n}}{(2n+1)!}(\mathcal{R}\chi_{D})^{2n}\right]^{k}\right]. \tag{2.20}\]
If a straight line \(\ell_{\theta,s}=\left\{\mathbf{x}=s\Theta_{\theta}+t\Theta_{\theta}^{\bot}:t \in\mathbb{R}\right\}\) satisfies
\[\Sigma_{\mathbf{x}}(\mu)\neq\emptyset\quad\text{for all}\ \ \mathbf{x}\in\ell_{\theta,s}\setminus\text{sing-supp}(\mu_{E_{0}}), \tag{2.21}\]
then \((\theta,s)\) satisfies
\[\dim\left(\text{Span}[\Sigma_{(\theta,s)}(\mathcal{R}\chi_{D})]\right)=2, \tag{2.22}\]
where \(\dim(\text{Span}[A])\) is the dimension of the span of the set \(A\), \(\text{sing-supp}(u)\) denotes the singular support of \(u\), and \(\Sigma_{\mathbf{x}}(u)\) is a closed conic subset in \(\mathbb{R}^{2}\setminus\{\mathbf{0}\}\) [37, 104].
Fig. 3 provides a visual explanation of Theorem 1. Metal streaking artifacts are produced only when the wavefront set of the Radon transform of \(\sum_{j=1}^{J}\chi_{D_{j}}\) does not contain the wavefront set of the square of the Radon transform. Metal streaking artifacts can appear when there exist distinct \(\mathbf{y}\), \(\mathbf{z}\in\cup_{j=1}^{J}\partial D_{j}\) such that the straight line \(\ell_{\theta,s}\) is tangent to the boundaries \(\cup_{j=1}^{J}\partial D_{j}\) at \(\mathbf{y}\) and \(\mathbf{z}\) simultaneously.
#### 2.2.3 Cone beam artifacts and violation of Tuy's data completeness condition
Cone-beam artifacts are caused by the violation of the data sufficiency condition formulated by Tuy in [105]. According to Tuy's condition, the accurate reconstruction of a plane requires that the plane contains the X-ray focal spot. In dental CBCT, the reconstructed image \(\mathcal{C}_{\text{\tiny FDK}}^{\dagger}[\text{P}]\) contains these cone-beam artifacts. In circular CBCT, as shown in Fig. 5, the projection data P (even under the monochromatic assumption) are insufficient for accurate analytic reconstruction, because it is not true that every plane passing through any location in the region of interest (ROI) intersects the source trajectory at least once. Most dental CBCT machines violate Tuy's condition, whereas most MDCTs are designed to meet it.

Figure 5: Cone-beam artifacts due to the violation of Tuy's condition. As indicated by the yellow arrows, the cone-beam artifacts increase as \(\mathbf{z}\) moves away from \(\mathbf{z}=0\) (the orange dashed line).
## 3 Methods for solving the ill-posed problems in dental CBCT
To solve the ill-posed problem in dental CBCT, the goal is to find a nonlinear reconstruction map \(f:\mathrm{P}\mapsto\mu_{*}\) that maps the noisy under-sampled data P to the corresponding CBCT image \(\mu_{*}\) that we want to reconstruct. In addition, we should take into account photon starvation, which is very common in dental CBCT, especially when the patient has many implants. To solve this ill-posed problem, a regularized least squares method of the following form can be used:
\[\mu_{*}=\underset{\mu}{\text{argmin}}\frac{1}{2}\|\text{P}-\mathcal{C}\mu\|_{ \ell_{2}}^{2}+\lambda\Gamma(\mu), \tag{3.1}\]
where \(\Gamma(\mu)\) is a regularization term constraining prior knowledge of artifact-free and noise-free CBCT images, \(\mathcal{C}\) is the cone-beam transform in (2.1), \(\|\cdot\|_{\ell_{2}}\) denotes the standard Euclidean norm, and \(\lambda\) is the regularization parameter controlling the trade-off between the fidelity term \(\|\text{P}-\mathcal{C}\mu\|_{\ell_{2}}^{2}\) and regularity. This regularization should help suppress artifacts in the reconstructed image. Using the FDK algorithm \(\mathcal{C}_{\text{\tiny FDK}}^{\dagger}[\text{P}]\) defined in (2.9), the least-squares problem (3.1) can be expressed as
\[\mu_{*}=\underset{\mu}{\text{argmin}}\frac{1}{2}\|\mathcal{C}_{\text{\tiny FDK }}^{\dagger}[\text{P}]-\mu\|_{\ell_{2}}^{2}+\lambda\Gamma(\mu). \tag{3.2}\]
The linear model \(\mathcal{C}\mu\) or \(\mathcal{C}_{\text{\tiny FDK}}^{\dagger}[\text{P}]\) is based on the unrealistic assumption that \(\eta(E)=\delta(E-E_{0})\) (monochromatic beams) and a serious discrepancy in the fidelity term can occur in the presence of metallic objects. As a good approximation of the polychromatic beam (2.3), a more realistic model would be to replace \(\mathcal{C}\mu\) with the following cone beam projection \(\widehat{\mathcal{C}}\mu\) based on Alvarez's assumption [1]:
\[\widehat{\mathcal{C}}\mu=-\text{ln}\int\eta(E)\exp(\underbrace{-p(E)\psi_{p} (\mu)-q(E)\psi_{q}(\mu)}_{-\mu_{E}})dE, \tag{3.3}\]
where \(\psi_{p}(\mu)\) is the spatially-dependent photoelectric component, \(\psi_{q}(\mu)\) is the spatially-dependent Compton scattering component, \(p(E)\) approximates the energy dependence of the photoelectric interaction, \(q(E)\) is the Klein-Nishina function [48, 98], and \(\mu\) represents the monochromatic linear attenuation coefficient at \(E=70keV\)[20]. Here, \(\psi_{p}\) and \(\psi_{q}\) are assumed to be known functions of \(\mu\).
The key to finding the nonlinear reconstruction map \(f\) is to determine a good regularization model \(\Gamma(\mu)\) that penalizes \(\mu\) based on how far it is from the prior distribution of the expected images.
### 3.1 Classical methods for MAR
Over the past four decades, numerous MAR methods have been developed based on the MDCT model with full-FOV projection data. These include statistical iterative correction [20, 25, 69, 79, 108], sinogram inpainting-based correction [2, 7, 49, 70, 82], and dual-energy approaches [1, 59, 114]. Most commercial MAR algorithms can be considered hybrid methods that combine iterative methods and sinogram correction approaches. These include SEMAR (Toshiba Medical Systems) [121], O-MAR (Philips Healthcare) [80], iMAR (Siemens Healthcare) [53], and Smart MAR (GE Healthcare) [30]. Sinogram correction approaches use various inpainting techniques, such as interpolation, reprojection, and normalization, to recover data rendered unreliable by the presence of metallic objects.
We briefly explain the sinogram correction method using TV inpainting [23]. Given a reconstructed image \(\mu\), we segment a metal region \(D\) by using a suitable threshold, and then obtain the corresponding metal trace \(T\) given by \(T=\text{supp}\{\mathcal{C}\chi_{D}\}\). Sinogram correction \(\text{P}_{\text{cor}}\) can be obtained by minimizing the following objective function:
\[\mathcal{L}_{M}(\text{P}_{\text{cor}}):=\frac{1}{2}\|(\text{P}_{\text{cor}}- \text{P})\odot\chi_{T^{c}}\|_{\ell^{2}}^{2}+\lambda\|\nabla\text{P}_{\text{ cor}}\|_{\ell^{1}}, \tag{3.4}\]
where \(\odot\) is Hadamard's product, \(\chi_{T^{c}}\) is the binary mask of the sinogram area excluding the metal trace \(T\), and \(\|\cdot\|_{\ell^{1}}\) is the \(L^{1}\)-norm. Unfortunately, these methods can create new artifacts, and tend to corrupt the morphological information of the region around the metallic object in the reconstructed image. Hence, these approaches may not be suitable for dental CBCT, where individual tooth details are more important than the overall structure.
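As a minimal sketch of how (3.4) can be minimized in practice, the following code performs gradient descent on the objective with a smoothed TV term; the step size, smoothing parameter, and iteration count are illustrative assumptions, not values from [23].

```python
import numpy as np

def tv_inpaint_sinogram(P, metal_trace, lam=0.1, step=0.05, n_iter=500, eps=1e-6):
    """Approximately minimize (1/2)||(P_cor - P) .* chi_{T^c}||^2
    + lam ||grad P_cor||_1 of Eq. (3.4); metal_trace is a boolean mask of T."""
    chi_tc = (~metal_trace).astype(float)   # 1 outside the metal trace, 0 inside
    P_cor = P.copy()
    for _ in range(n_iter):
        # Forward differences with replicated boundary.
        gx = np.diff(P_cor, axis=0, append=P_cor[-1:, :])
        gy = np.diff(P_cor, axis=1, append=P_cor[:, -1:])
        norm = np.sqrt(gx ** 2 + gy ** 2 + eps)
        # Divergence of the normalized gradient = (sub)gradient of smoothed TV.
        div = (np.diff(gx / norm, axis=0, prepend=np.zeros((1, P.shape[1]))) +
               np.diff(gy / norm, axis=1, prepend=np.zeros((P.shape[0], 1))))
        P_cor -= step * (chi_tc * (P_cor - P) - lam * div)
    return P_cor
```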
Normalized MAR (NMAR) [70] is widely used to handle streaking artifacts caused by the non-smooth transition between the original and interpolated data. The main idea of NMAR is to convert the sinogram to a nearly flat sinogram and then interpolate it to obtain a smooth transition. However, the NMAR method does not recover the tooth structure near a metallic object, because its performance depends on the accuracy of the object segmentation. A fundamental limitation of this method is that the segmentation, which is based on thresholds, is not perfectly accurate.
The other solution, dual-energy CT, is not appropriate for low-dose dental CBCT, because the additional scan at a higher energy increases the radiation exposure of patients. Similarly, a photon-counting detector, which provides monochromatic data by differentiating individual photon energies, is not suitable for low-cost dental CBCT.
### 3.2 Iterative reconstruction
In the variational approach (3.1) to solving the inverse problems of X-ray CT, numerous regularizations reflecting the spatial correlation among neighboring pixels, such as TV [22, 113], fractional-order TV [120], nonlocal TV [63], Markov random field theory [62, 101, 109], and nonlocal means [15, 54, 71, 121], have been used on the target image. However, it is challenging to design a regularization that conveys the characteristics of the target image \(\mu^{*}\). For example, in compressed sensing (CS) [14, 18], \(\Gamma(\mu)\) can be an \(L^{1}\)-norm that promotes sparsity in a given basis. These CS-based regularizations produce an over-smoothing effect that causes the loss of fine tooth detail. The internal parameters of the CS-based regularizations also have a significant impact on the reduction of artifacts.
To make this explanation concrete, we denote the projection operator by a matrix \(\mathbf{A}=[a_{ij}]\), where \(a_{ij}\) is the effective intersection length of the \(i-\)th projection line with the \(j-\)th voxel. We use the bold face letters \(\boldsymbol{\mu}\in\mathbb{R}^{N}\) and \(\mathbf{P}\in\mathbb{R}^{M}\), to denote the corresponding discrete versions of \(\mu\) and P, respectively. The least squares problem corresponding to (3.1) is to minimize the following:
\[\mathcal{L}(\boldsymbol{\mu})=\frac{1}{2}\|\mathbf{A}\boldsymbol{\mu}-\mathbf{ P}\|_{\mathbf{H}}^{2}+\lambda\Gamma(\boldsymbol{\mu}), \tag{3.5}\]
where \(\|\boldsymbol{\mu}\|_{\mathbf{H}}^{2}=\boldsymbol{\mu}^{T}\mathbf{H}\boldsymbol{\mu}\) and \(\mathbf{H}=\mathbf{I}-\mathbf{T}\). Here, \(\mathbf{I}\) is an identity matrix and \(\mathbf{T}=diag(t_{i})\) is a diagonal matrix whose diagonal elements \(t_{i}\) are one on the metal trace and zero otherwise. In the presence of metallic objects, the fidelity term in (3.5) is thus restricted to the region outside the metal trace to avoid a serious discrepancy in the fidelity.
The proximal-gradient method provides the following iterative procedure: For each \(k=0,1,2,\ldots,K\),
\[\boldsymbol{\mu}^{(k+1/2)}=\boldsymbol{\mu}^{(k)}-\gamma\nabla\Gamma( \boldsymbol{\mu}^{(k)})\;\;\text{and}\;\;\boldsymbol{\mu}^{(k+1)}=g(\boldsymbol {\mu}^{(k+1/2)},\mathbf{P}), \tag{3.6}\]
where \(\gamma\) is the step size at iteration \(k\) and \(g(\boldsymbol{\mu},\mathbf{P})\) is the function given by
\[g(\boldsymbol{\mu},\mathbf{P})=\arg\min_{\tilde{\boldsymbol{\mu}}}\mathcal{L}(\tilde{\boldsymbol{\mu}};\boldsymbol{\mu},\mathbf{P}),\qquad\mathcal{L}(\tilde{\boldsymbol{\mu}};\boldsymbol{\mu},\mathbf{P})=\|\mathbf{A}\tilde{\boldsymbol{\mu}}-\mathbf{P}\|_{\mathbf{H}}^{2}+\frac{\lambda}{\gamma}\|\tilde{\boldsymbol{\mu}}-\boldsymbol{\mu}\|_{2}^{2}. \tag{3.7}\]
The minimization problem of (3.7) can be solved using the following separable paraboloid surrogate [25]:
\[Q(\tilde{\boldsymbol{\mu}};\tilde{\boldsymbol{\mu}}^{(l)},\boldsymbol{\mu},\mathbf{P})=\sum_{i=1}^{M}\sum_{j=1}^{N}\beta_{ij}h_{i}\left(\frac{a_{ij}}{\beta_{ij}}(\tilde{\mu}_{j}-\tilde{\mu}_{j}^{(l)})+\sum_{j^{\prime}=1}^{N}a_{ij^{\prime}}\tilde{\mu}_{j^{\prime}}^{(l)}-P_{i}\right)^{2}+\frac{\lambda}{\gamma}\sum_{j=1}^{N}(\tilde{\mu}_{j}-\mu_{j})^{2}, \tag{3.8}\]
where \(\beta_{ij}=a_{ij}/\sum_{j^{\prime}=1}^{N}a_{ij^{\prime}}\) and \(h_{i}\) denotes the \(i\)-th diagonal entry of \(\mathbf{H}\). Then, \(Q(\tilde{\boldsymbol{\mu}};\tilde{\boldsymbol{\mu}}^{(l)},\boldsymbol{\mu},\mathbf{P})\) satisfies
\[\mathcal{L}(\tilde{\boldsymbol{\mu}};\boldsymbol{\mu},\mathbf{P})\leq Q(\tilde{\boldsymbol{\mu}};\tilde{\boldsymbol{\mu}}^{(l)},\boldsymbol{\mu},\mathbf{P}),\qquad\mathcal{L}(\tilde{\boldsymbol{\mu}}^{(l)};\boldsymbol{\mu},\mathbf{P})=Q(\tilde{\boldsymbol{\mu}}^{(l)};\tilde{\boldsymbol{\mu}}^{(l)},\boldsymbol{\mu},\mathbf{P}). \tag{3.9}\]
We minimize the paraboloid \(Q(\tilde{\boldsymbol{\mu}};\tilde{\boldsymbol{\mu}}^{(l)},\boldsymbol{\mu},\mathbf{P})\) instead of \(\mathcal{L}(\tilde{\boldsymbol{\mu}};\boldsymbol{\mu},\mathbf{P})\). Since \(Q(\tilde{\boldsymbol{\mu}};\tilde{\boldsymbol{\mu}}^{(l)},\boldsymbol{\mu},\mathbf{P})\) is a separable paraboloid, it can be explicitly minimized as follows: for each pixel \(j=1,2,\ldots,N\),
\[\tilde{\mu}_{j}^{(l+1)}=\tilde{\mu}_{j}^{(l)}-\frac{\sum_{i=1}^{M}a_{ij}h_{i} \left(\sum_{j=1}^{N}a_{ij}\tilde{\mu}_{j}^{(l)}-P_{i}\right)+\frac{\lambda}{ \gamma}(\tilde{\mu}_{j}^{(l)}-\mu_{j})}{\sum_{i=1}^{M}\frac{a_{ij}^{2}}{ \beta_{ij}}+\frac{\lambda}{\gamma}}. \tag{3.10}\]
Update (3.10) can be computed compactly as the following matrix-vector operation, in which the division is performed elementwise:
\[\tilde{\boldsymbol{\mu}}^{(l+1)}=\tilde{\boldsymbol{\mu}}^{(l)}-\frac{\mathbf{A}^{T}\mathbf{H}(\mathbf{A}\tilde{\boldsymbol{\mu}}^{(l)}-\mathbf{P})+\frac{\lambda}{\gamma}(\tilde{\boldsymbol{\mu}}^{(l)}-\boldsymbol{\mu})}{\mathbf{A}^{T}\mathbf{H}\mathbf{A}\mathbf{1}+\frac{\lambda}{\gamma}\mathbf{1}}, \tag{3.11}\]
where \(\mathbf{A}^{T}\) is the transpose of \(\mathbf{A}\) (i.e., the back projection) and \(\mathbf{1}\) is a vector of ones.
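For concreteness, the update (3.11) can be coded directly; the dense matrix \(\mathbf{A}\) below is an illustrative stand-in, since practical systems apply the projector and backprojector on the fly.

```python
import numpy as np

def sps_iterations(A, P, H_diag, mu_ref, lam_over_gamma, n_iter=20):
    """Separable-paraboloid-surrogate updates implementing Eq. (3.11).
    A : (M, N) system matrix; H_diag : (M,) diagonal of H = I - T;
    mu_ref : (N,) proximal point mu in (3.7). Divisions are elementwise."""
    ones = np.ones(A.shape[1])
    denom = A.T @ (H_diag * (A @ ones)) + lam_over_gamma  # A^T H A 1 + (lam/gamma) 1
    mu = mu_ref.copy()
    for _ in range(n_iter):
        resid = H_diag * (A @ mu - P)                     # H (A mu - P)
        numer = A.T @ resid + lam_over_gamma * (mu - mu_ref)
        mu = mu - numer / denom
    return mu
```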
### 3.3 Data-driven regularization instead of handcrafted regularization
In the iterative reconstruction (3.6), the regularization step (i.e., \(-\gamma\nabla\Gamma(\boldsymbol{\mu}^{(k)})\)) plays a key role in the success of the reconstruction. Here, two questions arise: (i) which regularization is the most appropriate, and (ii) whether it is appropriate to use the same regularization in each iteration. Because artifact and noise characteristics differ at each iteration step, it is desirable to use a different artifact corrector for each iteration step. Hand-crafted regularization priors, such as TV, seem to have limited performance in handling artifacts that have nonlinear structures depending on the metal geometry.
Recently, data-driven regularization was used to compute \(\boldsymbol{\mu}^{(k+1/2)}\) in (3.6) [89], where the regularization function was different for each iteration step. This is a supervised learning method to find an artifact extractor \(f^{(k)}:\boldsymbol{\mu}^{(k)}\rightarrow\boldsymbol{\zeta}^{(k)}\), where \(\boldsymbol{\zeta}^{(k)}\) indicates artifacts included in the reconstructed image \(\boldsymbol{\mu}^{(k)}\) at the \(k-\)th iteration step. Here, \(\gamma\nabla\Gamma(\boldsymbol{\mu}^{(k)})\) in (3.6) is replaced by \(f^{(k)}(\boldsymbol{\mu}^{(k)})\). The artifact extractor \(f^{(k)}(\cdot;\mathbf{W}^{(k)})\) is a trainable neural network that depends on parameters \(W^{(k)}\). See [89] for details of the network architecture of \(f^{(k)}(\cdot;\mathbf{W}^{(k)})\). Assuming that paired data \(\mathcal{S}^{(k)}=\{(\boldsymbol{\mu}^{(k)}_{i},\boldsymbol{\zeta}^{(k)}_{i}) \mid i=1,2,\ldots,L\}\) is available, the network \(f^{(k)}(\cdot;\mathbf{W}^{(k)})\) is trained using the paired training dataset through the following framework:
\[\mathbf{W}^{(k)}=\arg\min_{\mathbf{W}}\frac{1}{L}\sum_{i=1}^{L}\|f^{(k)}( \boldsymbol{\mu}^{(k)}_{i};\mathbf{W})-\boldsymbol{\zeta}^{(k)}_{i}\|_{2}^{2}, \tag{3.12}\]
where \(\|\cdot\|_{2}\) denotes the standard Euclidean norm.
In practice, it is difficult to directly obtain a dataset consisting of artifact-free \(\boldsymbol{\mu}^{*}\) paired with the corresponding artifact \(\boldsymbol{\zeta}\). However, large amounts of unpaired data are easy to obtain. Hence, we can combine real data and metal artifact simulation to generate paired data. First, we obtain artifact-free data from metal-free patients. Second, we perform individual tooth segmentation on the artifact-free data using the deep learning-based segmentation technique in [46] and choose several tooth positions in which virtual metal implants could be placed. Third, we generate a variety of sinogram data affected by the different geometries of metal implants using the accurate forward model in (2.3) [86; 122]. Finally, the various simulated metal-induced artifacts are added to the metal-free data. This process provides a paired dataset, \(\mathcal{S}^{(0)}=\{(\boldsymbol{\mu}^{(0)}_{i},\boldsymbol{\zeta}^{(0)}_{i})\mid i=1,2,\ldots,L\}\).
This paired dataset \(\mathcal{S}^{(0)}\) is used to train \(f^{(0)}\) via (3.12). Next, \(\mathcal{S}^{(0)}\) and \(f^{(0)}\) are used to generate \(\mathcal{S}^{(1)}=\{(\boldsymbol{\mu}^{(1)}_{i},\boldsymbol{\zeta}^{(1)}_{i} )\mid i=1,2,\ldots,L\}\), where \(\boldsymbol{\zeta}^{(1)}_{i}=\boldsymbol{\mu}^{(1)}_{i}-\boldsymbol{\mu}^{*}_ {i}\) and
\[\boldsymbol{\mu}^{(1)}_{i}=g\big(\boldsymbol{\mu}^{(0)}_{i}-f^{(0)}(\boldsymbol{\mu}^{(0)}_{i};\mathbf{W}^{(0)}),\ \mathbf{P}_{i}\big). \tag{3.13}\]
This process continues iteratively to obtain \(\mathcal{S}^{(k)},k=2,\ldots,K\). Fig. 6 shows the results of the data-driven regularization approach.
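To make the stagewise procedure concrete, the following sketch mirrors (3.12)-(3.13); here `fit_network` (a routine that fits \(f^{(k)}\) to paired data by least squares) and `g` (the data-consistency map of (3.7)) are placeholders assumed to be supplied by the user, not the authors' code.

```python
import numpy as np

def train_stagewise_extractors(mu0, zeta0, P, g, fit_network, K=3):
    """Stagewise training of artifact extractors f^(0), ..., f^(K-1).
    mu0, zeta0 : stacked initial reconstructions and artifacts (the set S^(0));
    P          : the corresponding projection data."""
    mu_star = mu0 - zeta0                  # artifact-free targets: mu* = mu - zeta
    extractors = []
    mu_k, zeta_k = mu0, zeta0
    for k in range(K):
        f_k = fit_network(mu_k, zeta_k)    # solves (3.12) for W^(k)
        extractors.append(f_k)
        # Build the next-stage dataset S^(k+1) via (3.13).
        mu_k = np.stack([g(m - f_k(m), p) for m, p in zip(mu_k, P)])
        zeta_k = mu_k - mu_star
    return extractors
```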
## 4 Image priors
Throughout this section, the notation \(\mathbf{z}\) is used to represent an image \(\mathbf{z}=\mathcal{C}^{\dagger}_{\text{\tiny FDK}}[\mathbf{P}]\) reconstructed by the FDK algorithm (2.9). Let \(p_{*}(\boldsymbol{\mu})\) represent the probability distribution of artifact-free CBCT images. Our aim is to model \(p_{*}(\boldsymbol{\mu})\) such that when \(\boldsymbol{\mu}\) is closer to an artifact-free CBCT image, a higher \(p_{*}(\boldsymbol{\mu})\) is assigned. Given \(\mathbf{z}=\mathcal{C}^{\dagger}_{\text{\tiny FDK}}[\mathbf{P}]\), the least squares problem (3.2) can be considered to be equivalent to the following maximum a posteriori (MAP) estimation [106; 107]:
\[\boldsymbol{\mu}^{*}=\underset{\boldsymbol{\mu}}{\text{argmin}}(-\log p( \mathbf{z}\mid\boldsymbol{\mu})-\log p_{*}(\boldsymbol{\mu})), \tag{4.1}\]
where \(p_{*}(\boldsymbol{\mu})=\exp(-\Gamma(\boldsymbol{\mu}))\) can be viewed as a prior and the conditional distribution \(p(\mathbf{z}\mid\boldsymbol{\mu})\propto\exp(-\|\mathbf{z}-\boldsymbol{\mu}\|_{2}^{2}/(2\lambda))\) is regarded as the data fidelity. The challenge is then how to assign the prior \(p_{*}(\boldsymbol{\mu})\).
Assume that any artifact-free dental CBCT image lies near or on an (unknown) manifold \(\mathcal{M}_{*}\), whose Hausdorff dimension is much smaller than the dimension of the sample space (i.e., the number of voxels in the dental CBCT image). The role of \(p_{*}(\boldsymbol{\mu})\) is to apply a force that causes \(\boldsymbol{\mu}\) to lie in or near
manifold \(\mathcal{M}_{*}\). For example, \(p_{*}(\boldsymbol{\mu})\) can be \(p_{*}(\boldsymbol{\mu})\propto\exp(-\text{dist}(\boldsymbol{\mu},\mathcal{M}_{*}))\), where \(\text{dist}(\boldsymbol{\mu},\mathcal{M}_{*})\) is a suitable distance between \(\boldsymbol{\mu}\) and \(\mathcal{M}_{*}\) from the perspective of dental radiologists. In CS approaches, \(\mathcal{M}_{*}\) is assumed to have a locally low dimension; that is, \(\boldsymbol{\mu}\in\mathcal{M}_{*}\) is sparse in a suitable basis. These CS approaches include TV, which imposes sparsity of the image gradient via the \(\ell^{1}\)-convex relaxation \(\Gamma(\boldsymbol{\mu})=\|\nabla\boldsymbol{\mu}\|_{\ell^{1}}\). However, CS methods cannot selectively preserve small details, because \(\Gamma(\boldsymbol{\mu})\) penalizes uniformly based on a fixed sparsity in a given basis. These approaches lack the ability to contextually control the anatomical details, which tends to compromise the morphological information in a region around metallic objects while reducing noise and artifacts.
Deep learning approaches such as generative adversarial networks [31] have shown remarkable performance in learning such a manifold from training data. Here, training data are used to train a generator \(G\) such that \(\mathcal{M}_{*}\approx\{\boldsymbol{\mu}=G(\mathbf{z})\ :\ \mathbf{z}\sim p_{\text{prox}}\}\), where \(p_{\text{prox}}(\mathbf{z})\) represents the probability distribution of the reconstructed images using the FDK algorithm (2.9). This generator \(G:\mathbf{z}\mapsto\boldsymbol{\mu}\sim p_{*}\) can be viewed as an artifact correction function that minimizes the following loss model:
\[\text{dist}(p_{G},p_{*})+\lambda E_{\mathbf{z}\sim p_{\text{ prox}}}\left[\|G(\mathbf{z})-\mathbf{z}\|_{2}^{2}\right], \tag{4.2}\]
where \(p_{G}\) is the distribution of the generated images \(G(\mathbf{z}),\ \mathbf{z}\sim p_{\text{prox}}\), and \(\text{dist}(p_{G},p_{*})\) denotes a distance between two probability distributions, such as the Pearson \(\chi^{2}\) divergence \(\text{dist}(p_{G},p_{*})=\chi^{2}_{\text{Pearson}}(p_{G}+p_{*},2p_{*})\). Here, \(E[\mathbf{z}]\) is the expectation of \(\mathbf{z}\) and \(\lambda>0\) is the regularization parameter. Minimizing the model in (4.2) with \(\text{dist}(p_{G},p_{*})\) given by the Pearson \(\chi^{2}\) divergence can be converted into the least squares generative adversarial network (LSGAN) framework [72]; \(G\) is trained simultaneously with a discriminator \(D\) in an adversarial relationship to improve their mutual abilities as follows:
\[\left\{\begin{array}{l}\mathbf{W}_{g}^{*}=\arg\min_{\mathbf{W}_{g}}\ \ E_{\mathbf{z}\sim p_{\text{ prox}}}\left[D(G(\mathbf{z};\mathbf{W}_{g}))^{2}\right]+\lambda E_{ \mathbf{z}\sim p_{\text{ prox}}}\left[\|G(\mathbf{z};\mathbf{W}_{g})-\mathbf{z}\|_{2}^{2 }\right]\\ \mathbf{W}_{d}^{*}=\arg\min_{\mathbf{W}_{d}}\ \ E_{\boldsymbol{\mu}\sim p_{*}} \left[(1-D(\boldsymbol{\mu};\mathbf{W}_{d}))^{2}\right]+E_{\mathbf{z}\sim p_{ \text{ prox}}}\left[(1+D(G(\mathbf{z});\mathbf{W}_{d}))^{2}\right].\end{array}\right. \tag{4.3}\]
A detailed derivation of (4.3) is described in [72, 88]. The f-divergence, including the Pearson \(\chi^{2}\) divergence, is not symmetric, and therefore does not satisfy the metric properties [76]. Some studies [3, 57] have exploited the Wasserstein distance to satisfy metric properties in order to match the probability distribution \(p_{*}\).
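In code, the two objectives in (4.3) take a simple form. The PyTorch sketch below uses the (real = +1, fake = -1) coding implied by the discriminator terms above; it is a schematic of the losses only, not the authors' training pipeline.

```python
import torch

def lsgan_losses(D, G, z, mu_real, lam=1.0):
    """Least-squares GAN objectives of Eq. (4.3).
    D maps an image to a scalar score; G maps an FDK image z to a corrected image."""
    fake = G(z)
    # Generator: push D(G(z)) toward 0 and keep G(z) close to z (fidelity term).
    loss_G = (D(fake) ** 2).mean() + lam * ((fake - z) ** 2).mean()
    # Discriminator: D(mu*) -> +1 on artifact-free images, D(G(z)) -> -1.
    loss_D = ((1 - D(mu_real)) ** 2).mean() + ((1 + D(fake.detach())) ** 2).mean()
    return loss_G, loss_D
```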
If the gap between \(\mathbf{z}\) and the corresponding artifact-free image is not small, the fidelity \(E_{\mathbf{z}\sim p_{\text{prox}}}\left[\|G(\mathbf{z};\mathbf{W}_{g})-\mathbf{z}\|_{2}^{2}\right]\) may not be sufficient for MAR. Even when using deep learning techniques in a learning environment where only unpaired training data are available, it can be very difficult to accurately reconstruct the details of the tooth surface from projection data **P** heavily contaminated by metal implants, as shown in Fig. 6. In particular, the structures of the teeth and the oral cavity differ from person to person. It therefore seems necessary to secure and supplement sophisticated prior tooth information to obtain an image reconstruction accurate enough for dental prosthetic treatment.

Figure 6: Visual comparison of MAR methods on a realistic test dataset. The figure in (a) shows a clinical dental CBCT image \((\mathcal{C}^{\dagger}_{\text{FDK}}[\mathbf{P}])\) with two simulated dental crowns (see yellow arrows in (b)); severe metal artifacts owing to the two crowns occur in \(\mathcal{C}^{\dagger}_{\text{FDK}}[\mathbf{P}]\). The figures in (b), (c), and (d) show the corrected images obtained by NMAR [70], LSGAN [72], and the data-driven regularization approach [89], respectively. The figure in (e) shows the corresponding target image \((\boldsymbol{\mu}^{*})\).
We note that the 3D segmentation of teeth, jaw, and skull from a CBCT image is an important component of 3D cephalometric analysis, where soft tissue details are not required [116]. Therefore, dental CBCT reconstruction can focus on restoring the morphological structures of the bones and teeth near metal objects instead of considering soft tissues. This simplified approach allows the use of supplementary techniques to obtain an image before dealing with the ill-posed problem.
However, in the case of CBCT images deteriorated by metal artifacts, the boundaries between teeth are often occluded, making it difficult to accurately segment a 3D individual tooth. Recently, several deep learning methods [17, 67, 97] have been proposed for automated 3D tooth segmentation directly from \(\mu\); however, their performance is far from satisfactory on CBCT images with severe metal artifacts.
### 4.1 2D panoramic image prior for 3D individual tooth segmentation
Jang _et al_[46] observed that panoramic images generated from CBCT images were not significantly affected by metal-related artifacts. This is because the cone-beam projection configuration is advantageous for composing panoramic image reconstructions. Fig. 7 (b) shows a 3D bone-teeth-jaw model for one of the authors, who has multiple gold dental prostheses (Fig. 7 (a)). The 3D bone-teeth-jaw model in Fig. 7 (b) was generated from the CBCT image using state-of-the-art MAR software. However, even state-of-the-art MAR does not adequately remove the metal artifacts caused by the crowns and does not fully recover the neighboring tooth details. In contrast, the panoramic image generated from the CBCT data (Fig. 7 (c)) provides effective prior information. Jang _et al_[46] leveraged these panoramic images as a prior to accurately perform 3D tooth segmentation and identification.
Let us briefly explain how to automatically generate upper- and lower-jaw panoramic images from a 3D CBCT image [46]. The upper and lower jaws are separated to reduce the overlap between adjacent teeth. To explain the panoramic image generation, we use a CBCT image that is severely degraded by metal artifacts.
Figure 7: Concrete image priors: the panoramic image and the intra-oral scan.

Generating panoramic images begins by obtaining binary images (or segmentations) of the upper and lower jaws by applying Otsu's thresholding technique [77] and connected-component labeling [100]. Using the upper-jaw binary image, we obtain the following 2D reference curve \(\mathcal{C}_{\text{upper-jaw}}\) passing completely through the upper dental arch region:

\[\mathcal{C}_{\text{upper-jaw}}=\{\mathbf{x}_{u}(s):s\in[0,1]\}. \tag{4.4}\]

Similarly, we can obtain \(\mathcal{C}_{\text{lower-jaw}}\) from the lower dental arch region. We then project the CBCT image \(\mu\) along the curve normal direction to generate the upper-jaw panoramic image \(\mathcal{P}_{\text{upper-jaw}}\), which is given by
\[\mathcal{P}_{\text{upper-jaw}}(s,z)=\int_{-a}^{a}\mu_{\text{upper-jaw}}\left(\mathbf{x}_{u}(s)+t\mathbf{n}(s),z\right)\,dt, \tag{4.5}\]

where \(\mu_{\text{upper-jaw}}\) is the upper-jaw CBCT image, \(s\) is the parameter in (4.4), and \(\mathbf{n}(s)\) is the unit normal vector at \(\mathbf{x}_{u}(s)\). Similarly, we obtain the lower-jaw panoramic image \(\mathcal{P}_{\text{lower-jaw}}\), as shown in Fig. 8.
We next use a U-shaped fully convolutional network [94] and YOLO [96] to obtain the 2D tooth segmentation, as shown in Fig. 8. Next, backprojecting this 2D individual tooth segmentation along the corresponding reference curve provides a tight 3D ROI of the corresponding individual tooth in the 3D CBCT image \(\mu\). These tight ROIs on individual teeth are crucial for fine 3D individual tooth segmentation at the boundary where the target tooth and adjacent teeth meet [46]. Therefore, the panoramic image, as an image prior, plays an important role in 3D tooth segmentation.
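A minimal numerical version of the curve-normal projection (4.5) is sketched below; voxel-index coordinates and nearest-neighbor sampling are simplifying assumptions of this illustration.

```python
import numpy as np

def panoramic_projection(mu_jaw, curve_xy, a, n_t=64):
    """Discrete version of Eq. (4.5): integrate the jaw CBCT volume
    mu_jaw[x, y, z] along the normals of the dental-arch curve (4.4).
    curve_xy : (n_s, 2) sample points x_u(s) on the reference curve."""
    tang = np.gradient(curve_xy.astype(float), axis=0)
    tang /= np.linalg.norm(tang, axis=1, keepdims=True)
    normal = np.stack([-tang[:, 1], tang[:, 0]], axis=1)   # unit normals n(s)
    t_vals = np.linspace(-a, a, n_t)
    dt = t_vals[1] - t_vals[0]
    pano = np.zeros((len(curve_xy), mu_jaw.shape[2]))
    for i, (p, nvec) in enumerate(zip(curve_xy, normal)):
        for t in t_vals:
            ix, iy = np.round(p + t * nvec).astype(int)
            if 0 <= ix < mu_jaw.shape[0] and 0 <= iy < mu_jaw.shape[1]:
                pano[i] += mu_jaw[ix, iy, :] * dt          # accumulate mu dt
    return pano
```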
### 4.2 Using radiation-free intra-oral scan data to obtain teeth surface priors
Although the method using the panoramic image prior obtains accurate tooth segmentation in the case of CBCT images severely affected by metal artifacts, it still has limitations in accurately restoring tooth surfaces. It seems to be fundamentally difficult to precisely restore tooth surfaces around metal implants on CBCT images that are severely damaged by metal-induced artifacts.
The use of an intra-oral scan (IOS) as a concrete image prior can effectively compensate for the aforementioned weakness of dental CBCT without increasing X-ray dose exposure. Recently, intra-oral scanners equipped with cutting-edge technology have been developed that can acquire accurate, high-resolution 3D images of the tooth surfaces and gingiva [119], and their accuracy is approaching the level required for clinical applications such as digital impressions and occlusal analysis [123]. Fig. 7 (d) shows that the IOS provides precise tooth surface images, whereas dental CBCT images can be affected by metal-related artifacts.
Figure 8: The 3D individual tooth segmentation method of [46] using panoramic images. The figure shows the overall process and results for a test subject with metal prostheses.

Hyun _et al_[43] leveraged tooth surface information from the IOS to compensate for the damage to CBCT images caused by metal-induced artifacts. By merging the IOS into CBCT scans via a surface matching method [47], they provided an accurate jaw-teeth model for realistic digital simulation. This method can facilitate virtual surgical planning, treatment simulation, and the design and delivery of orthodontic and surgical treatment [11, 26, 68].
Fig. 9 shows the process of using the prior information about tooth surface obtained from IOS. Let \(\mathbf{O}\) represent the IOS data. Given data triplets \(\{(\boldsymbol{\mu}_{i},\mathbf{O}_{i},\boldsymbol{\mu}_{i}^{*})\}_{i=1}^{L}\), a MAR network \(f_{\text{MAR}}:(\boldsymbol{\mu},\mathbf{O})\mapsto\boldsymbol{\mu}^{*}\) is trained by solving the following minimization problem:
\[\mathbf{W}_{*}=\underset{\mathbf{W}}{\text{argmin}}\ \frac{1}{L}\sum_{i=1}^{L}\|f_{ \text{MAR}}(\boldsymbol{\mu}_{i},\mathbf{O}_{i};\mathbf{W})-\boldsymbol{\mu} _{i}^{*}\|_{2}^{2}, \tag{4.6}\]
where \(\mathbf{O}_{i}\) is used as a side input to \(f_{\text{MAR}}\). As shown in Fig. 9, the IOS data can also provide a mask \(\mathbf{M}\) that excludes the teeth, and \(\mathbf{M}\) is applied to the corrected CBCT image \(f_{\text{MAR}}(\boldsymbol{\mu},\mathbf{O};\mathbf{W}_{*})\) to obtain a 3D bone-teeth-jaw model. Fig. 10 (b) shows the 3D bone-teeth-jaw model constructed using this two-stage method.
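As a minimal illustration of how the IOS data can enter \(f_{\text{MAR}}\) as a side input, the sketch below concatenates a rasterized IOS volume \(\mathbf{O}\) with \(\boldsymbol{\mu}\) as a second input channel of a small 3D CNN; this tiny architecture is purely illustrative (the network of [43] differs), while the loss follows (4.6).

```python
import torch
import torch.nn as nn

class IOSConditionedMAR(nn.Module):
    """Stand-in for f_MAR(mu, O): the IOS prior is an extra input channel."""
    def __init__(self, width=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, width, 3, padding=1), nn.ReLU(),
            nn.Conv3d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv3d(width, 1, 3, padding=1),
        )

    def forward(self, mu, O):
        # mu, O: tensors of shape (batch, 1, depth, height, width)
        return self.net(torch.cat([mu, O], dim=1))

# Training per (4.6): minimize the mean squared error to the target mu*.
# loss = ((model(mu_batch, O_batch) - mu_star_batch) ** 2).mean()
```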
## 5 Discussion and conclusion
Dental CBCT aims to provide high-resolution images with the lowest possible radiation dose at a low cost for equipment and maintenance. This cost-competitive goal of dental CBCT makes the data acquisition hardware configuration different from that of MDCT, which is widely used in clinical practice. As a result, the inverse problem in dental CBCT is more ill-posed than it is in MDCT.
The fundamental reasons why dental CBCT is more ill-posed than MDCT are as follows. The data acquisition of MDCT is based on helical CT scanning and continuous table movement, and the fastest rotation time of MDCT is 0.33 s. By contrast, a dental CBCT system uses a fixed array of detectors, and the body is scanned in one revolution. Because the rotation time of dental CBCT is more than 8 s, motion artifacts may also occur. Most dental CBCT devices use an offset detector with a short subject-to-detector distance (or air gap) to obtain a larger FOV with as small a detector as possible. Because of the short subject-to-detector distance, the most serious source of artifacts in dental CBCT is scattering: as the air gap decreases, the probability that scattered photons escape the detector decreases [74, 99]. This scattering effect makes the forward model (2.3) less accurate.

Figure 9: Schematic diagram of the method for 3D bone-teeth-jaw modeling using IOS data \(\mathbf{O}\) [43]. The IOS data \(\mathbf{O}\) are utilized as prior information on the tooth surface for metal artifact reduction and 3D modeling.
Despite the disadvantages mentioned above, demand for dental CBCT systems is growing rapidly owing to their cost competitiveness and low radiation dose, which enhance the confidence of clinicians who operate the equipment. Metal artifacts are common in dental CT. As the population ages, the number of people with artificial prostheses and metallic implants is increasing rapidly, and it is therefore very important to deal with the inverse problem in which data are damaged by metal objects. Despite various studies seeking to reduce metal artifacts, metal streaking artifacts continue to pose difficulties, and the development of suitable reduction methods remains challenging.
Recently, numerous attempts have been made to use deep learning for MAR [32, 66, 86, 115, 122]. These deep learning-based MARs have demonstrated remarkable performance in limited environments. However, their performance degrades in dental CBCT environments when multiple metallic inserts occupy a significant area. There seems to be a fundamental limit to the accuracy of morphological tooth structure restoration when only dental CBCT data severely damaged by metal implants are used. Therefore, it would be desirable to use radiation-free intraoral scan data containing the dental morphological structures as prior information in image reconstruction. Our current research topic is the development of a deep learning method that effectively uses the tooth shape information obtained from an oral scanner for CBCT image reconstruction.
The development of artificial intelligence is expected to automate the convergence of CBCT, oral scanners, and facial scanners, which will substantially help both patients and doctors manage dental care and dental health. The integration of CBCT and IOS can provide highly accurate digital impressions by compensating for the shortcomings of metal artifacts in CBCT. Traditional impression-making methods have a number of factors that limit their accuracy, such as patient movement, tearing and deformation of the impression during removal, and soft tissue contraction. Therefore, this fusion approach could eliminate the cumbersome procedure of traditional impressions for both the dentist and patient, significantly shortening the treatment time.
## Acknowledgment
This work was supported by Samsung Science & Technology Foundation (No. SRFC-IT1902-09). H S Park was partially supported by the National Institute for Mathematical Sciences (NIMS) grant funded by the Korean government (No. NIMS-B22920000).
Figure 10: CBCT-based 3D bone-teeth-jaw modeling. (a) shows the 3D bone-teeth-jaw model generated from the uncorrected CBCT image with simple thresholding. (b) shows the 3D bone-teeth-jaw model obtained by the method of [43], which leverages the tooth surface information from the IOS in (c).
2305.05790 | Inferences from surface brightness fluctuations of Zwicky 3146 via the Sunyaev-Zeldovich effect and X-ray observations | The galaxy cluster Zwicky 3146 is a sloshing cool core cluster at $z{=}0.291$ that in SZ imaging does not appear to exhibit significant pressure substructure in the intracluster medium (ICM). We perform a surface brightness fluctuation analysis via Fourier amplitude spectra on SZ (MUSTANG-2) and X-ray (XMM-Newton) images of this cluster. These surface brightness fluctuations can be deprojected to infer pressure and density fluctuations from the SZ and X-ray data, respectively. In the central region (Ring 1, $r < 100^{\prime\prime} = 440$ kpc, in our analysis) we find fluctuation spectra that suggest injection scales around 200 kpc ($\sim 140$ kpc from pressure fluctuations and $\sim 250$ kpc from density fluctuations). When comparing the pressure and density fluctuations in the central region, we observe a change in the effective thermodynamic state from large to small scales, from isobaric (likely due to the slow sloshing) to adiabatic (due to more vigorous motions). By leveraging scalings from hydrodynamical simulations, we find an average 3D Mach number $\approx0.5$. We further compare our results to other studies of Zwicky 3146 and, more broadly, to other studies of fluctuations in other clusters. | Charles E. Romero, Massimo Gaspari, Gerrit Schellenberger, Tanay Bhandarkar, Mark Devlin, Simon R. Dicker, William Forman, Rishi Khatri, Ralph Kraft, Luca Di Mascolo, Brian S. Mason, Emily Moravec, Tony Mroczkowski, Paul Nulsen, John Orlowski-Scherer, Karen Perez Sarmiento, Craig Sarazin, Jonathan Sievers, Yuanyuan Su | 2023-05-09T22:38:10Z | http://arxiv.org/abs/2305.05790v1 |

# Inferences from surface brightness fluctuations of Zwicky 3146 via the Sunyaev-Zel'dovich effect and X-ray observations
###### Abstract
The galaxy cluster Zwicky 3146 is a sloshing cool core cluster at \(z\)=0.291 that in SZ imaging does not appear to exhibit significant pressure substructure in the intracluster medium (ICM). We perform a surface brightness fluctuation analysis via Fourier amplitude spectra on SZ (MUSTANG-2) and X-ray (_XMM-Newton_) images of this cluster. These surface brightness fluctuations can be deprojected to infer pressure and density fluctuations from the SZ and X-ray data, respectively. In the central region (Ring 1, \(r<100^{\prime\prime}=440\) kpc, in our analysis) we find fluctuation spectra that suggest injection scales around 200 kpc (\(\sim 140\) kpc from pressure fluctuations and \(\sim 250\) kpc from density fluctuations). When comparing the pressure and density fluctuations in the central region, we observe a change in the effective thermodynamic state from large to small scales, from isobaric (likely due to the slow sloshing) to adiabatic (due to more vigorous motions). By leveraging scalings from hydrodynamical simulations, we find an average 3D Mach number \(\approx 0.5\). We further compare our results to other studies of Zwicky 3146 and, more broadly, to other studies of fluctuations in other clusters.
Galaxy clusters; Intracluster medium; ZwCl 1021.0+0426; Zwicky 3146
## 1 Introduction
The dominant baryonic component of galaxy clusters is the hot intracluster medium (ICM), which can be observed via X-rays and in the millimeter band via the Sunyaev-Zel'dovich (SZ) effect (Sunyaev & Zel'dovich, 1970, 1972). The observed radiative signatures in the two wavelength regimes both depend on thermodynamic properties integrated along the line of sight (the gas is optically thin in both regimes), with the X-ray surface brightness being roughly proportional to the square of the gas density integrated along the line of sight and the millimeter surface brightness being proportional to the electron pressure integrated along the line of sight. Temperatures can then be inferred from X-ray spectra or by combining pressure constraints from SZ data with density constraints from X-ray data (e.g. Romero et al., 2017; Bourdin et al., 2017).
Cluster masses can be estimated assuming hydrostatic equilibrium from radial profiles of gas density and profiles of either gas temperature or pressure. The mass inferred under the assumption of hydrostatic equilibrium is expected to fall below the true mass of the cluster by 10-30% (e.g. Hurier and Angulo, 2018). This offset from the true mass is termed "hydrostatic bias" and is expected to be primarily due to non-thermal pressure support, in particular turbulent motions driven by mergers and feedback tied to active galactic nuclei (AGN; Gaspari et al., 2020, for a review).
The extent to which the non-thermal pressure is dominated by velocity fluctuations of the gas can be revealed through Doppler broadening of emission lines observed by upcoming X-ray missions with high spectral resolution such as XRISM (XRISM Science Team, 2020) and _Athena_(Nandra et al., 2013; Roncarelli et al., 2018). Fluctuations in thermodynamic quantities may reveal the nature of hydrostatic bias. In particular, pressure fluctuations (\(\delta P/P\)) can quantify the relative non-thermal pressure of the thermal gas1.
Footnote 1: It is often expected that (quasi) turbulent motions dominate the non-thermal pressure, though cosmic rays and magnetic fields may also contribute to the non-thermal pressure.
To quantify fluctuations as a function of scale, we use amplitude spectra leveraging the Fourier domain (e.g. Churazov et al., 2012; Gaspari and Churazov, 2013; Gaspari et al., 2014). As in previous studies, the amplitude spectrum is defined as
\[A(k)\equiv\sqrt{P(k)4\pi k^{3}}, \tag{1}\]
where \(k=\sqrt{k_{x}^{2}+k_{y}^{2}+k_{z}^{2}}\) and \(P(k)\) is the power spectrum. Figure 1 has been adapted from Gaspari et al. (2014) to highlight key features/regions of interest in the amplitude spectra of thermodynamic fluctuations or velocity fluctuations (\(\delta v/c_{s}\), where \(c_{s}\) is the sound speed) when considering a single dominant injection mechanism. In particular, Figure 1 illustrates three important length scales (or ranges of scales): an injection scale, \(l_{\rm inj}\) (e.g. for mergers, expected to be several hundreds of kiloparsecs), intermediate scales (\(\sim\)10-100 kpc) at which the fluctuations are “cascading” towards smaller scales, and small scales at which the fluctuations are gradually dissipated, e.g. via Coulomb collisions or Alfvén/whistler waves (e.g. Drake et al., 2021; Cho et al., 2022). The values in Figure 1 are suppressed to allow for generalization, i.e. the injection scale used in the particular simulation(s) may not match that in a particular cluster, e.g. Zwicky 3146, but we still expect the same general shape of the amplitude spectra (or the summation of such spectra if there are multiple injection mechanisms). The amplitude of the relevant fluctuations is generally taken as the maximum of the amplitude spectrum, \(A_{\rm 3D}(k_{\rm peak})\). The scales at which the damping occurs are generally expected to be smaller than can be (spatially) resolved for most galaxy clusters.
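For reference, a minimal sketch of Equation (1) applied to a 3D fluctuation cube is given below; the normalization convention and top-hat shell binning are illustrative choices, and real analyses (including ours) require more careful estimators that account for masks, noise, and deprojection.

```python
import numpy as np

def amplitude_spectrum_3d(delta, box_size, n_bins=20):
    """Compute A(k) = sqrt(P(k) 4 pi k^3) (Eq. 1) from a cubic fractional-
    fluctuation field delta (e.g. dP/P) in a box of side box_size."""
    n = delta.shape[0]
    fft = np.fft.fftn(delta) / delta.size
    power = np.abs(fft) ** 2 * box_size ** 3          # power spectral density
    freqs = np.fft.fftfreq(n, d=box_size / n)         # wavenumbers (1/length)
    kx, ky, kz = np.meshgrid(freqs, freqs, freqs, indexing="ij")
    kmag = np.sqrt(kx ** 2 + ky ** 2 + kz ** 2)
    edges = np.linspace(kmag[kmag > 0].min(), kmag.max(), n_bins + 1)
    k_mid = 0.5 * (edges[1:] + edges[:-1])
    # Shell-average |FFT|^2 in spherical bins; empty bins yield NaN.
    Pk = np.array([power[(kmag >= lo) & (kmag < hi)].mean()
                   for lo, hi in zip(edges[:-1], edges[1:])])
    return k_mid, np.sqrt(4.0 * np.pi * k_mid ** 3 * Pk)
```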
Most of the previous studies focused on retrieving the amplitude spectrum of a galaxy cluster using solely X-ray observations (e.g. Schuecker et al., 2003; Churazov et al., 2012; Sanders and Fabian, 2012; Gaspari et al., 2013; Zhuravleva et al., 2014; Arevalo et al., 2016). Similar studies have also targeted the amplitude/variance of fluctuations (e.g. Hofmann et al., 2016; Eckert et al., 2017). However, pure X-ray observations are often limited to less than a decade in spatial scale, and mostly target density fluctuations. To overcome such limitations, a multiwavelength approach is required. As a first exploratory study, Khatri and Gaspari (2016) showed that SZ images (via _Planck_) are a key complementary tool to X-ray datasets, in particular expanding our knowledge of relative ICM fluctuations over large scales (low Fourier \(k\) modes) and the pressure variable. Here, we continue such a multiwavelength approach by leveraging the capabilities of MUSTANG-2.
In this paper we present a study of surface brightness fluctuations of SZ and X-ray maps of Zwicky 3146, also referred to as ZwCl 1021.0+0426, and associated amplitude spectra covering a decade in scales. Zwicky 3146 (\(z=0.291\), Allen et al., 1992) is a massive, relaxed, sloshing cluster with a cool core (Forman et al., 2002). The relaxed and regular nature of Zwicky 3146 gives us the expectation that we will not find large pressure fluctuations. This work is a follow-up to the study of Zwicky 3146 presented in Romero et al. (2020) (wherein Zwicky 3146 is also described in more detail). In particular, Romero et al. (2020) estimated the mass of Zwicky 3146 from pressure profiles determined from high-resolution SZ data under varying assumptions, including hydrostatic equilibrium when combined with electron density profiles determined from _XMM-Newton_ data. Masses from Romero et al. (2020) and references therein (e.g. Klein et al., 2019; Hilton et al., 2018; Martino et al., 2014) are in agreement with \(M_{500}=8\times 10^{14}M_{\odot}\), which corresponds to \(R_{500}=5\) arcminutes (1.3 Mpc).
The layout of this paper is as follows. Section 2 describes the data used and fitted surface brightness models. To perform our fluctuation analysis, detailed in Section 3, we calculate power spectra on fractional residual maps; that is, residual maps divided by their respective surface brightness models. We present the 2D and (deprojected) 3D amplitude spectra in Section 4 and discuss
them in the context of what is known about Zwicky 3146 in Section 5. We offer conclusions in Section 6.
Throughout this paper, we adopt a concordance cosmology: \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{M}=0.3\), \(\Omega_{\Lambda}=0.7\). We define \(h_{70}\equiv H_{0}\) (70 km s\({}^{-1}\) Mpc\({}^{-1}\))\({}^{-1}\) and \(h(z)\equiv H(z)H_{0}^{-1}\). At the redshift of Zwicky 3146 (\(z=0.291\)), one arcsecond corresponds to 4.36 kpc.
## 2 Data Products
We make use of MUSTANG-2 data presented in Romero et al. (2020) and archival _XMM-Newton_ EPIC data. The two data sets are highly complementary. MUSTANG-2 has a resolution (full-width half maximum, FWHM) of \(\sim 10^{\prime\prime}\). The PSF of each of _XMM-Newton_'s detectors, MOS1, MOS2, and pn, depends on the energy and off-axis distance; for a rough comparison, we may consider that the detectors have an effective resolution of \(\sim 5^{\prime\prime}\), albeit with broad wings.
### 2.1 MUSTANG-2 data products
MUSTANG-2 is a 215-detector array on the 100-m Robert C. Byrd Green Bank Telescope (GBT) and achieves \(10^{\prime\prime}\) resolution (FWHM) with an instantaneous field of view (FOV) of \(4^{\prime}.2\). Observing at 90 GHz, it is sensitive to the SZ effect, which is often parameterized in terms of Compton \(y\):
\[y=\frac{\sigma_{\rm T}}{m_{\rm e}c^{2}}\int P_{e}(\theta,z)dz, \tag{2}\]
where \(\sigma_{T}\) is the Thomson cross section, \(m_{e}\) is the electron mass, \(c\) the speed of light, \(P_{e}\) the electron pressure, and \(z\) is the axis along the line of sight.
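As an illustration, Equation (2) can be evaluated by direct line-of-sight quadrature given a pressure-profile function; the SI-unit sketch below, with an arbitrary integration bound, is a toy calculation rather than part of the MUSTANG-2 pipeline.

```python
import numpy as np
from scipy.constants import physical_constants, c, m_e

sigma_T = physical_constants["Thomson cross section"][0]   # m^2

def compton_y(pressure_profile, r_proj, z_max=3.0e23, n_z=2001):
    """Eq. (2): y = sigma_T / (m_e c^2) * int P_e dz along the line of sight.
    pressure_profile(r) returns the electron pressure in Pa at 3D radius r (m);
    r_proj is the projected radius in m; z_max (~10 Mpc here) is illustrative."""
    z = np.linspace(-z_max, z_max, n_z)
    r3d = np.sqrt(r_proj ** 2 + z ** 2)
    return sigma_T / (m_e * c ** 2) * np.trapz(pressure_profile(r3d), z)
```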
The observations used here are the same as in Romero et al. (2020), as is the general data reduction. We employ both data reduction pipelines, MIDAS and Minkasi, in this work. Briefly, MIDAS follows a more traditional approach in its data processing (i.e. similar to the processing of many predecessor multi-pixel bolometric ground-based measurements); this processing typically restricts the recovered scales (often characterized as a transfer function)2 to less than the instrument's instantaneous FOV (see Figure 2). Meanwhile, Minkasi fits the data in the time domain and does not suffer the same loss of scales as MIDAS; see Romero et al. (2020) for a detailed comparison of the transfer functions of the two processing methods.
Footnote 2: The transfer function as used in Romero et al. (2020) is quantified as the transmission of the Fourier transform of an input map.
In this work, we update our pressure profile model from Romero et al. (2020) with an additional procedure used in Dicker et al. (2020); Orlowski-Scherer et al. (2022) which attempts to further remove atmospheric contributions to our maps by f
Figure 1: Figure adapted from Gaspari et al. (2014) showing typical ICM amplitude spectra for the thermodynamic relative fluctuations: density (\(\delta\rho/\rho\)), temperature (\(\delta T/T\)), entropy (\(\delta K/K\)), pressure (\(\delta P/P\)), and velocity \(\delta v/c_{s}\). Smaller scales (distances) are towards the right of the plot; values are suppressed to allow for generalization, i.e. for an arbitrary injection scale, we expect the same shape (roughly) for the spectra, with the peak of the spectra at said injection scale. The red dashed line indicates the injection scale; the shaded blue region indicates the scales over which the fluctuations “cascade” towards smaller scales, and the shaded green region is where the fluctuations are finally dissipated. The dotted black lines help guide the eye as to the (logarithmic) slope of the various spectra, which again should not be treated as an exact expectation; the slopes will vary depending on the actual conditions of the ICM in a given cluster.
Figure 2: Maps derived from the MUSTANG-2 observations: the residual Minkasi map (right) shows large scale noise, while the residual MIDAS map (left) has this filtered out. Given the angular scales of interest, the MIDAS map is preferable. The rings are as in Figure 5. The color scale is shown in units of \(y\times 10^{6}\); \(y\) is defined in Equation 2.
second-order polynomial with respect to elevation offset from the scan center. Figure 3 compares the current pressure profile to the former; the two are fully consistent with each other.
As reported in Romero et al. (2020), the two pressure profile models (fit via MIDAS and Minkasi) are consistent, except beyond MUSTANG-2's radial (instantaneous) FOV where our transfer function is poorly constrained. However, when we subtract the Minkasi model via the MIDAS pipeline (rather than using a transfer function), we see that the residual map is consistent with noise at the radii where the pressure profiles (MIDAS vs Minkasi) differ.
### XMM data products and models
There are four _XMM-Newton_ observations (Obs.IDs) of Zwicky 3146: 0108670401, 0108670101, 0605540301, and 0605540201. The first does not have usable EPIC data; we use the remaining three observations (of nominal durations 56, 65, and 123 ks; see also Table 1).
We use heasoft v6.28 and SAS 19.0 and the Extended Source Analysis Software (ESAS) data reduction package (Snowden et al., 2008) to produce event files and eventually images for the three EPIC detectors: MOS1, MOS2, and pn. Our data reduction largely follows the ESAS cookbook3, with the initial steps being emchain, epchain, and epchain withoutoftime=true to extract calibrated events files. Soft proton flares are excised with the tasks mos-filter and pn-filter. A comparison of IN versus OUT count rates assesses the amount of residual contamination from soft protons (De Luca & Molendi, 2004). This comparison suggests that soft protons are not a concern for MOS detectors and that the pn detectors could suffer slight contamination.
Footnote 3: https://heasarc.gsfc.nasa.gov/docs/xmm/esas/cookbook/xmm-esas.html
An initial list of point sources is created with the task cheese on the _XMM-Newton_ dataset, based on flux in the [0.4-7.2] keV energy band and on detection significance. A region file is generated, excluding a 30\({}^{\prime\prime}\) radius about each point source.
#### 2.2.1 Image creation
We choose to extract images in the [0.4-1.25] keV and [1.25-5.0] keV bands. Images and vignetted exposures are extracted for each detector over the entire detector area whilst masking point sources (see Section 2.2.3 for point source identification) via the task mos-spectra or pn-spectra. Unvignetted exposures are also created with the task eexpmap withvignetting=no. Wide band (i.e. [0.4-5.0] keV) images are formed by the simple addition of the counts (and background) images; these wide band images are used for consistency checks.
#### 2.2.2 Constrained background components
The relevant particle backgrounds are calculated for the desired energy band via the tasks mos_back and pn_back. For the pn detector, we extract a separate spectrum (via pn-spectra) over the cluster region, which we take to be a radius of 5 arcminutes about the cluster center. While we treat the residual soft proton spectrum as a single power law, we must fit several other components to the spectrum: a thermal plasma component (apec) for each of the local (Solar) hot bubble, Galactic emission, and the ICM in Zwicky 3146. In addition to this, we also consider Gaussian components for fluorescent lines. A soft proton background is then made with the task proton and added to the particle background with the task farith. For the pn detector, we also consider the out-of-time (OOT) contribution. Depending on the frame mode, we multiply our resultant pn image with randomized columns by 0.063 (full frame) or 0.023 (extended full frame) to obtain an OOT component, which we incorporate
\begin{table}
\begin{tabular}{c|c|c|c} Obs ID & 0108670101 & 0605540301 & 0605540201 \\ \hline Date & 2000 Dec 05 & 2009 May 08 & 2009 Dec 13 \\ Exposure (ks) & 56.5 & 64.9 & 122.8 \\ \hline & MOS1: 51.2 & MOS1: 41.8 & MOS1: 101.2 \\ Clean Exp (ks) & MOS2: 51.7 & MOS2: 40.6 & MOS2: 102.2 \\ & pn: 43.3 & pn: 29.6 & pn: 73.8 \\ \hline Mode & FF & eFF & eFF \\ PI & R. Mushotzky & J. Sanders & J. Sanders \\ \end{tabular}
\end{table}
Table 1: Overview of imaging _XMM-Newton_ observations of Zwicky 3146. Modes FF and eFF are “full frame” and “extended full frame”, respectively.
Figure 3: Our updated profile is consistent with our previously published profile; we do see the outermost bin has a lower pressure than previously (Romero et al., 2020).
into the pn background. These background images will be subtracted from the respective images when extracting profiles.
#### 2.2.3 Point Source exclusion
In addition to the list generated from cheese, we make use of _Chandra_ archival data of Zwicky 3146 and run wavdetect on its calibrated event files. Finally, we perform a manual inspection to identify any remaining point sources.
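For concreteness, assembling the merged source list into an exclusion mask amounts to the following sketch; the pixel scale and source positions are hypothetical inputs, while the 30\({}^{\prime\prime}\) radius is the value quoted in Section 2.2.

```python
import numpy as np

def exclusion_mask(shape, sources_pix, radius_arcsec=30.0, pix_arcsec=2.5):
    """Return True where pixels are kept, False inside exclusion circles."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    keep = np.ones(shape, dtype=bool)
    r_pix = radius_arcsec / pix_arcsec
    for (x0, y0) in sources_pix:
        keep &= (xx - x0) ** 2 + (yy - y0) ** 2 > r_pix ** 2
    return keep

mask = exclusion_mask((512, 512), [(100.0, 200.0), (300.5, 310.2)])
print(mask.sum(), "of", mask.size, "pixels retained")
```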
#### 2.2.4 Profile fitting
We use the Python package pyproffit (Eckert et al., 2017) to extract profiles of our images. Profiles are fit via emcee (Foreman-Mackey et al., 2013) separately for each detector, each energy band, and each observation. We fit profiles to our low energy ([0.4-1.25] keV) and high energy ([1.25-5.0] keV) images; as these profiles are fit per detector and per ObsID, we have 18 profiles in total (with another 9 from the wide energy band ([0.4-5.0] keV) images that are only used for consistency checks).
Beyond masking the point sources, we also introduce a mask to exclude pixels of low exposure due to binning near chip gaps. We allow pyproffit to fit for centroids in the central 5 arcminutes of each (masked) image independently. Within a single observation and energy band the centroids of each detector differ by \(\lesssim 2\arcsec\). Given the general agreement, for each observation and energy band, we adopt circular symmetry and the centroid as the average centroid of the maps from each EPIC camera detector when extracting profiles. To be sure, the centroids determined in this manner differ by \(\sim 3\arcsec\) relative to the centroid used with MUSTANG-2 analysis.
We find that a simple \(\beta\)-model does not sufficiently capture the surface brightness in the core of Zwicky 3146 and at large radii. We adopt the double \(\beta\)-model as implemented in pyproffit, which has the form:
\[\begin{split} S(r)=S_{0}[&(1+(r/r_{c,1})^{2})^{-3 \beta+0.5}\\ &+R(1+(r/r_{c,2})^{2})^{-3\beta+0.5}]+B,\end{split} \tag{3}\]
where \(r\) is the radius, \(r_{c,1}\) is the first "core" (scaling) radius, \(r_{c,2}\) is the second "core" (scaling) radius, \(R\) is a ratio between the two \(\beta\)-profile components, \(S_{0}\) is the surface brightness normalization, and \(B\) is the background. We modify the background component (taken to be uniform in pyproffit) to be two components: one uniform and one the scaling of unvignetted-to-vignetted exposure maps. This latter component allows us to capture the contribution from fluorescent lines, predominantly the line from Aluminium, which is evident in the extracted profiles seen in Figure 4.
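A minimal sketch of this model, i.e., Equation (3) with the background split into a flat term and a term scaling with the unvignetted-to-vignetted exposure ratio, could read as follows; the parameter values are placeholders, not our fitted values.

```python
import numpy as np

def double_beta(r, s0, rc1, rc2, beta, ratio, b_flat, b_unvig, expo_ratio=1.0):
    """Eq. (3) plus the modified two-component background; expo_ratio is
    the unvignetted-to-vignetted exposure ratio evaluated at radius r."""
    term1 = (1.0 + (r / rc1) ** 2) ** (-3.0 * beta + 0.5)
    term2 = ratio * (1.0 + (r / rc2) ** 2) ** (-3.0 * beta + 0.5)
    return s0 * (term1 + term2) + b_flat + b_unvig * expo_ratio

r = np.logspace(0.0, np.log10(660.0), 50)   # 1" out to 11' in arcsec
sb = double_beta(r, s0=1.0, rc1=15.0, rc2=90.0, beta=0.65,
                 ratio=0.08, b_flat=1e-3, b_unvig=5e-4)
```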
To appropriately constrain these background components we find that we should fit (from \(r=0\)) out to at least 10 arcminutes, but beyond 10 arcminutes the values of the background components do not change much. We choose 11 arcminutes (more than \(2R_{500}\)) as our fitting region. Across all three observation IDs, detectors, and energy bands, the profile residuals are quite small as in Figure 4.
We find that the residuals of the double \(\beta\)-model are generally very small, with slightly larger residuals towards the core where known sloshing exists (e.g. Forman et al., 2002). We find that this is not a shortcoming of the double \(\beta\)-model per se but rather affirmation that the surface brightness of the cluster, while roughly circular at large radii, is not circular in the core (cf axial ratios found in Romero et al., 2020).
## 3 Power spectra measurements
To quantify the fluctuations in surface brightness, we want to take the power spectra of residual images divided by the corresponding ICM surface brightness model as shown in Figure 5. We term these images "fractional residuals" and they are designated by either \(\delta S/S\) for X-ray images or \(\delta y/y\) for SZ images. In particular, Figure 5 shows fractional residual maps for
Figure 4: Profile fits of circular double \(\beta\)-models to each detector array in our “high energy” (1250-5000 eV) band for observation ID 0605540201. The grey vertical band is between \(100\arcsec\) and \(200\arcsec\), i.e. the region used for Ring 2. The vertical red line is at \(300\arcsec\) (\(\sim R_{500}\) and the outer edge of Ring 3) The dotted and dashed green curves show the PN profile broken into quadrants (along cardinal directions); the dotted lines are the two western quadrants and the dashed lines are the eastern quadrants. The lines in the residual are a polynomial regression to indicate large-scale residuals.
MUSTANG-2 and pn images from a single observation in the 400-1250 eV and 1250-5000 eV bands. From these (2D) spectra of the images, we can deproject to spectra of underlying 3D thermodynamical quantities, namely pressure for SZ images and density for X-ray images (see Section 3.3).
Motivated in part by the data, as well as by the theoretical expectation for differing levels of fluctuations as a function of cluster-centric radii, we divide the cluster into three annuli:
* Ring 1: \(r<100^{\prime\prime}=440\) kpc
* Ring 2: \(100^{\prime\prime}<r<200^{\prime\prime}\), and
* Ring 3: \(200^{\prime\prime}<r<300^{\prime\prime}=R_{500}\).
We also note that the MUSTANG-2 map has a rapidly increasing RMS beyond \(200^{\prime\prime}\) while the RMS is nearly uniform within \(100^{\prime\prime}\).
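In practice, these annuli translate into boolean masks on the pixel grid; a short sketch (with an assumed pixel scale) is given below.

```python
import numpy as np

def ring_masks(shape, center, pix_arcsec=2.5, edges_arcsec=(0, 100, 200, 300)):
    """Boolean masks for Rings 1-3 as defined above, in order."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    r = np.hypot(xx - center[0], yy - center[1]) * pix_arcsec
    return [(r >= lo) & (r < hi)
            for lo, hi in zip(edges_arcsec[:-1], edges_arcsec[1:])]

rings = ring_masks((512, 512), center=(256.0, 256.0))
```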
We calculate the power spectra of the fractional residual images, \(P_{\rm 2D}\), at five angular scales spaced logarithmically between \(10^{\prime\prime}\) (the FWHM of MUSTANG-2) and \(100^{\prime\prime}\) (the radial width of our annuli, i.e. rings). The corresponding amplitude spectra, \(A_{\rm 2D}\) and \(A_{\rm 3D}\), are given as
\[A_{\rm 2D}(k) = \left[2\pi\,k^{2}P_{\rm 2D}\right]^{1/2} \tag{4}\] \[A_{\rm 3D}(k) = \left[4\pi\,k^{3}P_{\rm 3D}\right]^{1/2} \tag{5}\]
We use a modified \(\Delta\)-variance method (Arevalo et al., 2012) to calculate the power spectra of surface brightness fluctuations. In particular, this method allows us to recover power spectra of data with arbitrary gaps (masks), which suits our needs well. We do, however, need to be cautious of the bias that can occur due to steep underlying spectra; this is especially true given that we will attempt to recover spectra up to scales close to the FWHM of MUSTANG-2 and _XMM_. In particular, the convolution of a moderate slope with the PSF for either MUSTANG-2 or any of the EPIC cameras will lead not only to a steep slope, but to a changing steep slope. The bias for this changing slope is derived in Appendix B. While we report spectral values at \(k=0.1\) arcsec\({}^{-1}\) in later figures, this bias and associated uncertainty reduces the significance of the values at \(k=0.1\) arcsec\({}^{-1}\) such that none of them is statistically significant.
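To make the method concrete, the following condensed sketch implements our reading of the Arevalo et al. (2012) estimator for a masked image: a difference of two mask-corrected Gaussian convolutions at nearby scales, with the small-\(\epsilon\) normalization \(P_{\rm 2D}(k_r)\approx V/(\epsilon^{2}\pi k_r^{2})\) at \(k_r=1/(\sqrt{2}\pi\sigma)\), valid for a cycles-per-pixel Fourier convention. Constants may differ from our actual pipeline depending on the conventions adopted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mexican_hat_power(img, mask, sigma_pix, eps=1e-3):
    """Return (k, P2D(k)) at k = 1/(sqrt(2)*pi*sigma) [cycles per pixel]."""
    m = mask.astype(float)

    def smooth(s):
        # Mask-corrected convolution: G*(I M) / (G*M), zeroed where mask empty
        num = gaussian_filter(img * m, s)
        den = gaussian_filter(m, s)
        return np.where(den > 1e-8, num / den, 0.0)

    s1 = sigma_pix / np.sqrt(1.0 + eps)
    s2 = sigma_pix * np.sqrt(1.0 + eps)
    filtered = (smooth(s1) - smooth(s2)) * m
    k = 1.0 / (np.sqrt(2.0) * np.pi * sigma_pix)
    variance = (filtered ** 2).sum() / m.sum()
    # Amplitude spectrum per Equation (4): A2D = sqrt(2*pi * k^2 * P2D)
    return k, variance / (eps ** 2 * np.pi * k ** 2)
```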
### Calculations on MUSTANG-2 data
As noted in Section 2.1, our MUSTANG-2 residual map is created by subtracting the best fit model (from Minkasi) within the MIDAS pipeline. In all, 155 scans
Figure 5: Fractional residuals of our MUSTANG-2 data (upper) and _XMM-Newton_ data (only pn chip from observation 0605540201 shown) in the middle (400-1250 eV) and bottom (1250-5000 eV). The blue, orange, and green circles indicate \(r=100^{\prime\prime}\), \(200^{\prime\prime}\), and \(300^{\prime\prime}\), respectively. The purple lines and circles are masked chip gaps and point sources.
on source are used. Maps are produced for each scan, and the final residual image (see again Figure 5, top panel) is constructed as the (weighted) sum of these individual scan maps.
In order to calculate power spectra due to the ICM, we must account for any power contribution from inherent noise in the maps. In principle this can be done by "debiasing" the power spectrum (as will be described in Section 3.2), but a more direct method is to "halve" the data and take a cross-spectrum (e.g. see Khatri and Gaspari, 2016). However, instrumental noise can still "leak" through via such a cross-spectrum. In order to counter this, we calculate cross-spectra of noise realizations, which have amplitudes \(\lesssim 1/10\) the amplitudes of signal cross-spectra and, in effect, debias the cross-spectra. We perform both methods on the SZ data and present the results of the cross-spectra calculations in Figure 6. For the cross-spectra calculation, we take halving to be the generation of two maps covering the same area, each with half of the weight of a "full" map.
Division in half is not a trivial endeavour as these scans were taken over seven nights of observations, and even the nights with the best observing conditions had some variation in weather conditions. As such, we opt to create two halves randomly, 100 times. Cross spectra are calculated on these 100 pairs and the presented values are taken as the mean of the resultant spectra with their associated standard deviations. The 2D amplitude spectra, \(A_{\rm 2D}\), for the MUSTANG-2 data are shown in Figure 6 and include corrections for the MUSTANG-2 beam (PSF; the correction is shown as the dashed grey line) and MIDAS transfer function, both of which are characterized in Romero et al. (2020).
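Schematically, the cross-spectrum variant replaces the squared filtered map by the product of the two filtered half-maps; a sketch under the same assumptions and conventions as the previous snippet:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mh_filter(img, m, sigma_pix, eps=1e-3):
    """Mask-corrected Mexican-hat filtering, as in the previous sketch."""
    def smooth(s):
        den = gaussian_filter(m, s)
        return np.where(den > 1e-8, gaussian_filter(img * m, s) / den, 0.0)
    return (smooth(sigma_pix / np.sqrt(1.0 + eps)) -
            smooth(sigma_pix * np.sqrt(1.0 + eps))) * m

def cross_power(img_a, img_b, mask, sigma_pix, eps=1e-3):
    """Cross-spectrum of two half-maps; noise is uncorrelated between halves."""
    m = mask.astype(float)
    fa = mh_filter(img_a, m, sigma_pix, eps)
    fb = mh_filter(img_b, m, sigma_pix, eps)
    k = 1.0 / (np.sqrt(2.0) * np.pi * sigma_pix)
    return k, (fa * fb).sum() / m.sum() / (eps ** 2 * np.pi * k ** 2)
```

Averaging `cross_power` over many (here, 100) random partitions of the scan maps into weighted halves then yields the quoted means and standard deviations.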
As mentioned earlier, we also calculated spectra via the debiasing route. The spectra in each ring are statistically consistent between the two calculation methods; however, Ring 2 is statistically consistent with zero as calculated via debiasing. Similarly, the spectrum in Ring 3 has negligible significance and thus we discard it from further analysis.
### Calculation on XMM data
In order to calculate the power spectra for our _XMM_ images, we opt to debias our spectra as calculated directly on maps of fractional residuals. A noise realization is generated by drawing a Poisson deviate for each pixel, with the expected value given by a model of the expected counts from all relevant components. To also incorporate uncertainties from the surface brightness model itself, we take 1000 models from the MCMC chains well after the burn-in. A single Poisson noise realization is generated for each of these models. The “raw” and “noise” spectra are recorded for each, as well as their difference (i.e. a “debiased” spectrum). The mean and standard deviation of these debiased spectra are used as the reported expected values and associated uncertainties.
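The debiasing loop can be summarized by the following schematic, where `spec_fn` stands in for a power-spectrum estimator such as the Mexican-hat sketch above and the inputs are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)

def debiased_spectra(counts_img, model_imgs, mask, spec_fn):
    """spec_fn(frac_img, mask) -> spectrum array over the sampled k-modes."""
    diffs = []
    for model in model_imgs:                      # e.g. 1000 posterior samples
        noise = rng.poisson(model).astype(float)
        raw = spec_fn((counts_img - model) / model, mask)   # "raw" spectrum
        nse = spec_fn((noise - model) / model, mask)        # "noise" spectrum
        diffs.append(raw - nse)                             # debiased spectrum
    diffs = np.asarray(diffs)
    return diffs.mean(axis=0), diffs.std(axis=0)
```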
We also consider the potential contribution of faint point sources below our detection threshold. To account for these, we quantify the distribution of detected sources in our images. We normalize a LogN-LogS distribution with an index of \(-1.6\) (Mateos et al., 2008) to our bright sources, where we take our completeness to be unity. We then randomly generate point sources from this distribution down to a minimum of 1 photon (count) when assuming a uniform (unvignetted) exposure. The final point source image, added to a noise realization, accounts for the proper (vignetted) exposure map. To stay consistent with total count expectations, we assume that the counts accumulated from these faint point sources would be equivalent to the uniform background (in count rates) in our profile fits. As such, we reduce the uniform background by the equivalent count rates.
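Drawing fluxes from such a distribution is a one-line inverse-transform sample; a sketch follows, with an assumed flux floor and the normalization to the detected bright sources left implicit.

```python
import numpy as np

def draw_fluxes(n_src, s_min, index=1.6, rng=np.random.default_rng(11)):
    """N(>S) ~ S^-index  =>  S = s_min * u^(-1/index) with u ~ U(0,1)."""
    u = rng.random(n_src)
    return s_min * u ** (-1.0 / index)

fluxes = draw_fluxes(5000, s_min=1.0)   # s_min of 1 count for a flat exposure
```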
Given the general agreement between energy bands (see Figure 7), we conclude that it is appropriate to take the weighted average of the respective power spectra, as shown in Figure 8. When checking power spectra across individual observations and detectors, we do not find any spurious spectra. However, we also note that Figure 7 provides some insights into data quality, especially suggesting caution when attempting to interpret
Figure 6: The amplitude spectrum of the fractional residual (\(\delta y/\bar{y}\)) for each ring. Abscissa values are offset between rings for visual separation. Our best constraints are in Ring 1, while Ring 2 is already quite noisy.
the combined amplitude spectrum in Ring 3 as well as the highest \(k\)-mode in all rings.
Both Figures 7 and 8 include corrections for the PSF, which we estimate per detector, per energy band, and per ring using the ELLBETA mode of the task psfgen. In particular, we find the median photon energies are 800 and 2000 eV for our two energy bands, and so we estimate the PSF at those energies. For the rings, we take \(x=50^{\prime\prime},150^{\prime\prime}\), and \(250^{\prime\prime}\) and \(y=0\) to be sufficient estimates of the PSFs for each ring. As in the SZ data, we see that some rings have (at least a portion of their) spectra which share the shape of the PSF correction.
To further investigate the quality in Ring 3 we calculate the radial profile (from the cluster center) of variance in the \(\delta S/S\) images. We find that the average variance falls below the standard deviation of the variance (across our 1000 realizations, 3 detectors, 2 energy bands, and 3 ObsIDs) beyond \(200^{\prime\prime}\).
### 3D spectra
In this section, we relate projected 2D fluctuations to the physical 3D fluctuations by following a common formalism (e.g. Peacock, 1999; Zhuravleva et al., 2012; Churazov et al., 2012; Khatri and Gaspari, 2016). The relation is given as:
\[P_{\rm 2D}(k_{\theta})=\int P_{\rm 3D}({\bf k})|\tilde{W}(k_{z})|^{2}dk_{z}, \tag{6}\]
where \(z\) is the axis along the line of sight, \(\theta^{2}=x^{2}+y^{2}\) is in the plane of the sky, and \(|\tilde{W}(k_{z})|^{2}\) is the 1D power spectrum of the window function, which normalizes the distribution of the relevant (unperturbed) 3D signal generation to the (unperturbed) 2D surface brightness. Additionally, \(P_{\rm 2D}\) is as before, and \(P_{\rm 3D}\) is the power spectrum of the 3D quantity which when integrated along the line of sight yields a surface brightness. The SZ and
Figure 8: Amplitude spectra of X-ray surface brightness fluctuations when combining both energy bands. Abscissa values are offset between rings for visual separation. Ring 2 has a similar spectrum as Ring 1 but with larger uncertainties.
Figure 7: The \(\pm 1\sigma\)-interval of amplitude spectra for the low energy band (red) and high energy band (blue). From top to bottom: Rings 1, 2, and 3, respectively.
X-ray window functions are respectively:
\[W_{\rm SZ}(\theta,z) \equiv\frac{\sigma_{\rm T}}{m_{\rm e}c^{2}}\frac{\bar{P}(\theta,z)}{ \bar{y}(\theta)}\text{ and} \tag{7}\] \[W_{\rm X}(\theta,z) \equiv\frac{\bar{\epsilon}(\theta,z)}{\bar{S}(\theta)}, \tag{8}\]
where \(\bar{P}\) and \(\bar{\epsilon}\) (the emissivity) refer to the underlying 3D (spherical, unperturbed) models, which, when integrated along the line of sight, produce \(\bar{y}\) and \(\bar{S}\), the 2D (circular, unperturbed) surface brightness models. To be sure, the relation between \(\bar{S}\) and \(\bar{\epsilon}\) is given by \(\bar{S}=\int\bar{\epsilon}dz\).
Above some cutoff wavenumber, \(k_{z,\rm cutoff}\), \(|\tilde{W}(k_{z})|^{2}\) will fall off; in the regime where \(k\gg k_{z,\rm cutoff}\), we can approximate Equation 6 as
\[P_{\rm 2D}(k_{\theta})\approx P_{\rm 3D}({\bf k})\int\lvert\tilde{W}(k_{z}) \rvert^{2}dk_{z}, \tag{9}\]
where we adopt the notation used in Khatri & Gaspari (2016) and define
\[N(\theta)\equiv\int\lvert\tilde{W}(k_{z})\rvert^{2}dk_{z}. \tag{10}\]
In Appendix C we verify that this approximation in Equation 9 is valid.
The dependence of the window function on the cluster-centric radius, \(\theta\), presents an issue of how to deproject over an area (e.g. over a given annulus). We therefore calculate \(N(\theta)\) along many points in the range \(0^{\prime\prime}\leq\theta\leq 300^{\prime\prime}\) and calculate an area-weighted average of those values (within a given annulus). Window functions (and their Fourier transform) are shown in Figures 9 and 10; the radii chosen are the effective radii for each annulus (i.e. where \(N(\theta_{\rm eff})=\langle N(\theta)\rangle\) for \(r\) in a given annulus.)
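Numerically, Equation (10) simplifies if one assumes a cycles-per-length Fourier convention, for which Parseval's theorem gives \(N(\theta)=\int W(\theta,z)^{2}dz\); the area-weighted annulus average is then a direct quadrature. A sketch with an illustrative gNFW-shaped pressure profile (not our fitted profile):

```python
import numpy as np
from scipy.integrate import simpson

# Illustrative pressure shape; prefactors cancel in the normalized window
pe = lambda r: 1.0 / ((r / 500.0) ** 0.31 * (1.0 + (r / 500.0) ** 1.05) ** 4.93)

def n_of_theta(theta_kpc, zmax=3000.0, nz=4001):
    """N(theta) = int W^2 dz with W = P(r) / int P dz (SZ case)."""
    z = np.linspace(-zmax, zmax, nz)
    p_los = pe(np.hypot(theta_kpc, z))
    w = p_los / simpson(p_los, x=z)
    return simpson(w ** 2, x=z)              # units: inverse kpc

def n_annulus(theta_lo, theta_hi, n=64):
    """Area-weighted average of N(theta) over an annulus (weight ~ theta)."""
    th = np.linspace(theta_lo, theta_hi, n)
    vals = np.array([n_of_theta(t) for t in th])
    return np.sum(vals * th) / np.sum(th)

print(n_annulus(1.0, 440.0))                 # e.g. roughly Ring 1, in kpc
```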
In the SZ case, this deprojection to 3D fluctuations lets us immediately arrive at pressure fluctuations
Figure 10: The X-ray window function in real space and in Fourier space. \(N_{4}(\theta)=N(\theta)\times 10^{4}\) with units of inverse kpc (see Equation 10).
Figure 9: The SZ window function in real space and in Fourier space. \(N_{4}(\theta)=N(\theta)\times 10^{4}\) with units of inverse kpc (see Equation 10).
(\(\delta P/P\)) because it is the thermal electron pressure that is being integrated along the line of sight. However, in the X-ray case, we have only derived a means of converting to fluctuations in emissivity (\(\delta\epsilon/\epsilon\)). Fortunately, for hot enough gas (\(\sim 3\) keV), the emissivity in soft bands is weakly sensitive to temperature, and thus effectively depends only on the square of gas density, \(n\). The emissivity can be expressed as \(\epsilon=Cn_{e}^{2}\), where we include the cooling function and mean molecular weight in \(C\) and note that \(C\) is weakly dependent on temperature at the temperatures of Zwicky 3146, such that \(C\) acts roughly as a constant. The emissivity can be decomposed into unperturbed and perturbed terms and is linearly approximated as: \(\epsilon=Cn^{2}[1+2\delta_{n}]\), with \(\delta_{n}\) being the density perturbation. This factor of 2 associated with \(\delta_{n}\) ultimately yields a factor of 4 when relating \(P_{\rm 2D}\) to \(P_{\rm 3D,n}\). That is, explicitly for SZ and X-ray, we have:
\[P_{\delta y/y}(k_{\theta}) \approx N_{\theta,SZ}P_{\delta P/P}({\bf k}) \tag{11}\] \[P_{\delta S/S}(k_{\theta}) \approx 4N_{\theta,X}P_{\delta n/n}({\bf k}) \tag{12}\]
## 4 3D Spectra Results
Given our deprojection approximation, the 3D amplitude spectra, \(A_{\rm 3D}\), are simply the 2D amplitude spectra rescaled by a scalar and multiplied by an additional factor of \(\sqrt{k}\).
As indicated in Section 3.2, the (2D) amplitude spectrum in Ring 3 from X-ray data is likely dominated by noise. We include it in our plot of 3D amplitude spectra (Figure 11) and tabulation of single spectral indices (Table 2) but do not include it in further analyses. Similarly, we exclude Rings 2 and 3 of the SZ data from further analysis (as justified in Section 3.1).
Figure 11 shows the resultant density and pressure fluctuations.
If a clear peak were present in a given spectrum, we could take the amplitude at the peak (\(A_{\rm 3D}(k_{\rm peak})\)) as the characteristic amplitude of the spectrum. However, as an example, taking the highest \(k\) point for Ring 2 (orange) in Figure 11 is also problematic as it is consistent with zero. That is, choosing a peak is not solely a question of the shape of the spectra, but also of data quality. We wish to select the highest point with some threshold significance; in particular, we adopt \(3\sigma\) as our threshold. The maximum values with at least \(3\sigma\) significance are reported in Table 2. With this adopted significance threshold, we find peaks in the range \(0.01<k<0.03\) arcsec\({}^{-1}\), which corresponds to injection scales of \(140\ {\rm kpc}<\ell_{\rm inj}<440\ {\rm kpc}\).
Though we may expect a changing power law (as in Figure 1), we fit a single power law to our power spectra, omitting the points at \(k=0.1\) arcsec\({}^{-1}\), and report the (logarithmic) slope, \(\alpha\), in Table 2, where we use the convention:
\[P(k)=P_{0}k^{-\alpha}, \tag{13}\]
with \(P_{0}\) being a normalization of the fitted slope. We note that without a clear indication that we are sampling below an injection scale, our slopes are not indicative of the cascade of motions to smaller scales. Moreover, with our best estimate of the injection scales (\(140{\rm kpc}<\ell_{\rm inj}<440\) kpc), our constraints on the slope on smaller scales is minimal. These slopes do permit us to comment on the validity of our deprojection approximation (see Appendix C). We can additionally integrate the power spectra to obtain a measure of the variance of fluctuations; for the 3D spectra this is given as:
\[\sigma_{\rm 3D}^{2}=\int P(k)4\pi k^{2}dk. \tag{14}\]
We report the values of \(\sigma_{\rm 3D}\) in Table 2.
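On a discretely sampled spectrum, Equation (14) reduces to a quadrature over the sampled \(k\)-range; a small sketch with placeholder values:

```python
import numpy as np
from scipy.integrate import trapezoid

def sigma_3d(k, p3d):
    """sigma^2 = int P(k) 4 pi k^2 dk over the sampled range (Eq. 14)."""
    return np.sqrt(trapezoid(4.0 * np.pi * k ** 2 * p3d, x=k))

k = np.array([0.01, 0.018, 0.032, 0.056, 0.1])   # arcsec^-1
p3d = 0.5 * k ** -2.5                            # illustrative power law
print(f"sigma_3D ~ {sigma_3d(k, p3d):.2f}")
```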
## 5 Discussion
In the context of expected amplitude spectra (see Section 1 and Figure 1), our recovered spectra do not clearly identify an injection scale and subsequent cascade. From Figure 11, we may loosely infer an injection scale \(100~{\rm kpc}\lesssim\ell_{\rm inj}\lesssim 300\) kpc for Rings 1 and 2. In the
Figure 11: Amplitude spectra of deprojected quantities. Colors reflect corresponding rings as in previous plots of spectra; SZ-derived spectra (\(\delta P/P\)) are shown as dashed lines and shaded regions while the X-ray-derived (\(\delta n/n\)) spectra are shown as lines with errorbars. The dotted lines show the spectral indices for the power spectra (following the convention indicated in Equation 13).
core, an injection scale around 50 kpc could be plausible, as Vantyghem et al. (2021) find evidence in _Chandra_ data for cavities with diameters \(\lesssim 50\) kpc in Zwicky 3146. Hydrodynamical simulations of AGN feedback also support such injection scales (e.g., Wittor and Gaspari, 2020). However, the evidence for these cavities does not extend to Ring 2. In Ring 1 we see the density fluctuations increase relative to the pressure fluctuations at the larger scales probed (\(\sim 400\) kpc), which is consistent with a sloshing core. This also highlights that there may be multiple injection mechanisms (and scales) present in clusters.
In the present study, we refrain from making physical inferences regarding the slopes of the spectra. We do, however, compare the pressure and density spectra (in Ring 1; see Figure 12) as well as infer Mach numbers from our spectra. We note that Hofmann et al. (2016) have performed a fluctuation analysis, though not in the Fourier domain, of a sample of clusters which includes Zwicky 3146. Their analysis probed Zwicky 3146 using _Chandra_ data out to \(r\lesssim 90^{\prime\prime}\) and can thus be compared to results from our Ring 1. They derive standard deviations for \(\delta P/P\) and \(\delta\rho/\rho\) of 0.004 and 0.159, respectively.4 Our respective derived quantities (\(\sigma_{\delta P/P}\) and \(\sigma_{\delta\rho/\rho}\)) are 0.33 and 0.15. Our integrated density fluctuation is in good agreement with that from Hofmann et al. (2016); however, our integrated pressure fluctuation is considerably larger than theirs.
Footnote 4: The value for \(\delta P/P\) (\(dP/P\) in their notation) that Hofmann et al. (2016) report in their table is surprisingly low given the scatter evident in their pressure profile.
### Thermodynamic state
Following Gaspari et al. (2014), we may classify perturbations in the ICM by the thermodynamic regime in which they occur:
**adiabatic:**: \(\left|\frac{\delta K}{K}\right|\sim 0\),
**isothermal:**: \(\left|\frac{\delta T}{T}\right|\sim 0\),
**isobaric:**: \(\left|\frac{\delta P}{P}\right|\sim 0\),
where \(K\) is the gas entropy. With \(\gamma\) the classic adiabatic index, we have the following relations between pressure and density in the respective regimes:
**adiabatic:**: \[\left|\frac{\delta P}{P}\right|=\gamma\left|\frac{\delta n}{n}\right|\] (15)
**isothermal:**: \[\left|\frac{\delta P}{P}\right|=\left|\frac{\delta n}{n}\right|\] (16)
**isobaric:**: \[\left|\frac{\delta P}{P}\right|\ll\left|\frac{\delta n}{n}\right|.\] (17)
Assuming \(\gamma=5/3\) for a monatomic gas, we can roughly divide these regimes as shown in Figure 12. The isobaric regime (\(A_{\delta P/P}<A_{\delta n/n}\)) is only observed at the largest scales. This is consistent with the slow perturbations driven by sloshing. Interestingly, we see that the inferred thermodynamical regime shifts to isothermal and adiabatic toward the intermediate scales. The transition from the isobaric to the adiabatic state is a sign of more vigorous motions (see Gaspari et al., 2014) as we approach the potential injection scale peak at a few tens of kpc. It is important to note that the isothermal transitional regime does not necessarily imply strong thermal conduction or cooling, but is a sign of a change in the effective equation of state, likely due to the varying kinematics at different scales. For instance, Spitzer-like thermal conduction would also substantially suppress the density fluctuations up to scales of hundreds of kpc (Gaspari et al., 2014), thus generating amplitude spectra with a very steep negative slope in logarithmic space. Our results are also in line with other observational studies (Arevalo et al., 2016; Zhuravleva et al., 2018) which find a mixture of gas equations of state; Zhuravleva et al. (2018), specifically analyzing a sample of cool-core clusters, find that the gas tends to be isobaric.
### Mach numbers
In principle we can infer non-thermal pressure, \(P_{\rm NT}\), support and ultimately a hydrostatic bias, usually defined as
\[b\equiv 1-M_{\rm HSE}/M_{\rm tot}, \tag{18}\]
from our amplitude spectra presented in Section 4 where we make the assumption that the non-thermal pressure
\begin{table}
\begin{tabular}{c c|c c c c c} & & \(\alpha_{k}\) & \(A_{\rm 3D}(k_{\rm peak})\) & \(\sigma_{\rm 3D}\) & \(k_{\rm peak}(^{\prime\prime-1})\) & \(\lambda_{\rm peak}\) (kpc) \\ \hline Ring 1 & \(\delta\rho/\rho\) & \(2.5\pm 0.1\) & \(0.13\pm 0.003\) & \(0.15\) & \(0.02\) & \(250\) \\ & \(\delta P/P\) & \(0.6\pm 0.8\) & \(0.29\pm 0.08\) & \(0.33\) & \(0.03\) & \(140\) \\ \hline Ring 2 & \(\delta\rho/\rho\) & \(2.2\pm 1.6\) & \(0.11\pm 0.03\) & \(0.18\) & \(0.01\) & \(440\) \\ \hline Ring 3 & \(\delta\rho/\rho\) & \(1.7\pm 1.0\) & \(0.67\pm 0.21\) & \(0.83\) & \(0.02\) & \(250\) \\ \hline \end{tabular}
\end{table}
Table 2: Inferred spectral indices (logarithmic slopes) and peaks of the amplitude spectra. The spectral indices assume a single power law across our sampled range, omitting the points at \(k=0.1\) arcsec\({}^{-1}\). The peaks of the amplitude spectra are taken with a signal-to-noise cut of 3.
support comes from (quasi) turbulent gas motions. For a perturbation with injection scale of 500 kpc, we have a simple approximation from Gaspari and Churazov (2013) which gives us:
\[\mathcal{M}_{\rm 3D}\approx 4A_{\rho}(k_{\rm peak})\approx 2.4A_{P}(k_{\rm peak}). \tag{19}\]
This can be generalized to \(\mathcal{M}_{\rm 3D}\approx c_{\rho}A_{\rho}(k_{\rm peak})\approx c_{P}A_{P}(k_{\rm peak})\), where \(c_{\rho}\) and \(c_{P}\) have a very weak dependence on the injection scale (\(\propto\ell_{\rm inj}^{-\alpha_{H}}\), with \(0.2\lesssim\alpha_{H}\lesssim 0.3\)). For an injection scale of 250 kpc, \(c_{\rho}\) and \(c_{P}\) will be \(\sim 20\%\) greater than their values for an injection scale of 500 kpc. Other works find similar linear scalings between fluctuations and Mach numbers; e.g., including the 3D correction \(\mathcal{M}_{\rm 3D}=\sqrt{3}\mathcal{M}_{\rm 1D}\), Zhuravleva et al. (2023) find a radially-averaged relation \(\mathcal{M}_{\rm 3D}\approx 2.4\,\delta P/P\).
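A compact sketch of this conversion, with the injection-scale dependence treated as \(c\propto(\ell_{\rm inj}/500\,{\rm kpc})^{-\alpha_{H}}\) and a representative \(\alpha_{H}=0.25\) within the quoted range; the inputs are illustrative, not a reproduction of Table 3.

```python
def mach_3d(a_peak, coeff_500, l_inj_kpc, alpha_h=0.25):
    """coeff_500 = 4 (density) or 2.4 (pressure) at l_inj = 500 kpc."""
    return coeff_500 * (l_inj_kpc / 500.0) ** (-alpha_h) * a_peak

# e.g. a density-spectrum peak of 0.13 at an assumed 250 kpc injection scale
print(f"M_3D ~ {mach_3d(0.13, 4.0, 250.0):.2f}")
```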
We might also consider the impact of the cool core of Zwicky 3146. Specifically, for a gas of a given Mach number we may expect density fluctuations to be significantly higher than pressure fluctuations when radiative cooling is prominent (e.g. Mohapatra et al., 2022). It's not clear how strong the radiative cooling is in Zwicky 3146 as the actual cooling rate may be quenched to \(\sim 10\%\) of reported cooling flow rates (see Romero et al., 2020, and references therein). Moreover, the cool core itself has an extent (width) of roughly \(20^{\prime\prime}\)(Forman et al., 2002; Giacintucci et al., 2014), so the impact of the cool core on the power spectra in Ring 1 should be negligible.
Khatri and Gaspari (2016) provide a relation between the hydrostatic bias and \(\mathcal{M}_{\rm 3D}\); we denote the bias by \(b_{\mathcal{M}}\) when derived from \(\mathcal{M}_{\rm 3D}\).
There are several limitations of our data which inhibit the goal of inferring \(b_{\mathcal{M}}\) from thermodynamic fluctuations. Given the commonality of mass estimations at \(R_{500}\), it is desirable to infer \(b_{\mathcal{M}}(R_{500})\), but our spectra not being robust in Ring 3 does not allow us to do this. Even before then, we have the problem of estimating \(\mathcal{M}_{\rm 3D}\) and eventually its (logarithmic) radial slope. As mentioned in Section 4, we cannot well determine the peaks of the spectra, both due to data quality and due to the scales accessed in this analysis.
Notwithstanding the above caveats, for spectra which we take to be robust and significant we calculate Mach numbers and report them in Table 3. These values are all larger than expected for a relaxed cluster (e.g. Zhuravleva et al., 2023). We have thoroughly explored instrumental systematic errors and biases in our power spectra analyses (see Appendices B and C). We may also call into question the assumptions made when modelling our unperturbed cluster; e.g., would an elliptical surface brightness model be more appropriate?
The relation provided by Khatri and Gaspari (2016), with the subscript on \(b_{\mathcal{M}}\) denoting the method of calculation, reads:
\[b_{\mathcal{M}}=\frac{-\gamma\mathcal{M}_{\rm 3D}^{2}}{3}\frac{d\ln P_{\rm NT }}{d\ln P_{\rm th}}\left(1+\frac{\gamma\mathcal{M}_{\rm 3D}^{2}}{3}\frac{d\ln P_{ \rm NT}}{d\ln P_{\rm th}}\right)^{-1}, \tag{20}\]
where \(\gamma\) is the adiabatic index, taken to be 5/3 for the ICM. NB that, as defined in Khatri and Gaspari (2016), \(b_{\mathcal{M}}\equiv M_{x}/M_{\rm tot}-1=-b\). Following the recasting performed in Khatri and Gaspari (2016), we find:
\[\frac{d\ln P_{\rm NT}}{d\ln P_{\rm th}}=\frac{d\ln P_{\rm NT}/d\ln r}{d\ln P_{ \rm th}/d\ln r}=1+2\frac{d\ln\mathcal{M}_{\rm 3D}/d\ln r}{d\ln P_{\rm th}/d\ln r}. \tag{21}\]
We can employ the above equation with the average logarithmic pressure slope within Ring 1. Yet, we must also identify a logarithmic Mach number slope (\(d\ln\mathcal{M}_{\rm 3D}/d\ln(r)\)). Taking the weighted average of the \(\mathcal{M}_{\rm 3D,peak}\) values reported in Ring 1 and the X-ray
Figure 12: Constraints on the thermodynamical regimes within Ring 1 given the ratio of the 3D amplitude spectra (pressure relative to density). The isothermal regime is taken to be between 0.9 and 1.1, with the adiabatic regime taken to be values above 1.1 and the isobaric regime values below 0.9.
\begin{table}
\begin{tabular}{l l|l l} \hline & & \(\mathcal{M}_{\rm 3D,peak}\) & \(\mathcal{M}_{\rm 3D,int}\) \\ \hline Ring 1 & \(\delta\rho/\rho\) & \(0.53\pm 0.01\) & 0.32 \\ & \(\delta P/P\) & \(0.69\pm 0.19\) & 0.80 \\ \hline Ring 2 & \(\delta\rho/\rho\) & \(0.43\pm 0.14\) & 0.38 \\ \hline \end{tabular}
\end{table}
Table 3: Inferred Mach numbers (1) based on the peaks of the amplitude spectra, \(\mathcal{M}_{\rm 3D,peak}\), and (2) as inferred from the integral of the spectra (i.e. the variance \(\sigma^{2}\)) and the radially averaged relations in Zhuravleva et al. (2023), \(\mathcal{M}_{\rm 3D,int}\).
value in Ring 2, we compute a logarithmic slope. Using the weighted average of \(\mathcal{M}_{\rm 3D}\) in Ring 1 we obtain \(-b_{\mathcal{M}}=0.16\pm 0.04\). This value thus represents an estimate of the hydrostatic bias in the central region of the cluster. We note that most estimates of the hydrostatic bias are at a canonical radius like \(R_{500}\), where \(b\) is expected to be in the range \(0.1<b<0.3\) (e.g. Hurier and Angulo, 2018). Given the sloshing present in the core, it's plausible that the hydrostatic bias in the central region (\(r<100^{\prime\prime}\)) takes values similar to those expected at \(R_{500}\).
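Chaining Equations (20) and (21) is simple enough to express in a few lines; the sketch below uses illustrative slopes rather than our measured values.

```python
def b_mach(mach_3d, dlnM_dlnr, dlnPth_dlnr, gamma=5.0 / 3.0):
    """Hydrostatic bias b_M from Eqs. (20)-(21)."""
    dlnPnt_dlnPth = 1.0 + 2.0 * dlnM_dlnr / dlnPth_dlnr      # Eq. (21)
    x = gamma * mach_3d ** 2 / 3.0 * dlnPnt_dlnPth
    return -x / (1.0 + x)                                    # Eq. (20)

print(f"-b_M ~ {-b_mach(0.6, dlnM_dlnr=-0.3, dlnPth_dlnr=-1.2):.2f}")
```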
### Ellipticity
There is the potential for a spherical model fit to an ellipsoidal cluster to impart a bias on the recovered power spectra (e.g. Khatri and Gaspari, 2016; Zhuravleva et al., 2023, from the perspectives of observations and simulations, respectively). Indeed, this could apply to our result, where we should expect the fluctuations at larger scales (i.e. lower \(k\) modes) to be overestimated. However, the resolution to this problem is not simple given that, much like in the Coma cluster, the ellipticities can differ between SZ and X-ray, and even between X-ray images, i.e. pn and MOS images (Neumann et al., 2003). As reported in Romero et al. (2020), the ellipticity also varies with radius. A choice of a single ellipticity would therefore be inherently arbitrary and would itself impart a bias at radii not matching the ellipticity chosen. By extension, elliptical fits to the surface brightness have also been shown to sufficiently account for substructure such as a shock (e.g. as in RX J1347.5-1145; Di Mascolo et al., 2019) without explicitly modeling the shock itself; hence a fluctuation analysis with such an elliptical model would risk subtracting sought-after fluctuations. Furthermore, there is no clear choice of ellipticity which escapes its own biases. Finally, when deprojecting to 3D quantities, we also introduce a degeneracy between the ellipsoidal shape and the inclination of the ellipsoid relative to the line of sight.
In a broader sense, the question can be asked: "what constitutes the unperturbed cluster model?" It should be a model that follows the shape of the gravitational potential. This question has been raised elsewhere; for example, Zhuravleva et al. (2015) address this by "patching" their \(\beta\)-model of the Perseus cluster, and Sanders and Fabian (2012) fit ellipses to surface brightness contours. In either case, this opens the question of to what degree of complexity we should go, as well as complicating the interpretation of the underlying 3D distribution of the unperturbed thermodynamic quantities. To answer this accurately requires knowledge of the gravitational potential at a level of detail that is often not available. We find ourselves in such a position: while our circular surface brightness models are likely not fully sufficient to describe the gravitational potential, we lack the data (or data of sufficient depth) to motivate another specific model, short of choosing a rather arbitrary elliptical model.
## 6 Conclusions
By leveraging our precursory multiwavelength method (Khatri and Gaspari, 2016), in this work we have presented amplitude spectra of surface brightness fluctuations from \(\delta S/S\) and \(\delta y/y\) images from the X-ray (_XMM-Newton_) and SZ (MUSTANG-2) data, respectively. The two instruments are well matched in angular resolution and their sensitivities are conducive to studying the intracluster medium of galaxy clusters at moderate redshift, such as Zwicky 3146 at \(z=0.29\).
Zwicky 3146 is a relaxed, sloshing, cool core cluster. Our amplitude spectra reflect the sloshing in the core as the density fluctuations are seen to increase relative to pressure fluctuations at the largest scales in our spectra (\(\sim 400\) kpc). Our amplitude spectra suggest an injection scale of \(140\ {\rm kpc}<\ell_{\rm inj}<440\) kpc. Our best constraints are in Ring 1, where the X-ray derived spectra (\(\delta\rho/\rho\)) suggest an injection scale of \(\sim 250\) kpc, while the SZ derived spectra (\(\delta P/P\)) suggest an injection scale of \(\sim 140\) kpc. The larger scale from X-rays reflects its sensitivity to the sloshing core. It is conceivable that the SZ data are more sensitive to fluctuations from cavities, where Vantyghem et al. (2021) found potential cavities on the scale of \(\sim 50\) kpc; such scales are supported by AGN feedback simulations (e.g., Wittor and Gaspari, 2020). Our comparison of pressure and density fluctuations in Ring 1 shows that, from large to small scales, the ICM equation of state transitions from isobaric to adiabatic, with a brief passage through the isothermal regime. This is another sign of increased kinematical motions (Gaspari et al., 2014), corroborating the approach toward the turbulence injection peak potentially at a few tens of kpc.
In Zwicky 3146 there is no evidence that cavities exist at moderate radii (Ring 2), where we would expect an injection scale within the scales probed here. We would similarly expect an injection scale within the scales probed for our outermost ring, Ring 3. Unfortunately, neither the X-ray nor the SZ data were of sufficient quality to produce reliable constraints in Ring 3. We note that, in the case of SZ data, an instrument with MUSTANG-2's specifications but a larger instantaneous FOV would greatly improve the ability to probe the outskirts of clusters.
Finally, we derive Mach numbers from the 3D spectra by leveraging scalings from hydrodynamical simulations.
On average, we infer a turbulent 3D Mach number \(\approx 0.5\), with the values inferred from pressure fluctuations being somewhat larger than those from density fluctuations. From the Mach numbers in the center of the cluster we infer a hydrostatic bias of \(-b_{\mathcal{M}}=0.16\pm 0.04\). The uncertainty in these measurements grows rapidly as one probes larger cluster-centric radii. Thus, future deeper and higher resolution datasets in both X-ray and SZ will be instrumental in fully unveiling Zwicky 3146's kinematical state at varying radii and Fourier modes.
## 7 Acknowledgements
Charles Romero is supported by NASA ADAP grant 80NSSC19K0574 and Chandra grant G08-19117X. Craig Sarazin is supported in part by _Chandra_ grants GO7-18122X/GO8-19106X and _XMM-Newton_ grants NNX17AC69G/80NSSC18K0488. MG acknowledges partial support by HST GO-15890.020/023-A, the _BlackHoleWeather_ program, and NASA HEC Pleiades (SMD-1726). Rishi K. acknowledges support by Max Planck Gesellschaft for the Max Planck Partner Group on cosmology with MPA Garching at TIFR and the Department of Atomic Energy, Government of India, under Project Identification No. RTI 4002. WF acknowledges support from the Smithsonian Institution, the Chandra High Resolution Camera Project through NASA contract NAS8-03060, and NASA Grants 80NSSC19K0116, GO1-22132X, and GO9-20109X. LDM is supported by the ERC-StG "ClustersXCosmo" grant agreement 716762 and acknowledges financial contribution from the agreement ASI-INAF n.2017-14-H.0. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. GBT data were taken under the project ID AGBT18A_175. We would like to thank the anonymous reviewer for their helpful and valuable comments. Facilities: GBT, _XMM-Newton_. Software: Astropy (Astropy Collaboration et al., 2013; The Astropy Collaboration, 2018), pyproffit (Eckert et al., 2017), emcee (Foreman-Mackey et al., 2013), ESAS (Snowden et al., 2008).
|
2302.01551 | Fundamental solutions of an extended hydrodynamic model in two
dimensions: Derivation, theory, and applications | The inability of the Navier-Stokes-Fourier equations to capture rarefaction
effects motivates us to adopt the extended hydrodynamic equations. In the
present work, a hydrodynamic model, which consists of the conservation laws
closed with the recently propounded coupled constitutive relations (CCR), is
utilized. This model is referred to as the CCR model and is adequate for
describing moderately rarefied gas flows. A numerical framework based on the
method of fundamental solutions is developed to solve the CCR model for
rarefied gas flow problems in quasi two dimensions. To this end, the
fundamental solutions of the linearized CCR model are derived in two
dimensions. The significance of deriving the two-dimensional fundamental
solutions is that they cannot be deduced from their three-dimensional
counterparts that do exist in literature. As applications, the developed
numerical framework based on the derived fundamental solutions is used to
simulate (i) a rarefied gas flow between two coaxial cylinders with evaporating
walls and (ii) a temperature-driven rarefied gas flow between two non-coaxial
cylinders. The results for both problems have been validated against those
obtained with the other classical approaches. Through this, it is shown that
the method of fundamental solutions is an efficient tool for addressing quasi
two-dimensional multiphase microscale gas flow problems at a low computational
cost. Moreover, the findings also show that the CCR model solved with the
method of fundamental solutions is able to describe rarefaction effects, like
transpiration flows and thermal stress, generally well. | Himanshi, Anirudh Singh Rana, Vinay Kumar Gupta | 2023-02-03T05:15:20Z | http://arxiv.org/abs/2302.01551v2 | Fundamental solutions of an extended hydrodynamic model in two dimensions: derivation, theory and applications
###### Abstract
The inability of the Navier-Stokes-Fourier equations to capture rarefaction effects motivates us to adopt the extended hydrodynamic equations. In the present work, a hydrodynamic model comprised of the conservation laws closed with the recently propounded coupled constitutive relations (CCR)--referred to as the CCR model--adequate for describing moderately rarefied gas is utilized. A numerical framework based on the method of fundamental solutions is developed and employed to solve the CCR model in two dimensions. To this end, the fundamental solutions of the linearized CCR model are derived in two dimensions. The significance of deriving the two-dimensional fundamental solutions is that they cannot be deduced from their three-dimensional counterparts that do exist in literature. As applications, the developed numerical framework based on the derived fundamental solutions is used to simulate (i) a rarefied gas flow confined between two coaxial cylinders with evaporating walls and (ii) a temperature-driven rarefied gas flow between two non-coaxial cylinders. The results for both problems have been validated against those obtained with the other classical approaches. Through this, it is shown that the method of fundamental solutions is an efficient tool for addressing two-dimensional multiphase microscale gas flow problems at a low computational cost. Moreover, the findings also show that the CCR model solved with the method of fundamental solutions depicts rarefaction effects, like transpiration flows and thermal stress, generally well.
## I Introduction
The study of rarefied gases covers numerous applications, including flows caused by evaporation and condensation, upper-atmospheric dynamics, the modeling of airborne particles, the reflective and reactive properties of gases interacting with solid and liquid surfaces, and so on. Rarefied gas flows are characterized by a dimensionless parameter, the Knudsen number (Kn), which is the ratio of the mean free path \(\lambda\) of the gas to a characteristic length scale \(L\) in the problem. For very small values of the Knudsen number (Kn \(\lesssim 0.01\)), the classical continuum theories, namely the Euler and Navier-Stokes-Fourier (NSF) equations, are quite effective in capturing rarefaction effects, but they fall short of doing so when the Knudsen number is not very small. The NSF equations fail to capture several non-equilibrium phenomena (like the non-homogeneity of the pressure profile and the unusual temperature dip in Poiseuille flow [1; 2; 3], a heat flux directed opposite to the temperature gradient, or the cross effects where heat flows from a low-temperature region to a high-temperature region [4; 5; 6]); by exploiting the coupling among the thermodynamic forces and fluxes to form a closed system, however, the range of applicability of the resulting equations is enhanced in comparison to the NSF equations. A model that exploits the coupling among the thermodynamic forces and fluxes to yield an improved set of constitutive relations for the stress and heat flux appearing in the conservation laws has recently been propounded by Rana et al. [7]. The constitutive relations for the stress and heat flux obtained in this model are coupled through a coupling coefficient; hence they are referred to as the coupled constitutive relations (CCR), and the model is accordingly referred to as the CCR model. In the linearized steady state, the CCR model reduces to the linearized Grad 13-moment (G13) equations [8] as a special case, and on taking the coupling coefficient as zero, the CCR model reduces to the original NSF equations. Owing to its simplicity and viable features, the CCR model has been applied successfully to some problems pertaining to rarefied gas flows [9; 10]. Although there exist other models, such as the regularized 13-moment (R13) equations [11; 12] and the regularized 26-moment (R26) equations [13], that can describe rarefied gas flows somewhat more accurately than the CCR model, especially for flows at moderate Knudsen numbers, we shall use the CCR model in this work due to its simplicity.
In this paper, we shall focus our attention to exploring rarefied gas flow problems in two dimensions (2D). The reason for this is twofold: firstly, for a symmetric uniform flow in three dimensions (3D), it is sufficient to study
the problem in 2D, thanks to the symmetry of the problem, and secondly, there are some intriguing problems in 2D that do not arise in 3D, such as Stokes' paradox [14], which asserts the non-existence of a steady-state solution to Stokes' equations in 2D. Furthermore, we shall investigate the problems numerically using a truly meshless numerical technique introduced by Kupradze and Aleksidze [15], known as the method of fundamental solutions (MFS).
The MFS is a boundary-type meshfree approach in which an approximate solution of a (linear) boundary value problem is expressed as a linear combination of singular functions, referred to as the fundamental solutions, and the boundary conditions are satisfied at several locations on the boundary, referred to as the boundary nodes or collocation points, in order to determine the unknown coefficients in the linear combination. Apart from being time-efficient due to the reduced spatial dimension of boundary discretization, its freedom from integral evaluations distinguishes the MFS from other meshfree methods (such as the boundary element method [16], the finite point method [17], the diffuse element method [18] and the element-free Galerkin method [19]) that involve complex integrals. The MFS has proven to be an efficient numerical scheme in various areas, such as thermoelasticity, electromagnetics, electrostatics, wave scattering, inverse problems and fluid flow problems; see, e.g., Refs. [20; 21; 22; 23; 24; 25]. Moreover, the MFS is also suitable for the analysis of problems involving shape optimization, moving boundaries and unknown boundaries [26; 27; 28; 29; 24], since modeling and satisfying the boundary conditions are relatively simple for such problems. All these advantages make the application of the MFS to the CCR model evidently favorable.
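To fix ideas before specializing to the CCR model, the entire MFS machinery can be demonstrated on the 2D Laplace equation, whose fundamental solution is \(-\ln r/(2\pi)\). The toy below places sources on a circle outside the unit disk, collocates a harmonic boundary datum and solves for the coefficients by least squares; it is generic MFS, not the CCR solver developed later in this paper.

```python
import numpy as np

n_col, n_src, r_src = 64, 64, 2.0
t = np.linspace(0.0, 2.0 * np.pi, n_col, endpoint=False)
xb = np.c_[np.cos(t), np.sin(t)]            # collocation points on unit circle
s = np.linspace(0.0, 2.0 * np.pi, n_src, endpoint=False)
xs = r_src * np.c_[np.cos(s), np.sin(s)]    # singularities outside the domain

def G(x, y):
    """Fundamental solution of the 2D Laplace equation."""
    return -np.log(np.linalg.norm(x - y)) / (2.0 * np.pi)

# Collocation matrix; typically ill-conditioned, so lstsq is used rather than solve
A = np.array([[G(xb[i], xs[j]) for j in range(n_src)] for i in range(n_col)])
g = xb[:, 0] * xb[:, 1]                     # boundary data u = x*y (harmonic)
coef, *_ = np.linalg.lstsq(A, g, rcond=None)

x_test = np.array([0.3, 0.4])               # interior check; exact u = 0.12
u = sum(c * G(x_test, xs[j]) for j, c in enumerate(coef))
print(f"MFS u ~ {u:.6f}, exact u = {0.3 * 0.4:.6f}")
```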
Several researchers have employed the MFS to solve the Helmholtz-, harmonic- and biharmonic-type boundary value problems in 2D as well as in 3D, see e.g. Refs. [30; 31]. The MFS works as a good numerical strategy if the fundamental solutions to the problem are predefined. In the past few years, there has been a surge of interest in employing the MFS to various models for rarefied gas flows, for instance to the NSF, G13, R13 and CCR models [25; 23; 9], because the predefined fundamental solutions of the well-known equations, such as the Laplace, Helmholtz and biharmonic equations, can be exploited to determine the fundamental solutions for the NSF, G13, R13 and CCR models. Nevertheless, all the works on the MFS for rarefied gas flows have investigated the problems in 3D only. It is, however, important to note that the two-dimensional fundamental solutions for a model cannot be deduced directly from its three-dimensional counterpart due to the fact that the associated Green's functions are entirely different in 2D and 3D. Therefore the main objectives of the paper are (i) to determine the fundamental solutions of the linearized CCR model in 2D and (ii) to implement the determined fundamental solutions in a numerical framework. To gauge the accuracy of the developed numerical framework, the obtained numerical results are also validated against those obtained with other models for a few problems existing in the literature. We pick two internal-flow problems from Refs. [33; 34] that have been investigated in these references with the linearized Bhatnagar-Gross-Krook (BGK) model [35] [also referred to as the Boltzmann-Krook-Welander (BKW) kinetic model by some authors [33; 34; 35]]. In the first problem, the evaporation and condensation of a mildly rarefied vapor confined between two coaxial cylinders is studied while in the second problem, a temperature-driven rarefied gas flow between two non-coaxial cylinders is investigated.
A well-known shortcoming of the MFS is that it yields results that are highly sensitive to the location of the singularity points (also referred to as the source points), and that high accuracy is accompanied by an ill-conditioned collocation matrix [37; 38; 29; 39]. Therefore, we also investigate the optimal location of singularities using an approach based on the effective condition number, as discussed in Refs. [40; 41; 42].
The remainder of the paper is structured as follows. The linearized CCR model and the generalized boundary conditions associated with it are outlined in Sec. II. The fundamental solutions for the CCR model in 2D are determined in Sec. III. The technique to apply the MFS by forming a system of equations for any arbitrary geometry is discussed in Sec. IV. The implementation of the MFS along with its validation (i) for the problem of a vapor flow between two coaxial cylinders is demonstrated in Sec. V and (ii) for the problem of a temperature-driven rarefied gas flow between two non-coaxial cylinders is discussed in Sec. VI. The location of singularities based on the effective condition number approach is examined in Sec. VII. The paper ends with conclusions and outlook in Sec. VIII.
## II The linearized CCR model and boundary conditions
The CCR model consists of the conservation laws--the balance equations for the mass, momentum and energy--closed with the constitutive relations for the stress and heat flux, which are coupled with each other through a coupling coefficient. The full details of the CCR model can be found in Ref. [7]. In this work, we require them in the linearized form. To this end, we convert the mass, momentum and energy balance equations and the coupled constitutive relations into a linear-dimensionless form by assuming small perturbations in flow fields from their respective equilibrium values. The velocity, stress and heat flux in the equilibrium state vanish whereas the density and temperature in the equilibrium state are constants \(\tilde{\rho}_{0}\) and \(\tilde{T}_{0}\), respectively. The dimensionless perturbations in the density \(\tilde{\rho}\) and temperature \(\tilde{T}\) from their values in the equilibrium are given by
\[\rho=\frac{\tilde{\rho}-\tilde{\rho}_{0}}{\tilde{\rho}_{0}}\quad\text{and} \quad T=\frac{\tilde{T}-\tilde{T}_{0}}{\tilde{T}_{0}}, \tag{1}\]
respectively. Similarly, the dimensionless perturbations in the velocity \(\tilde{\mathbf{v}}\), stress tensor \(\tilde{\mathbf{\sigma}}\) and heat flux \(\tilde{\mathbf{q}}\) from their values in the equilibrium are given by
\[\mathbf{v}=\frac{\tilde{\mathbf{v}}}{\sqrt{\tilde{\theta}_{0}}},\quad\mathbf{\sigma}=\frac{ \tilde{\mathbf{\sigma}}}{\tilde{\rho}_{0}\tilde{\theta}_{0}}\quad\text{and}\quad\bm {q}=\frac{\tilde{\mathbf{q}}}{\tilde{\rho}_{0}(\tilde{\theta}_{0})^{3/2}}, \tag{2}\]
respectively, where \(\tilde{\theta}_{0}=\tilde{R}\tilde{T}_{0}\) with \(\tilde{R}\) being the gas constant. The linearized equation of state \(p\approx\rho+T\) gives the dimensionless perturbation in the pressure from its equilibrium value \(\tilde{p}_{0}=\tilde{\rho}_{0}\tilde{\theta}_{0}\). For the sake of simplicity, the field variables with tilde are the quantities with dimensions while those without tilde are the dimensionless quantities throughout the paper.
Considering \(\tilde{L}\) to be the characteristic length scale, the dimensionless position vector is \(\mathbf{r}=\tilde{\mathbf{r}}/\tilde{L}\). Inserting these dimensionless variables into the CCR model [7] and dropping all nonlinear terms in the perturbed variables, one readily obtains the linear-dimensionless CCR model. Here, we present them directly. The linear-dimensionless mass, momentum and energy balance equations in the steady state read
\[\mathbf{\nabla}\cdot\mathbf{v} =0, \tag{3}\] \[\mathbf{\nabla}p+\mathbf{\nabla}\cdot\mathbf{\sigma} =\mathbf{0},\] (4) \[\mathbf{\nabla}\cdot\mathbf{q} =0, \tag{5}\]
and, to close the system (3)-(5), the linearized coupled constitutive relations [7]
\[\mathbf{\sigma} =-2\text{Kn}\overline{\mathbf{\nabla}\mathbf{v}}-2\alpha_{0}\text{Kn} \overline{\mathbf{\nabla}\mathbf{q}}, \tag{6}\] \[\mathbf{q} =-\frac{c_{p}\text{Kn}}{\text{Pr}}(\mathbf{\nabla}T+\alpha_{0}\mathbf{ \nabla}\cdot\mathbf{\sigma}), \tag{7}\]
where \(\alpha_{0}\) is the coupling coefficient through which constitutive relations (6) and (7) are coupled; \(c_{p}=\tilde{c}_{p}/\tilde{R}\) with \(\tilde{c}_{p}\) being specific heat capacity of the gas at a constant pressure; and
\[\text{Pr}=\frac{5\tilde{R}\tilde{\mu}_{0}}{2\tilde{\kappa}_{0}}\quad\text{and }\quad\text{Kn}=\frac{\tilde{\mu}_{0}}{\tilde{\rho}_{0}\sqrt{\tilde{\theta}_ {0}}\tilde{L}} \tag{8}\]
are the Prandtl number and Knudsen number, respectively with \(\tilde{\mu}_{0}\) and \(\tilde{\kappa}_{0}\) being the viscosity and thermal conductivity at the equilibrium state. The quantities \(\overline{\mathbf{\nabla}\mathbf{v}}\) and \(\overline{\mathbf{\nabla}\mathbf{q}}\) in (6) are the symmetric-tracefree parts of the tensors \(\mathbf{\nabla}\mathbf{v}\) and \(\mathbf{\nabla}\mathbf{q}\), respectively. For a vector \(\mathbf{\psi}\), the symmetric-tracefree part of the tensor \(\mathbf{\nabla}\mathbf{\psi}\) is defined as [43]
\[\overline{\mathbf{\nabla}\mathbf{\psi}}=\frac{1}{2}\Big{[}\mathbf{\nabla}\mathbf{\psi}+(\mathbf{ \nabla}\mathbf{\psi})^{\mathsf{T}}\Big{]}-\frac{1}{d}(\mathbf{\nabla}\cdot\mathbf{\psi}) \mathbf{I}, \tag{9}\]
where \(d\) is the dimension and \(\mathbf{I}\) is the identity tensor in \(d\) dimensions. Since we shall be dealing with problems in 2D, \(d=2\) is fixed throughout the paper. Furthermore, the dimensionless specific heat of a gas at constant pressure is \(c_{p}=(5+\mathfrak{n})/2\), where \(\mathfrak{n}\) is a positive integer that accounts for the rotational degrees of freedom in a polyatomic gas. For monatomic gases, there are no rotational degrees of freedom; consequently, \(\mathfrak{n}=0\) and \(c_{p}=5/2\) for monatomic gases. We shall deal only with monatomic gases in this paper, and hence \(c_{p}=5/2\) throughout. Equations (3)-(5) closed with (6) and (7) are referred to as the linearized CCR model. For \(\alpha_{0}=0\), the linearized CCR model reduces to the linearized NSF equations, and for \(\alpha_{0}=2/5\), it reduces to the linearized G13 equations. Since we shall be comparing the results obtained in the present work with those obtained with the BGK model, for which the Prandtl number is unity [12], \(\text{Pr}=1\) throughout this paper. Also, the parameter \(\alpha_{0}\) is taken as \(0.3197\), its value for hard-sphere molecules [7], throughout this paper.
The thermodynamically-consistent boundary conditions complementing the linear CCR model have been derived in Ref. [9]. For a problem in 3D, the boundary conditions complementing the linear CCR model are given in Eqs. (4.2\(a\)), (4.2\(b\)), (4.3\(a\)) and (4.3\(b\)) of Ref. [9]. Eqs. (4.2\(a\)) and (4.2\(b\)) of Ref. [9] are the boundary conditions on the normal components of the mass and heat fluxes, respectively, while Eqs. (4.3\(a\)) and (4.3\(b\)) of Ref. [9] are the boundary conditions on the shear stress--two conditions due to two tangential directions in 3D. Since in 2D, there will be only one tangential direction, boundary condition (4.3\(b\)) of Ref. [9] is irrelevant in the present work and the superscript '(1)' can be dropped from the unit tangent vector \(\mathbf{t}^{(1)}\) in (4.3\(a\)) of Ref. [9] for simplicity. Thus, the linear-dimensionless boundary conditions for the linearized CCR model in 2D read [9]
\[(\mathbf{v}-\mathbf{v}^{I})\cdot\mathbf{n}= -\eta_{11}(p-p_{\text{sat}}+\mathbf{n}\cdot\mathbf{\sigma}\cdot\mathbf{n})\] \[+\eta_{12}(T-T^{I}+\alpha_{0}\mathbf{n}\cdot\mathbf{\sigma}\cdot\mathbf{n}), \tag{10}\]
\[\mathbf{q}\cdot\mathbf{n}= \,\eta_{12}(p-p_{\text{sat}}+\mathbf{n}\cdot\mathbf{\sigma}\cdot\mathbf{n})\] \[-(\eta_{22}+2\tau_{0})(T-T^{I}+\alpha_{0}\mathbf{n}\cdot\mathbf{\sigma} \cdot\mathbf{n}), \tag{11}\]
\[\mathbf{t}\cdot\mathbf{\sigma}\cdot\mathbf{n}= -\varsigma(\mathbf{v}-\mathbf{v}^{I}+\alpha_{0}\mathbf{q})\cdot\mathbf{t}, \tag{12}\]
where \(\mathbf{n}\) and \(\mathbf{t}\) are the unit normal and tangent vectors, respectively. In boundary conditions (10)-(12), \(\eta_{ij}\)'s, for \(i,j\in\{1,2\}\) are the Onsager reciprocity coefficients, which from Sone's asymptotic kinetic theory [36] turn out to be
\[\eta_{11} =0.9134\sqrt{\frac{2}{\pi}}\frac{\vartheta}{2-\vartheta}, \tag{13}\] \[\eta_{12} =0.3915\sqrt{\frac{2}{\pi}}\frac{\vartheta}{2-\vartheta},\] \[\eta_{22} =0.1678\sqrt{\frac{2}{\pi}}\frac{\vartheta}{2-\vartheta}\]
under the assumption of the accommodation coefficient being unity (which also holds true for the diffuse reflection boundary condition). The parameter \(\vartheta\) in the above coefficients is the evaporation/condensation coefficient. For canonical boundaries and phase-change boundaries,
\(\vartheta=0\) and \(1\), respectively, are the widely accepted values of \(\vartheta\) in the literature. The temperature-jump and velocity-slip coefficients are given by [9]
\[\tau_{0}=0.8503\sqrt{\frac{2}{\pi}}\quad\text{and}\quad\varsigma=0.8798\sqrt{\frac {2}{\pi}}, \tag{14}\]
respectively. Furthermore, \(\mathbf{v}^{I}\), \(T^{I}\) and \(p_{\text{sat}}\) in boundary conditions (10)-(12) represent the velocity, temperature and saturation pressure at the interface. It is important to note that the coefficients \(\alpha_{0}\) in boundary conditions (10)-(12) are actually fitting parameters and could, in principle, differ from the coupling coefficient \(\alpha_{0}\). Moreover, the coefficient \(\alpha_{0}\) in each of boundary conditions (10)-(12) could also differ from one boundary condition to another. The coefficients \(\alpha_{0}\) in boundary conditions (10)-(12) have been taken to be the same as the coupling coefficient in the CCR model only because the boundary conditions obtained in this way are thermodynamically consistent [7].
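For concreteness, the numerical values of these coefficients are easily tabulated. The following small Python sketch (our illustration, not part of the original implementation) evaluates Eqs. (13) and (14) for a given \(\vartheta\):

```python
import numpy as np

def onsager_coefficients(vartheta):
    # Eq. (13), assuming unit accommodation coefficient; vartheta is the
    # evaporation/condensation coefficient (1 for phase-change interfaces,
    # 0 for canonical, non-evaporating walls).
    c = np.sqrt(2/np.pi) * vartheta / (2 - vartheta)
    return 0.9134*c, 0.3915*c, 0.1678*c  # eta_11, eta_12, eta_22

# Temperature-jump and velocity-slip coefficients, Eq. (14)
tau0 = 0.8503 * np.sqrt(2/np.pi)
varsigma = 0.8798 * np.sqrt(2/np.pi)

print(onsager_coefficients(1.0), tau0, varsigma)
```

Note that all \(\eta_{ij}\) vanish for \(\vartheta=0\), which is precisely what reduces boundary conditions (10)-(12) to their non-evaporating form used later in Sec. VI.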
## III Derivation of the fundamental solutions of the CCR model
The fundamental solutions of the CCR model in 3D have already been derived in Ref. [9]. However, as mentioned in Sec. I, the fundamental solutions of a model in 2D and 3D are independent of each other because the underlying Green's functions are independent in 2D and 3D; consequently, the fundamental solution of a model in 2D cannot, in general, be determined from its 3D counterpart. Therefore, we derive the fundamental solutions of the CCR model in 2D from scratch in this section. To this end, we add a Dirac delta forcing term of strength \(\mathbf{f}\) on the right-hand side of the momentum balance equation to represent a point force, and a point heat source of strength \(g\) on the right-hand side of the energy balance equation. Furthermore, to deal with phase-change effects at the liquid-vapor interface, a point mass source of strength \(h\) is also added on the right-hand side of the mass balance equation. For determining the fundamental solutions of a system of partial differential equations, it is customary to consider one point source at a time and then superimpose the resulting solutions to incorporate the effects of all point sources; see, e.g. Refs. [9; 25]. Nevertheless, we take all three point sources \(\mathbf{f}\), \(g\) and \(h\) simultaneously and solve the resulting system of equations altogether. We have verified--although this is not shown here for brevity--that this procedure yields exactly the same solution as that obtained by superimposing the solutions of the systems solved separately with one point source at a time. It is easiest to keep track of the derivation of the fundamental solutions of the CCR equations in indicial notation. Therefore, we shall derive the fundamental solutions first in indicial notation and then express them in vectorial/tensorial notation. To that end, let us first write down the linearized CCR model (with the point source terms) in indicial notation. The mass, momentum and energy balance equations (3)-(5)--with the point source terms--read
\[\frac{\partial v_{i}}{\partial x_{i}} =h\,\delta(\mathbf{r}), \tag{15}\] \[\frac{\partial p}{\partial x_{i}}+\frac{\partial\sigma_{ij}}{ \partial x_{j}} =f_{i}\,\delta(\mathbf{r}),\] (16) \[\frac{\partial q_{i}}{\partial x_{i}} =g\,\delta(\mathbf{r}), \tag{17}\]
where the Einstein summation is assumed over repeated indices in a term. The CCR closure (Eqs. (6) and (7)) in the indicial notation reads
\[\sigma_{ij}= -\text{Kn}\left(\frac{\partial v_{i}}{\partial x_{j}}+\frac{ \partial v_{j}}{\partial x_{i}}-\frac{\partial v_{k}}{\partial x_{k}}\delta_{ ij}\right)\] \[-\alpha_{0}\text{Kn}\left(\frac{\partial q_{i}}{\partial x_{j}}+ \frac{\partial q_{j}}{\partial x_{i}}-\frac{\partial q_{k}}{\partial x_{k}} \delta_{ij}\right), \tag{18}\]
\[q_{i}= -\frac{c_{p}\text{Kn}}{\text{Pr}}\left(\frac{\partial T}{\partial x_{i }}+\alpha_{0}\frac{\partial\sigma_{ij}}{\partial x_{j}}\right). \tag{19}\]
Note that \(\delta_{ij}\) in Eq. (18) is the Kronecker delta and that \(d=2\) has been substituted while writing Eq. (18). We solve system (15)-(19) using the Fourier transform, which for a function \(F(\mathbf{r})\) is defined as
\[\mathcal{F}\big{(}F(\mathbf{r})\big{)}=\hat{F}(\mathbf{\omega})=\int_{\mathbb{R}^{2}}F (\mathbf{r})\,\text{e}^{\text{i}\;\mathbf{\omega}\cdot\mathbf{r}}\,\text{d}\mathbf{r} \tag{20}\]
and the corresponding inverse Fourier transformation is defined as
\[\mathcal{F}^{-1}\big{(}\hat{F}(\mathbf{\omega})\big{)}=F(\mathbf{r})=\frac{1}{(2\pi)^{ 2}}\!\int_{\mathbb{R}^{2}}\!\hat{F}(\mathbf{\omega})\,\text{e}^{-\text{i}\;\mathbf{ \omega}\cdot\mathbf{r}}\,\text{d}\mathbf{\omega}. \tag{21}\]
Applying the Fourier transformation in Eqs. (15)-(19) and using the fact that \(\mathcal{F}[\delta(\mathbf{r})]=1\), we obtain
\[\omega_{i}\hat{v}_{i} =\text{i}\,h, \tag{22}\] \[\omega_{i}\hat{p}+\omega_{j}\hat{\sigma}_{ij} =\text{i}\,f_{i},\] (23) \[\omega_{i}\hat{q}_{i} =\text{i}\,g, \tag{24}\]
\[\hat{\sigma}_{ij}= \text{i}\,\text{Kn}\big{[}\omega_{j}(\hat{v}_{i}+\alpha_{0}\hat {q}_{i})+\omega_{i}(\hat{v}_{j}+\alpha_{0}\hat{q}_{j})\] \[-\omega_{k}(\hat{v}_{k}+\alpha_{0}\hat{q}_{k})\delta_{ij}\big{]}, \tag{25}\] \[\hat{q}_{i}= \text{i}\,\frac{c_{p}\text{Kn}}{\text{Pr}}\left(\omega_{i}\hat {T}+\alpha_{0}\omega_{j}\hat{\sigma}_{ij}\right). \tag{26}\]
Using Eqs. (22) and (24), Eq. (25) simplifies to
\[\hat{\sigma}_{ij}= \text{i}\,\text{Kn}\big{[}\omega_{j}(\hat{v}_{i}+\alpha_{0} \hat{q}_{i})+\omega_{i}(\hat{v}_{j}+\alpha_{0}\hat{q}_{j})\big{]}\] \[+\text{Kn}(h+\alpha_{0}g)\delta_{ij}. \tag{27}\]
Multiplying the above equation with \(\omega_{j}\) and \(\omega_{i}\omega_{j}\), we obtain
\[\omega_{j}\hat{\sigma}_{ij}= \,\text{i}\,\text{Kn}[\omega^{2}(\hat{v}_{i}+\alpha_{0}\hat{q}_{i })], \tag{28}\] \[\omega_{i}\omega_{j}\hat{\sigma}_{ij}= -\text{Kn}\,\omega^{2}(h+\alpha_{0}g), \tag{29}\]
respectively, where \(\omega_{i}\omega_{i}=|\omega_{i}|^{2}=\omega^{2}\). Multiplying Eq. (26) with \(\omega_{i}\) and exploiting Eqs. (24) and (29), we obtain
\[\hat{T}=\frac{g\Pr}{\omega^{2}c_{p}\mathrm{Kn}}+\alpha_{0}\mathrm{Kn}(h+\alpha _{0}g). \tag{30}\]
Again, multiplying Eq. (23) with \(\omega_{i}\) and exploiting Eq. (29), we obtain
\[\hat{p}=\mathrm{i}\frac{f_{i}\omega_{i}}{\omega^{2}}+\mathrm{Kn}(h+\alpha_{0}g). \tag{31}\]
Now, from Eqs. (23) and (31), one can easily write
\[\omega_{j}\hat{\sigma}_{ij}=\mathrm{i}\,f_{i}-\mathrm{i}\omega_{i}\frac{f_{k} \omega_{k}}{\omega^{2}}-\omega_{i}\mathrm{Kn}(h+\alpha_{0}g). \tag{32}\]
Substituting the value of \(\hat{T}\) from Eq. (30) and the value of \(\omega_{j}\hat{\sigma}_{ij}\) from Eq. (32) into Eq. (26), we obtain
\[\hat{q}_{i}=\mathrm{i}g\frac{\omega_{i}}{\omega^{2}}-\alpha_{0}\frac{c_{p} \mathrm{Kn}}{\Pr}f_{k}\left(\delta_{ik}-\frac{\omega_{i}\omega_{k}}{\omega^{2 }}\right). \tag{33}\]
Now, from Eqs. (28), (32) and (33),
\[\hat{v}_{i}= \frac{f_{k}}{\mathrm{Kn}}\left(\frac{\delta_{ik}}{\omega^{2}}- \frac{\omega_{i}\omega_{k}}{\omega^{4}}\right)\] \[+\alpha_{0}^{2}\frac{c_{p}\mathrm{Kn}}{\Pr}f_{k}\left(\delta_{ ik}-\frac{\omega_{i}\omega_{k}}{\omega^{2}}\right)+\mathrm{i}h\frac{\omega_{i}}{ \omega^{2}}. \tag{34}\]
Finally, using Eqs. (34) and (33) in (25), we obtain
\[\hat{\sigma}_{ij}= \mathrm{i}f_{k}\left(\frac{\omega_{j}\delta_{ik}+\omega_{i}\delta_{jk}}{\omega^{2}}-2\frac{\omega_{i}\omega_{j}\omega_{k}}{\omega^{4}}\right)\] \[-\mathrm{Kn}\left(\frac{\omega_{i}\omega_{j}}{\omega^{2}}-\frac{\delta_{ij}}{2}\right)(h+\alpha_{0}g). \tag{35}\]
Applying the inverse Fourier transform in (30), (31) and (33)-(35) with the help of formulae derived in Appendix A, the field variables turn out to be
\[v_{i}= \frac{f_{k}}{\mathrm{Kn}}\left(\frac{x_{i}x_{k}}{4\pi r^{2}}- \frac{2\ln r-1}{8\pi}\delta_{ik}\right)\] \[+\alpha_{0}^{2}\frac{c_{p}\mathrm{Kn}}{2\pi\mathrm{Pr}}f_{k} \left(\frac{2x_{i}x_{k}}{r^{4}}-\frac{\delta_{ik}}{r^{2}}\right)+\frac{hx_{i}} {2\pi r^{2}}, \tag{36}\] \[q_{i}= \frac{gx_{i}}{2\pi r^{2}}-\alpha_{0}\frac{c_{p}\mathrm{Kn}}{2\pi \mathrm{Pr}}f_{k}\left(\frac{2x_{i}x_{k}}{r^{4}}-\frac{\delta_{ik}}{r^{2}} \right),\] (37) \[p= \frac{f_{i}x_{i}}{2\pi r^{2}},\] (38) \[T= -\frac{g\Pr\ln r}{2\pi\mathrm{Kn}\,c_{p}},\] (39) \[\sigma_{ij}= \frac{f_{k}x_{k}+2\mathrm{Kn}(h+\alpha_{0}g)}{2\pi}\left(\frac{2 x_{i}x_{j}}{r^{4}}-\frac{\delta_{ij}}{r^{2}}\right), \tag{40}\]
where \(r=|x_{i}|\). The field variables in (36)-(40) are the fundamental solutions of the linearized CCR model in 2D. These fundamental solutions in the vectorial/tensorial notation are written as
\[\mathbf{v}(\mathbf{r}) = \frac{1}{8\pi\mathrm{Kn}}\mathbf{f}\cdot\mathbf{J}(\mathbf{r})+\frac{h\,\mathbf{r }}{2\pi r^{2}}+\frac{c_{p}\mathrm{Kn}}{2\pi\mathrm{Pr}}\alpha_{0}^{2}\mathbf{f} \cdot\mathbf{K}(\mathbf{r}), \tag{41}\] \[p(\mathbf{r}) = \frac{\mathbf{f}\cdot\mathbf{r}}{2\pi r^{2}},\] (42) \[\mathbf{\sigma}(\mathbf{r}) = \frac{2\mathrm{Kn}(h+g\alpha_{0})+\mathbf{f}\cdot\mathbf{r}}{2\pi}\mathbf{K} (\mathbf{r}),\] (43) \[T(\mathbf{r}) = -\frac{\Pr g}{2\pi\mathrm{Kn}\,c_{p}}\ln r,\] (44) \[\mathbf{q}(\mathbf{r}) = \frac{g}{2\pi}\frac{\mathbf{r}}{r^{2}}-\frac{c_{p}\mathrm{Kn}}{2\pi \mathrm{Pr}}\alpha_{0}\mathbf{f}\cdot\mathbf{K}(\mathbf{r}), \tag{45}\]
where \(r=|\mathbf{r}|\) and
\[\mathbf{J}(\mathbf{r}) = \frac{2\mathbf{r}\mathbf{r}}{r^{2}}-(2\ln r-1)\mathbf{I}, \tag{46}\] \[\mathbf{K}(\mathbf{r}) = \frac{2\mathbf{r}\mathbf{r}}{r^{4}}-\frac{\mathbf{I}}{r^{2}}. \tag{47}\]
It is worthwhile noticing that the fundamental solutions for the linearized NSF and G13 equations in 2D can be obtained directly from Eqs. (41)-(45) by taking \(\alpha_{0}=0\) and \(\alpha_{0}=2/5\), respectively. The fundamental solutions (41)-(45) need to be implemented with appropriate boundary conditions for a given problem. We shall discuss their implementation and validation in Sec. V and Sec. VI for two different problems.
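As an independent consistency check (a sketch in SymPy under our own variable naming; this is not part of the paper's implementation), one can verify symbolically that the fundamental solutions (41)-(43) and (45) satisfy the homogeneous balance equations (3)-(5) away from the singular point \(r=0\):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
Kn, Pr, cp, a0, f1, f2, g, h = sp.symbols('Kn Pr c_p alpha_0 f1 f2 g h', real=True)

r2 = x**2 + y**2
lnr = sp.log(r2) / 2                          # ln r with r = sqrt(x^2 + y^2)
X = sp.Matrix([x, y])
I2 = sp.eye(2)
J = 2*X*X.T/r2 - (2*lnr - 1)*I2               # Eq. (46)
K = 2*X*X.T/r2**2 - I2/r2                     # Eq. (47)
f = sp.Matrix([f1, f2])

# Fundamental solutions (41)-(43) and (45); J and K are symmetric matrices.
v = (J*f)/(8*sp.pi*Kn) + h*X/(2*sp.pi*r2) + cp*Kn*a0**2/(2*sp.pi*Pr)*(K*f)
p = f.dot(X)/(2*sp.pi*r2)
sig = (2*Kn*(h + a0*g) + f.dot(X))/(2*sp.pi)*K
q = g*X/(2*sp.pi*r2) - cp*Kn*a0/(2*sp.pi*Pr)*(K*f)

div = lambda w: sp.diff(w[0], x) + sp.diff(w[1], y)
mass = sp.simplify(div(v))                                              # Eq. (3)
mom = sp.simplify(sp.Matrix([sp.diff(p, x), sp.diff(p, y)])
                  + sp.Matrix([div(sig.row(0).T), div(sig.row(1).T)]))  # Eq. (4)
energy = sp.simplify(div(q))                                            # Eq. (5)
print(mass, mom.T, energy)  # all identically zero for r != 0
```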
## IV Boundary discretization
We describe the construction of a system of algebraic equations for applying the MFS through the problem of flow past a complex geometry as depicted in Fig. 1. As an example, the geometry of the object in Fig. 1 is mathematically defined in the parametric form as
\[(x,y)=\left(\frac{5}{4}a\cos\theta,\frac{1}{4}a(5-\cos 5\theta)\sin\theta\right) \tag{48}\]
with \(0\leq\theta\leq 2\pi\) and \(a\leq 1\) being the dilation factor. In the MFS, it is quite natural to place the singularity points outside the flow domain [15]. Nevertheless, the location of the singularity points is a major concern, as the results obtained from the MFS are highly sensitive to the location of singularities [37; 29; 39]. There are two common ways of distributing singularities in the MFS. One way is to place the singularities on a fictitious boundary of a very simple shape--irrespective of the shape of the object--with just one parameter to control; for example, on a circle in the two-dimensional case or on a sphere in the three-dimensional case, with the radius of the circle or sphere as the controlling parameter. Another way is to create a dilated (or shrunk) fictitious boundary of the same shape as the boundary of the original object and to place the singularities on this fictitious boundary [44; 39], similarly to the arrangement shown in Fig. 1. The latter is also easy if the original boundary of the object can be described by a set of parametric equations having only a single controlling parameter, the dilation factor. For the problem depicted in Fig. 1, we have taken the fictitious boundary to be of the same shape as the original boundary.
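To make the construction concrete, the following minimal Python sketch (node counts and the dilation value of the fictitious boundary are illustrative assumptions) samples boundary nodes on the curve (48) and singularity points on a shrunk copy of it lying inside the object, i.e. outside the exterior flow domain:

```python
import numpy as np

def boundary(theta, a):
    # Parametric boundary of Eq. (48); a is the dilation factor.
    return np.stack([1.25*a*np.cos(theta),
                     0.25*a*(5 - np.cos(5*theta))*np.sin(theta)], axis=-1)

B, S = 100, 100                                    # assumed numbers of nodes/sources
th_b = np.linspace(0, 2*np.pi, B, endpoint=False)  # equispaced angles
th_s = np.linspace(0, 2*np.pi, S, endpoint=False)
nodes = boundary(th_b, a=1.0)      # boundary nodes on the actual boundary
sources = boundary(th_s, a=0.8)    # singularities on the shrunk fictitious boundary
```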
Let \(B\) be the number of the discretized boundary nodes and \(S\) the number of singularity points. The boundary nodes and the singularities are placed at equispaced angles \(\theta\) on the original and the fictitious boundary, respectively, and the distance between the two boundaries can be varied by changing the value of the dilation factor \(a\). It may be noted that the singularities need not be placed at equispaced angles in principle; nonetheless, we have done so for the sake of simplicity. Let \(\mathbf{x}_{i}^{s}\) and \(\mathbf{x}_{j}^{b}\) be the position vectors of the \(i^{\rm th}\) singularity site and the \(j^{\rm th}\) boundary node, respectively; then the position vector from the \(i^{\rm th}\) singularity site to any position \(\mathbf{x}\) in the domain is \(\mathbf{r}_{i}=\mathbf{x}-\mathbf{x}_{i}^{s}\) and the position vector from the \(i^{\rm th}\) singularity site to the \(j^{\rm th}\) boundary node is \(\mathbf{r}_{ij}=\mathbf{x}_{j}^{b}-\mathbf{x}_{i}^{s}\). It is important to note that the subscripts '\(i\)' and '\(j\)' are now being used for denoting the \(i^{\rm th}\) singularity site and the \(j^{\rm th}\) boundary node; consequently, the repetition of indices henceforth shall _not_ imply the Einstein summation, unless stated otherwise (in particular, in Appendix A the Einstein summation does hold over repeated indices). Since the point sources \(\mathbf{f}\), \(g\) and \(h\)--of different strengths--are to be placed at each singularity site, there are four degrees of freedom corresponding to each singularity point [two scalars \(g\) and \(h\) from the point heat and mass sources, and two components \(f_{1}\) and \(f_{2}\) of the point force vector \(\mathbf{f}=(f_{1},f_{2})^{\rm T}\)]. In total, we have \(4\times S\) unknowns, which are typically determined by satisfying the boundary conditions at the boundary nodes. Once the location of the singularity points is decided, the next step in the implementation of the MFS is the superposition of the fundamental solutions associated with each singularity site, which is justified by the linearity of the equations and gives the values of the field variables at the \(j^{\rm th}\) boundary node. Superimposing the fundamental solutions (41)-(45) for each singularity site, the field variables at the \(j^{\rm th}\) boundary node read
\[\mathbf{v}_{j} =\sum_{i=1}^{S}\bigg{[}\frac{\mathbf{f}_{i}\cdot\mathbf{J}(\mathbf{r}_{ij})}{ 8\pi{\rm Kn}}+\frac{h_{i}}{2\pi}\frac{\mathbf{r}_{ij}}{r_{ij}^{2}}+\frac{c_{p}{ \rm Kn}}{2\pi{\rm Pr}}\alpha_{0}^{2}\mathbf{f}_{i}\cdot\mathbf{K}(\mathbf{r}_{ij})\bigg{]}, \tag{49}\] \[p_{j} =\sum_{i=1}^{S}\frac{\mathbf{f}_{i}\cdot\mathbf{r}_{ij}}{2\pi r_{ij}^{2}},\] (50) \[\mathbf{\sigma}_{j} =\sum_{i=1}^{S}\frac{2{\rm Kn}\left(h_{i}+g_{i}\,\alpha_{0} \right)+\mathbf{f}_{i}\cdot\mathbf{r}_{ij}}{2\pi}\mathbf{K}(\mathbf{r}_{ij}),\] (51) \[T_{j} =-\sum_{i=1}^{S}\frac{g_{i}\,{\rm Pr}}{c_{p}{\rm Kn}}\frac{\ln r_ {ij}}{2\pi},\] (52) \[\mathbf{q}_{j} =\sum_{i=1}^{S}\bigg{[}\frac{g_{i}}{2\pi}\frac{\mathbf{r}_{ij}}{r_{ij }^{2}}-\frac{c_{p}{\rm Kn}}{2\pi{\rm Pr}}\alpha_{0}\mathbf{f}_{i}\cdot\mathbf{K}(\mathbf{ r}_{ij})\bigg{]}, \tag{53}\]
where \(r_{ij}=|\mathbf{r}_{ij}|\); \(\mathbf{f}_{i}=(f_{1i},f_{2i})^{\rm T}\), \(g_{i}\) and \(h_{i}\) are the point force (vector), point heat source and point mass source, respectively, applied on the \(i^{\rm th}\) singularity site; and
\[\mathbf{J}(\mathbf{r}_{ij})= \frac{2\mathbf{r}_{ij}\mathbf{r}_{ij}}{r_{ij}^{2}}-(2\ln r_{ij}-1)\mathbf{I}, \tag{54}\] \[\mathbf{K}(\mathbf{r}_{ij})= \frac{2\mathbf{r}_{ij}\mathbf{r}_{ij}}{r_{ij}^{4}}-\frac{\mathbf{I}}{r_{ij}^{ 2}}. \tag{55}\]
This system is solved for the unknowns \(f_{1i},f_{2i},g_{i},h_{i},\,i\in\{1,2,3,\ldots,S\}\) by employing the boundary conditions at each boundary node. Once the unknowns \(f_{1i},f_{2i},g_{i},h_{i}\) for \(i\in\{1,2,3,\ldots,S\}\) are found, the flow variables at any position \(\mathbf{x}\) in the flow domain can be determined simply by dropping the subscript '\(j\)' everywhere in Eqs. (49)-(53). For instance, the velocity \(\mathbf{v}\equiv\mathbf{v}(\mathbf{x})\) at a position \(\mathbf{x}\) in the flow domain is given by
\[\mathbf{v}=\sum_{i=1}^{S}\bigg{[}\frac{\mathbf{f}_{i}\cdot\mathbf{J}(\mathbf{r}_{i})}{8\pi{ \rm Kn}}+\frac{h_{i}}{2\pi}\frac{\mathbf{r}_{i}}{r_{i}^{2}}+\frac{c_{p}{\rm Kn} \alpha_{0}^{2}}{2\pi{\rm Pr}}\mathbf{f}_{i}\cdot\mathbf{K}(\mathbf{r}_{i})\bigg{]}. \tag{56}\]
The other flow variables are obtained from Eqs. (50)-(53) analogously. The above procedure to evaluate flow variables works for any geometry and we have implemented this in a numerical framework. We shall elaborate on the placement of boundary nodes and source points, formation and solution of the system separately corresponding to the two problems in Sec. V and Sec. VI.
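The superposition step translates directly into code. Below is a minimal NumPy sketch of Eqs. (49)-(53) (and hence of Eq. (56) when evaluated at an arbitrary point); the routine and its argument layout are our own illustrative choices, with the source strengths packed as \((f_{11},f_{21},g_{1},h_{1},f_{12},\ldots)\):

```python
import numpy as np

def J_mat(r):
    # Eq. (54): J(r) = 2 r r / r^2 - (2 ln r - 1) I, with 2 ln r = log(r.r)
    r2 = r @ r
    return 2*np.outer(r, r)/r2 - (np.log(r2) - 1)*np.eye(2)

def K_mat(r):
    # Eq. (55): K(r) = 2 r r / r^4 - I / r^2
    r2 = r @ r
    return 2*np.outer(r, r)/r2**2 - np.eye(2)/r2

def fields_at(x, xs, u, Kn, a0, cp=2.5, Pr=1.0):
    # Superposed field variables, Eqs. (49)-(53), at a point x, given the
    # source locations xs and the strength vector u = (f11, f21, g1, h1, ...).
    v, p, sig, T, q = np.zeros(2), 0.0, np.zeros((2, 2)), 0.0, np.zeros(2)
    for i, xi in enumerate(xs):
        f, g, h = u[4*i:4*i+2], u[4*i+2], u[4*i+3]
        r = x - xi
        r2 = r @ r
        J, K = J_mat(r), K_mat(r)
        v += (J @ f)/(8*np.pi*Kn) + h*r/(2*np.pi*r2) \
             + cp*Kn*a0**2/(2*np.pi*Pr)*(K @ f)
        p += (f @ r)/(2*np.pi*r2)
        sig += (2*Kn*(h + a0*g) + f @ r)/(2*np.pi)*K
        T += -g*Pr*np.log(r2)/(4*np.pi*cp*Kn)   # ln r = log(r2)/2
        q += g*r/(2*np.pi*r2) - cp*Kn*a0/(2*np.pi*Pr)*(K @ f)
    return v, p, sig, T, q
```

Once the source strengths have been determined from the boundary conditions, calling this evaluator at any point of the flow domain yields all flow variables at once.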
Figure 1: Schematic diagram of a flow past an object of an arbitrary shape depicting the boundary discretization and the placement of singularities outside the flow domain. The red and blue arrows at each boundary node depict the normal (pointing toward the flow domain) and tangential directions at that node, respectively.

## V Vapor flow confined between two coaxial cylinders

For the validation of the developed numerical framework, we revisit the problem of a rarefied vapor flow confined between two concentric cylinders. The same problem was investigated by Onishi [33] with the linearized BGK model and the diffuse reflection boundary conditions.
### Problem description
Let us consider a moderately rarefied vapor confined between the condensed phases of two concentric, infinitely long circular cylinders of radii \(\tilde{R}_{1}\) and \(\tilde{R}_{2}\), where \(\tilde{R}_{1}<\tilde{R}_{2}\). Owing to the invariance of the problem along the common axis of the cylinders, it is sufficient to investigate it in 2D. A cross-sectional (two-dimensional) view of the problem is illustrated in Fig. 2. For the purpose of non-dimensionalization, we take the inner radius as the characteristic length \(\tilde{L}\), i.e. \(\tilde{L}=\tilde{R}_{1}\). Consequently, the dimensionless radii of the inner and outer cylinders are \(r_{1}=\tilde{R}_{1}/\tilde{L}=1\) and \(r_{2}=\tilde{R}_{2}/\tilde{L}\), respectively. The condensed phases of the vapor at the inner and outer cylinders are assumed to be negligibly thin. Let the temperatures of the inner and outer condensed phases be maintained at uniform values \(\tilde{T}_{0}\) and \(\tilde{T}_{s}\), respectively. Moreover, let the saturation pressures of the condensed phases corresponding to the temperatures \(\tilde{T}_{0}\) and \(\tilde{T}_{s}\) be \(\tilde{P}_{0}\) and \(\tilde{P}_{s}\), respectively; see Fig. 2. Again, for the purpose of linearization and non-dimensionalization, we take the temperature at the inner wall \(\tilde{T}_{0}\) as the reference temperature and the saturation pressure at the inner wall \(\tilde{P}_{0}\) as the reference pressure. Thus, the dimensionless perturbations in the temperature and saturation pressure at the inner wall vanish, and the dimensionless perturbations in the temperature and saturation pressure at the outer wall read
\[\tau_{s}=\frac{\tilde{T}_{s}-\tilde{T}_{0}}{\tilde{T}_{0}}\quad\text{and} \quad p_{s}=\frac{\tilde{P}_{s}-\tilde{P}_{0}}{\tilde{P}_{0}}, \tag{57}\]
respectively.
### Analytic solution of Onishi [33]
Onishi [33] investigated the problem by employing an asymptotic theory [45]. According to this theory, a field variable \(\tilde{h}\) of the gas can be written as
\[\tilde{h}=\tilde{h}_{H}+\tilde{h}_{K}, \tag{58}\]
where \(\tilde{h}_{H}\) is referred to as the hydrodynamic part or the Hilbert part that describes the flow behavior in the bulk of the domain and \(\tilde{h}_{K}\) is referred to as the kinetic boundary layer part or the Knudsen layer part that can be seen as a correction to the Hilbert part and is significant only in a small layer near an interface. Both \(\tilde{h}_{H}\) and \(\tilde{h}_{K}\) for all field variables are expanded in power series of the Knudsen number, and the contribution at each power of the Knudsen number is then computed by means of the considered model (the BGK model in [33]) and appropriate boundary conditions (the diffuse reflection boundary conditions in [33]).
The linearized CCR model is, in any case, not able to predict Knudsen layers. Therefore, it makes sense to compare the results obtained from the MFS only with the Hilbert part of the solution given in Ref. [33]. For the problem under consideration and for the linearized BGK model with the diffuse reflection boundary conditions, the Hilbert part of the solution is indeed straightforward to determine by solving a set of simple ordinary differential equations analytically; see Ref. [33]. Denoting the radius ratio by \(\beta=r_{2}/r_{1}\) and the ratio of \(p_{s}\) to \(\tau_{s}\) by \(\gamma=p_{s}/\tau_{s}\), the analytic solution obtained from the linearized BGK model with the diffuse reflection boundary conditions for \(\text{Kn}\approx 0\) is given by [33]
\[p = p_{s}\left(\frac{1}{r_{1}}+\frac{1}{r_{2}}\right)^{-1}\frac{1}{r_ {1}}, \tag{59}\] \[v_{r} = -\frac{p_{s}}{C_{0}}\left(\frac{1}{r_{1}}+\frac{1}{r_{2}}\right)^ {-1}\frac{1}{r},\] (60) \[T = \tau_{s}\left[\left(1-\frac{D_{0}}{C_{0}}\gamma\right)\frac{\ln r }{\ln\beta}-\left(1-\frac{D_{0}}{C_{0}}\gamma\right)\frac{\ln r_{1}}{\ln\beta}\right]\] (61) \[+\frac{D_{0}}{C_{0}}\gamma\tau_{s}\left(\frac{1}{r_{1}}+\frac{1 }{r_{2}}\right)^{-1}\frac{1}{r_{1}},\] \[q_{r} = 0, \tag{62}\]
where \(C_{0}=2.132039\) and \(D_{0}=0.4467494\).
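Since Eq. (61) serves as the reference solution below, it is convenient to have it in executable form; a small Python sketch (the default parameter values are those used later in this section):

```python
import numpy as np

C0, D0 = 2.132039, 0.4467494   # constants from Ref. [33]

def T_hilbert(r, gamma, tau_s=4.0, r1=1.0, r2=2.0):
    # Hilbert-part temperature, Eq. (61), valid for Kn ~ 0; p_s = gamma*tau_s.
    A = 1 - (D0/C0)*gamma
    return (tau_s*A*(np.log(r) - np.log(r1))/np.log(r2/r1)
            + (D0/C0)*gamma*tau_s/((1/r1 + 1/r2)*r1))
```

Note that the logarithmic part vanishes for \(\gamma=C_{0}/D_{0}\), anticipating the critical value \(\gamma_{c}\) discussed in the results below.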
### Boundary conditions and implementation of the MFS
Figure 2: Cross-sectional view of the vapor flow confined between two coaxial cylinders.

We shall revisit the problem described above by means of the MFS applied on the linearized CCR model. Recall that we have already determined the fundamental solutions of the linearized CCR model and outlined the way to implement them in Secs. III and IV for a general two-dimensional object. The superposed field variables at the \(j^{\text{th}}\) boundary node (Eqs. (49)-(53)) can directly be used once the boundary nodes and singularity points for the present problem have been decided.
Since the singularity sites are to be placed outside the computational domain, we assume the source points to be placed on two fictitious circular boundaries, one inside the circle associated with the inner cylinder and the other outside the circle associated with the outer cylinder, as shown in Fig. 3. Note that both fictitious boundaries are concentric with the circles associated with the cylinders. Let the radii of the inner and outer fictitious boundaries be \(\tilde{S}_{1}\) and \(\tilde{S}_{2}\), respectively. For simplicity, we consider \(N_{s}\) equispaced source points on each of the two fictitious boundaries and \(N_{b}\) equispaced boundary nodes on each of the actual boundaries (the boundaries of the inner and outer cylinders). As explained in Sec. IV, we have four degrees of freedom corresponding to each source point, and the total number of singularity points for the problem under consideration is \(S=2N_{s}\). Thus, there will be a total of \(4\times S=4\times 2N_{s}\) unknowns in the problem. Accordingly, the summations in Eqs. (49)-(53) will run from \(i=1\) to \(i=2N_{s}\).
Boundary conditions at the \(j^{\text{th}}\) boundary node are obtained from (10)-(12) by replacing the flow variables and the normal and tangent vectors with their respective values at the \(j^{\text{th}}\) boundary node. Furthermore, since the walls of the cylinders are fixed, \(\mathbf{v}^{I}=\mathbf{0}\). Consequently, the boundary conditions at the \(j^{\text{th}}\) boundary node read
\[\mathbf{v}_{j}\cdot\mathbf{n}_{j}= -\eta_{11}(p_{j}-p_{\text{sat}}+\mathbf{n}_{j}\cdot\mathbf{\sigma}_{j} \cdot\mathbf{n}_{j})\] \[+\eta_{12}(T_{j}-T^{I}+\alpha_{0}\mathbf{n}_{j}\cdot\mathbf{\sigma}_{j} \cdot\mathbf{n}_{j}), \tag{63}\] \[\mathbf{q}_{j}\cdot\mathbf{n}_{j}= \eta_{12}(p_{j}-p_{\text{sat}}+\mathbf{n}_{j}\cdot\mathbf{\sigma}_{j} \cdot\mathbf{n}_{j})\] \[-(\eta_{22}+2\tau_{0})(T_{j}-T^{I}+\alpha_{0}\mathbf{n}_{j}\cdot\mathbf{ \sigma}_{j}\cdot\mathbf{n}_{j}), \tag{64}\]
\[\mathbf{t}_{j}\cdot\mathbf{\sigma}_{j}\cdot\mathbf{n}_{j}= -\varsigma(\mathbf{v}_{j}+\alpha_{0}\mathbf{q}_{j})\cdot\mathbf{t}_{j}. \tag{65}\]
The dimensionless perturbations in saturation pressures at the inner and outer interfaces are \(p_{\text{sat}}=0\) and \(p_{\text{sat}}=p_{s}\), respectively, and the dimensionless perturbations in temperatures at the inner and outer interfaces are \(T^{I}=0\) and \(T^{I}=\tau_{s}\), respectively, which need to be replaced in boundary conditions (63)-(65) accordingly. Note that boundary conditions (63)-(65) are to be satisfied at \(B=2N_{b}\) boundary nodes. On substituting the values of the field variables at the \(j^{\text{th}}\) boundary node from (49)-(53) into boundary conditions (63)-(65), the resulting system of equations (associated with the \(j^{\text{th}}\) boundary node) can be written in a matrix form as
\[\sum_{i=1}^{S}M_{ji}\mathbf{u}_{i}=\mathbf{b}_{j}, \tag{66}\]
for the unknown vector associated with the \(i^{\text{th}}\) singularity \(\mathbf{u}_{i}=(f_{1i},f_{2i},g_{i},h_{i})^{\mathsf{T}}\). Here, \(M_{ji}\)'s are coefficient matrices of dimensions \(3\times 4\) and \(\mathbf{b}_{j}\) is the \(3\times 1\) vector containing the interface properties, such as \(p_{s}\) and \(\tau_{s}\). We collect all such \(B\) systems into a new system
\[\mathcal{M}\mathbf{\mathcal{X}}=\mathbf{\mathcal{B}}, \tag{67}\]
where \(\mathbf{\mathcal{X}}=\big{(}f_{11},f_{21},g_{1},h_{1},f_{12},f_{22},g_{2},h_{2},\ldots,f_{1S},f_{2S},g_{S},h_{S}\big{)}^{\mathsf{T}}\) is the vector containing all \(4S\) unknowns, and the matrix \(\mathcal{M}\)--containing the coefficients of the unknowns--has dimensions \(3B\times 4S\) (or \(6N_{b}\times 8N_{s}\)) and is referred to as the collocation matrix. We have solved system (67) in the computer algebra software Mathematica(r) using the method of least squares. For identification purposes, the first \(N_{s}\) singularity points (\(i=1,2,\ldots,N_{s}\)) in our code belong to the inner fictitious boundary and the remaining \(N_{s}\) singularity points (\(i=N_{s}+1,N_{s}+2,\ldots,2N_{s}\)) to the outer fictitious boundary. Similarly, the first \(N_{b}\) boundary nodes (\(j=1,2,\ldots,N_{b}\)) belong to the actual inner boundary and the remaining \(N_{b}\) boundary nodes (\(j=N_{b}+1,N_{b}+2,\ldots,2N_{b}\)) to the actual outer boundary.
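Schematically, forming and solving system (67) proceeds as in the following Python sketch (the computations in this paper were performed in Mathematica; the routine below is a hypothetical illustration that reuses the fields_at evaluator from the Sec. IV sketch and the coefficient names of Eqs. (13)-(14)). Each column of the collocation matrix is obtained by a "unit load": set one source strength to one, all others to zero, and evaluate the unknown-dependent parts of boundary conditions (63)-(65) at every node, while the boundary data \(p_{\text{sat}}\) and \(T^{I}\) enter only the right-hand side:

```python
import numpy as np

def assemble_and_solve(nodes, normals, tangents, p_sat, T_I,
                       xs, Kn, a0, eta11, eta12, eta22, tau0, varsigma):
    # Builds the 3B x 4S collocation matrix M and right-hand side b of (67),
    # then solves for the source strengths X by least squares.
    B, S = len(nodes), len(xs)
    M = np.zeros((3*B, 4*S))
    b = np.zeros(3*B)
    for k in range(4*S):
        u = np.zeros(4*S); u[k] = 1.0          # unit load on the k-th unknown
        for j in range(B):
            n, t = normals[j], tangents[j]
            v, p, sig, T, q = fields_at(nodes[j], xs, u, Kn, a0)
            nsn = n @ sig @ n
            # Unknown-dependent left-hand sides of (63)-(65):
            M[3*j,   k] = v @ n + eta11*(p + nsn) - eta12*(T + a0*nsn)
            M[3*j+1, k] = q @ n - eta12*(p + nsn) + (eta22 + 2*tau0)*(T + a0*nsn)
            M[3*j+2, k] = t @ sig @ n + varsigma*(v + a0*q) @ t
    for j in range(B):                          # boundary data (right-hand side)
        b[3*j]   = eta11*p_sat[j] - eta12*T_I[j]
        b[3*j+1] = -eta12*p_sat[j] + (eta22 + 2*tau0)*T_I[j]
    X, *_ = np.linalg.lstsq(M, b, rcond=None)   # least-squares solve of (67)
    return M, b, X
```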
Figure 3: Schematic of the boundary nodes on the boundaries and singularity points outside the flow domain for the problem illustrated in Fig. 2. The red and blue arrows at each boundary node depict the normal (pointing toward the flow domain) and tangential directions at that node, respectively.

### Results and discussion

For numerical computations, we have taken \(N_{b}=100\) boundary nodes on each of the actual boundaries and \(N_{s}=100\) singularity points on each of the fictitious boundaries. The dimensionless radii of the original and fictitious boundaries are taken as \(r_{1}=1\), \(r_{2}=2\), \(s_{1}=\tilde{S}_{1}/\tilde{R}_{1}=0.5\) and \(s_{2}=\tilde{S}_{2}/\tilde{R}_{1}=4\).
Figure 4 illustrates the variation of the (scaled) temperature of the vapor in the radial direction for \(\text{Kn}\approx 0\) and for different values of the parameter \(\gamma\,(=p_{s}/\tau_{s})\), wherein \(\tau_{s}=4\) is fixed and \(p_{s}\) is varied to change \(\gamma\). The solid lines represent the results obtained from our numerical framework based on the MFS while the symbols delineate the results from Eq. (61), which was obtained analytically for \(\text{Kn}\approx 0\) through an asymptotic theory [45] performed on the linearized BGK model in Ref. [33]. It is evident from the figure that the results obtained with the MFS in the present work are in excellent agreement with the analytic results from the linearized BGK model for \(\text{Kn}\approx 0\). Although not shown here for brevity, the results for the pressure and velocity from the MFS are also in excellent agreement with the analytic results from Eqs. (59) and (60) for \(\text{Kn}\approx 0\). It is also evident from Fig. 4 that the temperature increases on moving away from the inner cylinder toward the outer cylinder for smaller values of \(\gamma\) (red and blue curves and symbols in the figure) and vice versa for larger values of \(\gamma\) (green and magenta curves and symbols in the figure). This indicates the existence of a reverse temperature gradient beyond a critical value of \(\gamma\). Indeed, at this critical value of \(\gamma\), the (scaled) temperature remains constant along the radial direction. An expression for this critical value of \(\gamma\) from the asymptotic theory [45] is given by [33]
\[\gamma_{c}=\frac{C_{0}}{D_{0}}\left[1-\text{Kn}\frac{C_{0}}{D_{0}}(0.124226) \left(\frac{1}{r_{1}}-\frac{1}{r_{2}}\right)+\mathcal{O}(\text{Kn}^{2})\right]. \tag{68}\]
For \(\text{Kn}\approx 0\), the critical value of \(\gamma\) from the above expression is \(\gamma_{c}=C_{0}/D_{0}\approx 4.772337\). From the MFS presented here, the critical value of \(\gamma\) for \(\text{Kn}\approx 0\) turns out to be \(\gamma_{c}\approx 4.7723\), which is very close to that computed from the above expression. The phenomenon of the reverse temperature gradient can be understood from boundary condition (64) as follows. Two factors determine the normal heat flux in boundary condition (64): the evaporation/condensation rate depends on (i) the difference between the pressure and the saturation pressure, and (ii) the difference between the temperatures of the gas (or vapor) and the interface. The temperature gradient reverses when one effect dominates the other.
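Equation (68) is straightforward to evaluate; the following small check (a sketch) reproduces the critical values quoted in this section:

```python
C0, D0 = 2.132039, 0.4467494   # constants from the asymptotic theory [33]

def gamma_c(Kn, r1=1.0, r2=2.0):
    # Eq. (68), truncated at first order in Kn
    return (C0/D0)*(1 - Kn*(C0/D0)*0.124226*(1/r1 - 1/r2))

print(gamma_c(0.0))  # 4.7723...
print(gamma_c(0.1))  # 4.6309..., close to the BGK value 4.63087 of Ref. [33]
```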
Figure 4: Variation of the (scaled) temperature in the gap between the two cylinders for different values of \(\gamma\). The solid lines denote the results obtained from the MFS applied on the CCR model and the symbols indicate the analytic results from Eq. (61), which was obtained from the linearized BGK model for \(\text{Kn}\approx 0\) in Ref. [33]. The other parameters are \(N_{b}=100\), \(N_{s}=100\), \(r_{1}=1\), \(r_{2}=2\), \(s_{1}=0.5\), \(s_{2}=4\).

Figure 5: Same as Fig. 4 except that the symbols denote the data from Ref. [33] for \(\text{Kn}=0.1\) obtained using the linearized BGK model.

To examine the capabilities of the developed method, we also study the problem for higher Knudsen numbers. Figure 5 exhibits the variation of the (scaled) temperature of the vapor in the radial direction for \(\text{Kn}=0.1\) and for different values of the parameter \(\gamma\). The solid lines again represent the results obtained from our numerical framework based on the MFS, but the symbols now denote the data from the linearized BGK model taken directly from Ref. [33]. It is clear from the figure that the results from the MFS are in good agreement with those from the linearized BGK model even for \(\text{Kn}=0.1\); nonetheless, the quantitative differences between the two methods are now noticeable. In addition, Fig. 5 also shows the existence of a reverse temperature gradient. For \(\mathrm{Kn}=0.1\), the critical value of \(\gamma\), at which the phenomenon of the reverse temperature gradient occurs, computed from the MFS is \(\gamma_{c}=4.66247\), whereas its reported value from the linearized BGK model in Ref. [33] is \(\gamma_{c}=4.63087\).
To have further insight on the reverse temperature gradient, the (scaled) radial heat flux at the actual inner boundary (i.e. at \(r=1\)) is plotted against \(\gamma\) in Fig. 6. The solid lines and symbols denote the results from the MFS in the present work and the data from the linearized BGK model given in Ref. [33], respectively. It is apparent from the figure that our results for the radial heat flux are also in good agreement with the data from the linearized BGK model for a smaller value of the Knudsen number (\(\mathrm{Kn}=0.1\) in the figure); however, for a higher value of the Knudsen number (\(\mathrm{Kn}=0.2\) in the figure), there is a noticeable mismatch between the results obtained from the MFS and the data from the linearized BGK model given in Ref. [33]. A plausible reason for this discrepancy could be the truncation of power series at the first order in Ref. [33] because the neglected terms in the series could have significant contributions for larger values of the Knudsen number. Figure 6 also shows that for each value of the Knudsen number, there is a \(\gamma\) at which the radial heat flux changes its sign. This \(\gamma\) is indeed the same as the \(\gamma_{c}\) described above, at which reversal of the temperature gradient takes place.
Plots of the heat flux lines (also not shown here for brevity) reveal that, in the case of \(\tau_{s}>0\), heat flows from the outer cylinder toward the inner cylinder for \(\gamma<\gamma_{c}\) and vice versa for \(\gamma>\gamma_{c}\). This makes sense in view of Figs. 4 and 5. The direction of heat flow reverses in both cases when \(\tau_{s}\) is taken to be negative or, in other words, when the temperature of the inner cylinder is taken to be higher than that of the outer cylinder.
Figure 7 displays the (scaled) radial velocity plotted against \(\gamma\) for \(\mathrm{Kn}\approx 0\) and \(\mathrm{Kn}=0.1\). The solid lines are again the results from the MFS in the present work, while the symbols denote the results from Eq. (60) in the case of \(\mathrm{Kn}\approx 0\) and the data taken from Ref. [33] in the case of \(\mathrm{Kn}=0.1\); in both cases, the symbols represent results from the linearized BGK model. The figure again demonstrates a good agreement between the results from the method developed in the present work and those from the linearized BGK model.
Figure 6: Variation of the (scaled) radial heat flux with \(\gamma\). The solid lines denote the results obtained from the MFS applied on the CCR model and the symbols indicate the data taken directly from Ref. [33], which were obtained using the linearized BGK model. The other parameters are the same as those for Fig. 4.

Figure 7: Variation of the (scaled) radial velocity with \(\gamma\). The solid lines denote the results obtained from the MFS applied on the CCR model and the symbols indicate those from the linearized BGK model (from Eq. (60) in the case of \(\mathrm{Kn}\approx 0\) and directly from Ref. [33] in the case of \(\mathrm{Kn}=0.1\)). The other parameters are the same as those for Fig. 4.

## VI Rarefied gas flow between two non-coaxial cylinders

In this section, we revisit the problem of the flow induced by a temperature difference in a rarefied gas confined between two non-coaxial cylinders via the MFS developed above. The same problem was investigated numerically by Aoki, Sone and Yano [34] with the linearized BGK model and the diffuse reflection boundary conditions.
### Problem description
Let us consider a rarefied (monatomic) gas confined between two infinitely long circular cylinders of radii \(\tilde{R}_{1}\) and \(\tilde{R}_{2}\) (with \(\tilde{R}_{1}<\tilde{R}_{2}\)) that are not coaxial. Again, owing to the invariance of the problem along the axis of the cylinders, it is sufficient to investigate it in 2D. Let the locations of both cylinders be fixed according to the cross-sectional view portrayed in Fig. 8, so that the centers of the circles associated with the inner and outer cylinders are at the origin and at \((0,-\tilde{d})\), respectively. Furthermore, let the temperatures of the inner and outer cylinders be kept fixed at \(\tilde{T}_{i}=\tilde{T}_{0}\) and \(\tilde{T}_{o}=\tilde{T}_{0}(1+\Delta\tau)\), respectively, with the perturbation \(\Delta\tau\) sufficiently small that the linear theory remains meaningful.
For the purpose of non-dimensionalization, we again take the radius of the inner cylinder as the characteristic length \(\tilde{L}\), i.e. \(\tilde{L}=\tilde{R}_{1}\). Consequently, the dimensionless radii of the inner and outer cylinders are \(r_{1}=\tilde{R}_{1}/\tilde{L}=1\) and \(r_{2}=\tilde{R}_{2}/\tilde{L}\), respectively, and the dimensionless distance between the centers of the cylinders is \(d=\tilde{d}/\tilde{L}\). Furthermore, for the purpose of the linearization and non-dimensionalization, the equilibrium pressure of the gas \(\tilde{p}_{0}\) is taken as the reference pressure and the temperature of the inner cylinder \(\tilde{T}_{i}\) as the reference temperature so that the dimensionless perturbations in temperatures on the inner and outer walls are \(T_{i}=(\tilde{T}_{i}-\tilde{T}_{i})/\tilde{T}_{i}=0\) and \(T_{o}=(\tilde{T}_{o}-\tilde{T}_{i})/\tilde{T}_{i}=\Delta\tau\), respectively. For comparing the results from the present method with those of Ref. [34], the parameters are fixed to \(r_{2}=2\), \(d=0.5\) and \(\Delta\tau=1\).
### Boundary conditions and implementation of the MFS
In order to place the singularity sites outside the computational domain, we again assume the source points to be placed on two fictitious circular boundaries, one inside the circle associated with the inner cylinder and the other outside the circle associated with the outer cylinder, as shown in Fig. 9. The inner (outer) fictitious boundary is concentric with the circle associated with the inner (outer) cylinder. Let the radii of the inner and outer fictitious boundaries be \(\tilde{S}_{1}\) and \(\tilde{S}_{2}\), respectively. Consequently, the dimensionless radii of the inner and outer fictitious boundaries are \(s_{1}=\tilde{S}_{1}/\tilde{L}\) and \(s_{2}=\tilde{S}_{2}/\tilde{L}\). Similarly to the above, we consider \(N_{s}\) equispaced source points on each of the two fictitious boundaries and \(N_{b}\) equispaced boundary nodes on each of the actual boundaries (the boundaries of the inner and outer cylinders).
Figure 8: Cross-sectional view of the flow of a rarefied gas confined between two non-coaxial cylinders having different wall temperatures.

Figure 9: Schematic of the boundary nodes on the boundaries and singularity points outside the flow domain for the problem illustrated in Fig. 8. The red and blue arrows at each boundary node depict the normal (pointing toward the flow domain) and tangential directions at that node, respectively.

Since the walls of the cylinders are fixed for this problem as well, \(\mathbf{v}^{I}=\mathbf{0}\). Hence, boundary conditions (63)-(65) at the \(j^{\text{th}}\) boundary node hold for the present problem as well. However, since the present problem does not involve evaporation and condensation, the evaporation/condensation coefficient \(\vartheta\) is zero here. Consequently, boundary conditions (63)-(65) for the problem under consideration further reduce to
\[\mathbf{v}_{j}\cdot\mathbf{n}_{j} =0, \tag{69}\] \[\mathbf{q}_{j}\cdot\mathbf{n}_{j} =-2\tau_{0}(T_{j}-T^{I}+\alpha_{0}\,\mathbf{n}_{j}\cdot\mathbf{\sigma}_{j} \cdot\mathbf{n}_{j}),\] (70) \[\mathbf{t}_{j}\cdot\mathbf{\sigma}_{j}\cdot\mathbf{n}_{j} =-\varsigma(\mathbf{v}_{j}+\alpha_{0}^{\prime}\mathbf{q}_{j})\cdot\mathbf{t} _{j}. \tag{71}\]
Note that the coefficient \(\alpha_{0}\) in boundary condition (71) has been changed to \(\alpha_{0}^{\prime}=1/5\) (see, e.g. Refs. [2; 4; 12]) in order to have a fair comparison with the findings of Ref. [34]. The interface temperature \(T^{I}\) in boundary condition (70) is \(0\) for the inner cylinder and \(\Delta\tau\) for the outer cylinder.
The construction of the collocation matrix and the formation of system (67) for the present problem are exactly analogous to those demonstrated in Sec. V.3. We have again solved system (67) in the computer algebra software Mathematica(r) using the method of least squares to determine the unknowns \(f_{11},f_{21},g_{1},h_{1},f_{12},f_{22},g_{2},h_{2},\ldots,f_{1S},f_{2S},g_{S},h_{S}\).
### Results and discussion
We have computed the results numerically by taking the parameters as \(\Delta\tau=1\), \(N_{b}=N_{s}=100\), \(r_{1}=1\), \(r_{2}=2\), \(s_{1}=0.5\), \(s_{2}=4\). In all the figures below, the solid lines represent the results obtained with the MFS in the present work and the symbols denote the data taken from Ref. [34], which were obtained using the linearized BGK model.
Figure 10 illustrates the variation of the tangential component of the (dimensionless) velocity on the right halves of the inner (top row) and outer (bottom row) circles associated with the respective cylinders with respect to the angle \(\theta\), which is measured anticlockwise from the negative \(y\)-axis around the center of the inner circle, as shown in Fig. 9. The angle has been taken in this way in order to maintain the geometrical similarity with Ref. [34]. The unit tangential directions on the inner and outer circles are marked in Fig. 9 with blue arrows. Figure 10 shows that the tangential components of the velocity on both the inner and outer circles remain zero at \(\theta=0\) and \(\theta=\pi\) and that they attain their maximum values somewhere in \((0,\pi/2)\). Furthermore, the value of \(\theta\) at which the maximum is attained shifts more toward \(\theta=\pi/2\) on increasing the value of the Knudsen number. Figure 10 also evinces that the results from the MFS applied on the CCR model (solid lines) are in reasonably good agreement with those from the linearized BGK model for small Knudsen numbers (green lines and symbols) and that the differences between the results from the two methods become more and more prominent with increasing Knudsen number (red and blue lines and symbols), where the present method starts overpredicting the results, though the trends from both methods remain qualitatively similar even for high Knudsen numbers.
Figure 10: Tangential velocity on the right halves of the inner and outer circles associated with the respective cylinders plotted against the angle \(\theta\) for different values of the Knudsen number and for \(\Delta\tau=1\). The solid lines denote the results obtained from the MFS applied on the CCR model and the symbols indicate the data from the linearized BGK model [34]. The other parameters are the same as those for Fig. 4.

Figure 10, in other words, also reveals that at \(\theta=0\) and \(\theta=\pi\) the flow can happen only in the normal directions. This prompts us to draw the flow streamlines in Fig. 11. The streamlines in Fig. 11 show that at the narrowest gap (near \(\theta=0\)), the gas starts moving from the outer (hotter) cylinder toward the inner (colder) cylinder and then flows along the surface of the inner cylinder on both halves until it reaches \(\theta=\pi\), at which it can flow only in the normal direction. Therefore, at the widest gap (near \(\theta=\pi\)), the gas flows from the inner cylinder toward the outer cylinder and returns from there toward the narrowest gap along the surface of the outer cylinder (in opposite directions on the two sides due to the symmetry about the \(y\)-axis). This renders two counter-directional circulating flows, one in the left half of the domain and the other in the right half of the domain. The directions of the circulating flows reverse on taking \(\Delta\tau<0\), i.e. when the inner cylinder is at a higher temperature than the outer one. For the considered values of the Knudsen number, the directions of the circulating flows apparently do not depend on the Knudsen number.
The superposition of the second components of the point force vectors at the inner source points gives the total drag force \(D\) on the inner cylinder, i.e.
\[D=-\sum_{i=1}^{N_{s}}f_{2i}, \tag{72}\]
where \(i=1,2,\ldots,N_{s}\) refer to the points on the inner fictitious boundary and the negative sign represents the direction opposite to the flow. Variation of the drag force with the Knudsen number is illustrated in Fig. 12, which shows good agreement between the results from the MFS applied on the CCR model (solid lines) and those from the linearized BGK model (symbols) even for high Knudsen numbers (especially for \(\mathrm{Kn}\lesssim 2\)). This was not the case for the tangential velocity displayed in Fig. 10, where the differences between the results from the two models were noticeable for high Knudsen numbers. This shows that the CCR model is capable of predicting global quantities, e.g. the drag force, quite accurately but is incapable of predicting local quantities, e.g. the velocity and temperature, for high Knudsen numbers, owing to its inability to capture Knudsen layers.
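In code, extracting the drag from the least-squares solution is a one-liner; a sketch (assuming the solution vector is ordered as in Sec. V.C, with the first \(N_{s}\) sources on the inner fictitious boundary):

```python
import numpy as np

def drag_on_inner(X, Ns):
    # Eq. (72): minus the sum of the second (y) components of the point
    # forces at the Ns sources on the inner fictitious boundary; X is laid
    # out as (f_11, f_21, g_1, h_1, f_12, ...).
    return -np.sum(X[1::4][:Ns])
```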
## VII Location of singularities
Finding an optimal location of the singularity points has been a widely discussed issue in the implementation of the MFS [26; 27; 29; 37; 41]. It is well established that high accuracy in the MFS is accompanied by a high condition number due to the collocation matrix being ill-conditioned [39; 40; 41; 42]. This is not necessarily problematic, however, as the traditional condition number is not adequate for measuring the stability of the resulting system since it does not take the boundary data into account. For instance, while forming matrix system (67), the boundary data, such as \(p_{s}\), \(\tau_{s}\) or \(\Delta\tau\), for the problems investigated in this paper appear in the vector \(\mathbf{\mathcal{B}}\) but not in the collocation matrix \(\mathcal{M}\). Hence, the (usual) condition number of the matrix \(\mathcal{M}\) is not an adequate parameter to gauge the sensitivity of the MFS toward the location of the source points.
Figure 11: Velocity streamlines in a cross section for the problem described in Sec. VI at \(\mathrm{Kn}=0.1\). The other parameters are the same as those for Fig. 10.

Figure 12: Drag force on the inner cylinder plotted against the Knudsen number. The solid lines denote the results obtained from the MFS applied on the CCR model and the symbols indicate the data from the linearized BGK model [34]. The other parameters are the same as those for Fig. 10.

A more accurate estimation of the sensitivity of the MFS toward the location of the source points can be made with the _effective condition number_, which also takes the boundary data into account (through the right-hand-side vector). The concept of the effective condition number has been used by many authors to determine an optimal location of the singularity points by conjecturing a reciprocal relationship between the inaccuracy of the MFS and the effective condition number [40; 41; 42]. For both problems discussed in the preceding sections, we have used the same strategy to place the source points; further details on this strategy are as follows.
Using the singular value decomposition, \(\mathcal{M}\) (having dimensions \(n\times m\)) can be decomposed as \(\mathcal{M}=UDV^{\mathsf{T}}\), where \(U\) and \(V\) are \(n\times n\) and \(m\times m\) orthogonal matrices and \(D\) is an \(n\times m\) diagonal matrix containing the positive singular values in descending order: \(\sigma_{1}\geq\sigma_{2}\geq\sigma_{3}\geq\cdots\geq\sigma_{r}>0\), where \(r\leq m\). The definitions of the (traditional) condition number \(\kappa\) and the effective condition number \(\kappa_{\text{eff}}\) in the \(\ell^{2}\)-norm are given by
\[\kappa=\frac{\sigma_{1}}{\sigma_{r}}\quad\text{and}\quad\kappa_{\text{eff}}= \frac{\|\mathcal{B}\|}{\sigma_{r}\|\boldsymbol{\mathcal{X}}\|}.\]
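Both quantities are cheap to compute from the SVD of the collocation matrix; a small sketch (with \(\mathcal{M}\), \(\mathbf{\mathcal{B}}\) and \(\mathbf{\mathcal{X}}\) as assembled and solved earlier):

```python
import numpy as np

def condition_numbers(M, b, X):
    # Traditional and effective condition numbers in the l2-norm.
    s = np.linalg.svd(M, compute_uv=False)  # singular values, descending
    s = s[s > 0]
    kappa = s[0] / s[-1]
    kappa_eff = np.linalg.norm(b) / (s[-1] * np.linalg.norm(X))
    return kappa, kappa_eff
```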
Using the definition of the effective condition number, we first verify the inverse relationship between the maximum error and the effective condition number. Let \(\alpha>1\) be the dilation parameter that determines the separation between the actual boundary (containing boundary nodes) and the fictitious boundary (containing singularities) such that \(s_{1}=r_{1}/\alpha\) and \(s_{2}=\alpha\,r_{2}\). A larger value of \(\alpha\) corresponds to a larger gap between the boundary nodes and source points.
For the problem described in Sec. V, the maximum absolute error \(\epsilon_{\text{max}}\) between the temperature computed with the MFS and that from the analytic solution for \(\text{Kn}\approx 0\), along with the effective condition number, is plotted against the dilation parameter \(\alpha\) in Fig. 13. The figure shows that the inaccuracy of the MFS is roughly inversely proportional to the effective condition number. It is also evident from the figure that the maximum value of the effective condition number is attained for \(\alpha\) around \(1.6\), where the effective condition number is of order \(10^{8}\) and the absolute error is minimum. It is worthwhile noting that the order of the effective condition number remains \(10^{8}\) for values of \(\alpha\) beyond \(\alpha\approx 1.6\); similarly, the order of the maximum absolute error remains \(10^{-5}\) for values of \(\alpha\) beyond \(\alpha\approx 1.6\).
Figure 13: The maximum absolute error \(\epsilon_{\text{max}}\) in the temperature and the effective condition number \(\kappa_{\text{eff}}\) for the problem of flow between coaxial cylinders plotted over the dilation parameter \(\alpha\) for \(\text{Kn}\approx 0\) and \(N_{b}=N_{s}=100\).

Figure 14: Variation of the effective condition number \(\kappa_{\text{eff}}\) with respect to the dilation parameter \(\alpha\) for the problems described in Sec. V (coaxial case) and Sec. VI (non-coaxial case). The number of boundary nodes at either of the actual boundaries and the number of singularity points at either of the fictitious boundaries are \(100\) (i.e. \(N_{b}=N_{s}=100\)).

In order to have further insight, the effective condition number for the problems considered in Sec. V and Sec. VI is plotted against the dilation parameter in Fig. 14 for different values of the Knudsen number; the number of boundary nodes at either of the actual boundaries \(N_{b}\) and the number of singularity points at either of the fictitious boundaries \(N_{s}\) are again taken as \(100\). It can be noticed from the figure that the highest value of the effective condition number for a given Knudsen number is attained at a value of \(\alpha\) somewhere between \(1.8\) and \(2\). Although not shown here for succinctness, it turns out that the value of \(\alpha\) at which the highest effective condition number is attained increases (decreases) with a decrease (increase) in the number of boundary nodes and singularities. Therefore, to save computational time, one can use a smaller number of boundary nodes and source points along with a bigger value of \(\alpha\). Although the effective condition number in Fig. 14 decreases on increasing \(\alpha\) beyond a certain value, we have not encountered any significant change in the results on placing the singularities farther away (i.e. on taking bigger values of \(\alpha\)). It is therefore apparently sufficient to ensure \(\alpha\geq 2\) to attain optimal accuracy in the case of \(N_{b}=N_{s}=100\). Accordingly, the fictitious boundaries for both problems have safely been positioned at locations for which \(\alpha=2\).
## VIII Conclusion and Outlook
The fundamental solutions of the CCR model in 2D have been determined by exploiting the fundamental solutions of some well-known partial differential equations, e.g. the Laplace and biharmonic equations. It turns out that the fundamental solutions of the linearized NSF and G13 equations in 2D can be recovered from the fundamental solutions of the CCR model in 2D derived in this paper by taking the coupling coefficient \(\alpha_{0}\) as 0 and 2/5, respectively, in them. The derived fundamental solutions for the two-dimensional CCR model have then been implemented in a numerical framework.
To gauge the capability of the developed numerical framework, two problems have been revisited: (i) an evaporating/condensing vapor flow between two coaxial cylinders, and (ii) a temperature-driven rarefied gas flow between two non-coaxial cylinders having different temperatures. These problems had already been investigated with the linearized BGK model in Refs. [33; 34]. Comparing the results obtained from the MFS for the first problem with those from Ref. [33], the accuracy of the MFS with the CCR model in investigating rarefied gas flows with phase change is evident, particularly for small Knudsen numbers. Similarly, for the second problem, the results for the local flow fields, such as the temperature and velocity, obtained using the MFS with the CCR model compare quite well with those obtained using the linearized BGK model in the case of small Knudsen numbers; the results for the local flow fields from the two models differ noticeably for larger Knudsen numbers, although their trends remain qualitatively similar. On the other hand, the MFS with the CCR model is able to capture the global flow fields, such as the drag force, quite accurately even for large Knudsen numbers. In addition, since the MFS does not involve numerical computation of integrals and its implementation does not require the discretization of the domain, it is computationally efficient in comparison to other numerical methods used for investigating rarefied gas flows. This makes the MFS with the CCR model a favorable choice for investigating rarefied gas flows. It should, however, be noted that the position of the singularity points plays a major role in achieving the best results. By performing studies based on the effective condition number for both problems, it has been established that the singularity points should be kept sufficiently far from the boundary nodes.
The utility of the derived fundamental solutions (and their implementation) is perceived particularly for problems wherein the two-dimensional version of a problem suffices to study the complete problem in 3D (due to some symmetry), since the fundamental solutions of a model in 2D cannot be deduced directly from their 3D counterparts and vice versa. Furthermore, the derived solutions can also be extended from monatomic to polyatomic gases (of Maxwell molecules) by taking \(c_{p}=(5+\mathsf{n})/2\) and \(\alpha_{0}=2/(5+\mathsf{n})\), and by choosing an appropriate value of the parameter \(\mathsf{n}\) that denotes the internal degrees of freedom in a polyatomic gas.
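As a trivial but useful check of these relations, the short snippet below evaluates \(c_{p}\) and \(\alpha_{0}\) for a few values of \(\mathsf{n}\); the sample values of \(\mathsf{n}\) are our illustrative choices, with \(\mathsf{n}=0\) recovering the monatomic values \(c_{p}=5/2\) and \(\alpha_{0}=2/5\) (the latter consistent with the G13 value quoted above).

```python
# Coupling coefficients for a polyatomic Maxwell gas with n internal degrees
# of freedom, per the relations quoted above.
def ccr_coefficients(n):
    c_p = (5 + n) / 2
    alpha_0 = 2 / (5 + n)
    return c_p, alpha_0

for n in (0, 2, 3):  # sample values: monatomic, diatomic, nonlinear polyatomic
    c_p, alpha_0 = ccr_coefficients(n)
    print(f"n = {n}: c_p = {c_p}, alpha_0 = {alpha_0:.4f}")
```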
Although the MFS is equally efficient for external flow problems (as demonstrated in Refs. [25; 9]), we have not considered external flow problems in 2D, as they are somewhat more involved in comparison to their 3D counterparts. This is due to Stokes' paradox, in which the solution diverges because of the presence of logarithmic term(s) in the two-dimensional fundamental solutions of the Stokes equations. A similar logarithmic term appears in the two-dimensional fundamental solutions of the CCR model as well, which complicates the study of two-dimensional external flows with the MFS applied to the CCR model. At present, we do not have a clear understanding of how to deal with Stokes' paradox using the MFS for external flow problems, specifically for flows past an arbitrary geometry. Notwithstanding, external flows with the MFS applied to the CCR model in 2D will be considered elsewhere in the future.
## IX Acknowledgement
Himanshi gratefully acknowledges the financial support from the Council of Scientific and Industrial Research (CSIR) [File No.: 09/1022(0111)/2020-EMR-I]. A.S.R. acknowledges the financial support from the Science and Engineering Research Board, India through the grants SRG/2021/000790 and MTR/2021/000417. Himanshi and V.K.G. also acknowledge Bhaskaracharya Mathematics Laboratory and Brahmagupta Mathematics Library supported by DST-FIST Project SR/FST/MS I/2018/26.
## Appendix A Inverse Fourier transforms
We use the fundamental solutions of some well-known equations, such as the Laplace and biharmonic equations, from the literature [40; 41; 30] to find the inverse Fourier transforms of the terms on the right-hand sides of
Eqs. (30), (31) and (33)-(35). Note that the Einstein summation holds over the repeated indices in this appendix. The fundamental solution of the Laplace equation
\[\nabla^{2}\phi\equiv\frac{\partial^{2}\phi}{\partial x_{k}^{2}}= \delta(\mathbf{r}) \tag{104}\]
in 2D is given by
\[\phi=\frac{\ln r}{2\pi} \tag{105}\]
where \(r=|\mathbf{r}|=\sqrt{x_{i}x_{i}}\).
Taking the Fourier transform [defined by Eq. (20)] in the Laplace equation (104), we obtain
\[(-\mathbf{i})^{2}\omega^{2}\hat{\phi}=1\quad\implies\quad\hat{ \phi}=-\frac{1}{\omega^{2}}. \tag{106}\]
Hence, the inverse Fourier transform of \(1/\omega^{2}\) is
\[\mathcal{F}^{-1}\left(\frac{1}{\omega^{2}}\right)=\mathcal{F}^{- 1}(\hat{\phi})=-\frac{\ln r}{2\pi}. \tag{107}\]
Also, by definition (21), the inverse Fourier transform of \(1/\omega^{2}\) is given by
\[\mathcal{F}^{-1}\left(\frac{1}{\omega^{2}}\right)=\frac{1}{(2 \pi)^{2}}\int_{\mathbb{R}^{2}}\frac{1}{\omega^{2}}\mathrm{e}^{-\mathrm{i} \,\mathbf{\omega}\cdot\mathbf{r}}\,\mathrm{d}\mathbf{\omega}. \tag{108}\]
Therefore, from Eqs. (107) and (108), we have
\[\frac{1}{(2\pi)^{2}}\int_{\mathbb{R}^{2}}\frac{1}{\omega^{2}} \mathrm{e}^{-\mathrm{i}\,\mathbf{\omega}\cdot\mathbf{r}}\,\mathrm{d}\mathbf{\omega}=- \frac{\ln r}{2\pi}. \tag{109}\]
Now, taking the partial derivative with respect to \(x_{i}\) on both sides in (109), we obtain
\[-\frac{\mathbf{i}}{(2\pi)^{2}}\int_{\mathbb{R}^{2}}\frac{\omega_ {i}}{\omega^{2}}\mathrm{e}^{-\mathrm{i}\,\mathbf{\omega}\cdot\mathbf{r}}\,\mathrm{d} \mathbf{\omega}=-\frac{1}{2\pi}\frac{x_{i}}{r^{2}} \tag{110}\]
which, in turn, gives
\[\mathcal{F}^{-1}\left(\frac{\omega_{i}}{\omega^{2}}\right)=- \frac{\mathbf{i}\,x_{i}}{2\pi r^{2}}. \tag{111}\]
Moreover, taking the partial derivative with respect to \(x_{j}\) on both sides in (110), we obtain
\[\frac{-1}{(2\pi)^{2}}\int_{\mathbb{R}^{2}}\frac{\omega_{i}\omega _{j}}{\omega^{2}}\mathrm{e}^{-\mathrm{i}\,\mathbf{\omega}\cdot\mathbf{r}}\,\mathrm{d} \mathbf{\omega}=-\frac{1}{2\pi}\bigg{(}\frac{\delta_{ij}}{r^{2}}-\frac{2x_{i}x_{j }}{r^{4}}\bigg{)}, \tag{112}\]
which, in turn, gives
\[\mathcal{F}^{-1}\left(\frac{\omega_{i}\omega_{j}}{\omega^{2}} \right)=-\frac{1}{\pi}\frac{x_{i}x_{j}}{r^{4}}+\frac{1}{2\pi}\frac{\delta_{ij }}{r^{2}}. \tag{113}\]
The fundamental solution of the biharmonic equation
\[\nabla^{4}\phi\equiv\frac{\partial^{4}\phi}{\partial x_{i}^{2}\,\partial x_{j}^{2}}=\delta(\mathbf{r}) \tag{114}\]
in 2D is given by
\[\phi=\frac{r^{2}\,\ln r}{8\pi}. \tag{115}\]
Following similar steps as for the Laplace equation above, we obtain
\[\mathcal{F}^{-1}\left(\frac{1}{\omega^{4}}\right) =\frac{r^{2}\,\ln r}{8\pi}, \tag{116}\] \[\mathcal{F}^{-1}\left(\frac{\omega_{i}}{\omega^{4}}\right) =\mathbf{i}\,\frac{x_{i}(\ln r^{2}+1)}{8\pi},\] (117) \[\mathcal{F}^{-1}\left(\frac{\omega_{i}\omega_{j}}{\omega^{4}}\right) =-\frac{(\ln r^{2}+1)}{8\pi}\delta_{ij}-\frac{x_{i}x_{j}}{4\pi r ^{2}}. \tag{118}\]
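The closed forms above are easy to spot-check symbolically. The short SymPy session below verifies that Eq. (105) is harmonic and Eq. (115) biharmonic away from the origin, and reproduces the off-diagonal (\(i\neq j\)) entry of Eq. (112); it is a verification aid only, not part of the derivation.

```python
# Symbolic spot-checks of the 2D fundamental solutions with SymPy.
import sympy as sp

x, y = sp.symbols('x y', real=True)
r = sp.sqrt(x**2 + y**2)
lap = lambda f: sp.diff(f, x, 2) + sp.diff(f, y, 2)

phi_L = sp.log(r) / (2 * sp.pi)            # Eq. (105)
print(sp.simplify(lap(phi_L)))             # -> 0 away from the origin

phi_B = r**2 * sp.log(r) / (8 * sp.pi)     # Eq. (115)
print(sp.simplify(lap(lap(phi_B))))        # -> 0 away from the origin

# Off-diagonal (i = x, j = y) check of Eq. (112): differentiate the
# right-hand side of Eq. (110) and compare with delta_ij = 0.
g = -x / (2 * sp.pi * r**2)
rhs = -(1 / (2 * sp.pi)) * (-2 * x * y / r**4)
print(sp.simplify(sp.diff(g, y) - rhs))    # -> 0
```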
|
2307.03598 | Introducción a los D-módulos | These are the lecture notes of a short course given at the XXII Colombian
Congress of Mathematics held at Universidad del Cauca in Popay\'an - Colombia.
The aim of this paper is to provide an introduction to the theory of modules
over rings of differential operators over a smooth algebraic variety. | Juan Camilo Arias, Camilo Rengifo | 2023-07-07T13:41:28Z | http://arxiv.org/abs/2307.03598v1 | # Introduction to \(\mathcal{D}\)-modules
###### Abstract.
These are the lecture notes of a short course given at the XXII Colombian Congress of Mathematics held at Universidad del Cauca in Popayan - Colombia. The aim of this paper is to provide an introduction to the theory of modules over rings of differential operators over a smooth algebraic variety.
Keywords: \(\mathcal{D}\)-modules, sheaves, smooth algebraic variety, Lie algebras. 2010 Mathematics Subject Classification: Primary 1402; Secondary 17B10.
## 1. Introduction

The theory of \(\mathcal{D}\)-modules has its origin in the algebraic analysis1 of the Japanese school of Kyoto, led by M. Sato and M. Kashiwara, among others [18], [19], [20], [11]. One of its main objectives was the study of solutions of systems of differential equations using tools such as ring theory, homological algebra, and sheaf theory. In parallel, the Russian-Israeli mathematician J. Bernstein introduced \(\mathcal{D}\)-modules in the articles [3] and [1] from the viewpoint of complex analysis. Concretely, J. Bernstein considered a polynomial \(P\) in \(n\) complex variables and showed that the function \(\mathcal{P}(s)=|P|^{s}\), for \(Re(s)>0\), extends to a meromorphic function of \(s\) on the whole complex plane, taking values in distributions on \(\mathbb{C}^{n}\). Among the results that have grown out of the study of \(\mathcal{D}\)-modules are Sato's hyperfunctions, microlocal analysis, and applications to algebraic geometry, representation theory, and mathematical physics.

Footnote 1: The term algebraic analysis is attributed to M. Sato, who sought to study properties of functions and distributions by analyzing the linear differential operators that annihilate these objects, by means of sheaf theory [13].
\(\mathcal{D}\)-modules have proven to be of great utility: by means of (regular holonomic) \(\mathcal{D}\)-modules one solves Hilbert's 21st problem, as a consequence of the Riemann-Hilbert correspondence. In addition, the theory of \(\mathcal{D}\)-modules provides the appropriate context for solving the Kazhdan-Lusztig conjectures.
Roughly speaking, the Riemann-Hilbert correspondence answers the following question. The monodromy associated with a linear system of differential equations gives rise to a representation of the fundamental group of the base space. Conversely, given a representation of the fundamental group of the base space, is it possible to find a system of differential equations whose monodromy coincides with the previously fixed representation?
2310.11553 | NuSTAR and XMM-Newton observations of the binary 4FGL J1405.1-6119. A
$γ$-ray emitting microquasar? | 4FGL J1405.1-6119 is a high-mass $\gamma$-ray emitting binary that has been
studied at several wavelengths. The nature of this type of binary is still
under debate, with three possible scenarios usually invoked to explain the
origin of the $\gamma$-ray emission: collisions between the winds of a rapidly
rotating neutron star and its companion, collisions between the winds of two
massive stars, and non-thermal emission from the jet of a microquasar. We
analyze two pairs of simultaneous NuSTAR and XMM-Newton observations to
investigate the origin of the radio, X-ray, and $\gamma$-ray emissions. We
extracted light curves between 0.5-78 keV from two different epochs, named
Epoch 1 and Epoch 2, respectively. We propose a scenario to explain the
observations involving a parabolic, mildly relativistic, lepto-hadronic jet.
This jet has a compact acceleration region that injects a hard spectrum of
relativistic particles. The dominant non-thermal emission processes include
synchrotron radiation of electrons, inverse Compton scattering of photons from
the stellar radiation field, and the decay of neutral pions resulting from
inelastic proton-proton collisions within the bulk matter of the jet. These
estimates are in accordance with the values of a super-Eddington lepto-hadronic
jet scenario. The compact object could be either a black hole or a neutron star
with a low magnetic field. Most of the X-ray emission from the disk could be
absorbed by the dense wind that is ejected from the same disk. We conclude that
it is possible that the binary 4FGL J1405.1-6119 could be a supercritical
microquasar like SS433. | Enzo A. Saavedra, Federico A. Fogantini, Gastón J. Escobar, Gustavo E. Romero, Jorge A. Combi, Estefania Marcel | 2023-10-17T19:52:07Z | http://arxiv.org/abs/2310.11553v2 | # NuSTAR and _XMM-Newton_ observations of the binary 4FGL J1405.1-6119
###### Abstract
Context:4FGL J1405.1-6119 is a high-mass \(\gamma\)-ray-emitting binary that has been studied at several wavelengths. The nature of this type of binary is still under debate, with three possible scenarios usually invoked to explain the origin of the \(\gamma\)-ray emission: collisions between the winds of a rapidly rotating neutron star and its companion, collisions between the winds of two massive stars, and nonthermal emission from the jet of a microquasar.
Aims:We analyzed two pairs of simultaneous _NuSTAR_ and _XMM-Newton_ observations to investigate the origin of the radio, X-ray, and \(\gamma\)-ray emissions.
Methods:We extracted light curves between 0.5 and 78 keV from two different epochs, which we call Epoch 1 and Epoch 2. We then extracted and analyzed the associated spectra to gain insight into the characteristics of the emission in each epoch. To explain these observations, along with the overall spectral energy distribution, we developed a model of a microquasar jet. This allowed us to make some inferences about the origin of the observed emission and to discuss the nature of the system.
Results:A power-law model combined with the inclusion of a blackbody accurately characterizes the X-ray spectrum. The power-law photon index was found to be \(\sim 1.7\) for Epoch 1 and \(\sim 1.4\) for Epoch 2. Furthermore, the associated blackbody temperature was \(\sim 1\) keV, with a modeled emitting region of size \(\lesssim 16\) km. The scenario we propose to explain the observations involves a parabolic, mildly relativistic, lepto-hadronic jet. This jet has a compact acceleration region that injects a hard spectrum of relativistic particles. The dominant nonthermal emission processes include synchrotron radiation of electrons, inverse Compton scattering of photons from the stellar radiation field, and the decay of neutral pions resulting from inelastic proton-proton collisions within the bulk matter of the jet. These estimates are in accordance with the values of a super-Eddington lepto-hadronic jet scenario. The compact object could be either a black hole or a neutron star with a weak magnetic field. Most of the X-ray emission from the disk could be absorbed by the dense wind that is ejected from the same disk.
Conclusions:We conclude that the binary 4FGL J1405.1-6119 could be a supercritical microquasar similar to SS433.
## 1 Introduction
Binary sources containing neutron stars (NSs) or black holes (BHs) dominate the Galactic X-ray emission above 2 keV (see, e.g., Grimm et al. 2002). These systems are called X-ray binaries and are usually divided into two major classes, high-mass X-ray binaries and low-mass X-ray binaries, according to the mass of the donor star (mass \(\gtrsim 8\)\(M_{\odot}\) for the former and \(\lesssim 8\)\(M_{\odot}\) for the latter).
Within the high-mass X-ray binary class, three types of non-transient systems can emit \(\gamma\)-ray radiation (Chernyakova et al. 2019; Chernyakova & Malyshev 2020):
1. Colliding-wind binaries involve the interaction of two massive stars, whose nonrelativistic winds collide and produce \(\gamma\)-ray emission. Prominent examples of such systems are \(\eta\)-Carinae, WR11, and Apep. In colliding-wind binaries, intense shocks occur in the wind collision region, leading to the formation of a very hot plasma (\(>10^{6}\) K). In addition, these systems have the ability to accelerate relativistic particles (Eichler & Usov 1993; Benaglia & Romero 2003), which classifies them as particle-accelerating colliding-wind binaries (e.g., De Becker & Raucq 2013; del Palacio et al. 2023, and references therein).
2. Gamma-ray binaries are characterized by the presence of a young, magnetized, and rapidly rotating NS that emits a relativistic wind. This wind collides with the nonrelativistic wind of the companion OB star, and through this interaction they emit nonthermal radiation that peaks at high energies (\(E>100\) MeV). These systems are thought to represent a short phase in the evolution of massive binaries, which comes after the birth of the compact object (CO) and is followed by the X-ray binary phase, in which the CO accretes matter from its companion (e.g., Dubus
et al., 2017; Saavedra et al., 2023, and references therein). In the latter phase, the nonthermal emission of the system peaks in X-rays. Examples of \(\gamma\)-ray binaries are LS I+61\({}^{\circ}\)303, PSR J1259-63, PSR J2032+4127, LS 5039, 1FGL J1018.6-5856, and HESS J0632+057.
3. Microquasars (MQs) differ from the previous two categories in that their emission of \(\gamma\)-rays does not come from wind collisions. Instead, it comes from the jets ejected by the CO (BH or NS) and their interaction with the environment. Examples of MQs are Cyg X-1, Cyg X-3, and SS433.
In total, there are nine known non-transient \(\gamma\)-ray-emitting binaries in our Galaxy (e.g., Corbet et al., 2019; Dubus et al., 2017; Chernyakova and Malyshev, 2020, and references therein). Five of these systems contain an O-type star: LS 5039, LMC P3, 1FGL J1018.6-5856, HESS J1832-093, and 4FGL J1405.1-6119. The remaining four contain a Be star: HESS J0632+057, LS I+61\({}^{\circ}\)303, PSR B1259-63, and PSR J2032+4127. Radio pulsations were observed in three of these systems - PSR J2032+4127 (Camilo et al., 2009), PSR B1259-63 (Moldon et al., 2014), and LS I+61\({}^{\circ}\)303 (Weng et al., 2022) - evidence in favor of the presence of a NS. Many more galactic \(\gamma\)-ray binaries are expected to exist. We recall that the nature of many of the \(\gamma\)-ray sources detected by telescopes such as _Fermi_-LAT is not known, and some of the sources may be \(\gamma\)-ray binaries. In fact, the number of detectable Galactic \(\gamma\)-ray binaries is estimated to be \(\sim 100\)(Dubus et al., 2017).
4FGL J1405.1-6119 (also known as 3FGL J1405.4-6119) is a high-mass \(\gamma\)-ray-emitting binary first studied by Corbet et al. (2019). Using Fermi and Swift/XRT data, Corbet et al. (2019) found a strong modulation of \(13.713~{}\pm~{}0.002\) days associated with the system's orbital period. The companion is classified as an O6.5 III star with a mass of about \(25-35\) M\({}_{\odot}\) (Mahy et al., 2015). The absence of partial and total eclipses suggests that this system has a low inclination (\(\lesssim 60^{\circ}\)).
To explain the origin of its \(\gamma\)-ray emission, Corbet et al. (2019) proposed that 4FGL J1405.1-6119 may be a \(\gamma\)-ray binary. This hypothesis draws on analogies with other similar systems, such as 1FGL J1018.6-5856 and LMC P3. Xingxing et al. (2020) modeled the GeV time behavior of 4FGL J1405.1-6119 in the \(\gamma\)-ray binary scenario, assuming a binary consisting of a young pulsar and an O-type main sequence star. Conversely, the radio (5.5 GHz and 9 GHz) and X-ray (0.2-10 keV) luminosities show a positive correlation (Corbet et al., 2019), as expected in MQs (Falcke et al., 2004).
We were able to perform a detailed temporal and spectral study of 4FGL J1405.1-6119 over a broad energy range by analyzing simultaneous XMM-Newton and NuSTAR observations. In this paper we present our results and conclusions about this source. In Sect. 2 we present the X-ray observations and the corresponding tools used for their analysis. Our main results are presented in Sect. 3. A jet model for the source is introduced and discussed in Sect. 4. We discuss our results and present our conclusions in Sect. 5.
## 2 Observation and Data Analysis
### XMM-Newton data
The XMM-Newton observatory is equipped with an optical instrument and two X-ray instruments: the Optical Monitor, which is mounted on the mirror support platform and provides coverage between 170 nm and 650 nm of the central 17 arcminute square region; the European Photon Imaging Camera (EPIC); and the Reflecting Grating Spectrometers (RGS). The EPIC instrument comprises three detectors - the pn camera (Struder et al., 2001) and two MOS cameras (Turner et al., 2001) - which are most sensitive in the 0.3 - 10 keV energy range. The RGS instrument comprises two high-resolution spectrographic detectors sensitive in the energy range 0.3 - 2 keV.

Figure 1: Background-corrected light curves of 4FGL J1405.1-6119 observed by NuSTAR (FPMA+B; red) and _XMM_-Newton (EPIC pn+MOS; blue) with a binning of 350s, starting at 58711.705573 MJD. The first section of the light curve corresponds to the observation from August 18 (Epoch 1), while the second light curve is associated with the observation from August 25 (Epoch 2).
XMM-Newton observed 4FGL J1405.1-6119 on August 17, 2019, with an exposure time of 32 ks (ObsID 0852020101) and on August 24, 2019, with an exposure time of 44 ks (ObsID 0852020201). In both observations, the MOS cameras were in large window mode, and the pn camera was in timing mode.
We reduced the XMM-Newton data using the Science Analysis System (SAS) v20.0 and the latest calibration available in early 2022. To process the observation data files, we used the EPPROC and EMPROC tasks. We selected circular regions with radii of 18 arcsec and 36 arcsec for the source and the background, respectively, with the latter away from any source contamination. We then filtered the raw event lists, selecting standard event patterns and removing the high-energy particle background, thus creating cleaned event lists for each camera and observation. The resulting exposure times after background filtering are 19 ks (59% of the total) for the first observation and 44 ks (100% of the total) for the second observation. We studied the presence of pile-up with the EPATPLOT task and did not find any deviation of the data from the expected models; thus, no excision radii were applied. We barycentered each cleaned event list with the barycen task in order to perform precise timing studies. EPIC light curves were summed using the LCMATH task, with proper scaling factors for the different source photon collecting areas. We extracted and grouped spectra with a minimum of 25 counts per bin in the 0.5 - 12 keV energy band.
### NuSTAR data
The NuSTAR X-ray observatory was launched in 2012 and is notable for its exceptional sensitivity to hard X-rays. It is equipped with two grazing-incidence X-ray telescopes, designated FPMA and FPMB, which are arranged in parallel, each containing a 2x2 array of solid-state CdZnTe detectors. NuSTAR operates in the 3-79 keV energy range and achieves an angular resolution of 18 arcsec, as reported in Harrison et al. (2013).
NuSTAR observed 4FGL J1405.1-6119 on August 16, 2019 (58711.7031 MJD - ObsID 30502015002), with an exposure time of \(\sim\)61 ks and on August 24, 2019 (58719.3511 MJD - ObsID 30502015004), with an exposure time of \(\sim\)86 ks. We processed NuSTAR data using NuSTARDAS-v.2.1.2 from HEASoft v.6.30 and CALDB (v.20211020) calibration files. We took source events that accumulated within a circular region of 85 arcsec around the focal point. The chosen radius encloses \(\sim 90\%\) of the point spread function. We took a circular source-free region with a radius of 160 arcsec to obtain the background events within the same CCD.
We used the nupipeline task to create level 2 data products, with SAA parameters saacalc=1 saamode=strict tentacle=yes to filter the high-energy particle background, obtained from SAA filtering reports1. We extracted light curves and spectra with the nuproducts task. We obtained the barycenter-corrected light curves using the barycorr task with the nuCclock20100101v136 clock correction file. We used celestial coordinates \(\alpha=211.2472^{\circ}\) and \(\delta=-61.3234^{\circ}\) for the barycentric correction, with JPL-DE200 as the Solar System ephemeris. Finally, we subtracted the background from each detector's light curve. We then used the LCMATH task to create FPMA+B light curves. We extracted and re-binned spectra with a minimum of 25 counts per bin in the 3 - 78 keV energy band.
Footnote 1: [https://nustarsoc.caltech.edu/NuSTAR_Public/NuSTAROperationSite/SAA_Filtering/SAA_Filter.php](https://nustarsoc.caltech.edu/NuSTAR_Public/NuSTAROperationSite/SAA_Filtering/SAA_Filter.php)
We used the XSPEC v12.12.1 package (Arnaud, 1996) to model XMM-Newton and NuSTAR spectra, with parameter uncertainties reported at the 90% confidence level.
## 3 Results
### Analysis of the light curves
Figure 1 shows the background-corrected light curves obtained from the XMM-Newton (0.5-10 keV, top panel) and NuSTAR (3-78 keV, bottom panel) missions, with a bin time of 350 s. The observation conducted on August 17 is labeled as Epoch 1, while the observation on August 24 is labeled as Epoch 2. The long NuSTAR exposures, both on the order of \(\sim\)1.5 d, reveal a hard X-ray flux modulation of \(\sim\)1 d in both epochs. The shorter but continuous XMM-Newton exposures do not capture this behavior; instead, they capture very short (a few ks) changes in the soft X-ray flux, as seen in Epoch 1.
Figure 2 shows the orbital flux modulation associated with each observation and mission. Epoch 1 occurred within the orbital phases of 0.93-1.08, while Epoch 2 occurred within the orbital phases of 0.37-0.48. The observed orbital behavior of XMM-Newton and NuSTAR data is very similar to that reported by Corbet et al. (2019) using Swift/XRT data.
We used a sinusoidal model with Gaussian measurement errors to visualize the orbital modulation through \(\sim\)10\({}^{4}\) simulations (Buchner, 2021). Specifically, the following was used:
\[y=A\ \sin\left(2\pi\left[\frac{t}{P}+t_{0}\right]\right)+B+\epsilon, \tag{1}\]
where \(\epsilon\sim\) Normal(0, \(\sigma\)), that is, a normal distribution with a mean of zero and a standard deviation of \(\sigma\). We obtain the following values: \(A=0.56\pm 0.3\), \(P=1.002\pm 0.002\), \(t_{0}=0.623\pm 0.001\), and \(B=1.13\pm 0.01\). The fitted model is shown in Fig. 2.

Figure 2: NuSTAR and XMM-Newton folded light curve using 48 phase bins, an orbital period of 13.713 days, and with 56498.7 MJD as the reference epoch (Corbet et al., 2019). A sine function fit is shown in orange (see the main text for details). The observed orbital modulation is similar to that shown by Corbet et al. (2019) using Swift/XRT data.
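A minimal stand-in for this fit is sketched below: it generates a synthetic folded light curve from Eq. (1) and recovers the parameters with a plain least-squares fit. The noise level, seed, and initial guesses are arbitrary choices, and scipy's curve_fit replaces the posterior sampling (Buchner, 2021) actually used in the analysis.

```python
# Sketch: least-squares fit of Eq. (1) to a synthetic folded light curve.
import numpy as np
from scipy.optimize import curve_fit

def model(t, A, P, t0, B):
    return A * np.sin(2 * np.pi * (t / P + t0)) + B

rng = np.random.default_rng(1)
phase = np.linspace(0.0, 1.0, 48)                    # 48 phase bins
rate = model(phase, 0.56, 1.0, 0.62, 1.13) + rng.normal(0, 0.1, phase.size)

popt, pcov = curve_fit(model, phase, rate, p0=[0.5, 1.0, 0.5, 1.0])
for name, val, err in zip(("A", "P", "t0", "B"), popt, np.sqrt(np.diag(pcov))):
    print(f"{name} = {val:.3f} +/- {err:.3f}")
```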
From phase \(\sim\) 0.93 the flux starts to decay, and around phase \(\sim\) 0.37 the source reaches its local emission maximum. This modulation is anticorrelated with that obtained from the _Fermi_ data in the 200 MeV to 500 GeV energy range (Corbet et al., 2019).
We employed the spectral-timing routines provided by the Stingray software (Huppenkothen et al., 2019) to conduct a comprehensive search for any potential pulsation linked to the X-ray source. Light curves from both telescopes across various energy ranges do not show any significant pulsation above the noise in the 0.1-100 mHz frequency range.
### Spectral analysis
We simultaneously modeled source and background spectra extracted from all five detectors (pn, MOS1, MOS2, FPMA and FPMB). We introduced calibration constants in our models to account for disparities in effective areas between instruments. The pn constant was fixed to unity, while the remaining calibration constants were permitted to vary: \(C_{\rm MOS1}=1.10\pm 0.16\), \(C_{\rm MOS2}=1.05\pm 0.15\), \(C_{\rm FPMA}\)=1.40\(\pm 0.16\), and \(C_{\rm FPMB}\)=1.35\(\pm 0.16\). Each epoch was modeled separately.
The NuSTAR background showed significant activity at energies above 20 keV. As a result, we limited our analysis to the energy range 3-20 keV for both epochs. In the case of the XMM-Newton background, it was significant at energies below 2 keV. Therefore, we focused our analysis on the 2-10 keV energy range for both epochs. Consequently, the total spectrum used for each epoch included the 2-20 keV energy range.
The interstellar absorption was modeled using the Tubingen-Boulder model (tbabs), with solar abundances defined by Wilms et al. (2000) and effective cross sections of Verner et al. (1996). Several continuum models were used to fit the time-averaged spectra, including combinations of power-law variants such as powerlaw, highecut*powerlaw, cutoffpl, and bknpower, and thermal models such as apec, bbodyrad, and diskbb. After several fits, we selected the best ones: a power law (tbabs*powerlaw, hereafter Model 1) and a combination of a power law and a blackbody (tbabs*(powerlaw + bbodyrad), hereafter Model 2). No emission or absorption lines above the continuum were observed in the spectra of either epoch.
Model 1 yielded a \(\chi^{2}\) of 221 with 191 degrees of freedom (\(\chi^{2}_{\nu}=1.16\)) for Epoch 1, while Epoch 2 resulted in \(\chi^{2}/{\rm dof}=184/194\approx 0.95\). With Model 2, we obtain \(\chi^{2}/{\rm dof}=216/189\approx 1.14\) for Epoch 1 and \(181/192\approx 0.94\) for Epoch 2.
To assess the significance level of the blackbody components, we performed spectral simulations based on the same observational data using the fakeit command in XSPEC. Each fake spectrum was constructed from arrays of randomly sampled parameters generated with the simpars command. By building the cumulative distribution function of F-values from the simulated data, we determined the significance level of the detection by comparison with the F-value derived from the real data, as described in Hurkett et al. (2008).
Each F-value, for both the real and simulated data, is calculated as \(F=(\nu_{0}/\delta\nu)\times(\delta\chi^{2}/\chi^{2}_{1})\), where the subscripts 0 and 1 correspond to the null hypothesis (powerlaw) and the tested hypothesis (powerlaw+bbody), respectively. Each hypothesis is associated with a total \(\chi^{2}\) and \(\nu\) degrees of freedom. The significance level is determined by computing the corresponding \(p\) value, that is, the fraction of simulated spectra with \(F\) values greater than the \(F\) value derived from fitting the actual data. The uncertainty of this quantity is \(\sqrt{p(1-p)/N_{\rm s}}\), where \(N_{\rm s}\) is the total number of simulated spectra.
We ran \(\sim 10^{5}\) simulations for both epochs and found that the blackbody component is significant at \(\sim 2.6\sigma\) level for Epoch 1 and \(\sim 2.8\sigma\) for Epoch 2.
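The machinery of this test can be sketched in a few lines of Python. Below, draws from the analytic F distribution stand in for the \(F\) values of fitted fake spectra (which in practice come from the fakeit runs described above), so the resulting \(p\) value only illustrates the procedure; the dof and \(\chi^{2}\) inputs are the Epoch 1 values.

```python
# Sketch of the simulation-based F-test (Epoch 1 numbers; mock F draws).
import numpy as np
from scipy import stats

nu0, nu1 = 191, 189              # dof of Model 1 (null) and Model 2 (tested)
chi2_0, chi2_1 = 221.0, 216.0    # corresponding chi-square values
dnu = nu0 - nu1

F_obs = (nu0 / dnu) * ((chi2_0 - chi2_1) / chi2_1)   # F as defined in the text

Ns = 100_000
F_sim = stats.f.rvs(dnu, nu1, size=Ns, random_state=2)  # stand-in for fakeit
p = np.mean(F_sim >= F_obs)                          # fraction above F_obs
print(f"F_obs = {F_obs:.2f}, p = {p:.4f} +/- {np.sqrt(p*(1-p)/Ns):.4f}, "
      f"~{stats.norm.isf(p):.1f} sigma (one-sided)")
```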
From our analysis we conclude that the spectrum was dominated by nonthermal emission during the observed period, possibly of synchrotron origin. This implies the presence of particles with TeV energies for the magnetic field strengths typical of this type of system. In the next section we explore this hypothesis in more detail.
In Fig. 3 we present the time averaged spectra and residuals associated with Model 1 and Model 2 of Epoch 1 (left panel) and Epoch 2 (right panel), while the corresponding best-fit parameters and uncertainties are detailed in Table 1. The powerlaw normalization component is expressed in units of photons keV\({}^{-1}\) cm\({}^{-2}\) s\({}^{-1}\) at 1 keV. The bbodyrad normalization is equal to \(R_{\rm km}^{2}/D_{10}^{2}\), where \(R_{\rm km}\) is the source radius in km and \(D_{10}\) is the distance to the blackbody source in units of 10 kpc. At a distance of 6.7 kpc, the size of the emitting region ranged from 1.5 to 5.4 km during Epoch 1 and from 2.9 to 8.9 km during Epoch 2. Alternatively, at a distance of 8.7 kpc, the size of the emitting region ranged from 2 to 7 km for Epoch 1 and from 4 to 11.6 km for Epoch 2. In Table 1, we report the values assuming a distance of 7.7 kpc.
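The conversion from the bbodyrad normalization to a physical radius is a one-liner; in the sketch below the normalization value is illustrative, while the distances are those discussed in the text.

```python
# bbodyrad norm -> blackbody radius: norm = R_km^2 / D_10^2.
import numpy as np

def bb_radius_km(norm, d_kpc):
    return np.sqrt(norm) * (d_kpc / 10.0)

norm = 36.0                          # illustrative normalization value
for d in (6.7, 7.7, 8.7):            # distances discussed in the text (kpc)
    print(f"d = {d} kpc -> R = {bb_radius_km(norm, d):.1f} km")
```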
To compute the unabsorbed flux, we used the convolution model cflux. Assuming a distance of 7.7 kpc (Corbet et al., 2019), the unabsorbed 2-20 keV luminosity ranges between \(7.2-7.8~{}\times 10^{33}\) erg s\({}^{-1}\) for Epoch 1 and \(4.4-5~{}\times 10^{33}\) erg s\({}^{-1}\) for Epoch 2.
## 4 Model
### Jet nonthermal radiation
We adopted the hypothesis that a jet is present in the \(\gamma\)-ray-emitting binary 4FGL J1405.1-6119, and tried to evaluate whether it can adequately explain the nonthermal spectral energy distribution (SED) of the system. The radiative jet model used follows that presented in detail in Escobar et al. (2022), which in turn is based on Romero & Vila (2008). In the following, the model setup is outlined.
\begin{table}
\begin{tabular}{l|c c|c c} \hline \hline
 & \multicolumn{2}{c|}{Epoch 1} & \multicolumn{2}{c}{Epoch 2} \\ \hline
Parameter & Model 1 & Model 2 & Model 1 & Model 2 \\ \hline
\(N_{\rm H}\) [\(10^{22}\) cm\({}^{-2}\)] & \(7.0^{+1.5}_{-1}\) & \(5\pm 3\) & \(8.0^{+1.5}_{-1}\) & \(9\pm 3\) \\
\(\Gamma\) & \(1.6\pm 0.2\) & \(1.15\pm 0.10\) & \(1.65\pm 0.15\) & \(1.2\pm 0.5\) \\
Norm\({}_{\rm pl}\) [\(10^{-4}\)] & \(1.7^{+0.5}_{-0.4}\) & \(0.5^{+1.5}_{-0.5}\) & \(1.00\pm 0.04\) & \(0.02\pm 0.01\) \\
\(kT\) [keV] & - & \(1.0\pm 0.3\) & - & \(0.8\pm 0.2\) \\
\(R_{\rm bb}\) [km] & - & \(3^{+8}_{-2}\) & - & \(6^{+10}_{-3}\) \\
\(L_{\rm X}\) [\(10^{33}\) erg s\({}^{-1}\)] & \multicolumn{2}{c|}{\(7.5\pm 0.3\)} & \multicolumn{2}{c}{\(4.7\pm 0.3\)} \\ \hline
\(\chi^{2}\)/dof & 221/191 & 216/189 & 184/194 & 181/192 \\ \hline
\end{tabular}
\end{table}
Table 1: Best-fit parameters of the XMM-Newton + NuSTAR time-averaged spectral modeling of 4FGL J1405.1-6119 with const*tbabs*(powerlaw) (Model 1) and const*tbabs*(powerlaw+bbodyrad) (Model 2). Norm\({}_{\rm pl}\) is the power-law normalization; \(L_{\rm X}\) is the unabsorbed 2-20 keV luminosity assuming a distance of 7.7 kpc.
The scenario consists of a CO accreting material from the companion star with an accretion power \(L_{\rm acc}\), which can be expressed in terms of the Eddington luminosity as
\[L_{\rm acc}=q\ L_{\rm Edd}\approx q\ 1.3\times 10^{38}\left(\frac{M}{M_{\odot}} \right)\ {\rm erg\ s^{-1}}, \tag{2}\]
where \(q\) is a constant that represents the accretion efficiency in terms of the Eddington limit. Coupled with the inner accretion disk we assume the presence of a lepto-hadronic jet of kinetic luminosity, \(L_{\rm jet}\). This jet power relates to the accretion power through
\[L_{\rm jet}=q_{\rm jet}L_{\rm acc}, \tag{3}\]
where \(q_{\rm jet}\) is another constant indicating the fraction of the accretion power that is transferred to the jet. The parameters \(q\) and \(q_{\rm jet}\) define the accretion regime of the MQ. To compute the SED and the normalizations with the emission power, we chose to use \(L_{\rm jet}\) directly as the parameter instead of a combination of \(q\) and \(q_{\rm jet}\); we defer discussion of the interpretation of the accretion regime to Sect. 5.
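Equations (2) and (3) amount to the short calculation below; the example mass and efficiency factors are illustrative choices (a \(q>1\) corresponds to the super-Eddington regime discussed in Sect. 5), not fitted values.

```python
# Accretion and jet powers from Eqs. (2)-(3).
def eddington_luminosity(M_in_Msun):
    return 1.3e38 * M_in_Msun               # erg s^-1

def jet_power(M_in_Msun, q, q_jet):
    return q_jet * q * eddington_luminosity(M_in_Msun)

# Illustrative: a 1.4 Msun compact object needs q*q_jet ~ 0.5 to supply the
# L_jet = 1e38 erg/s of Table 2, e.g. q = 5 (super-Eddington), q_jet = 0.1.
print(f"L_jet = {jet_power(1.4, q=5.0, q_jet=0.1):.2e} erg/s")
```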
The jet propagates with a bulk velocity \(v_{\rm jet}\), corresponding to a bulk Lorentz factor \(\Gamma_{\rm jet}\). A fraction \(q_{\rm rel}\) of this power is converted into relativistic particles by an acceleration mechanism. The relativistic proton and electron luminosities, \(L_{p}\) and \(L_{\rm e}\), are distributed according to the power ratio \(a=L_{p}/L_{\rm e}\).
We use cylindrical coordinates, with the coordinate \(z\) along the jet axis and the origin at the CO. The jet is launched at a distance \(z_{0}\) from the CO. The region in which the particles are accelerated extends from \(z_{\rm min}\) to \(z_{\rm max}\), and its shape is described by
\[r(z)=r_{0}\left(\frac{z}{z_{0}}\right)^{\varepsilon}, \tag{4}\]
where \(r_{0}\) is the radius of the jet at its base and \(0<\varepsilon\leq 1\) describes its geometry. We note that the degree of collimation of the jet increases with decreasing values of \(\varepsilon\), with the extreme value \(\varepsilon=1\) representing a conical shape. The magnetic field along the jet, \(B\), decreases with \(z\) following a power law of index \(m\), namely, \(B(z)=B_{0}(z/z_{0})^{-m}\), where \(1\leq m\leq 2\) (e.g., Krolik 1999). The value of \(B_{0}\) is obtained by assuming equipartition between magnetic and kinetic energy at the jet base.
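For concreteness, the profiles implied by Eq. (4) and the magnetic power law can be tabulated as in the sketch below; \(\varepsilon\), \(B_{0}\), and \(m\) follow Table 2, while \(z_{0}\) and \(r_{0}\) are illustrative placeholders rather than the values used in the paper.

```python
# Jet radius r(z) = r0 (z/z0)^eps and field B(z) = B0 (z/z0)^-m (cgs units).
import numpy as np

def jet_profiles(z, z0=1e8, r0=1e7, eps=0.56, B0=2.8e7, m=1.95):
    zz = z / z0
    return r0 * zz**eps, B0 * zz**(-m)

z = np.logspace(8, 12, 5)                # illustrative positions along the jet
for zi, (ri, Bi) in zip(z, zip(*jet_profiles(z))):
    print(f"z = {zi:.1e} cm   r = {ri:.2e} cm   B = {Bi:.2e} G")
```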
We parameterize the injection function of energy to relativistic particles as a power law with an exponential cutoff,
\[Q_{i}=Q_{i,0}E_{i}^{-p}\exp\left(-\frac{E_{i}}{E_{i,\rm max}}\right), \tag{5}\]
where \(i={\rm e,p}\) accounts for electrons and protons, respectively; \(Q_{i,0}\) is obtained normalizing the injection function with the total power of each particle population, \(L_{i}\); and \(E_{i,\rm max}\) is the maximum reachable energy, achieved when the acceleration rate equals that of energy losses.
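The normalization step can be illustrated with a one-zone simplification, in which \(Q_{i,0}\) is fixed by requiring \(\int E\,Q_{i}(E)\,{\rm d}E=L_{i}\) (the paper normalizes over the full acceleration region instead); the energy bounds below are illustrative.

```python
# Normalizing the injection of Eq. (5) to the power in relativistic particles.
import numpy as np
from scipy.integrate import quad

def injection_norm(L, p=1.98, Emin=1e-6, Emax=1.0):
    """Q0 such that int E Q(E) dE = L (energies in erg, one-zone)."""
    power, _ = quad(lambda E: E**(1.0 - p) * np.exp(-E / Emax), Emin, 10 * Emax)
    return L / power

L_rel, a = 1e37, 0.11                     # Table 2: q_rel*L_jet and a = L_p/L_e
L_p, L_e = L_rel * a / (1 + a), L_rel / (1 + a)
print(f"L_p = {L_p:.2e} erg/s, L_e = {L_e:.2e} erg/s")
print(f"Q0_e = {injection_norm(L_e):.3e} erg^(p-1) s^-1")
```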
Relativistic particles are accelerated at a rate of
\[t_{\rm acc}^{-1}=\frac{\eta ceB}{E_{i}}, \tag{6}\]
where \(\eta\leq 1\) is the acceleration efficiency, \(c\) is the speed of light, and \(e\) is the elementary electric charge. On the other hand, relativistic particles lose energy via both radiative and non-radiative mechanisms. The latter are adiabatic and escape losses for both proton and electron populations. Regarding the radiative mechanisms, we consider synchrotron emission for both types of particles. In the case of protons, we also considered photons from the decay of neutral pions, which are the product of inelastic collisions between relativistic and cold protons; the latter consist mainly of protons in the jet bulk and those in the stellar wind of the companion star. In the case of electrons, we also computed the inverse Compton scattering of photons from the stellar radiation.

Figure 3: Spectral modeling results corresponding to Epoch 1 (left column) and Epoch 2 (right column) derived from simultaneous XMM-Newton and NuSTAR data (top panels). \(\chi^{2}\) residuals correspond to an absorbed power law (middle panels) and an absorbed power law with a blackbody component (bottom panels).
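As an example of how \(E_{i,\rm max}\) follows from this balance, the sketch below equates Eq. (6) with the standard synchrotron cooling rate \(t_{\rm syn}^{-1}=(4/3)\,\sigma_{\rm T}c\,U_{B}E/(m_{e}c^{2})^{2}\), \(U_{B}=B^{2}/8\pi\) (cgs), which dominates the electron losses here; the sampled \((z/z_{0})\) positions are illustrative.

```python
# Synchrotron-limited maximum electron energy:
# eta c e B / E = (4/3) sigma_T c U_B E / (m_e c^2)^2
#   =>  E_max = m_e c^2 * sqrt(6 pi eta e / (sigma_T B)).
import numpy as np

E_CHARGE = 4.803e-10      # esu
SIGMA_T = 6.652e-25       # cm^2
MEC2 = 8.187e-7           # erg
ERG_PER_EV = 1.602e-12

def Emax_sync_eV(B, eta=1.0e-4):
    return MEC2 * np.sqrt(6 * np.pi * eta * E_CHARGE / (SIGMA_T * B)) / ERG_PER_EV

for z_over_z0 in (1, 10, 100):       # illustrative positions; B0, m from Table 2
    B = 2.8e7 * z_over_z0 ** (-1.95)
    print(f"z/z0 = {z_over_z0:>3}:  B = {B:.2e} G,  E_max = {Emax_sync_eV(B):.2e} eV")
```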
We assume that the relativistic particle populations reach a steady state. Their spectral densities are obtained by solving the steady state transport equation, taking injection, escape, and continuous losses into account. For a discussion of the general form of the transport equation, we refer the reader to Ginzburg & Syrovatskii (1964).
The complete formulae for the calculation of the radiation processes considered in this work can be found in Blumenthal & Gould (1970), Begelman et al. (1990), Atoyan & Dermer (2003), Kafexhiu et al. (2014), and references therein.
### Spectral energy distribution of 4FGL J1405.1-6119
To fit the SED of the source with our model, we used the radio observations of Corbet et al. (2019), the X-ray flux obtained in this work, and the \(\gamma\)-ray flux from the 4FGL-DR2 catalog (Abdollahi et al., 2020; Ballet et al., 2020).
In Fig. 4 we show the acceleration, escape, and cooling rates of the relativistic particles at different locations in the jet. We find that for the electron population, the losses are dominated by synchrotron cooling throughout the emitting region. For the protons, both adiabatic and proton-proton collisions with bulk protons are the dominant mechanisms of energy loss in the region close to the base; the escape rate competes with them in regions further from the base and dominates the losses toward the end of the emitting region.
In Fig. 5 we show the derived nonthermal SED of the jet, \(L_{\gamma}\), which includes all the above radiative processes, where
\[L_{\gamma}=E_{\gamma}^{2}\frac{dN}{dE_{\gamma}dt}, \tag{7}\]
and \(dN\) is the number of photons with energies between (\(E_{\gamma},E_{\gamma}+dE_{\gamma}\)) emitted during a time \(dt\). The assumed and derived parameters of the model, with uncertainties reported at the 90% confidence level for the free parameters, are listed in Table 2. The X-ray data points shown in Fig. 5 correspond to Epoch 1. Since there is no significant change in the X-ray flux between the two epochs (see Fig. 1), the same set of parameters also fits the observations of Epoch 2.
To obtain the reported model parameters, we ran a first set of \(\sim\)100 simulations covering a wide range of values, and decided which of these remained fixed, apart from those resulting from observed properties of the system. Then we ran another set of simulations considering the variation of all free parameters (i.e., using 40 simulations for each parameter variation at a time; \(m\), \(p\), \(\varepsilon\), and \(\eta\)), and chose the model with the minimum \(\chi^{2}/\mathrm{dof}\). The free parameter space was covered with a uniform grid for \(p\), \(\varepsilon\), \(m\), and \(\log\eta\). To estimate the free parameter errors, we computed the chi-squared for each simulation and chose the \(\chi^{2}_{\mathrm{min}}+2.706\) contour for each free parameter, which represents the 90 % credible region (e.g., Frodesen et al. 1979). The errors are reported in Table 2. In the particular case of \(m\) (\(\eta\)) the upper (lower) bound on the error comes from restricting the possible values to those reported in the fourth column of Table 2, while for the case of \(\varepsilon\) all the values within the bounds fall below the aforementioned contour. We obtained a minimum chi-squared goodness of fit of \(\chi^{2}/\mathrm{d.o.f.}=1.06\), with 7 d.o.f.

Figure 4: Acceleration, escape, and cooling rates of electrons (top row) and protons (bottom row). Each column shows the aforementioned rates calculated at \(z_{\mathrm{min}}\) (left column), at the logarithmic midpoint (middle column), and at \(z_{\mathrm{max}}\) (right column).
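The error recipe reduces to scanning one parameter at a time and bounding the interval by \(\chi^{2}_{\min}+2.706\). In the sketch below a mock parabolic \(\chi^{2}\) curve stands in for the model runs per parameter; with a curvature scale of 0.15 it recovers a half-width of \(\approx 0.15\times 1.645\approx 0.25\), the error quoted for \(p\) in Table 2.

```python
# Mock version of the quoted error recipe: scan p over a grid and bound the
# 90% interval by chi2_min + 2.706 (the parabola replaces the model runs).
import numpy as np

p_grid = np.linspace(1.5, 2.5, 401)
chi2 = 7.4 + ((p_grid - 1.98) / 0.15) ** 2     # mock chi-square curve

inside = chi2 <= chi2.min() + 2.706
lo, hi = p_grid[inside][0], p_grid[inside][-1]
best = p_grid[np.argmin(chi2)]
print(f"p = {best:.2f} +{hi - best:.2f} / -{best - lo:.2f}")   # ~ +/- 0.25
```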
We show that the broad spectrum of 4FGL J1405.1-6119 can be explained by the nonthermal emission associated with a mildly relativistic (\(\Gamma_{\mathrm{jet}}=1.9\)), lepto-hadronic model of a MQ jet. In particular, this model represents an approximately parabolic jet (\(\varepsilon\approx 0.56\)), with a compact emitting region of size \(\approx 1.0\times 10^{12}\) cm (jet extension could be orders of magnitude larger), and a magnetic induction field at its base of \(B_{0}\approx 2.8\times 10^{7}\) G. Relativistic protons and electrons share the total power in the relativistic populations, \(L_{\mathrm{rel}}=q_{\mathrm{rel}}L_{\mathrm{jet}}=10^{37}\) erg s\({}^{-1}\), according to an assumed hadron-to-lepton ratio of \(a\approx 0.11\), and are accelerated through a low-efficiency mechanism for which \(\eta\approx 1.0\times 10^{-4}\). The particle injection function shows a hard spectrum with spectral index \(p=1.98\).
In all cases, the values of the parameters are consistent with the ranges commonly assumed for MQ jets (see, for example, Romero & Vila 2008; Vila et al. 2012; Pepe et al. 2015; Escobar et al. 2021). The SED is computed assuming a viewing angle of \(\theta=30^{\circ}\) (i.e., the angle between the jet axis and the line of sight). With the Lorentz factor from Table 2, and for lower viewing angles, the same data could be explained with a less powerful jet (\(\sim 0.07\) times the one reported here), while higher angles would favor a scenario with more powerful jets, up to a factor of \(\sim 50\). Conversely, keeping all parameter values fixed except the viewing angle, the model still manages to explain the observations for variations of at most \(\approx 10^{\circ}\). Hence, knowing the viewing angle is a crucial factor in determining the accretion regime of the emitter.
## 5 Discussion and conclusions
In this work we have analyzed simultaneous XMM-Newton (2-10 keV) and NuSTAR (3-20 keV) observations of the \(\gamma\)-ray-emitting binary 4FGL J1405.1-6119. The two observed epochs are separated by about six days. We found no significant pulsation above the statistical noise in the XMM-Newton and NuSTAR light curves. This non-detection leaves open the question of the nature of the CO.
We fitted the time-averaged spectra of both epochs with empirical models, which yielded continuum parameters consistent with \(\gamma\)-ray-emitting binaries (Kretschmar et al. 2019; An et al. 2015; Yoneda et al. 2021) and MQs (Natalucci et al. 2014; Soria et al. 2020; Hirsch et al. 2020; Rodi et al. 2021; Saavedra et al. 2022). For Epoch 1, a power-law model adequately characterized the spectrum. For Epoch 2, however, the addition of a blackbody component significantly improved the fit to the observed data. The power-law index was found to be \(\sim\)1.7 for Epoch 1 and \(\sim\)1.4 for Epoch 2. Furthermore, the associated blackbody temperature for Epoch 2 was \(\sim\)0.8 keV, corresponding to a compact region of radius \(\lesssim 16\) km tentatively associated with the inner region of an accretion disk.
Corbet et al. (2019) present a different explanation for the emission of 4FGL J1405.1-6119. They set out a scenario involving the collision of winds between a rapidly rotating NS and its stellar companion. In addition, Xingxing et al. (2020) modeled this scenario to explain the GeV emission from 4FGL J1405.1-6119. In the colliding-wind scenario, the interaction between the stellar and pulsar winds gives rise to shocked regions that are characterized by a spiral shape due to Coriolis forces (see Molina & Bosch-Ramon 2020). Within this framework, the high-energy emission in a \(\gamma\)-ray binary system can be attributed to the up-scattering of photons from the stellar radiation field due to inverse Compton scattering by the relativistic electrons and positrons present in the shocked fluid. Synchrotron emission contributes to a lesser extent, especially in the energy range around \(\sim\)1 MeV (Molina & Bosch-Ramon 2020).
To explain the new X-ray data and archival multiwavelength observations (ATCA and _Fermi_), we implemented a lepto-hadronic jet model under the hypothesis that 4FGL J1405.1-6119 is a MQ. The parameters associated with the properties of the relativistic particle populations in the jet and the nonthermal emission are consistent with the values commonly assumed for this type of source (see, for example, Romero & Vila 2008;
\begin{table}
\begin{tabular}{l l l l l} \hline \hline
Parameter & Symbol & Adopted/Derived Value & Typical Values & Units \\ \hline
Jet luminosity & \(L_{\rm jet}\) & \(10^{38}\) & \(10^{38}-10^{39}\) & erg s\({}^{-1}\) \\
Jet launching distance & \(z_{0}\) & \(1.0\times 10^{4}\) & \(\geq 10^{6}\) & cm \\
Magnetic field at \(z_{0}\) & \(B_{0}\) & \(2.8\times 10^{7}\) & \(\sim 10^{5}-10^{7}\) & G \\
Base of acceleration region & \(z_{\rm min}\) & \(3.0\times 10^{6}\) & \(\geq 10^{6}\) & cm \\
Top of acceleration region & \(z_{\rm max}\) & \(1.0\times 10^{12}\) & up to jet extent & cm \\
Base half-opening angle & \(\theta_{\rm pk}\) & \(0.3\) & \(\lesssim 10\) & deg \\
Viewing angle & \(\theta\) & \(30\) & \(0-180\) & deg \\
Bulk Lorentz factor & \(\Gamma_{\rm jet}\) & \(1.9\) & \(\sim 1-5\) & \\
Relativistic power fraction & \(q_{\rm rel}\) & \(0.1\) & \(\lesssim 0.1\) & \\
Proton-electron power ratio & \(a\) & \(0.11\) & \(0-100\) & \\
Magnetic power-law index & \(m\) & \(1.95\pm 0.18\) & \(1-2\) & \\
Acceleration efficiency & \(\log\eta\) & \(-4.01\pm 0.28\) & \(-5\leq\log\eta\leq-1\) & \\
Injection spectral index & \(p\) & \(1.98\pm 0.25\) & \(1.5-2.5\) & \\
Geometric index & \(\varepsilon\) & \(0.56\pm 0.46\) & \(0.1\leq\varepsilon\leq 1\) & \\ \hline
\end{tabular}
\end{table}
Table 2: Jet model parameters. Values quoted with errors (\(m\), \(\log\eta\), \(p\), \(\varepsilon\)) are fitted; the remaining values are adopted (see the main text).
Figure 5: Nonthermal SED derived from our jet model. We considered the following radiative processes: for protons, synchrotron emission and proton-proton collisions with the cold protons of the jet bulk and the companion’s wind; for electrons, synchrotron emission and inverse Compton scattering off the radiation field of the companion. The figure also shows luminosities derived from XMM-Newton +NuSTAR data (our work) as well as ATCA and _Fermi_ data taken from Corbet et al. (2019) and the 4FGL-DR2 catalog (Abdollahi et al. 2020; Ballet et al. 2020), respectively. All the references are in the figure.
Vila et al., 2012; Pepe et al., 2015; Escobar et al., 2021, 2022). The scenario consists of a parabolic jet with a compact acceleration region, where the high-energy emission is produced by a hard spectrum of relativistic particles driven by a low-efficiency acceleration mechanism.
In this scenario, the \(\gamma\)-ray emission is produced via two processes. First, inverse Compton scattering produces nonthermal radiation in the energy range from about 100 keV to 10 GeV. Second, the decay of neutral pions from proton-proton collisions produces energetic photons at energies above 10 GeV. At lower energies, the synchrotron emission of electrons completely dominates, allowing us to accurately model radio and X-ray data. We thus propose that a lepto-hadronic jet model, which includes both leptonic and hadronic processes, may be sufficient to explain the multiwavelength emission of 4FGL J1405.1-6119.
There are essentially two interpretations of the results we obtained with our jet model. If the accretion regime is sub-Eddington, the CO should be a BH of at least \(\sim\)10 \(M_{\odot}\). This scenario, as in the case of Cygnus X-1, would require an X-ray luminosity on the order of \(\sim 10^{37}\) erg s\({}^{-1}\) in the low-hard state (Di Salvo et al., 2001; Makishima et al., 2008), which is in apparent contradiction with the observations. If, on the other hand, the source is in a super-Eddington accretion regime, the CO would be a BH of a few solar masses or a NS with a weak magnetic field. In this case, the X-ray emission from the disk is absorbed by the photosphere of the wind ejected by the same disk (Abaroa et al., 2023), which is more consistent with the observed X-ray emission. The supercritical source may also display an equatorial radio component or equatorial lines with velocities of \(10^{3}-10^{4}\) km s\({}^{-1}\), as in the case of SS 433, which could be observed in the future (Fabrika et al., 2021; Abaroa et al., 2023).
We note that leptonic jet models can also be adopted to explain observations of other MQs, as in the cases of Cyg X-3 (Zdziarski et al., 2012) and Cyg X-1 (Zdziarski et al., 2014). In our case, however, this type of model would not fully explain the observations for two main reasons. First, there is no clear evidence for disk or coronal emission. The X-ray observations could then be explained by a dominant synchrotron emission, which at these energies hides the radiation from the disk and/or the corona (see, for example, Fig. 11 of Bosch-Ramon et al., 2006). On the other hand, a relativistic proton component seems necessary to explain the observed slope change in the high-energy part of the Fermi data. In the lepto-hadronic picture, this part of the spectrum is dominated by the \(\gamma\)-ray emission, which originates in proton-proton collisions through neutral pion decays.
As shown in Fig. 2, the flux is higher in Epoch 1 than in Epoch 2. This behavior is also consistent with the presence of a thermal component in the spectra of Epoch 2, which can be explained by the contribution of a thermal inner disk and a reduced nonthermal jet component. In the case of a moderate or low viewing angle, some of the emission from the disk can escape from the central funnel in the wind (Abaroa et al., 2023). As for Epoch 1, the contribution to the total flux may be completely dominated by the jet.
The method we implemented for estimating parameters and uncertainties, although not very robust, allowed us to explain the multiwavelength behavior of 4FGL J1405.1-6119, favoring the MQ scenario. A more detailed analysis using a Markov chain Monte Carlo method would improve these estimates. This method would also allow a model comparison test to be included (e.g., to compute the odds ratio between leptonic and lepto-hadronic models, or between the MQ and colliding-wind scenarios). This analysis is beyond the scope of the current work, but we will present it in a future paper.
It is important to collect more observational data to evaluate the pros and cons of the pulsar-star colliding-wind and MQ scenarios for 4FGL J1405.1-6119. These additional data should be used to focus on timing analysis, especially with respect to the orbital period. By analyzing such data, we can gain a more complete understanding of the nature of this and other \(\gamma\)-ray-emitting binary systems.
###### Acknowledgements.
We thank the anonymous reviewer for their valuable comments on this manuscript. We extend our gratitude to Fiona A. Harrison and Brian W. Grefenstette for their assistance with the instruments' technical aspects. We thank Sergio Campana for his suggestions that helped improve this work. EAS, FAF and JAC acknowledge support by PIP 0113 (CONICET). FAF is a fellow of CONICET, and JAC is a CONICET researcher and a Maria Zambrano researcher funded by the European Union - NextGenerationEU. This work received financial support from PICT-27865 (ANPCyT). JAC was also supported by grant PID2019-10551GB-C32/AEI/10.13039/501100011033 from the Agencia Estatal de Investigacion of the Spanish Ministerio de Ciencia, Innovacion y Universidades, and by Consejeria de Economia, Innovacion, Ciencia y Empleo of Junta de Andalucia as research group FQM-322, as well as FEDER funds. GER acknowledges financial support from the State Agency for Research of the Spanish Ministry of Science and Innovation under grants PID2019-105510GB-C31/AEI/10.13039/501100011033 and PID2022-136828NB-C41/AEI/10.13039/501100011033, by "ERDF A way of making Europe", by the "European Union", and through the "Unit of Excellence Maria de Maeztu 2020-2023" award to the Institute of Cosmos Sciences (CEX2019-000918-M). Additional support came from PIP 0554 (CONICET). GJE acknowledges financial support from the European Research Council through the ERC Consolidator grant DEMOBLACK, under contract no. 770017.
|
2303.06117 | Aftermath Epidemics: Percolation on the Sites Visited by Generalized
Random Walks | We study percolation on the sites of a finite lattice visited by a
generalized random walk of finite length with periodic boundary conditions.
More precisely, consider Levy flights and walks with finite jumps of length
$>1$ (like knight's move random walks (RW) in 2 dimensions and generalized
knight's move RW in 3d). In these walks, the visited sites do not form (as in
ordinary RW) a single connected cluster, and thus percolation on them is
non-trivial. The model essentially mimics the spreading of an epidemic in a
population weakened by the passage of some devastating agent -- like diseases
in the wake of a passing army or of a hurricane. Using the density of visited
sites (or the number of steps in the walk) as a control parameter, we find a
true continuous percolation transition in all cases except for the 2-d knight's
move RW and Levy flights with Levy parameter $\sigma \geq 2$. For 3-d
generalized knight's move RW, the model is in the universality class of Pacman
percolation, and all critical exponents seem to be simple rationals, in
particular $\beta=1$. For 2-d Levy flights with $0 <\sigma < 2$, scale
invariance is broken even at the critical point, which leads at least to very
large corrections in finite size scaling, and even very large simulations were
unable to determine unambiguously the critical exponents. | Mohadeseh Feshanjerdi, Amir Ali Masoudi, Peter Grassberger, Mahdiyeh Ebrahimi | 2023-03-10T18:13:51Z | http://arxiv.org/abs/2303.06117v3 | # Aftermath Epidemics: Percolation on the Sites Visited by Generalized Random Walks
###### Abstract
We study percolation on the sites of a finite lattice visited by a generalized random walk of finite length with periodic boundary conditions. More precisely, we consider Levy flights and walks with finite jumps of length \(>1\) (like knight's move random walks (RW) in 2 dimensions and generalized knight's move RW in 3d). In these walks, the visited sites do not form (as in ordinary RW) a single connected cluster, and thus percolation on them is non-trivial. The model essentially mimics the spreading of an epidemic in a population weakened by the passage of some devastating agent - like diseases in the wake of a passing army or of a hurricane. Using the density of visited sites (or the number of steps in the walk) as a control parameter, we find a true continuous percolation transition in all cases except for the 2-d knight's move RW and Levy flights with Levy parameter \(\sigma\geq 2\). For 3-d generalized knight's move RW, the model is in the universality class of pacman percolation, and all critical exponents seem to be simple rationals, in particular \(\beta=1\). For 2-d Levy flights with \(0<\sigma<2\), scale invariance is broken even at the critical point, which leads at least to very large corrections in finite-size scaling, and even very large simulations were unable to determine the critical exponents unambiguously.
## I Introduction
One disaster often does not come alone. In the present paper we deal with the purely geometric - i.e., percolation - aspects of an epidemic which comes in the wake of another disaster like a storm or a hurricane, and can spread only on the sites weakened by the first.
Percolation in its simplest version (called OP in the following) deals with the establishment of long range connectivity in random but statistically homogeneous systems with only short range links between its units [1; 2]. The two best known examples of OP are site and bond percolation, where the system is a regular lattice of finite dimension, and local links are established by inserting sites or bonds [1].
This is one of the paradigmatic models in statistical physics and has many applications, the most important one being the spreading of epidemics [3]. Starting from a local seed, a system-wide epidemic (or pandemic) can evolve only if the spreading agent (virus, bacterium, or even rumor) can reach wide regions, i.e. if large clusters of sites are connected. If the population is originally healthy and susceptible (except for the seed), and becomes immune or dead after a finite time of illness, this is the so-called SIR (susceptible-infected-removed) model [4; 5].
There are of course many modifications of this simple scenario [6; 7]:
- The system is not a regular lattice, but some sort of network [8]. This leads to new universality classes, but at least if the network is close to regular (all nodes have similar degree) and uncorrelated, the situation is similar to that for a regular lattice.
- When recovered individuals become susceptible again, the resulting SIS model is in a different universality class from SIR or OP [9].
- If there are finite incubation or latency periods [3], the universality class is in general not changed.
- Things change again if contact with more than one infectious neighbor is needed to infect a susceptible individual. In the extreme case of bootstrap and \(k\)-core percolation [10; 11], clusters can grow (or do not shrink) only if new (old) sites have a certain minimal number of neighbors in the cluster. This can be relaxed so that infection of a new site is more likely if it has more infected neighbors [12; 14; 15]. The most dramatic effect in such cases is that the percolation transition can become discontinuous or, actually, hybrid: Although the order parameter jumps at the transition point, one also observes scaling laws as for continuous transitions.
- Similar cooperativity effects occur if two (or more) diseases "cooperate" in the sense that infection by one makes the individual also more susceptible to infection by the other [16].
- Very important, in particular in modern times where people can carry infections over very long distances by flights, are non-local single links. Often, this is modeled by assuming that the infectious agent can perform a Levy flight, i.e., the probability for a link between two sites is described by a power law [23; 24; 25; 26; 21; 27]. In this case one finds continuous transitions in new universality classes which depend on the value of the power-law exponent.
- In OP, new local connections are established randomly. In contrast, in "explosive percolation" [17] (EP) one inserts new connections such that the occurrence of large clusters is delayed. The percolation transitions in EP were first thought to be discontinuous, but they are actually continuous. Apart from the smallness of the order parameter exponent \(\beta\), its most striking feature is that for finite lattice sizes \(L\), the width of the critical region and its shift relative to the infinite lattice critical point satisfy power laws with different exponents [15] - at least when analyzed in the conventional way where the transition point is defined as independent of the individual realization of the process [19].
- The system can be non-homogeneous in the sense that some regions are more susceptible and others less so. This can lead to multiple percolation transitions, such that changes in cluster size are of order \(N\) in each, where \(N\) is the size of the system [20].
- Even if the system is homogeneous on large scales, it might be that there are long-range correlations between the densities of susceptible individuals and/or the links. This is called "correlated percolation" (CP), and is maybe the largest and most varied class of non-trivial percolation models.
It is this class of models which is considered in the present paper.
By far the best studied special case is the Ising model. It is well known that the Ising critical point can be understood as a percolation type transition for carefully defined ("Fortuin-Kasteleyn") clusters [27]. But one can also study the percolation of clusters defined simply as connected sets of "+" and "-" spins, and of the boundaries between them. This was recently done by Grady [28], who found in 3 dimensions a true percolation transition which is not in the OP universality class. Remarkably, Grady found that, as in explosive percolation, the width of the critical region for finite \(L\) and its shift from the exact critical point at \(L=\infty\) scale with different exponents.
Another class of CP models is one where the correlations are assumed to decay with power laws \(C(r)\sim r^{-\alpha}\), without specifying the mechanism which generated them [29; 30; 31]. Whether the resulting percolation transition is in the OP class or not should now depend on \(\alpha\), according to a generalized Harris criterion: The universality class should be modified iff \(d\nu^{(0)}>\alpha\), where \(\nu^{(0)}\) is the correlation length exponent of the model without long-range correlations. This is seen in [31] for some critical exponents, but not for all.
Finally, there is "pacman percolation" [32; 33]. In this case, all sites are susceptible initially. But before the actual percolation process starts, a random walker performs a walk (with periodic boundary conditions) of \(T\) steps, where \(T\sim N\), with \(N=L^{d}\) being the number of sites. Percolation is then considered only on those sites which were _not_ visited by the walker.
The model studied in the present paper can be seen as the opposite of pacman percolation: We again have a finite-time random walk before the percolation proper takes place, but now the percolation process can take place only on sites that _had_ been visited by the walker. A real-world scenario which might be modeled by this is an army or a hurricane that passes through some geographic region, and an epidemic which can evolve only in the areas devastated by them. It is true that hurricanes in the Caribbean don't make random walks, but Timur's armies in Iran and the armies in the Thirty Years' War in Germany came very close. Periodic boundary conditions are used both for the walker and for the percolation process.
An immediate problem with such a type of models is that the visited sites are connected for an ordinary random walk (RW), and thus the problem of percolation seems trivial. The way out of this dilemma is of course to modify the walk such that visited sites are not (necessarily) connected. In the present paper we study two such modifications:
(a) _Knight's move and next-nearest neighbor move RW._ A knight's move in chess is one where one moves two lattice constants in one direction (say x), and one in the other (say y). From a given position, there are 8 such moves. A next-nearest neighbor move RW (nnnRW) is a walk where one moves \(\pm 1\) step in each direction. In the following we shall only show results for the knight's move RW, but we have also done extensive simulations for nnnRWs. We will show that there is no sharp percolation transition in this model in 2 dimensions, but there is one if the model is generalized to 3d. A knight's move in this generalized 3-d walk is one where one moves two lattice constants in one direction and one in each of the two others. In this case there are 24 moves.
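To make the move sets concrete, here is a minimal Python sketch (not the authors' code; periodic boundary conditions and all lattice bookkeeping are omitted) that enumerates the 8 moves of the 2-d knight's move RW, the 24 moves of its 3-d generalization, and generates a short walk:

```python
# Move sets for the knight's move RW: 8 moves in 2d, 24 in the 3d generalization.
import itertools
import random

# 2d: two lattice constants in one direction, one in the other, all signs.
MOVES_2D = [(sx * a, sy * b)
            for a, b in ((2, 1), (1, 2))
            for sx in (1, -1) for sy in (1, -1)]
assert len(MOVES_2D) == 8

# 3d: two lattice constants in one direction, one in each of the two others.
MOVES_3D = [tuple(s * v for s, v in zip(signs, perm))
            for perm in set(itertools.permutations((2, 1, 1)))
            for signs in itertools.product((1, -1), repeat=3)]
assert len(MOVES_3D) == 24

def knight_walk(steps, moves=MOVES_2D, seed=0):
    """Sites visited by a knight's move RW starting at the origin (no periodic b.c.)."""
    rng = random.Random(seed)
    pos = (0,) * len(moves[0])
    path = [pos]
    for _ in range(steps):
        move = rng.choice(moves)
        pos = tuple(p + m for p, m in zip(pos, move))
        path.append(pos)
    return path

print(knight_walk(3))
```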
(b) _Levy flights._ Here, the probability for a step to have a length \(>r\) decreases for large \(r\) as
\[P(r)\sim r^{-\sigma} \tag{1}\]
with \(0<\sigma<2\). Here, we studied only 2-dimensional lattices. For \(\sigma\to 0\), the "walk" is just a sequence of random jumps, and our model reduces to site percolation. For \(\sigma>2\), the walk is in most respects equivalent to a random walk, except that visited sites do not necessarily form a single connected cluster. It is for the latter reason that we studied also the case \(\sigma=2.5\), to verify that the behavior is the same as for the knight's move RW. We also studied the case \(\sigma=2\), which is at the border between Levy flights and ordinary walks.
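As an illustration of Eq. (1), the following sketch samples lattice steps with tail \(P(r)\sim r^{-\sigma}\) by inverse-transform sampling with an isotropic direction. This simple recipe is our own assumption for illustration; it is not the algorithm of [23] actually used in the simulations.

```python
# Sample one lattice Levy-flight step with P(length > r) = r**(-sigma), r >= 1.
import math
import random

def levy_step(sigma, rng):
    u = 1.0 - rng.random()                    # uniform in (0, 1]
    r = u ** (-1.0 / sigma)                   # inverse-CDF sampling of the length
    phi = 2.0 * math.pi * rng.random()        # isotropic direction
    step = (round(r * math.cos(phi)), round(r * math.sin(phi)))
    return step if step != (0, 0) else levy_step(sigma, rng)  # reject null steps

rng = random.Random(1)
print([levy_step(1.5, rng) for _ in range(5)])
```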
A particular feature of the present model is that the finite value of \(T\) can induce, for finite \(L\), an additional characteristic length scale. For RW, this length scale would be the r.m.s. end-to-end distance

\[\sqrt{\langle R^{2}\rangle}\sim T^{1/2}\sim L^{d/2}, \tag{2}\]
which for \(d>2\) would diverge faster than \(L\) as \(L\rightarrow\infty\), if the periodic b.c. did not bring it down to \(L\). Indeed, as shown in [33], the latter imply that the correlation between visited sites decays as \(C(r)\sim r^{2-d}\). For Levy flights, different powers of \(R\) scale differently, \(\langle R^{q}\rangle\sim T^{q/\sigma}\) if \(q>\sigma\) [34], and the correlation function \(C(r)\) is in general not a power law (see the Appendix). Thus it is not scale-free, suggesting that several new length scales might be involved. This might imply that the standard finite size scaling (FSS) behavior is no longer valid for Levy flights, and that in particular the width and the shift of the critical peak in variables like the fluctuations of the order parameter might scale with different exponents, as found also in EP [18] and in boundary percolation in the Ising model [28].
## II Definitions of the models, algorithms, and computational details
Both models live on square and cubic lattices, respectively. For computational efficiency, we replaced the periodic b.c. by helical ones, where one uses a single integer to label sites, and the neighbors of site \(i\) are \((i\pm 1)\) mod \(N,(i\pm L)\) mod \(N,\ldots,(i\pm L^{d-1})\) mod \(N\). For generating Levy flights, we used the algorithm of [23] (see also [25]). In the following, we shall use the words 'walk' and 'walker' both for Levy flights and for (generalized) knight's move walks.
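A minimal sketch of this helical site labeling, assuming the neighbor offsets \(1,L,\ldots,L^{d-1}\) described above:

```python
# Helical boundary conditions: one integer labels each site; neighbors are
# obtained by adding/subtracting powers of L modulo N = L**d.
def helical_neighbors(i, L, d):
    N = L ** d
    offsets = [L ** k for k in range(d)]   # 1, L, ..., L**(d-1)
    return [(i + s * off) % N for off in offsets for s in (1, -1)]

# Example: the 4 neighbors of site 0 on a 5x5 helical lattice.
print(helical_neighbors(0, L=5, d=2))      # [1, 24, 5, 20]
```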
In the introduction, walk and percolation were discussed as independent and subsequent parts of the model, but for computational efficiency we measured the properties related to percolation already during the walk by means of the site insertion version of the Newman-Ziff (NZ) algorithm [36]. In our algorithm, we keep track of the number \(n\) of sites visited by the walker (we use \(\rho=n/N\) as control parameter) and the size \(S_{n}\) of the largest cluster when \(n\) sites are visited (\(S_{n}/N\) is used as order parameter). At each step of the walk we registered whether a new site was visited or not. In the latter case, the next step was taken immediately. If a new site \(i\) was visited, however, we increased the number \(n\) of visited sites by \(1\) and performed one step of the NZ algorithm. During this step, the connected cluster containing \(i\) is determined. Let us call its size \(C_{n}\), whence
\[S_{n}=\max\{S_{n-1},C_{n}\}. \tag{3}\]
The \(n\)-th "gap" is defined as
\[\Delta_{n}=S_{n+1}-S_{n}, \tag{4}\]
and the maximal gap over all values of \(n\) is called \(\Delta_{\max}\), while the \(n\)-value at which the maximum occurs is called \(n_{\max}\) and the giant cluster size at this point is \(S_{\max}\).
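A hedged Python sketch of this bookkeeping (a simplified union-find version of the site-insertion NZ step, tracking \(S_{n}\) and the maximal gap; not the authors' code) could look as follows:

```python
# Site-insertion Newman-Ziff bookkeeping with largest-cluster and gap tracking.
def newman_ziff(order, neighbors):
    """order: site labels in the order they are first visited;
    neighbors(i): iterable of lattice neighbors of site i."""
    parent, size = {}, {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    S, S_prev = [], 0
    delta_max, n_max = 0, 0
    for n, i in enumerate(order, start=1):
        parent[i], size[i] = i, 1           # insert the newly visited site
        for j in neighbors(i):
            if j in parent:                 # j already occupied: merge clusters
                ri, rj = find(i), find(j)
                if ri != rj:
                    size[ri] += size.pop(rj)
                    parent[rj] = ri
        S_n = max(S_prev, size[find(i)])    # largest cluster after n insertions
        if S_n - S_prev > delta_max:        # track the largest jump of S_n
            delta_max, n_max = S_n - S_prev, n
        S.append(S_n)
        S_prev = S_n
    return S, delta_max, n_max
```

Combined with a neighbor function like `helical_neighbors` and a walk generator like those sketched earlier, this reproduces the measurement loop described above.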
As observables we measured the average order parameter and its variance as functions of \(n\), the averages of \(\Delta_{\max}\) and \(n_{\max}\), and their variances. These were measured at lattice sizes \(L=32,64,\ldots 16384\) for \(d=2\), and at \(L=32,64,\ldots 512\) for \(d=3\). The number of realizations for each Levy flight parameter \(\sigma\), and for each dimension in the case of the (generalized) knight's move RW, was \(>70,000\) for the largest \(L\), and increased up to \(>2,000,000\) for the smallest.
## III Finite size scaling
Because FSS might be modified in the present model by the additional length scale induced by the finite walk time \(T\), we first review the standard FSS scenario.
We expect that \({\rm Var}[S_{n}]\) has a peak near the percolation transition which gets sharper with increasing \(N\). Near the same values of \(n\), the gaps should also be maximal. Let us call \(\rho_{c}(L)\) the position of the peak of the distribution of \(n_{\max}/N\) at given \(N=L^{d}\), and \(\rho_{c}=\lim_{L\rightarrow\infty}\rho_{c}(L)\). Let us furthermore define the order parameter exponent \(\beta\) and the correlation length exponent \(\nu\) by demanding for infinite systems that
\[s\equiv L^{-d}\langle S_{n}(\rho)\rangle\sim(\rho-\rho_{c})^{\beta}\quad{\rm for}\ \rho>\rho_{c} \tag{5}\]
and
\[\xi(\rho)\sim|\rho-\rho_{c}|^{-\nu}\, \tag{6}\]
where \(\xi(\rho)\) is the correlation length which for percolation is defined as the r.m.s. radius of the largest finite cluster.
Standard (FSS) arguments (mainly that observables are homogeneous functions near a critical point, that there is only one unique divergent length scale as \(\rho\rightarrow\rho_{c}\), and that the scaling of a quantity depends only on its (anomalous) dimension), lead to the ansatzes
\[s=L^{d_{f}-d}\Psi_{S}[(\rho-\rho_{c})L^{1/\nu}] \tag{7}\]
and
\[\chi\equiv L^{-d}\{{\rm Var}[S_{n}(\rho)]\}^{1/2}=L^{d_{f}-d}\Psi_{\chi}[(\rho -\rho_{c})L^{1/\nu}], \tag{8}\]
where
\[d_{f}=d-\beta/\nu. \tag{9}\]
For _bond_ percolation (whether correlated or not), \(S_{n}\) would increase whenever the largest cluster eats a smaller one. The largest gap would thus occur when the second-largest cluster, at its maximal size, gets eaten. If we still assume that all masses scale with \(L\) according to their anomalous dimension, this would imply that also
\[\langle\Delta_{\max}\rangle\sim\chi_{\Delta}\ \equiv\ \{{\rm Var}[\Delta_{\max}]\} ^{1/2}\sim L^{d_{f}} \tag{10}\]
at criticality, while equations analogous to Eqs. (7),(8) (with scaling functions \(\Psi_{\Delta}\) and \(\Psi_{\chi_{\Delta}}\)) should hold for \(\rho\neq\rho_{c}\).
For the present case of site percolation, essentially the same argument applies. There, \(\Delta_{n}\) corresponds to the sum of a small number of eaten neighboring clusters, and Eq.(10) can be assumed still to hold.
Finally, we expect that distributions of observables like \(S_{\rm max},\rho_{\rm max}\) (the density of visited sites where the largest gap occurs) and \(\Delta_{\rm max}\) should be, up to normalization, functions of the dimensionless variables \(S_{\rm max}/L^{d_{f}},(\rho_{\rm max}-\rho_{c})L^{1/\nu}\) and \(\Delta_{\rm max}/L^{d_{f}}\), so that we can write
\[P_{S}(S_{\rm max})=L^{-d_{f}}f_{S}(S_{\rm max}/L^{d_{f}}), \tag{11}\]
\[P_{\rho}(\rho_{\rm max})=L^{1/\nu}f_{\rho}[(\rho_{\rm max}-\rho_{c})L^{1/\nu}] \tag{12}\]
and
\[P_{\Delta}(\Delta_{\rm max})=L^{-d_{f}}f_{\Delta}(\Delta_{\rm max}/L^{d_{f}}) \tag{13}\]
(notice that equations (11) and (13) of [37], which are analogous to our Eqs. (11) and (13), are wrong).
According to the standard FSS scenario, the variance of \(S_{n}\) and the distribution of \(\rho_{\rm max}\) have near-by peaks which have the same scaling with \(L\) and whose position is shifted from \(\rho_{c}\) by the same scaling. If we denote the average of these two peak positions as \(\rho_{c}(L)\), we should thus have
\[\rho_{c}(L)-\rho_{c}\sim{\rm peak\;width}\sim L^{-1/\nu}. \tag{14}\]
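For concreteness, the rescaling underlying the data collapse plots of the next section can be sketched as follows (the values of \(\rho_{c}\), \(\nu\), and \(d_{f}\) are free parameters to be fitted, not results):

```python
# Rescale (rho, chi) data for one lattice size L as in Eq. (8):
# plotting y against x for several L should collapse the curves near rho_c.
def collapse(rho, chi, L, rho_c, nu, d_f, d=2):
    x = [(r - rho_c) * L ** (1.0 / nu) for r in rho]
    y = [c * L ** (d - d_f) for c in chi]
    return x, y
```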
## IV Numerical results
### Two dimensions
#### iv.1.1 Conventional variables
We studied percolation on the sites visited by Levy flights with \(\sigma=0.1,0.2,0.3,0.5,0.75,1.0,1.25,1.5,1.7\), \(1.8,1.9,2.0\) and \(2.5\). The last two values are, strictly speaking, no longer Levy flights (for which \(\sigma<2\) in \(d=2\)) but scale like ordinary RW; we can nevertheless use the Levy flight generating algorithm also for these values, and get nontrivial results because the visited sites do not, in general, form connected clusters. We also simulated ordinary site percolation, which corresponds to \(\sigma=0\), in order to see whether the scaling changes when going from \(\sigma=0\) to \(\sigma>0\).
In Fig. 1 we show the order parameter \(s\) and its fluctuations \(\chi\) as functions of the density \(\rho\) of visited sites, for \(L=16384\) and for typical values of \(\sigma\). We see the very sharp transition for ordinary site percolation (\(\sigma=0\)), while the transitions become increasingly fuzzy with increasing \(\sigma\) and happen at smaller densities of allowed sites. Indeed we claim that the leftmost curve (for \(\sigma=2.5\)), and maybe also that for \(\sigma=2\), do not show phase transitions at all. In order to settle this question, we have to look also at smaller \(L\) and perform careful FSS analyses.
In Fig. 2 we show the values of \(\chi\) against \(\rho\) for \(L\) ranging from 64 to 16384, and for \(\sigma=0\) (panel a) and \(\sigma=1.5\) (panel b). More precisely, in view of Eq. (8), we plotted \(L^{d-d_{f}}\chi\) against \((\rho-\rho_{c})L^{1/\nu}\), where we took the standard OP values of \(d_{f}\) and \(\nu\) for \(\sigma=0\), but had to use fitted values of the critical exponents for \(\sigma=1.5\). There are several comments:
(i) The collapse is not perfect even for \(\sigma=0\) (where we know the exact asymptotic scaling), which illustrates the importance of non-leading corrections to scaling. This shows also that using least square fits in order to obtain the best data collapse in such figures could be highly misleading. Indeed, data collapse plots like Fig. 2 are very helpful in getting rough overviews, but other methods are in general better suited to obtain precise results. For percolation, these include e.g. spanning probabilities [38], the mass of the second-largest cluster at criticality [39], or the scaling of gaps as discussed in the previous section [37; 40; 41]. In the present case, estimating spanning probabilities or second-largest cluster masses would forfeit the advantages of the NZ algorithm, and was thus not done.
Figure 1: Order parameters (a) and the square root of their variances (b) at \(L=16384\), for five values of \(\sigma\) between 0 and 2.5, plotted against the density of allowed sites \(\rho\) which serves as control parameter.

(ii) With increasing \(\sigma\), the fractal dimension \(d_{f}\) changes only slightly. In contrast, \(\nu\) increases dramatically. But we still obtain a perfect data collapse, which implies that the width of the peak and its shift from the exact critical point (which has also decreased significantly from its value for \(\sigma=0\)) scale in the same way with \(L\). Thus we see here no indication for two different \(\nu\)-exponents.
The critical threshold \(\rho_{c}\) and the exponents \(\nu\) and \(\beta\) can also be estimated by using Eqs. (5,7). In Fig. 3 we show, again for \(\sigma=1.5\), a data collapse plot in which we plotted \(L^{2-d_{f}}s\) against \((\rho-\rho_{c})L^{1/\nu}\). We used the same value of \(\nu\) as in Fig. 2b, but for optimal collapse we had to use slightly different values of \(d_{f}\) and \(\rho_{c}\). Since precise error estimates are difficult from such data collapse plots, we see these differences as rough error estimates. In addition, we show in Fig. 3 a curve indicating \(const\times(\rho-\rho_{c})^{\beta}\), with \(\beta=(2-d_{f})\nu\). It shows that Eqs. (5,9) are rather well satisfied.
Similar analyses were also made for other values of \(\sigma\), but we do not report results since more precise estimates of critical parameters are obtained from gap scaling, as we shall show next.
#### iv.1.2 Gap scaling in the event-based ensemble
In the above conventional types of analyses, observables are studied at fixed values of the control parameters. It was suggested first by Manna and Chatterjee [40] (see also [19; 37; 41; 42]) that more precise estimates could be obtained by studying observables at that value of the control parameter where the largest "gap" (i.e., the largest jump in the order parameter) occurs in individual realizations. These values fluctuate of course from realization to realization, and the ensemble of realizations at the point of maximal gap is called "event-based ensemble" in [19]. This was proposed first for explosive percolation [40; 41], where these fluctuations are excessively large [18], and its usefulness for other percolation transitions was suggested in [37; 42].
That gap scaling studied at the points of maximal gaps is useful also in the present model is suggested by Fig. 4. There we plotted \(P_{\rho}(\rho)\) (the distribution of maximal gap positions) at \(L=2048\) and \(\sigma=1.8\), and compared it to three curves of \(\chi\) at the same value of \(\sigma\) and for three different values of \(L\). For easier comparison of their widths, we used the same arbitrary normalization for all four curves. It is clearly seen that \(P_{\rho}(\rho)\) has the narrowest peak. It has the largest fluctuations, but this drawback is far outweighed by the sharpness of its peak.
Figure 2: Data collapse plots of \(\chi\) against \(\rho\) for \(\sigma=0\) (panel a) and \(\sigma=1.5\) (panel b). The critical exponents used in these plots are the exact ones for standard OP in panel (a), and fitted ones in panel (b). Notice that, in view of the visible deviations from a perfect data collapse in panel (a) (where the asymptotic scaling is known exactly), the good data collapse in panel (b) might be a bit fortuitous, and the precise values of the fitted exponents for \(\sigma=1.5\) should not be taken too seriously.

Figure 3: Data collapse plot of \(s\) against \(\rho\) for \(\sigma=1.5\). The exponent \(\nu\) is the same as in Fig. 2b, but \(d_{f}\) and \(\rho_{c}\) are slightly re-adjusted for best collapse. Also plotted is a power law \(s=const\times(\rho-\rho_{c})^{\beta}\), in order to show that Eqs. (5,9) are well satisfied.

**Fractal dimensions:** Let us first look at the fractal dimension. It can be either obtained from the average values and variances of \(S_{\rm max}\) (the size of the giant cluster at criticality), or from the average values and variances of \(\Delta_{\rm max}\) (which, as we pointed out, should scale like the size of the second-largest cluster). In Fig. 5 we show log-log plots of \(L^{-d_{f}^{(0)}}\langle S_{\rm max}\rangle\) (panel a) and of \(L^{-d_{f}^{(0)}}\langle\Delta_{\rm max}\rangle\) (panel b) against \(L\), where \(d_{f}^{(0)}=91/48\) is the fractal dimension in OP. We see in both panels that the curves are horizontal for \(\sigma<1\), suggesting that the model is in the OP universality class for \(\sigma<1\). For \(\sigma>1\) there are, however, significant deviations which become more and more pronounced with increasing \(\sigma\). But since all curves are strongly non-linear, it is impossible to quote with certainty an asymptotic power law for any \(\sigma>1\). We also indicate in both panels the power laws \(S_{\rm max}\sim\Delta_{\rm max}\sim L^{2}\), which we would expect for compact clusters. It is very strongly suggested that this is the asymptotic scaling for \(\sigma>2\) (and for knight's move RW), and we will later give strong arguments that there is no sharp percolation transition in this case. Whether there is a sharp transition for \(\sigma=2\) is an open question.
Analogous plots for the (square roots of the) variances are shown in Fig. 6. Again both panels of Fig. 6 show clearly OP scaling for \(\sigma<1\), and non-OP scaling for \(\sigma>1\). But again it is impossible to determine the asymptotic scaling laws for \(\sigma>1\), except that the data suggest strongly that clusters are compact for Levy flights with \(\sigma>2\) and for the knight's move RW.
All these results agree perfectly with what we obtained from the conventional analysis (data not shown). In particular, we understand now that the fractal dimensions used in Figs. 2b and 3 are only effective exponents valid in the studied range of \(L\), and it should not be surprising that they differ from each other.
**Correlation length exponents:** Correlation length exponents are obtained from the scalings of the shift of the averages of \(\rho_{\rm max}\) and of the widths of their distributions. According to standard FSS, both give the same exponent \(\nu\), but due to possible violations of the standard FSS scenario, this might not be the case in the present model.
Since measuring the shifts of \(\bar{\rho}\equiv\langle\rho_{\rm max}\rangle\) with \(L\) requires precise estimates of the true critical point positions, this is a somewhat delicate and error-prone procedure, in particular since we have already seen strong deviations from pure power law scalings. Thus we look first at the scaling of the variances. In Fig. 7 we show log-log plots of \(L^{1/2}\chi_{\rho}\) against \(L\), where
\[\chi_{\rho}=\{{\rm Var}[\rho_{\rm max}]\}^{1/2}\;. \tag{15}\]
We see now strong deviations from OP scaling for all \(\sigma>0.5\). Superficially, all curves look rather straight, so that \(\nu\) seems well determined for each \(\sigma>0.5\), and \(1/\nu\) seems to decrease continuously with increasing \(\sigma\), until \(1/\nu=0\) for \(\sigma>2\) (which would suggest that \(\chi_{\rho}=const\) for \(\sigma>2\)). But more careful inspection shows that all curves for \(\sigma<1\) bend downwards, while those for \(\sigma>1\) bend up. Only the curve for \(\sigma=1\) seems perfectly straight for \(L>256\), with slope
\[\nu^{(\sigma=1)}=2.00\pm 0.03\;. \tag{16}\]
Figure 4: Plots of \(P_{\rho}(\rho)\) (the distribution of maximal gap positions) and of the width \(\chi\) of the order parameter distribution at given \(\rho\) at \(\sigma=1.8\). Normalization of all curves is such that they all have the same height, for easier comparison of their widths. It is seen that \(P_{\rho}(\rho)\) has the sharpest peak, even if we compare it to curves of \(\chi\) at different values of \(L\).

Figure 5: Log-log plots of \(L^{-d_{f}^{(0)}}\langle S_{\rm max}\rangle\) (panel a) and of \(L^{-d_{f}^{(0)}}\langle\Delta_{\rm max}\rangle\) (panel b) against \(L\). In panel b we also show a straight line with the slope that would be expected for compact clusters (\(d_{f}=2\)).

It is not clear what this means for the true asymptotic values of \(\nu\). If the deviations from straight lines are a minor finite size correction (which is suggested superficially), then \(1/\nu\) seems to decrease roughly linearly with \(\sigma\) in the range \(1/2<\sigma<2\), i.e.
\[1/\nu=\left\{\begin{array}{rcl}3/4&:&\sigma<1/2\\ 1-\sigma/2&:&1/2<\sigma<2\\ 0&:&\sigma>2\end{array}\right. \tag{17}\]
This would mean that the model is not in the OP class for \(1/2<\sigma<1\), although we had clear evidence that \(d_{f}\) there is the same as in OP.
Another, more radical, extrapolation could be the following: The curvatures seen in Fig. 7 imply that all curves for \(\sigma<1\) align asymptotically with the one for \(\sigma=0\), and those for \(\sigma>1\) finally become parallel to that for \(\sigma=2\). In this scenario, \(\nu\) would be constant for all \(\sigma\neq 1\), and would jump at \(\sigma=1\) from \(4/3\) to \(\infty\). Neither of these two scenarios is very plausible. A third one could be that \(1/\nu=1/\nu^{(0)}\) for \(\sigma<1\), and then decreases continuously to 0.
Whatever the correct scenario is, it is clear that \(1/\nu=0\) for \(\sigma>2\), which means that the order parameter curve \(s\) versus \(\rho\) becomes, for \(\sigma>2\), independent of \(L\), and in particular no singularity develops in the limit \(L\to\infty\). Thus there is no percolation transition for \(\sigma>2\).
Figure 7: Plots of the variances of the times of largest jumps as a function of \(L\) for \(\sigma=0.0,0.1,0.2,0.3,0.5,0.75,1.0,1.25,1.5,1.7,1.8,1.9,2,2.5\) and knight's move RW from the bottom to the top of the curves, respectively. The upper solid line corresponds to \(\nu=\infty\) and seems to apply for \(\sigma>2\) and knight's move RW, while the lower line corresponds to \(\nu=4/3\) which holds for standard \(2D\) percolation, and is consistent with our results within error bars for all \(\sigma<0.5\).

Let us now look at the values of \(\bar{\rho}\) and their dependences on \(\sigma\) and \(L\). To be specific, take \(\sigma=1.8\). In Fig. 7 we had seen that if there is a scaling law \(\chi_{\rho}\sim L^{-1/\nu}\), then there must be very large finite size corrections to it. In contrast, if we choose \(\rho_{c}(\sigma=1.8)\) carefully, we can make the curve of \(\log(\bar{\rho}-\rho_{c}(\sigma=1.8))\) versus \(\log L\) nearly perfectly straight - but with a value of \(\nu\) which is closer to \(\nu^{(0)}\). This would support the conjecture that there are two different correlation length exponents. But there is also another, more plausible scenario: If we allow similarly large corrections to scaling for the dependence of \(\bar{\rho}\) on \(L\) as for \(\chi_{\rho}\), we can find a value of \(\rho_{c}\) such that the curves \(\bar{\rho}-\rho_{c}\) versus \(L\) and \(\chi_{\rho}\) versus \(L\) give practically the same value of \(\nu\). This is demonstrated in Fig. 8, where we plotted both quantities against \(L\) with suitably chosen values of \(\nu\) and \(\rho_{c}\). More precisely, in this log-log plot we show one curve for \(\chi_{\rho}\) and two curves for \(\bar{\rho}-\rho_{c}\) - one such that it is as straight as possible, the other such that it mimics \(\chi_{\rho}\).
We thus conclude that the model definitely is not in the OP universality class for \(\sigma>1\). The possible deviations from the conventional FSS picture, due to a possible new length scale generated by the finite times of the Levy flights, seem not to have led to two values of \(\nu\), but they might be the source of the huge observed corrections to scaling.
### Three dimensions
Here we just simulated the model with modified knight's move random walks. As we said in the introduction, the finiteness of the walk trajectory does not introduce an additional length scale in this case, whence we expect standard FSS.
Plots of the raw data of \(s\) against \(\rho\) for \(L=64,128,256,512\), and \(1024\) and a collapse plot of these data analogous to Fig. 3 are shown in Fig. 9. In contrast to OP and to all other percolation models we are aware of, the raw data curves cross each other, but the scaling relations Eqs. (5,9) are well satisfied. The exponent \(\nu=1.96(2)\) is very different from that in OP, but the fractal dimension \(d_{f}=2.512(10)\) is the same within errors. These values are still preliminary (we will say more about critical exponents when discussing \(\chi\) and gap statistics), but we can already say now that these data do not seem to suffer from large corrections to scaling, in contrast to those of the previous subsection.
A collapse plot of \(\chi\) (analogous to Fig. 2) is shown in Fig. 10. We see a very good data collapse, albeit for slightly different values of the critical parameters. These differences give a first impression of error estimates.
The fact that FSS is satisfied this time with small corrections, and that critical exponents can be determined rather precisely, is supported by looking at event-based gap scaling. In Fig. 11 we show the four observables which should scale with the fractal dimension (\(S_{\rm max},\Delta_{\rm max}\), and the square roots of their variances). For easier comparison we multiply each by an arbitrary constant and divide it by \(L^{d_{f}}\). The best fit is obtained with
\[d_{f}=2.502(5), \tag{18}\]
which represents our final estimate.
Figure 9: (a) Order parameter \(s\) against \(\rho\) for 3-d generalized knight's move RW, for different lattice sizes \(L\). Notice the region very near the critical point where curves cross each other (in contrast to OP and to the 2-d Levy flight model discussed in the previous subsection). (b) Data collapse plot of the data shown in panel a. The values of \(\rho_{c}\) and of the exponents \(\nu\) and \(d_{f}\) are fitted to obtain best collapse. Also plotted is a power law \(s=const\times(\rho-\rho_{c})^{\beta}\), in order to show that Eqs. (5,9) are well satisfied.

Figure 10: Data collapse plot of \(\chi\) against \(\rho\) for 3-d generalized knight's move RWs. The numerical values of the critical parameters were, as in all previous collapse plots, obtained by eyeball fits.

When determining the correlation exponent \(\nu\), we are faced again with the fact that we have to know the precise value of the critical point \(\rho_{c}\) if we want to check that the width of the critical peak and its shift from \(\rho_{c}\) scale with the same power of \(L\). But in contrast to the case of 2-d Levy flights, there now seems to be no problem, as shown in Fig. 12. From this figure we obtain our best estimates
\[\rho_{c}=0.20382(5)\;,\;\;\;\nu=1.99(1). \tag{19}\]
It was conjectured in [29; 30] that, for \(3\leq d\leq 6\), one has \(\nu=2/a\), if the correlation decays as \(C(r)\sim r^{-a}\). According to [33], the sites visited by a random walk and the sites _not_ visited by it are correlated with \(a=d-2\). Thus the present aftermath percolation model with (generalized) knight's move walks should be in the same universality class as pacman percolation in \(3\leq d\leq 6\), and in particular for \(d=3\) we expect \(\nu=2\) in perfect agreement with our simulations. In view of this agreement we conjecture that also \(d_{f}\) and \(\beta\) are simple rationals, i.e.
\[\nu=2,\;\;d_{f}=5/2,\;\;\beta=1. \tag{20}\]
This is also compatible with the estimates of [32], who found \(\nu=1.8(1)\) and \(\beta=1.0(1)\) for pacman percolation, and is fully confirmed by somewhat less extensive simulations of aftermath percolation with nnnRWs, for which \(\rho_{c}=0.2120(3)\).
In the present project we also measured the distributions of \(S_{\rm max},\rho_{\rm max}\), and \(\Delta_{\rm max}\) and their scaling functions defined in Eqs. (11, 12, 13). It was claimed in [37] that these are super-universal (i.e., universal across different universality classes) and the same even in discontinuous percolation transitions. Due to the possible difficulties with scaling violations mentioned above, we postpone their discussion for the model with Levy flights to a forthcoming paper, where we shall also discuss several other models. Here we present just one figure for the knight's move RW in 3 dimensions (Fig. 13). In this figure we show the three distributions for \(L=128\). According to [37], the distribution of \(S_{\rm max}\) should be Gumbel and should thus have an exponential right-hand tail, while the two other distributions should fall off faster than exponentially. The opposite is true: \(P_{\rho}(\rho_{\rm max})\) and \(P_{\Delta}(\Delta_{\rm max})\) seem to fall off exponentially, while \(P_{S}(S_{\rm max})\) falls off faster. More details will be given in [44].
## V Conclusions
In this paper, we have introduced a new version of correlated percolation. Motivated by the fact that disasters like wars, floods, or hurricanes often leave a weakened region which then falls easy prey to a second disaster like an epidemic, we have studied percolation restricted to the sites visited by generalized random walks. Essentially, this "aftermath epidemic" model is the inverse of pacman percolation [32; 33], where percolation is restricted to the sites _not_ visited by a random walk.
A crucial difference from pacman percolation is that the sites not visited by ordinary random walks are not connected, while those visited are. Thus, in order to obtain non-trivial percolation in aftermath epidemics, one has to use generalized walks where the visited sites are not connected. We studied Levy flights in two dimensions, and knight's move RWs both in two and three dimensions.

Figure 11: Log-log plots, for the modified knight's move RW in 3d, of the four event-based observables (\(S_{\rm max},\Delta_{\rm max}\), and the square roots of their variances) which should scale \(\sim L^{d_{f}}\) at the critical point. For easier comparison, each curve is shifted vertically by an arbitrary factor and is divided by \(L^{d_{f}}\). Please notice the very much blown up y-scale in this and in the following figure.

Figure 12: Log-log plots analogous to those in Fig. 8 of the \(L\)-dependence of the width of the peak of \(\rho_{\rm max}\) and of its shift from \(\rho_{c}\), but for the modified knight's move RW in 3d. In contrast to Fig. 8, we find now very good scaling, with the same value of \(\nu\) for both curves. Notice again the very much blown up y-scale.

Figure 13: Histograms of \(S_{\rm max},\rho_{\rm max}\), and \(\Delta_{\rm max}\) for aftermath percolation with knight's move RW in d=3 for \(L=128\), based on a sample of \(4.5\times 10^{6}\) realizations.
In three dimensions (and with knight's move RWs) we found that our model is in the same universality class as pacman percolation, and we conjecture that not only \(\nu=2\) is a simple rational, but also \(d_{f}=5/2\).
Knight's move RWs in 2d do not lead to a sharp percolation transition. This is analogous to pacman percolation, where one also has to go to three or more dimensions to find a sharp transition. But for Levy flights, sharp transitions are found whose universality classes seem to depend on the Levy flight exponent \(\sigma\).
As control parameter one can take, in these models, either the number of walker steps or the number of visited sites. Since finite walks might introduce new length scales, one has to worry that this breaks scale invariance, thereby violating one of the essential assumptions in the theory of critical phenomena. We find that this is indeed the case for Levy flights (but not for knight's move RWs). Thus it is not obvious that the usual finite size scaling applies. We found indeed no such problem for knight's move RWs in 3d. But we found problems in the form of very poor scaling in the case of Levy flights. It is not clear whether these are finite-size corrections, or whether they show that FSS is basically broken in this model. Another effect induced by additional length scales could be that different observables with the same scaling dimension show different critical exponents. In particular, we looked carefully into the possibility that there are two different correlation length exponents, as has been found in some other non-standard percolation models. We found no such deviation from FSS.
When simulating and analyzing these models, we used the fast Newman-Ziff algorithm. This implied that we could determine very quickly quantities like cluster masses and gaps (i.e., jumps in the leading cluster mass), but not spanning probabilities. Thus we have not considered the latter, nor have we looked at backbones or conductivity exponents. But we have analyzed our data both within the traditional paradigm, where one considers observables at given values of the control parameter, and in the 'event-based ensemble' [40; 41; 19], where observables are measured at those control parameter values where the biggest gap occurs. We found that the latter gives in general more precise results.
## VI Appendix
In order to measure correlations between sites visited by a Levy flight in 2 dimensions, we measured the correlation sum \(C(r)\), i.e. the fraction of pairs of visited sites which are a distance \(\leq r\) apart. This is shown in Fig. 14 for \(\sigma=1.5\), \(L=16384\) and \(T=0.96L^{2}\), which corresponds to a density \(\rho=0.55\) of visited sites. For better resolution, we multiplied this by \((L/r)^{2}\), so that the curve would be a horizontal flat line for a Poisson process, i.e. for \(\sigma=0\). We see only very small deviations from this, and definitely no power law.
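A brute-force sketch of this correlation sum with the minimum-image convention for the periodic b.c. (our own illustrative code, practical only for small numbers of visited sites):

```python
# Fraction of pairs of sites at periodic (minimum-image) distance <= r.
def correlation_sum(sites, L, r):
    pts = list(sites)
    n, close = len(pts), 0
    for a in range(n):
        for b in range(a + 1, n):
            d2 = 0
            for u, v in zip(pts[a], pts[b]):
                dd = abs(u - v) % L
                dd = min(dd, L - dd)          # minimum-image convention
                d2 += dd * dd
            close += d2 <= r * r
    return close / (n * (n - 1) // 2)

print(correlation_sum([(0, 0), (1, 0), (3, 4)], L=10, r=2))  # -> 1/3
```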
Acknowledgements: M.F. and A.A.M. acknowledge the support from the research council of the Alzahra University. P.G. thanks Nuno Araujo, Michael Grady, Hans Herrmann, and Yacov Kantor for discussions about correlated percolation.
|
2308.04472 | Entropy of the Canonical Occupancy (Macro) State in the Quantum
Measurement Theory | The paper analyzes the probability distribution of the occupancy numbers and
the entropy of a system at the equilibrium composed by an arbitrary number of
non-interacting bosons. The probability distribution is derived both by tracing
out the environment from a bosonic eigenstate of the union of environment and
system of interest (the empirical approach) and by tracing out the environment
from the mixed state of the union of environment and system of interest (the
Bayesian approach). In the thermodynamic limit, the two coincide and are equal
to the multinomial distribution. Furthermore, the paper proposes to identify
the physical entropy of the bosonic system with the Shannon entropy of the
occupancy numbers, fixing certain contradictions that arise in the classical
analysis of thermodynamic entropy. Finally, by leveraging an
information-theoretic inequality between the entropy of the multinomial
distribution and the entropy of the multivariate hypergeometric distribution,
Bayesianism and empiricism are integrated into a common ''infomechanical''
framework. | Arnaldo Spalvieri | 2023-08-08T10:26:11Z | http://arxiv.org/abs/2308.04472v6 | # Entropy of the Canonical Occupancy (Macro) State in the Quantum Measurement Theory
###### Abstract
The paper analyzes the probability distribution of the occupancy numbers and the entropy of a system at the equilibrium composed by an arbitrary number of non-interacting bosons. The probability distribution is derived both by tracing out the environment from the pure state of the union of environment and system of interest (the empirical approach) and by tracing out the environment from the mixed state of the union of environment and system of interest (the Bayesian approach). In the thermodynamic limit, the two coincide and are equal to the multinomial distribution. The physical entropy of the system is then identified with the Shannon entropy of the multinomial distribution. This fixes certain contradictions arising in the classical analysis of thermodynamic entropy.
_Keywords:_ Occupancy Numbers, Maximum Entropy Principle, Canonical Typicality, Multinomial Distribution, Gibbs Correction Factor, Ideal Gas, Sackur-Tetrode Formula, Szilard Engines.
## I Introduction
The standard approach to the computation of the entropy of systems of indistinguishable particles is to subtract \(\log(N!)\), where \(N\) is the number of the system's particles, from the entropy of the system's microstates. However, when the entropy of microstates is smaller than \(\log(N!)\), e.g. at sufficiently low temperature, the difference between the entropy of microstates and \(\log(N!)\) becomes negative. Since the entropy of any random variable is guaranteed to be non-negative, the entropy of microstates minus \(\log(N!)\) cannot be an entropy. This pathology is not surprising because, after the subtraction of \(\log(N!)\), the random variable and the probability distribution to which the resulting formula refers are no longer specified. Our crucial observation is that the random variable is the vector of the occupancy numbers. This leads us to work out the probability distribution of the occupancy numbers of a system of bosons at equilibrium. The paper also shows that the empirical distribution derived from the modern quantum approach to thermodynamics of [2] and [3] converges to the information-theoretic distribution derived from Jaynes' MaxEnt principle [1].
## II Notation and terminology
Let the eigenstates allowed to a quantum particle be identified by the quantum numbers belonging to the set \(\mathbb{C}\) (the set of colors),
\[\mathbb{C}=\{1,2,\cdots,|\mathbb{C}|\},\]
where, here and in what follows, blackboard bold characters denote discrete sets and \(|\mathbb{C}|\) is the number of elements of \(\mathbb{C}\). Consider a system made by \(N\) such non-interacting particles (the \(N\) colored balls). The Hamiltonian of the system is
\[\hat{H}_{\mathbb{C}^{N}}=\sum_{\bar{c}^{N}\in\mathbb{C}^{N}}\epsilon(\bar{c} ^{N})\ket{\bar{c}^{N}}\bra{\bar{c}^{N}},\]
where \(\mathbb{C}^{N}\) is the Cartesian product of \(\mathbb{C}\) with itself \(N\) times, the eigenvector \(\ket{\bar{c}^{N}}\) (the overline denotes vectors and the size of the vector will be omitted for brevity in the following when it is unnecessary) is
\[\ket{\bar{c}}=\ket{c_{1},c_{2},\cdots,c_{N}}, \tag{1}\]
\(c_{i}\) is the color of the \(i\)-th particle, and
\[\epsilon(\bar{c})=\sum_{i=1}^{N}\epsilon(c_{i}),\]
where \(\epsilon(c)\) is the \(c\)-th energy eigenvalue of the single-particle Hamiltonian, in other words, the energy of color \(c\). In the first quantization formalism, the set of eigenkets \(\{\ket{\bar{c}},\bar{c}\in\mathbb{C}^{N}\}\) is a complete orthonormal basis for the Hilbert space \(\mathscr{C}^{N}\) of the system. In statistical mechanics, \(\bar{c}\) is the _microstate_ of the system of distinguishable particles. \(N\) particles of the same species can be distinguishable, for instance, when they are confined in \(N\) identical but distinct regions of space.
Let the particles be bosons and call \(n(c,\bar{c})\) the occupancy number of color \(c\), that is the number of bosons of microstate \(\bar{c}\) whose color is \(c\):
\[n(c,\bar{c})\stackrel{{\rm def}}{{=}}\sum_{i=1}^{N}\delta(c-c_{i }),\ \ c=1,2,\cdots|\mathbb{C}|, \tag{2}\]
where \(\delta(\cdot)\) is the indicator function:
\[\delta(c-c_{i})=\left\{\begin{array}{ll}1,&\ c_{i}=c,\\ 0,&\mbox{elsewhere}.\end{array}\right.\]
In what follows, the dependency on the microstate will be omitted when what matters are only the occupancy numbers.
Also, it is understood that the occupancy numbers obey the constraint
\[N=\sum_{c\in\mathbb{C}}n(c). \tag{3}\]
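In code, Eqs. (2) and (3) amount to counting colors; a minimal Python illustration (the microstate below consists of toy values chosen for the example):

```python
# Occupancy numbers of a microstate (Eq. (2)) and their normalization (Eq. (3)).
from collections import Counter

def occupancy(microstate, num_colors):
    counts = Counter(microstate)
    return [counts.get(c, 0) for c in range(1, num_colors + 1)]

n = occupancy(microstate=(1, 3, 3, 2, 1, 1), num_colors=4)
print(n)            # [3, 1, 2, 0]
assert sum(n) == 6  # Eq. (3): the occupancy numbers sum to N
```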
For finite \(\left|\mathbb{C}\right|\) and \(N\), the size \(\left|\mathbb{N}\right|\) of the set \(\mathbb{N}\) spanned by the vector \(\bar{n}\) of the occupancy numbers is the negative binomial coefficient [4]:
\[\left|\mathbb{N}\right|=\left(\begin{array}{c}N+\left|\mathbb{C}\right|-1\\ \left|\mathbb{C}\right|-1\end{array}\right).\]
In statistical mechanics, the _occupancy macrostate_, or, in short, macrostate, denoted \(\mathbb{C}^{N}(\bar{n})\), is the set of microstates whose occupancy numbers are \(\bar{n}\). The number of elements of \(\mathbb{C}^{N}(\bar{n})\) is the multinomial coefficient \(W(\bar{n})\),
\[\left|\mathbb{C}^{N}(\bar{n})\right|=W(\bar{n})=\frac{N!}{\prod_{c=1}^{\left| \mathbb{C}\right|}n(c)!}, \tag{4}\]
which is equal to the number of distinct permutations of the elements of a vector \(\bar{c}\) whose occupancy numbers are \(\bar{n}\). The elements of the set \(\{\mathbb{C}^{N}(\bar{n}),\bar{n}\in\mathbb{N}\}\) form a disjoint partition of \(\mathbb{C}^{N}\), so
\[\sum_{\bar{n}\in\mathbb{N}}W(\bar{n})=\left|\mathbb{C}^{N}\right|=\left| \mathbb{C}\right|^{N}.\]
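Both counting formulas are one-liners in Python; a small sketch with toy values:

```python
# W(n) of Eq. (4) and the number |N| of occupancy vectors for N bosons, |C| colors.
from math import comb, factorial, prod

def W(n):
    """Multinomial coefficient: number of microstates with occupancy vector n."""
    return factorial(sum(n)) // prod(factorial(k) for k in n)

def num_occupancy_vectors(N, C):
    return comb(N + C - 1, C - 1)

print(W([3, 1, 2, 0]))                 # 60 distinct permutations
print(num_occupancy_vectors(6, 4))     # 84 occupancy macrostates
```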
In the second quantization formalism, the occupancy quantum state \(\left|\bar{n}\right\rangle\) of a system of \(N\) bosons that are non-interacting between them is specified by the occupancy numbers:
\[\left|\bar{n}\right\rangle=\sum_{\bar{c}\in\mathbb{C}^{N}(\bar{n})}\frac{ \left|\bar{c}\right\rangle}{\sqrt{W(\bar{n})}}, \tag{5}\]
see [5] for a derivation of (5) based on standard arguments. The \(\left|\mathbb{N}\right|\) states of the set \(\{\left|\bar{n}\right\rangle,\bar{n}\in\mathbb{N}\}\) form a complete set of orthogonal basis states for the bosonic Hilbert subspace \(\mathcal{N}\) of the Hilbert space \(\mathcal{C}^{N}\), \(\mathcal{N}\subseteq\mathcal{C}^{N}\). The Hamiltonian operator \(\hat{H}_{\mathbb{N}}\) of the bosonic system is the projection of the Hamiltonian operator \(\hat{H}_{\mathbb{C}^{N}}\) onto the bosonic subspace:
\[\hat{H}_{\mathbb{N}}=\sum_{\bar{n}\in\mathbb{N}}\left|\bar{n}\right\rangle \left\langle\bar{n}\right|\hat{H}_{\mathbb{C}^{N}}=\sum_{\bar{n}\in\mathbb{N} }\epsilon(\bar{n})\left|\bar{n}\right\rangle\left\langle\bar{n}\right|, \tag{6}\]
where
\[\epsilon(\bar{n})=\sum_{c\in\mathbb{C}}n(c)\epsilon(c).\]
## III Empirical approach
Let us call "universe" the union of the system of interest and the environment. In our analogy between the system of interest and the \(N\) colored balls, the universe is represented by the colored balls contained in the urn from which the \(N\) colored balls are drawn, while the environment is the colored balls that remain inside the urn after the extraction. In the following we use \(U\) and \(E\) in the subscript to identify the universe and the environment, e.g., \(N_{U}\) is the number of bosons of the universe, while we put no subscript for the system of interest, so \(N_{U}\geq N\), \(N_{U}\geq N_{E}\). Following [2], we do not limit the scope of our analysis to the thermal state; rather, we consider the system in a _generalized_ canonical state of equilibrium with the environment. Let us consider the case where the pure state of the universe is a bosonic eigenstate \(\left|\bar{n}_{U}\right\rangle\), and let \(\bar{n}_{U}\) play the role of a vector of known parameters in the mixed state of the system of interest. The case of pure states of the universe that are linear combinations of bosonic eigenstates will be discussed later on in the paper.
Let us consider the mixed state
\[\hat{\nu}(\bar{n}_{U})=\text{Tr}_{\mathcal{C}^{N_{E}}_{E}}(\left|\bar{n}_ {U}\right\rangle\left\langle\bar{n}_{U}\right|),\]
and write the pure state in the form
\[\left|\bar{n}_{U}\right\rangle=\sum_{\bar{n}\in\mathbb{N}(\bar{n}_{U}),\; \bar{c}_{E}\in\mathbb{C}^{N_{E}}(\bar{n}_{E}=\bar{n}_{U}-\bar{n}),\;\bar{c} \in\mathbb{C}^{N}(\bar{n})}\frac{\left|\bar{c},\bar{c}_{E}\right\rangle}{ \sqrt{W(\bar{n}_{U})}}, \tag{7}\]
where \(\mathbb{N}(\bar{n}_{U})\) is the set of macrostates of the system of interest compatible with \(\bar{n}_{U}\). The partial trace of the outer product \(\left|\bar{c},\bar{c}_{E}\right\rangle\left\langle\bar{c}^{\prime}_{E},\bar{c }^{\prime}\right|\) is
\[\text{Tr}_{\mathcal{C}^{N_{E}}_{E}}\left(\left|\bar{c},\bar{c}_{E}\right\rangle \left\langle\bar{c}^{\prime}_{E},\bar{c}^{\prime}\right|\right)=\delta(\bar{c} _{E}-\bar{c}^{\prime}_{E})\left|\bar{c}\right\rangle\left\langle\bar{c}^{\prime} \right|. \tag{8}\]
Since \((\bar{c},\bar{c}_{E})\) and \((\bar{c}^{\prime},\bar{c}^{\prime}_{E})\) belong to the same macrostate \(\bar{n}_{U}\) of the universe, there are \(W(\bar{n}_{E})=W(\bar{n}_{U}-\bar{n})\) microstates \(\bar{c}_{E}\) that are equal to \(\bar{c}^{\prime}_{E}\) when both \(\bar{c}\) and \(\bar{c}^{\prime}\) belong to the same macrostate of the system of interest, while \(\delta(\bar{c}_{E}-\bar{c}^{\prime}_{E})\) is always zero when \(\bar{c}\) and \(\bar{c}^{\prime}\) belong to different macrostates of the system of interest, because in this case also \(\bar{c}_{E}\) and \(\bar{c}^{\prime}_{E}\) must belong to different macrostates of the environment, so they cannot be equal. Based on this reasoning, we conclude that
\[\hat{\nu}(\bar{n}_{U}) =\sum_{\bar{n}\in\mathbb{N}(\bar{n}_{U})}\frac{W(\bar{n}_{U}-\bar {n})}{W(\bar{n}_{U})}\sum_{\bar{c}\in\mathbb{C}^{N}(\bar{n})}\sum_{\bar{c}^{ \prime}\in\mathbb{C}^{N}(\bar{n})}\left|\bar{c}\right\rangle\left\langle\bar{c} ^{\prime}\right| \tag{9}\] \[=\sum_{\bar{n}\in\mathbb{N}(\bar{n}_{U})}\frac{W(\bar{n}_{U}-\bar {n})W(\bar{n})}{W(\bar{n}_{U})}\left|\bar{n}\right\rangle\left\langle\bar{n}\right| \tag{10}\] \[=\sum_{\bar{n}\in\mathbb{N}(\bar{n}_{U})}p_{\bar{\mathcal{N}}}( \bar{n},\bar{n}_{U})\left|\bar{n}\right\rangle\left\langle\bar{n}\right|, \tag{11}\]
where uppercase calligraphic characters denote random variables and \(p_{\mathcal{X}}(x)\) is the probability that \(\mathcal{X}=x\). The probability distribution \(\{p_{\bar{\mathcal{N}}}(\bar{n},\bar{n}_{U})\}\) defined by the equality (11) is the multivariate hypergeometric distribution. This result is expected: the multivariate hypergeometric is the distribution of the occupancy numbers of colors when drawing \(N\) balls without replacement out of an urn containing \(N_{U}\) colored balls. For large but finite \(N_{U}\) and \(N_{U}\gg N\), tail bounds on the probability of deviations of \(\mathcal{N}(c)\) from its expectation can be found in [6].
Writing the joint probability distribution of microstates in the form of a chain of conditional probability distributions,
\[p_{\mathcal{C}^{N}}(\bar{c}^{N},\bar{n}_{U})=\frac{W(\bar{n}_{U}-\bar{n}(\bar{c }^{N}))}{W(\bar{n}_{U})}=\prod_{i=1}^{N}p_{\mathcal{C}_{i}|\mathcal{C}^{i-1}}( \bar{c}^{i},\bar{n}_{U}),\]
we promptly recognize that
\[p_{\mathcal{C}_{i}|\mathcal{C}^{i-1}}(\bar{c}^{i},\bar{n}_{U})=\frac{W(\bar{n}_{U}- \bar{n}(\bar{c}^{i}))}{W(\bar{n}_{U}-\bar{n}(\bar{c}^{i-1}))}. \tag{12}\]
The first link of the chain is the empirical one-particle distribution, that is the relative number of balls of color \(c=c_{1}\) among the \(N_{U}\) balls, which, as expected, is
\[p_{\mathcal{C}_{1}}(c_{1},\bar{n}_{U})=\frac{W(\bar{n}_{U}-\bar{n}(c_{1}))}{W( \bar{n}_{U})}=\frac{n_{U}(c_{1})}{N_{U}}. \tag{13}\]
Using Stirling's formula and the Law of Large Numbers (LLN) we compute
\[\lim_{N_{U}\to\infty}\frac{W(\bar{n}_{U}-\bar{n})}{W(\bar{n}_{U})} =\lim_{N_{U}\to\infty}\prod_{c\in\mathbb{C}}(N_{U}^{-1}n_{U}(c))^ {n(c)} \tag{14}\] \[=\prod_{c\in\mathbb{C}}(p_{\mathcal{C}}(c))^{n(c)}\] (15) \[=\prod_{i=1}^{N}p_{\mathcal{C}}(c_{i})=p_{\mathcal{C}}(\bar{c}), \tag{16}\]
that is the generalized canonical distribution of microstates. Substituting (15) in the multivariate hypergeometric distribution we find that the generalized canonical distribution of the occupancy numbers is the multinomial distribution:
\[\lim_{N_{U}\to\infty}\hat{\nu}(\bar{n}_{U})=\hat{\nu} =\sum_{\bar{n}\in\mathbb{N}}\left|\bar{n}\right\rangle\left\langle \bar{n}\right|W(\bar{n})\prod_{c\in\mathbb{C}}(p_{\mathcal{C}}(c))^{n(c)}\] \[=\sum_{\bar{n}\in\mathbb{N}}\left|\bar{n}\right\rangle\left\langle \bar{n}\right|p_{\bar{\mathcal{N}}}(\bar{n}). \tag{17}\]
For \(N_{U}\to\infty\), the dependency on the _absolute_ occupancy numbers of the universe that characterizes (9) turns, thanks to the LLN, into the dependency on the _relative_ occupancy numbers of the universe. As shown by (16), the consequence is that microstates become independent and identically distributed (i.i.d.) and that, by the weak LLN, the empirical one-particle distribution
\[\bar{p}_{\mathcal{C}}=\lim_{N_{U}\to\infty}N_{U}^{-1}\bar{n}_{U}, \tag{18}\]
and, consequently, the mixed state \(\hat{\nu}\) of (17), are the same _for almost every bosonic eigenstate of the universe_ compatible with the constraints imposed on the universe, e.g., volume, temperature, or, equivalently, expected energy (here and in what follows \(\bar{p}_{\mathcal{C}}\) is a shorthand for \(\{p_{\mathcal{C}}(c)\}\)).
The multinomial distribution defined by equality (17) is the occupancy distribution of colors in drawing with replacement of \(N\) balls out of an urn containing colored balls with relative frequency distribution of colors in the urn equal to \(\bar{p}_{\mathcal{C}}\). Actually, by the LLN, drawing without replacement tends to drawing with replacement as the number of balls contained in the urn tends to infinity. For large but finite \(N\), convergence in the weak sense of \(N^{-1}\bar{\mathcal{N}}\) to \(\bar{p}_{\mathcal{C}}\) is studied by _concentration inequalities_ that bound the probability of deviations of \(N^{-1}\bar{\mathcal{N}}\) from \(\bar{p}_{\mathcal{C}}\), see [7] for recent advances on the subject.
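This convergence is easy to check numerically. The following hedged sketch (toy values; exact integer factorials, so the only approximation is the final float) compares the multivariate hypergeometric probability \(W(\bar{n}_{U}-\bar{n})W(\bar{n})/W(\bar{n}_{U})\) of (10)-(11) with the multinomial probability of (17) for urns of growing size:

```python
# Multivariate hypergeometric vs. multinomial occupancy probabilities.
from math import factorial, prod

def W(n):
    """Multinomial coefficient of an occupancy vector."""
    return factorial(sum(n)) // prod(factorial(k) for k in n)

def hypergeom_pmf(n, n_U):
    """Probability of occupancy n when drawing without replacement, Eqs. (10)-(11)."""
    return W([u - k for u, k in zip(n_U, n)]) * W(n) / W(n_U)

def multinomial_pmf(n, p):
    """Probability of occupancy n for i.i.d. colors, Eq. (17)."""
    return W(n) * prod(pc ** k for pc, k in zip(p, n))

n, p = [2, 1, 1], [0.5, 0.25, 0.25]          # toy occupancy vector and p_C
for N_U in (10, 100, 10000):
    n_U = [int(N_U * pc) for pc in p]        # urn with relative frequencies p_C
    print(N_U, hypergeom_pmf(n, n_U))
print("multinomial:", multinomial_pmf(n, p)) # limit value 0.1875
```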
## IV Bayesian approach
In the Bayesian approach, the vector of known parameters \(\bar{n}_{U}\) becomes the vector of random parameters \(\bar{\mathcal{N}}_{U}\). The multivariate hypergeometric distribution (11) is the Bayesian _likelihood_ \(\{p_{\bar{\mathcal{N}}|\bar{\mathcal{N}}_{U}}(\bar{n},\bar{n}_{U})\}\), that is, a conditional distribution where the occupancy numbers of the universe play the role of random conditions; the distribution \(\{p_{\bar{\mathcal{N}}_{U}}(\bar{n}_{U})\}\) of the random occupancy numbers of the universe is the Bayesian _prior_; the sought distribution \(\{p_{\bar{\mathcal{N}}}(\bar{n})\}\) of the system's occupancy numbers is the Bayesian _marginal_. The system's mixed state is obtained by tracing out the environment from the mixed state of the universe characterized by the prior. Substituting the multinomial distribution for the prior and using the partial trace (10), after straightforward manipulations we find that, whichever is the number of particles of the universe, the marginal that characterizes the mixed state of the system of interest is the multinomial distribution. This shows that the Bayesian approach is self-consistent, in the sense that the mixed state of any sub-system is always multinomially distributed when the mixed state of the system is multinomially distributed, or, equivalently, when the distribution of microstates is i.i.d., hence when it maximizes the Shannon entropy of microstates for the given one-particle distribution \(\bar{p}_{\mathcal{C}}\). The one-particle distribution, in place of being found empirically by (13) from the knowledge of the occupancy numbers of the universe, is found, in Jaynes' information-theoretic approach (the famous and debated MaxEnt approach [1]), by maximization of the one-particle entropy under the constraints imposed on the system. Theorem 2 of [8] proves that multinomial models maximize Shannon's entropy of the occupancy distribution constrained to \(N\bar{p}_{\mathcal{C}}\); see also [9] for the multinomial distribution as the MaxEnt distribution in statistical mechanics.
## V Entropy
The Shannon entropy \(H_{\bar{\mathcal{X}}}\) of the random vector \(\bar{\mathcal{X}}\) is the expectation of the _surprise_\(H(\bar{\mathcal{X}})\):
\[H(\bar{\mathcal{X}})\stackrel{{\text{def}}}{{=}}-\log(p_{\bar{ \mathcal{X}}}(\bar{\mathcal{X}})), \tag{19}\]
\[H_{\bar{\mathcal{X}}}=E\{H(\bar{\mathcal{X}})\}, \tag{20}\]
where
\[E\{f(\bar{\mathcal{X}})\}=\sum_{\bar{x}\in\mathbb{X}}p_{\bar{\mathcal{X}}}( \bar{x})f(\bar{x})\]
is the classical expectation operator over the random variable \(\bar{\mathcal{X}}\) inside the argument of the deterministic function \(f(\cdot)\). In physics, \(\log(x)\) is the natural logarithm of \(x\), and the Boltzmann constant in front of the logarithm is omitted here for brevity. The random \(H(\bar{\mathcal{X}})\) is called surprise because it reflects the surprise that the experimenter experiences when the result of his experiment is \(\bar{\mathcal{X}}\). It is based on a probability distribution, but it is not an expectation; it is a property of the specific result \(\bar{\mathcal{X}}\) of the experiment. As such, the surprise \(H(\bar{\mathcal{N}})\) can be regarded as the pre-measurement physical entropy of the system that the measurement finds in state \(|\bar{\mathcal{N}}\rangle\). The use of a quantum equivalent of the surprise is not standard. We suggest that, in analogy to classical entropy, one could use the surprise for the eigenvalues of the following entropy observable \(\hat{S}_{\bar{\mathcal{N}}}\):
\[\hat{S}_{\bar{\mathcal{N}}}\stackrel{{\text{def}}}{{=}}-\sum_{\bar {n}\in\mathbb{N}}\log(p_{\bar{\mathcal{N}}}(\bar{n}))\left|\bar{n}\right\rangle \left\langle\bar{n}\right|,\]

\[S_{\hat{\nu}}=\text{Tr}(\hat{S}_{\bar{\mathcal{N}}}\hat{\nu})=-\sum_{\bar{n}\in\mathbb{N}}p _{\bar{\mathcal{N}}}(\bar{n})\log(p_{\bar{\mathcal{N}}}(\bar{n}))=H_{\bar{ \mathcal{N}}},\]
where the second equality of the last line follows from the orthogonality of the bosonic eigenstates. Note that we completely skip the notion of phase space, leading to the _exact_ probability distribution (17) of the quantum occupancy numbers and, as a consequence, to the exact surprise and the exact Shannon entropy. Conversely, the standard phase-space approach inherently leads to approximations of the entropy, which call for corrections at low temperature/density ratio, see e.g. [10], while still remaining approximations.
The entropy of a probability distribution of the form (11) is
\[H_{\bar{\mathcal{N}}}=H_{\bar{\mathcal{C}}}-H_{\bar{\mathcal{C}}|\bar{ \mathcal{N}}}, \tag{21}\]
\[H_{\bar{\mathcal{C}}|\bar{\mathcal{N}}}\stackrel{{\text{def}}}{{=}}-E\{\log(p_{\bar{\mathcal{C}}|\bar{\mathcal{N}}}(\bar{\mathcal{C}}|\bar{\mathcal{N}}))\}=E\{\log(W(\bar{\mathcal{N}}))\}=\log(N!)-\sum_{c\in\mathbb{C}}E\{\log(\mathcal{N}_{c}!)\}. \tag{22}\]
\(H_{\bar{\mathcal{C}}}\) is the Shannon entropy of microstates, hence of the distinguishable particles. The conditional Shannon entropy \(H_{\bar{\mathcal{C}}|\bar{\mathcal{N}}}\) is due to the indistinguishability of particles. Indistinguishability prevents access to \(\log(W(\bar{\mathcal{N}}))\) units of information, whose expectation is just the term that is subtracted from the Shannon entropy of distinguishable particles in (21). The term \(\log(N!)\) in (22) was introduced by Gibbs to make the non-quantized phase-space (differential) entropy of systems of indistinguishable particles compatible with the resolution of his celebrated paradox. We observe that, while the probability that two or more particles have the same position and momentum is zero, because position and momentum are dense variables, the probability that two or more particles occupy the same quantum state is not zero. This probability leads to the sum of expectations in (22). As the entropy of microstates becomes lower and lower, this sum becomes closer and closer to \(\log(N!)\), becoming equal to it when all the particles occupy the ground state. This prevents the system's entropy from becoming negative when the entropy of microstates becomes vanishingly small, as happens for instance with the Sackur-Tetrode formula. Equality (21) is equation (11) of [11], where the authors call the entropy of the distribution of the occupancy numbers _entropy fluctuations_. Apart from certain exceptions, the authors of [11] consider these "entropy fluctuations" negligible compared to the "entropy" of the system, failing to recognize that the entropy of the occupancy numbers _is_ the thermodynamic entropy of a system of indistinguishable particles.
When microstates are i.i.d. random variables we have
\[H_{\bar{\mathcal{C}}}=NH_{\mathcal{C}}. \tag{23}\]
The following sequence of inequalities sandwiches the Boltzmann entropy \(\log(W(N\bar{p}_{\mathcal{C}}))\) between the two terms that contribute to \(H_{\bar{\mathcal{N}}}\):
\[NH_{\mathcal{C}}\geq\log(W(N\bar{p}_{\mathcal{C}}))=\log(W(E\{\bar{\mathcal{N}}\})) \tag{24}\]
\[\geq E\{\log(W(\bar{\mathcal{N}}))\}, \tag{25}\]
where, with some abuse of notation, here and in what follows the factorials of the real numbers in the denominator of \(W(N\bar{p}_{\mathcal{C}})\) are intended as \(x!=\Gamma(x+1)\), where \(\Gamma(\cdot)\) is the Gamma function. The first inequality is (11.22) of [4], the second inequality is obtained by applying the Jensen inequality
\[E\{f(\mathcal{N}(c))\}\geq f(E\{\mathcal{N}(c)\}),\ \forall\ c\in\mathbb{C},\]
to the convex (upward) function \(f(\mathcal{N}(c))=\log(\mathcal{N}(c)!)\). In statistical mechanics it is standard to derive from Stirling's formula an approximate equality between the two sides of (24).
The expectation appearing in (22) is
\[E\{\log(\mathcal{N}(c)!)\}=\sum_{n=0}^{N}\left(\begin{array}{c}N\\ n\end{array}\right)(p_{\mathcal{C}}(c))^{n}(1-p_{\mathcal{C}}(c))^{N-n}\log( n!),\]
see [12], see [13] for the calculation of the above expectation in integral form, see also [11] for approximations to the entropy of the multinomial distribution in the context of statistical mechanics.
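The expectation above and the sandwich (24)-(25) are easy to check numerically. The following is a minimal sketch, assuming a small three-state system with \(N=20\) particles so that the multinomial support can be enumerated exhaustively; all variable names are illustrative.

```python
import numpy as np
from math import lgamma
from scipy.stats import multinomial

N = 20
p = np.array([0.5, 0.3, 0.2])                 # one-particle distribution p_C

def log_W(n):
    """log of the multinomial coefficient W(n) = N! / prod_c(n_c!)."""
    return lgamma(sum(n) + 1) - sum(lgamma(k + 1) for k in n)

NH_C = -N * np.sum(p * np.log(p))             # N * H_C, natural logarithm

# log W(N p_C); factorials of real numbers taken as Gamma(x+1)
log_W_mean = lgamma(N + 1) - sum(lgamma(N * pc + 1) for pc in p)

# E{log W(N)} over the multinomial distribution, by exhaustive enumeration
E_log_W = sum(multinomial.pmf((n1, n2, N - n1 - n2), N, p)
              * log_W((n1, n2, N - n1 - n2))
              for n1 in range(N + 1) for n2 in range(N - n1 + 1))

print(NH_C, log_W_mean, E_log_W)              # non-increasing, as in (24)-(25)
```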
Before concluding this section we remark that, if we pretend that entropy is a state variable, then the probability distribution of the occupancy numbers must depend only on the state of the system. However, the multivariate hypergeometric distribution depends also on the state of the universe. The dependence becomes weaker and weaker as the number of particles of the universe tends to infinity, but the fact remains that the empirical approach is incompatible with the notion of entropy as a state variable: entropy can be a state variable only if we renounce the empirical approach.
## VI Two examples
In the case of an ideal monoatomic dilute gas in a cubic container of side \(L\), one particle of the gas is modelled as a quantum "particle in a box" with three degrees of freedom, whose energy eigenvalues with aperiodic boundary conditions are
\[\epsilon(c)=(c_{x}^{2}+c_{y}^{2}+c_{z}^{2})\frac{h^{2}}{8mL^{2}}, \tag{26}\]
where \(c\) consists of the three quantum numbers \((c_{x},c_{y},c_{z})\), \(m\) is the mass of the particle and \(h=6.626\cdot 10^{-34}\) J \(\cdot\) s is the Planck constant. When the gas is at the thermal equilibrium at temperature \(T\) Kelvin degrees with the heat bath, maximization of entropy with constrained temperature leads to the Boltzmann distribution for \(\bar{p}_{\mathcal{C}}\) and, by the i.i.d. assumption, to the multinomial distribution
\[p_{\bar{\mathcal{N}}}(\bar{n})=W(\bar{n})Z^{-N}\prod_{c\in\mathbb{C}}e^{-n_{c}\epsilon(c)/k_{B}T}, \tag{27}\]
where \(k_{B}=1.38\cdot 10^{-23}\) J/K is the Boltzmann constant and \(Z\) is the one-particle partition function:
\[Z=\sum_{c\in\mathbb{C}}e^{-\epsilon(c)/k_{B}T}.\]
When the temperature-to-density ratio is high, it becomes possible to employ two approximations. In the first one, the
partition function is approximated by an integral, see eqn. 19.54 of [14], leading to
\[H_{\mathcal{C}}\approx\frac{3}{2}\left(1+\log\left(\frac{2\pi mk_{B}TL^{2}}{h^{2 }}\right)\right). \tag{28}\]
In the second one, the sum of expectations in (22) is neglected and, for a large number of particles, \(\log(N!)\) is approximated by \(N\log(N)-N\) via Stirling's formula. The result is the textbook Sackur-Tetrode entropy formula:
\[H_{\bar{\mathcal{N}}}\approx N\left(\log\left(\frac{L^{3}}{N}\left(\frac{2\pi mk _{B}T}{h^{2}}\right)^{\frac{3}{2}}\right)+\frac{5}{2}\right). \tag{29}\]
A detailed numerical analysis of the entropy of the ideal gas can be found in [15, 16].
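A minimal sketch of how (28) and (29) can be evaluated is given below; the particle mass (helium-like), temperature, box side, and particle number are illustrative assumptions, chosen so that the gas stays in the dilute regime where both approximations apply.

```python
import numpy as np

h, kB = 6.626e-34, 1.38e-23     # Planck and Boltzmann constants (SI)
m, T = 6.63e-27, 300.0          # helium-like mass (kg), temperature (K)
L, N = 1e-6, 100                # box side (m), number of particles

# One-particle entropy in the continuum approximation, eq. (28)
H_C = 1.5 * (1.0 + np.log(2 * np.pi * m * kB * T * L**2 / h**2))

# Sackur-Tetrode entropy, eq. (29), in k_B units
H_N = N * (np.log(L**3 / N * (2 * np.pi * m * kB * T / h**2) ** 1.5) + 2.5)

print(H_C, H_N)                 # both positive in this dilute regime
```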
As a second example, consider a container holding one particle of gas at thermal equilibrium, and divide the container into two sub-containers of equal size by inserting a wall. If the wall functions as a piston, the system's state is the second state of a one-particle Szilard engine. The state of the system after the insertion of the wall is represented by the joint random variable \(\mathcal{C}=(\mathcal{B},\mathcal{C}^{\prime})\), where \(\bar{p}_{\mathcal{C}^{\prime}}\) is the Boltzmann distribution of the states of one particle in the sub-container and \(\mathcal{B}\) is a binary random variable, independent of \(\mathcal{C}^{\prime}\), that indicates which of the two sub-containers the measurement will localize the particle in.
The entropy of the state after the insertion of the wall is
\[\log(2)-\sum_{c\in\mathcal{C}^{\prime}}p_{\mathcal{C}^{\prime}}(c)\log(p_{ \mathcal{C}^{\prime}}(c)), \tag{30}\]
where the famous \(\log(2)\) of Landauer [17] comes from the random variable \(\mathcal{B}\). We have numerically evaluated the partition function of the Boltzmann distribution with the parametrization of [18], that is, particle mass \(m=9.11\cdot 10^{-31}\) kg, temperature \(T=300\) K, and a one-dimensional box of size \(L=20\cdot 10^{-9}\) m. We obtain that the entropy of the single particle with one degree of freedom before the insertion of the piston is \(1.988\) in \(k_{B}\) units, while with the size of the one-dimensional box equal to \(10\cdot 10^{-9}\) m, that is, after the insertion of the piston, the entropy in \(k_{B}\) units is \(1.243\), leading to the difference \(1.988-0.693-1.243=0.052\), in excellent agreement with the entropy fall shown in Fig. 3 of [18], where the result is derived by the phase-space approach.
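This computation can be reproduced by direct summation of the one-dimensional particle-in-a-box partition function. The following is a minimal sketch under the stated parametrization (entropies in \(k_{B}\) units, natural logarithm); the truncation level is an illustrative choice, large enough that the neglected Boltzmann weights underflow to zero.

```python
import numpy as np

h, kB = 6.626e-34, 1.38e-23     # Planck and Boltzmann constants (SI)
m, T = 9.11e-31, 300.0          # particle mass (kg), temperature (K)

def boltzmann_entropy(L, n_max=2000):
    """One-particle entropy H_C of the Boltzmann distribution over the
    1D particle-in-a-box levels eps(c) = c^2 h^2 / (8 m L^2)."""
    c = np.arange(1, n_max + 1)
    eps = c**2 * h**2 / (8 * m * L**2)
    w = np.exp(-eps / (kB * T))
    p = w / w.sum()
    p = p[p > 0]                # guard against 0 * log(0)
    return -np.sum(p * np.log(p))

H_before = boltzmann_entropy(20e-9)                # full box, L = 20 nm
H_after = np.log(2) + boltzmann_entropy(10e-9)     # eq. (30), sub-box term
print(H_before, H_after, H_before - H_after)       # close to 1.988, 1.936, 0.052
```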
In the case of \(N\) particles, \(\mathcal{B}\) is a binomial random variable \((N,V^{\prime}/V)\), where \(V\) is the total volume, \(V^{\prime}\) is the volume of one of the two sub-containers. The probability distribution of the macrostate is
\[\sum_{b=0}^{N}p_{\bar{\mathcal{N}}^{\prime}|\mathcal{B}}(\bar{n}^{\prime},b)p _{\bar{\mathcal{N}}^{\prime\prime}|\mathcal{B}}(\bar{n}^{\prime\prime},b)p_{ \mathcal{B}}(b),\]
where \(\{p_{\bar{\mathcal{N}}^{\prime}|\mathcal{B}}(\bar{n}^{\prime},b)\}\) (\(\{p_{\bar{\mathcal{N}}^{\prime\prime}|\mathcal{B}}(\bar{n}^{\prime\prime},b)\}\)) is the probability distribution of the occupancy numbers of a gas with \(\mathcal{B}\) (\(N-\mathcal{B}\)) particles in the sub-container of volume \(V^{\prime}\) (\(V-V^{\prime}\)).
## VII Discussion
So far, we have proved that, for \(N_{U}\rightarrow\infty\), system's state converges to canonicality _for almost every bosonic eigenstate of the universe._ We hereafter disprove the more general claim of [2, 3] that, for \(N_{U}\rightarrow\infty\), system's mixed state converges to canonicality _for almost every pure state of the universe._ First we present a counterexample, then we discuss why the arguments of [2, 3] fail.
Suppose that bosons can occupy two states, let \(N_{U}=2M_{U}+1\) and let
\[\bar{n}_{U}=(M_{U}+1,M_{U}),\ \ \bar{n}_{U}^{\prime}=(M_{U},M_{U}+1)\] \[\left|\psi_{U}\right\rangle=\sqrt{0.5}\left|\bar{n}_{U}\right\rangle +\sqrt{0.5}\left|\bar{n}_{U}^{\prime}\right\rangle.\]
Consider \(N=1\) and use the first quantization formalism to express the state of the one-particle system. Computing the partial traces and taking the limit for \(M_{U}\rightarrow\infty\) one finds
\[\hat{\nu}= \lim_{M_{U}\rightarrow\infty}\hat{\nu}(\bar{n}_{U})=\lim_{M_{U} \rightarrow\infty}\hat{\nu}(\bar{n}_{U}^{\prime})=0.5\sum_{c=1}^{2}\left|c \right\rangle\left\langle c\right|,\] \[\lim_{M_{U}\rightarrow\infty}\hat{\nu}(\psi_{U})=0.5\sum_{c=1}^{2 }\sum_{c^{\prime}=1}^{2}\left|c\right\rangle\left\langle c^{\prime}\right| \neq\hat{\nu}.\]
This example shows that, when the universe is not in a bosonic eigenstate, the presence of terms coming from the cross outer products \(\left|\bar{n}_{U}\right\rangle\left\langle\bar{n}_{U}^{\prime}\right|\) prevents the system's mixed state from converging to canonicality.
The claim of [2] and [3] is based on typicality of microstates of the universe, and for this reason convergence to canonicality of the system is called _canonical typicality_ in [3]. When applied to the surprise of i.i.d. microstates, the LLN leads to
\[\lim_{N\rightarrow\infty}\text{Pr}(\left|N^{-1}H(\bar{\mathcal{C}})-H_{\mathcal{ C}}\right|\leq\eta)=1, \tag{31}\]
where, here and in the rest of this section, \(N\) is the number of particles of the universe and \(\eta\) is an arbitrarily small positive real number. For any \(N\) and \(\eta\), the set of microstates that satisfy the inequality (31) is the information-theoretic _typical set_, which, for small \(\eta\) and \(N\) large enough, in statistical mechanics is the set of the _accessible_ microstates. Paper [19] uses the properties of the information-theoretic typical set to characterize weak convergence to equiprobability of the accessible microstates. In the geometrical view of typicality, the distribution of microstates tends to be uniform over a spherical shell of the Hilbert space that surrounds the surface of the sphere of the Hilbert space determined by the constraints imposed on the universe. As \(N\) increases, the _relative_ thickness of the spherical shell (i.e., the thickness of the shell divided by the radius of the sphere) diminishes, becoming zero in the thermodynamic limit. Thanks to (23) and (31), the consideration of the relative thickness of the spherical shell basically is enough to capture entropy of microstates. However, the increasing _absolute_ randomness of the occupancy numbers, that in the geometrical view is represented by the absolute thickness of the spherical shell, makes the entropy of macrostates bigger and bigger as \(N\) grows, because
that spherical shell of increasing absolute thickness contains a number of bosonic eigenstates that increases with \(N\), none of which dominates in probability over the others. Therefore, unlike what happens with microstates, when macrostates are considered the absolute thickness of the spherical shell cannot be ignored, otherwise we miss the very concept of entropy of macrostates. Actually, looking at the relative thickness amounts to looking at the entropy per particle, and it is easy to see that \(N^{-1}H_{\bar{\mathcal{N}}}\to 0\) as \(N\to\infty\).
Papers [2] and [3], like virtually all the textbooks and research papers of statistical mechanics, guided by the idea of capturing properties that descend from typicality of microstates, claim more or less explicitly that, in the thermodynamic limit, all the properties of the system are preserved if the spherical shell is identified with the surface of the sphere. As shown by the previous discussion about the entropy of microstates and the entropy of macrostates, this is true when dealing with properties of microstates, but it is no longer true when dealing with properties of macrostates. Specifically, since at most one bosonic eigenstate can lie exactly on the surface of the sphere, if the surface is considered in place of the spherical shell, then at most one bosonic eigenstate will survive. We conclude that the consideration of the surface of the sphere in place of the spherical shell wrongly knocks out all the bosonic eigenstates except at most one and, with them, it wrongly knocks out also all the cross outer products between two different bosonic eigenstates of the previous counterexample.
## VIII Conclusion
Entropy is a macroscopic property of a physical system and, at the same time, it is a mathematical property of a random variable/vector. Given this, entropy _must_ be a property of system's random macrostates, specifically, in the case of bosonic systems, of system's random occupancy macrostates. This intuition motivated us to work out the empirical probability distribution and the Bayesian probability distribution of macrostates of bosonic systems at the equilibrium. As expected from previous results, the empirical probability distribution converges to the Bayesian one when the number of particles of the universe from which the empirical distribution is obtained tends to infinity.
Before concluding the paper, we propose for future study a new engaging connection between statistical mechanics and information theory. Let us regard a PVM operated on the universe as a POVM operated on the system. In quantum information theory, the non-negative difference between the entropy of the multinomial Bayesian marginal of the system and the expectation over the multinomial Bayesian prior of the universe of the entropy of the multivariate hypergeometric Bayesian likelihood of the system, that is
\[H_{\bar{\mathcal{N}}}+\sum_{\bar{n}_{U}\in\mathbb{N}_{U}}\sum_{\bar{n}\in\mathbb{N}}p_{\bar{\mathcal{N}}_{U}}(\bar{n}_{U})p_{\bar{\mathcal{N}}|\bar{\mathcal{N}}_{U}}(\bar{n},\bar{n}_{U})\log(p_{\bar{\mathcal{N}}|\bar{\mathcal{N}}_{U}}(\bar{n},\bar{n}_{U}))=H_{\bar{\mathcal{N}}}-H_{\bar{\mathcal{N}}|\bar{\mathcal{N}}_{U}}\geq 0,\]
is equal to the quantum information brought by the POVM operated on the system. The Bayesian approach, which is controversial in physics, can be sidestepped by observing that, according to [8], the difference between the entropy \(H_{\bar{\mathcal{N}},\mathrm{empirical}}\) of the multinomial distribution of the system based on the empirical one-particle distribution of microstates (13) and the entropy of the multivariate hypergeometric distribution of the system in (10) is always non-negative:
\[H_{\bar{\mathcal{N}},\mathrm{empirical}}+\sum_{\bar{n}\in\mathbb{N}}p_{\bar{ \mathcal{N}}}(\bar{n},\bar{n}_{U})\log(p_{\bar{\mathcal{N}}}(\bar{n},\bar{n}_{ U}))\geq 0.\]
This difference is the "empirical information" about the system brought by the bosonic eigenstate of the universe resulting from the PVM.
|
2308.10407 | Federated Learning for Connected and Automated Vehicles: A Survey of Existing Approaches and Challenges | Machine learning (ML) is widely used for key tasks in Connected and Automated Vehicles (CAV), including perception, planning, and control. However, its reliance on vehicular data for model training presents significant challenges related to in-vehicle user privacy and communication overhead generated by massive data volumes. Federated learning (FL) is a decentralized ML approach that enables multiple vehicles to collaboratively develop models, broadening learning from various driving environments, enhancing overall performance, and simultaneously securing local vehicle data privacy and security. This survey paper presents a review of the advancements made in the application of FL for CAV (FL4CAV). First, centralized and decentralized frameworks of FL are analyzed, highlighting their key characteristics and methodologies. Second, diverse data sources, models, and data security techniques relevant to FL in CAVs are reviewed, emphasizing their significance in ensuring privacy and confidentiality. Third, specific applications of FL are explored, providing insight into the base models and datasets employed for each application. Finally, existing challenges for FL4CAV are listed and potential directions for future investigation to further enhance the effectiveness and efficiency of FL in the context of CAV are discussed. | Vishnu Pandi Chellapandi, Liangqi Yuan, Christopher G. Brinton, Stanislaw H Zak, Ziran Wang | 2023-08-21T01:21:21Z | http://arxiv.org/abs/2308.10407v2 | Federated Learning for Connected and Automated Vehicles: A Survey of Existing Approaches and Challenges
###### Abstract
Machine learning (ML) is widely used for key tasks in Connected and Automated Vehicles (CAV), including perception, planning, and control. However, its reliance on vehicular data for model training presents significant challenges related to in-vehicle user privacy and communication overhead generated by massive data volumes. Federated learning (FL) is a decentralized ML approach that enables multiple vehicles to collaboratively develop models, broadening learning from various driving environments, enhancing overall performance, and simultaneously securing local vehicle data privacy and security. This survey paper presents a review of the advancements made in the application of FL for CAV (FL4CAV). First, centralized and decentralized frameworks of FL are analyzed, highlighting their key characteristics and methodologies. Second, diverse data sources, models, and data security techniques relevant to FL in CAVs are reviewed, emphasizing their significance in ensuring privacy and confidentiality. Third, specific and important applications of FL are explored, providing insight into the base models and datasets employed for each application. Finally, existing challenges for FL4CAV are listed and potential directions for future work are discussed to further enhance the effectiveness and efficiency of FL in the context of CAV.
Federated learning, connected and automated vehicles, distributed computing, privacy protection, data security.
## I Introduction
Connected and automated vehicles (CAV) are the key to future intelligent transportation systems (ITS) that encompass both ground and air transportation [1, 2, 3, 4, 5, 6, 7, 8]. With the advent of big data, the Internet of Things (IoT), edge computing, and intelligent systems, CAVs have the potential to improve the overall transportation system by reducing traffic accidents, congestion, and pollution [9, 10, 11, 12]. CAVs integrate both Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communication capabilities, fostering an enhanced perception of the environment beyond the direct line of sight [13, 14, 15]. This involves interaction with other vehicles, traffic signals, pedestrians, and other elements of the transportation ecosystem. Furthermore, CAVs are designed to assume control of driving tasks from the human operator under certain conditions, using a variety of sensors and sophisticated machine learning (ML) algorithms to achieve autonomous operation.
Currently, CAVs are generating a tremendous amount of raw data, between 20 and 40 TB per day, per vehicle [16] from various sources such as engine components, electronic control units (ECU), perception sensors, and vehicle-to-everything (V2X) communications. This large amount of data is sent to other vehicles, roadside infrastructures, or the cloud, continuously or periodically for monitoring, prognostics, diagnostics, and connectivity features [17]. This influx of data has driven the flourishing deployment and application of ML in CAVs, including areas such as Advanced Driver-Assistance Systems (ADAS) [18], automated driving [19], ITS [20], and sustainable development [21].
### _Motivation_
Due to the large amount of data required to train ML models, concerns have been raised about data security in terms of the legitimacy of data collection, data misuse, and privacy breaches. Data collected by various sensors in CAVs are also considered private and are subject to stringent privacy protection regulations in different regions. One such example is the General Data Protection Regulation (GDPR) in the European Union [22], which imposes strict requirements and guidelines on the handling and processing of personal data to ensure individuals' privacy rights are protected. Even with the development of advanced ML techniques and vehicle connectivity, it has not been feasible to have a secure framework to collect data from every vehicle and train an ML model. These limitations led to the development of a new ML paradigm known as Federated Learning (FL) [23, 24]. FL was coined by Google [25] and was initially used for mobile keyboard prediction in Gboard [26] to allow multiple mobile phones to cooperatively and securely train an ML model.
In FL, edge devices/clients only send the gradients or the learnable parameters to cloud servers, rather than sending massive local datasets as in a centralized learning framework. Cloud servers perform a secure aggregation of the received gradients/weights and update the global model parameters
that are transmitted back to clients/edge devices [27]. This procedure, known as a communication round, continues iteratively until the convergence criteria are met in the global model optimization. The key advantage of FL is reducing the strain on the network while also preserving the privacy of the local data. FL is a potential candidate that can utilize the data available from each CAV and develop a robust ML model.
Despite the benefits of V2X communications among CAVs, privacy invasion, accuracy, effectiveness, and communication-resource consumption are essential problems to be addressed. FL frameworks have received attention for their natural ability to preserve privacy by transmitting only model data between the server and its clients, without including local vehicle data. In particular, the model data packets are smaller than the user data, thus saving communication resources. Similarly, FL frameworks distribute training tasks to each client, and the server does not perform training but only aggregation, which reduces the computational demand on the server and improves training efficiency. Recently, there have also been efforts on decentralized FL, which allows multiple vehicles to collaboratively train a model without needing a central server [28, 29]. In our initial survey of FL for CAV (FL4CAV) presented in [30], we emphasized applications and explored foundational challenges in the subject. Building upon that conference version, this extended journal paper further delves into the underlying methodologies, provides a more comprehensive review of recent developments, and introduces novel insights and evaluations, thereby presenting a more exhaustive and nuanced understanding of the field.
### _Paper Organization_
In this paper, we provide a survey of FL4CAV, including deployment of various FL frameworks on CAVs, data modalities and security, diverse applications, and key challenges. The organization of this survey is shown in Fig. 1. The following topics are covered in this survey:
* A systematic review of FL algorithms is conducted, specifically focusing on their deployment in CAVs. Additionally, we examine the integration of ML models within the FL framework for CAV applications.
* Data modalities and data security considerations in CAVs are summarized, highlighting the diverse range of multi-modal data generated by various sensors.
* Critical applications of FL4CAV are explored, such as driver monitoring, steering wheel angle prediction, vehicle trajectory prediction, object detection, motion control application, traffic flow prediction, and V2X communications.
* Current challenges and future research directions of FL4CAV are highlighted, such as performance, safety, fairness, applicability, and scalability. A comparison of our survey with other related surveys can be found in Table I.
The remainder of this survey is organized as follows. In Section II, we explain the two main FL frameworks with algorithms. In Section III, we discuss various data modalities, ML methods used in FL4CAV applications, and FL data security in CAVs. Section IV reviews the application of FL in CAVs with detailed examples. The multi-modal data, algorithms, and datasets used in the relevant literature are also summarized. Current challenges and potential research opportunities are discussed in Section V. In Section VI, we present the conclusions of this survey and outline future directions.
## II Federated Learning Methods
In this section, we describe the FL frameworks in terms of two broad categories: centralized FL and decentralized FL. A detailed illustration of the categories is shown in Fig. 2. In
Fig. 1: Roadmap of this survey paper.
addition, we provide a brief overview of the ML techniques that are commonly used as base models on local devices during FL. The FL process can be sequentially listed as:
1. _Global Model Distribution_: The edge server disseminates the global model parameters to \(K\) vehicles.
2. _Model Update Using Local Data_: Each vehicle independently trains the ML model using its own local data. This training process typically adopts a simple Stochastic Gradient Descent (SGD) algorithm, since the onboard computational infrastructure is usually limited.
3. _Local Update Upload_: After training the model, each vehicle applies privacy-preserving techniques such as differential privacy (introduces artificial noise to the parameters) and then uploads/communicates the model parameters to the selected central server (Centralized Federated Learning, i.e., CFL) or other vehicles (Decentralized Federated Learning, i.e., DFL).
4. _Aggregation of Vehicle Updates_: The server securely aggregates the parameters uploaded from \(K\) vehicles, obtaining the global model. Furthermore, it tests the model's performance.
### _Centralized Federated Learning_
In this section, we review two notable aggregation methods in the centralized framework, namely averaging and a relatively newer technique called knowledge distillation.
#### Iii-A1 Averaging
Most of the existing literature uses the Federated Averaging (FedAvg) algorithm [25] for the FL aggregation process on the server; see Table III. FedAvg applies SGD optimization to local vehicles and performs a weighted averaging of the vehicles' model weights on the central server. FedAvg performs multiple local gradient
updates before sending the parameters to the server, reducing the number of communication rounds. For FL4CAV, data on each CAV are dynamically updated at each communication round.
A typical FL setup has \(K\) vehicles that have their own local dataset and the ability to perform simple local optimization. At the central server, the problem can be represented as
\[\min_{x\in\mathbb{R}^{d}}\Big{[}f(x)=\frac{1}{K}\sum_{i=1}^{K}f_{i}(x_{i})\Big{]}, \tag{1}\]
where \(f_{i}:\mathbb{R}^{d}\rightarrow\mathbb{R}\) for \(i\in\{1,\ldots,K\}\) is the local objective function of the \(i^{th}\) vehicle. The local objective function of the \(i^{th}\) vehicle can have the following form,
\[f_{i}(x_{i})=\mathbb{E}_{\xi_{i}\sim\mathcal{D}_{i}}[\ell(x_{i},\xi_{i})], \tag{2}\]
where \(\xi_{i}\) is the data that have been sampled from the local vehicle data \(\mathcal{D}_{i}\) for the \(i^{th}\) vehicle. The expectation operator, \(\mathbb{E}\), is acting on the local objective function, \(\ell(x_{i},\xi_{i})\), with respect to a data sample, \(\xi_{i}\), drawn from the vehicle data, \(\mathcal{D}_{i}\). The function \(\ell(x_{i},\xi_{i})\) is the loss function evaluated for each vehicle, \(x_{i}\), and data sample, \(\xi_{i}\). Here, \(x_{i}\in\mathbb{R}^{d}\) represents the model parameters of vehicle \(i\), and \(X\in\mathbb{R}^{d\times K}\) is the matrix formed using these parameter vectors. The learning process is performed to find a minimizer of the objective function, \(x_{i}=x^{*}=\operatorname*{arg\,min}_{x\in\mathbb{R}^{d}}f(x)\).
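As an illustration of the procedure above, the following is a minimal FedAvg sketch for problem (1)-(2); the least-squares local objectives and all hyperparameters are illustrative assumptions, not taken from any of the surveyed works.

```python
import numpy as np

def local_sgd(x, grad_fn, data, epochs, lr):
    """Run E epochs of gradient descent on one vehicle's local
    objective f_i; a stand-in for the local SGD step."""
    for _ in range(epochs):
        x = x - lr * grad_fn(x, data)
    return x

def fedavg(x0, datasets, grad_fn, rounds=50, epochs=5, lr=0.05):
    """FedAvg: local updates followed by a dataset-size-weighted average."""
    x = x0.copy()
    sizes = np.array([len(b) for _, b in datasets], dtype=float)
    weights = sizes / sizes.sum()
    for _ in range(rounds):                       # one communication round
        locals_ = [local_sgd(x, grad_fn, d, epochs, lr) for d in datasets]
        x = sum(w * xl for w, xl in zip(weights, locals_))  # server step
    return x

# toy usage: each vehicle holds a local least-squares problem (A_i, b_i)
rng = np.random.default_rng(0)
datasets = [(rng.normal(size=(50, 5)), rng.normal(size=50)) for _ in range(4)]
grad_fn = lambda x, d: 2 * d[0].T @ (d[0] @ x - d[1]) / len(d[1])
x_star = fedavg(np.zeros(5), datasets, grad_fn)
```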
The data obtained from CAVs are typically non-independent and non-identically distributed (non-IID). FedAvg faces challenges in realistic heterogeneous data settings, as a single global model may not perform well for individual vehicles, and multiple local updates can cause the updates to deviate from the global objective [37]. Several variants of FedAvg have been proposed to address the challenges encountered by FL, such as data heterogeneity, client drift, local vehicle data imbalance, communication latency, and computation capabilities. The FedProx algorithm, FedAvg with a proximal term, has been proposed to improve convergence and reduce communication cost [38]. The Dynamic Federated Proximal (DFP) algorithm [39] is an extension of FedProx that can effectively deal with non-IID data distributions by dynamically varying the learning rate and regularization coefficient during the learning process. FedAdam [40] has shown improved convergence and optimization performance by incorporating ADAM optimization into the FedAvg algorithm. There have been several other efforts to improve the performance of the FL model, and it is an ongoing research area [41, 42, 43, 44].
#### Ii-B2 Knowledge Distillation
In this subsection, we discuss the integration of knowledge distillation with FL. Federated Distillation (FD) [45] uses knowledge distillation to transfer knowledge in a decentralized manner, leading to a significant reduction in communication size compared to traditional FL, and is also able to handle non-IID data samples [46]. Wang _et al._ proposed a conceptual framework called FD for CAV (FDCAV), where CAVs share their outputs (e.g., bounding boxes) with a central server, which computes the average output from the global model and sends it back to the vehicles. The vehicles then update their local models based on the output of the global model [47].
Another approach is to deploy a teacher model on the server and student models on the clients. In this process, client devices usually train and deploy a smaller, simpler model to mimic the behavior of a larger, more complex model residing on the server. It allows for the transfer of knowledge from the larger server model to the smaller
Fig. 2: Illustration of (a) centralized and (b) decentralized federated learning for connected and automated vehicles.
client model, thereby reducing computational complexity and enhancing efficiency. For example, in Federated Group Knowledge Transfer (FedGKT) [48], a ResNet-55 or ResNet-109 is deployed on the server, while a ResNet-8 is utilized on the clients. Similarly, Federated Knowledge Distillation (FedKD) [49] employs a comparable approach, conducting experiments on natural language recognition tasks. Knowledge distillation with FL is particularly beneficial in scenarios where computational resources or storage capacities are constrained or where the deployment of larger models is infeasible. CAVs are prime examples of such application scenarios.
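For concreteness, the following is a minimal sketch of a response-based distillation loss of the kind used to transfer knowledge from a server-side teacher to a client-side student, assuming PyTorch; the temperature and weighting are illustrative choices, not values from FedGKT or FedKD.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # soft-target term: KL between temperature-softened class distributions
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T
    # hard-target term: ordinary cross-entropy on the local labels
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```

The temperature softens both distributions so that the student also learns from the teacher's relative confidence across classes, not only from its top prediction.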
The CFL is summarized in Algorithm 1 while the DFL is given in Algorithm 2.
```
0: Vehicle set \(\mathbb{V}\), communication rounds \(T\), isolated time-varying local dataset \(\xi=\{\xi_{v}^{(t)}:v\in\mathbb{V}\}\), local epochs \(E\), learning rate \(\{\eta_{t}\}_{t=0}^{T-1}\), loss function \(f\)
0: Aggregated global model \(\theta\)
1: For each vehicle \(v\in\mathbb{V}\) initialize model: \(\theta_{v}^{(0)}\in\mathbb{R}^{d}\)
2: for \(t=0,\dots,T-1\) do
3:  for each vehicle \(v\in\mathbb{V}\) in parallel do
4:   Sample \(\xi_{v}^{(t)}\), compute \(g_{v}^{(t)}:=\widehat{\nabla}f_{v}(\theta_{v}^{(t)},\xi_{v}^{(t)})\)
5:   \(\theta_{v}^{(t+1)}\leftarrow\theta_{v}^{(t)}-\eta_{t}g_{v}^{(t)}\implies\) local SGD (\(E\) epochs)
6:   Vehicle sends model \(\theta_{v}^{(t+1)}\) to the server
7:  end for
8:  \(\theta^{(t+1)}\leftarrow\sum_{v\in\mathbb{V}}\frac{|\xi_{v}^{(t)}|}{|\xi^{(t)}|}\theta_{v}^{(t+1)}\implies\) Aggregation on server
9:  Server sends model \(\theta^{(t+1)}\) to vehicles
10: end for
11: Output the aggregated global model \(\theta\leftarrow\theta^{(T)}\)
```
**Algorithm 1** CFL for Dynamic Data Updating CAV
### _Decentralized Federated Learning_
In the CFL paradigm, model parameters (weights or gradients) are transmitted to a central server, often a Road-Side Unit (RSU), where the FL server-side aggregation process takes place. On the contrary, DFL relies on a consensus among the vehicles, fostering collaboration to collectively update global parameters without the need for a central server. The scalability of CFL is limited by the computational capacity of the server, which requires a dedicated infrastructure. The dependence on a single server introduces a potential point of failure in the learning process and can lead to communication congestion between the server and vehicles, especially when handling a substantial number of vehicles [50].
DFL offers scalability by accommodating a large number of vehicle clients without relying on a central server, and exhibits enhanced robustness since the collaborative training among vehicles can continue even if an individual vehicle becomes unavailable. DFL relies on the V2X communication module to send model data directly to other neighboring vehicles for updates [51, 52].
The primary concept behind the DFL process is to establish consensus among vehicles by enabling communication exclusively between adjacent neighbors. This communication process can be effectively represented by employing a consensus/gossip matrix within a network topology graph. More precisely, a vehicle \(i\) communicates with vehicle \(j\) based on a non-negative weight that formulates the connectivity of vehicle \(i\) and vehicle \(j\), that is, \(w_{ij}>0\). The case \(w_{ij}=0\) indicates that no communication takes place between \(i\) and \(j\). Similarly, for self-loops, the associated weight is represented by \(w_{ii}>0\). Fig. 3 shows examples of two commonly employed network topologies, namely the ring and the torus for the \(n=16\) client/vehicle configuration. These associated weights can be compiled into a matrix of dimension \(n\times n\) and can be written as \(W=[w_{ij}]\in[0,1]^{n\times n}\). The most standard name for \(W\) used in the literature is _gossip or mixing matrix_.
The mixing matrix, \(W=[w_{ij}]\in[0,1]^{n\times n}\), is a non-negative, symmetric \((W=W^{\top})\), and doubly stochastic, \(W\mathds{1}=\mathds{1},\ \mathds{1}^{\top}W=\mathds{1}^{\top}\), matrix, where \(\mathds{1}\) is the column vector of ones. Then, the consensus operation can be represented as,
\[\theta_{i}^{(t+1)}~{}=~{}\sum_{j\in[n]}w_{ij}^{(t)}~{}\theta_{j}^{(t)}, \tag{3}\]
where \(\theta\) is the model parameter (weights/gradients).
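A minimal sketch of the consensus step (3) on a ring topology is given below, assuming uniform weights of \(1/3\) for each vehicle and its two neighbors (a valid doubly stochastic choice); all names are illustrative.

```python
import numpy as np

def ring_mixing_matrix(n):
    """Doubly stochastic W for a ring: each vehicle averages itself and
    its two neighbors with weight 1/3."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0
    return W

def gossip_step(W, theta):
    """One consensus update: theta_i <- sum_j w_ij theta_j, eq. (3)."""
    return W @ theta                      # theta is (n, d): one row per vehicle

n, d = 16, 4
W = ring_mixing_matrix(n)
theta = np.random.randn(n, d)
for _ in range(50):
    theta = gossip_step(W, theta)         # rows converge to the global mean
```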
However, DFL also encounters notable hurdles, including hindered convergence (caused by the heterogeneity of data), network latency, and the need to synchronize/arbitrate parameters and adapt to dynamic network topologies during vehicle communications. These challenges arise from the decentralized nature of the FL framework, which requires efficient mechanisms to address disparities in data distribution and network connectivity among the participating vehicles [53, 54, 55, 56, 57].

Fig. 3: Network Topology (_Left_ - Ring with \(n\) = 16 and _Right_ - Torus with \(n\) = 16)
## III Overview of Data Modalities, Base Machine Learning Models, and Security
The concept of FL4CAV is illustrated in Fig. 2. Each CAV, as a client, undertakes sensing data acquisition, signal processing, storage, communication, perception, and decision-making. For sensing data acquisition, a variety of sensors are integrated into CAVs, including Global Navigation Satellite Systems (GNSS), multi-modal cameras, Radio Detection And Ranging (Radar), Light Detection And Ranging (LiDAR), and Inertial Measurement Units (IMU), to capture the vehicle, driver, passenger, and external information.
CAV tasks are diverse, including target speed tracking, behavior prediction, motion planning, motion control, object detection, and in-vehicle human monitoring. After training ML models on local data, clients send the trained model to the server. The server then shares a generalized model with clients for perception, prediction, and decision-making purposes. The FL4CAV framework shows a trend towards multi-modal sensing data, massively parallel clients, and multi-class tasks.
An overview of the data modalities, the base ML models of CAVs, and data security is presented next.
### _Data Modality_
CAVs collect multi-modal data from various sensors to perform tasks such as navigation, perception, etc. The FL training process involves vehicles that may have different types of sensors. The data collected by sensors depend on the sensor type, the sensor's range, the accuracy/precision of the sensor, sensor placement, and the operating environment. The operating environment, such as snow, heavy rain, or fog, can reduce sensor visibility, thereby deteriorating data quality. These factors lead to variations that can significantly affect the sensor performance. The performance of the FL model is directly dependent on the quality of the data collected by the vehicles. The data resolution, size, and sampling rate obtained from CAVs are generally heterogeneous, and processing the data is also a challenging task. In the following, we review the various data modalities in FL4CAV applications that are illustrated in Fig. 4.
#### Iii-A1 Image
Images, especially visible RGB images, are one of the most important data modalities for CAVs. Vision-related tasks, such as driver monitoring (Section IV-A), steering wheel angle prediction (Section IV-B), object detection (Section IV-D), traffic sign recognition [58], and semantic segmentation [59], use images captured by the camera as the data source. In most applications, various ML models are trained to achieve the intended functionality. However, due to their intrusive design, privacy issues are always a concern for image-based systems, especially for in-cabin and driver-related applications [60, 61, 62, 63]. Privacy concerns for visual image-based systems are addressed by FL since only the model parameters are transmitted while the user data are kept locally in the vehicle. Moreover, FL also solves the data transmission problem caused by the large size of image and video data, thus leading to a more communication-efficient learning framework.
#### Iii-A2 LiDAR
LiDAR data provide a solid foundation for automated driving capabilities and have also been used for object detection tasks [64, 65, 66]. LiDAR generates 3D point clouds that support accurate object detection even under adverse weather conditions, where cameras are less robust. However, the density of LiDAR point clouds makes transmission a daunting task. An FL system for LiDAR data can improve learning efficiency and save communication resources while being able to handle large datasets.
#### Iii-A3 Radar
Radar sensors are used for object detection and collision avoidance in applications such as automatic emergency braking, traffic alerts, and adaptive cruise control [67, 68, 69]. Radars have long operating ranges, good measurement accuracy, and are operational in varying weather conditions [70]. Radar data provides critical information about the vehicle's surroundings, including the position and the speed of other objects. Similarly to LiDAR, the FL system for Radar can also improve learning efficiency and save communication resources.
#### Iii-A4 Vehicle Status and GNSS
Vehicle status data such as velocity, acceleration, throttle/brake command, global vehicle position through GNSS, and other vehicle parameters are also an important part of the CAV data modality. These parameters pertain primarily to the vehicle itself rather than to the external environment, and they typically reveal sensitive information about driver locations, habits, and behaviors that could potentially compromise privacy and security. FL addresses these privacy concerns well while utilizing these data to improve several applications, such as collision avoidance [71], vehicle trajectory prediction (Section IV-C), and motion control (Section IV-E).
### _Base Models in FL4CAV Applications_
ML has been widely used to achieve superior performance in various complex tasks, given the availability of multi-modal data from in-vehicle sensors. Furthermore, ML models in FL4CAV must be feasible to run in real time, given the limited computing and communication resources of vehicle equipment. We next discuss the various ML architectures that are used as base models in critical tasks of CAVs.
#### Iii-B1 Multilayer Perceptron
A Multilayer Perceptron (MLP), as a classic ML architecture, consists of multiple layers of fully connected neurons. It can be applied to various vehicle-related tasks, including perception, decision-making, and control. MLP provides a flexible and versatile
tool for modeling complex relationships in vehicle-related data. However, its performance in specific tasks may be limited due to the computational demand for large models.
#### Iii-B2 Convolutional Neural Network
Convolutional Neural Networks (CNNs) are presently one of the most popular architectures in ML. They are known for their excellent performance in handling image-related tasks. CNN uses convolutional layers to automatically extract features from images and learn to associate these features with corresponding labels. CNNs exhibit versatile performance in accomplishing a wide array of tasks, including, but not limited to, classification (as exemplified by LeNet [72], ResNet [73]), object detection (such as the YOLO [74] framework), and mask generation for semantic segmentation (typified by models such as U-Net [75], BiSeNet [76]), among others. It is widely applied in various vehicle-related applications, such as traffic signal recognition, object recognition, and driver monitoring.
#### Iii-B3 Recurrent Neural Network
Recurrent Neural Networks (RNNs) excel at extracting temporal relationships in sequential features and are specifically designed to capture temporal dependencies in sequences of data. Some popular RNN architectures include Long Short-Term Memory (LSTM) [77] and Gated Recurrent Unit (GRU) [78]. In the context of vehicles, RNNs have found extensive applications in modeling the motion and behavior of vehicles, their surroundings, and targets. By processing sequential inputs over time, RNNs can effectively capture the dynamics and temporal patterns in various vehicle-related scenarios.
#### Iii-B4 Transformer
Transformer [79] architecture and its variant, Vision Transformer (ViT) [80], have emerged as powerful alternatives to traditional CNNs and RNNs. The Transformer architecture, initially introduced for natural language processing tasks, has shown exceptional performance in various domains, including computer vision. Transformers take advantage of self-attention mechanisms to capture global dependencies across the input sequence or image. This allows them to effectively model long-range dependencies and contextual relationships, leading to improved performance in tasks such as image classification, object detection, and semantic segmentation. Transformers' ability to capture global context and long-range dependencies makes them well-suited for various tasks in the automotive domain [81].
#### Iii-B5 Generative Network
Generative networks synthesize images from input data, such as mask labels or low-resolution inputs. These networks, such as Generative Adversarial Networks (GANs) [82] and Variational Auto-Encoders (VAEs) [83], exhibit a remarkable ability to generate high-quality and realistic images. In the realm of vehicular applications, generative networks offer several potential use cases. One application lies in super-resolution, where generative networks can enhance the resolution and details of low-resolution images, proving particularly useful for tasks like license plate recognition or surveillance systems. Furthermore, generative networks can also be utilized for data augmentation and enhancement in training datasets for vehicle-related tasks.
#### Iii-B6 Reinforcement Learning
Reinforcement learning (RL) demonstrated remarkable capabilities in solving complex decision-making problems, surpassing human-level performance in various domains [84]. RL improves the abilities of the agent through interaction with the environment, enabling the agent to learn optimal policies through trial and error. RL has been extensively applied in CAV operations such as motion planning, optimal control, trajectory planning, collision avoidance, selection of newly joined CAVs, and resource allocation [85, 86].
### _Model Security_
Robust and secure privacy-preserving techniques are essential to protect sensitive data during the FL training process for CAVs. It has been demonstrated that the training can still be vulnerable to various malicious attacks; for example, when one or more participants are compromised, they could transmit false parameters to hinder the global model performance. The FL central server is also prone to attacks that may cause the entire learning process to collapse [87]. The type of data considered in this section refers to the model parameters, such as gradients or weights, that are transmitted to the server/neighboring vehicles. These are not the raw data used for the training of the local model, which are inherently kept local in the FL process.

Fig. 4: Illustration of various data sources from a connected and automated vehicle.
Homomorphic encryption, differential privacy, and blockchain-based techniques are notable methods to preserve privacy in FL4CAV. These approaches aim to minimize the trade-offs between model performance and data privacy, ensuring data security while enabling effective model performance. A review of various cyber-security threats can be found in [88, 89, 90, 91, 92, 93]. We will next discuss some of the widely used privacy-preserving techniques.
#### Iii-B1 Homomorphic encryption
Homomorphic Encryption (HE) is a powerful technique that allows the server to perform training on encrypted vehicle data without the need for decryption, thus ensuring data privacy and security. In particular, it allows computation to be carried out directly on encrypted data, with the result recoverable upon decryption [94].
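As a toy illustration of additively homomorphic aggregation, the following sketch uses the python-paillier (phe) library to sum encrypted scalar updates on the server; a real FL deployment would encrypt entire parameter vectors and handle fixed-point encoding, key distribution, and threshold decryption.

```python
from phe import paillier

# key pair held by the vehicles (or a trusted key authority), not the server
public_key, private_key = paillier.generate_paillier_keypair()

# each vehicle encrypts one (scalar) model update before uploading
updates = [0.12, -0.05, 0.31]
ciphertexts = [public_key.encrypt(u) for u in updates]

# server: adds ciphertexts without ever seeing the individual updates
encrypted_sum = sum(ciphertexts[1:], ciphertexts[0])

# only the key holder can decrypt the aggregate
average = private_key.decrypt(encrypted_sum) / len(updates)
print(average)   # the plain average of the updates
```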
#### Iii-B2 Differential Privacy
Differential Privacy (DP) is an approach that safeguards data privacy by injecting random noise into the data before transmitting them to the server, preventing unauthorized extraction of sensitive information while also preserving data ownership and alignment with regulatory compliance. However, there is a trade-off between privacy settings and accuracy that can impact the performance of the models. DP has been used in multiple applications of FL4CAV to incorporate data security [95, 96, 97, 98, 99].
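A minimal sketch of a Gaussian-mechanism step of this kind is shown below: the vehicle clips its model update and adds calibrated noise before upload. The clipping norm and noise multiplier are illustrative hyperparameters, and a full DP accounting of the privacy budget is omitted.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a model update and add Gaussian noise before upload."""
    rng = rng or np.random.default_rng()
    # clip to bound each vehicle's sensitivity
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    # add Gaussian noise scaled to the clipping norm
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise
```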
#### Iii-B3 Blockchain Technology
Another disruptive technology gaining traction in CAV applications is the blockchain-based method, which leverages the decentralized and tamper-resistant nature of blockchain to enhance data integrity, transparency, and security [100, 101, 102, 103, 104, 105, 106, 107]. Blockchain is a type of digital ledger technology that securely transfers data in a decentralized framework. CAVs share their data with the vehicular network, and the information is stored on the blockchain. The system is designed to protect data privacy and security, as well as to provide greater security to the vehicular networks involved in the learning process [108]. An analysis of various privacy preservation approaches is given in [89, 109].
In FL4CAV, the model parameters of individual vehicles can be stored as transactions on the blockchain, ensuring transparency and accountability. This creates trust among the vehicles, as the model updates can be verified. Additionally, blockchain enables incentive mechanisms through smart contracts, which reward CAVs that contribute high-quality model updates or share their computational resources for training. These incentives encourage active participation and foster collaboration among vehicles [110, 111, 94, 112].
## IV Applications of FL for CAV
In this section, we review some important applications of FL in CAV. The FL4CAV literature, including FL configuration, data modalities, underlying models, applications, FL algorithm, and datasets, can be found in Tables III and IV. The strengths of FL, such as protecting privacy, improving learning efficiency, enhancing generalization ability, and reducing communication overhead, resulted in a number of applications of FL4CAV.
### _In-Vehicle Human Monitoring_
In-vehicle human monitoring is a critical issue for CAV and ITS. It serves not just the driver but also extends to monitoring the other passengers in the vehicle [147]. Beyond the application in commercial taxis, human monitoring becomes particularly critical in large public transportation modes such as buses, subways, ferries, and more, where adequate service personnel may be lacking. Consequently, computer-aided monitoring programs can effectively offer superior service quality and protect passenger safety by handling tasks such as passenger counting, predicting passenger ingress and egress, detecting elderly falls, and emergency situations such as fires.
FL significantly enhances privacy protection, enriches and diversifies knowledge, and improves learning efficiency, which makes it crucial for the application of human monitoring in the vehicle in the deployment of CAVs. Given the sensitivity of personal privacy and the rarity of traffic accidents, FL serves as a valuable tool in these contexts. FL has the potential to enhance the security of user data onboard while enabling knowledge transfer and ensuring the generalizability of the model. However, in human-related applications where data are highly heterogeneous and personalized, it can be challenging to balance the generalization ability of the model with the need for personalization to specific users [148].
Driver monitoring applications, such as distraction detection, are critical safety features that monitor driver stability and alertness and warn distracted drivers so that safety-critical actions can be taken [149, 150, 151, 152, 153]. In [61], the authors highlighted computational and communication efficiency issues in driver activity recognition and proposed the use of FedGKT to reduce the communication bandwidth and asynchronous training requirements. Driver privacy may be an even bigger concern here than in steering wheel angle prediction or object recognition, which further highlights FL's strength in privacy protection. However, driver monitoring is a highly personalized application in which the driver's behavior is strongly associated with personal habits, emotions, cultural background, and even the interpretation of instructions. This user heterogeneity poses a challenge for FL systems. For human-related applications, such as driver monitoring, personalized FL is the dominant solution [62]. A DFL framework was proposed in [125] that incorporates a gossip protocol for knowledge dissemination. This framework not only achieves personalized models without requiring any additional processing but also incorporates a knowledge dissemination technique that significantly accelerates the training process.
Passenger monitoring applications are an emerging research area that involves detecting passengers' intents to board and leave and warning of dangerous behavior in public transportation [154]. However, this field has not yet received much attention due to the lack of available datasets and the difficulty of monitoring multiple users simultaneously. Nevertheless, the ability of FL to integrate knowledge about public transportation and the growing demand for passenger monitoring makes FL a promising application in this area.
### _Steering Wheel Angle Prediction_
Steering wheel angle prediction has become a crucial feature of self-driving. The performance of ADAS features, such as lane keep assist and lane departure warning, is based on the prediction of the steering angle [155, 156]. Steering wheel angle prediction estimates the steering wheel rotation angle from input road images. It manages the lateral positioning of the vehicle, even under challenging circumstances, such as on unpaved and unmarked roads, and needs to adapt to different driving and environmental conditions, thus requiring continuous model updates for high accuracy.
FL achieves the above objectives by enabling several vehicles to collaborate in learning from new data and updating the model in a relatively short time. FL offers the benefit of continuous and collaborative learning, low communication overhead, and data security that is needed to develop a robust prediction model.
It was demonstrated that FL can collectively train the prediction model while, at the same time, significantly reducing communication costs. In the study presented in [115], the authors demonstrated a significant improvement in edge model quality through the use of FL in CAV. Specifically, the study involved predicting steering wheel angles using two modalities of data: images and optical flow. In [116], the performance of FL and centralized learning in steering angle prediction was assessed under different levels of noise and the results were comparable. Furthermore, this study considered the implications of communication load and disruptions, providing a comprehensive evaluation of the systems. This makes FL suitable for applications involving an increasing number of CAVs, specifically for tasks such as steering wheel angle prediction.
### _Vehicle Trajectory Prediction_
An accurate vehicle trajectory prediction allows CAVs to perform proper motion planning, as well as anticipate potentially dangerous behaviors of other vehicles, such as sudden lane change, skidding, or hard braking, react proactively and prevent accidents [157, 158, 159, 160]. This is a challenging task and would require substantial amounts of sensitive vehicle data to train a model for trajectory prediction.
FL is a viable solution that provides a collaborative learning framework with multiple vehicles while keeping sensitive local data private and secure. FL models are trained on diverse data from various vehicles operating in different scenarios. This enhances the generalization of the model and enables vehicles to handle rare events such as traffic accidents, adverse weather, and risky behaviors. Additionally, the FL framework supports continuous learning and model updates, allowing quick adaptation to dynamic traffic, road conditions, and unfamiliar scenarios.
Trajectory prediction models commonly rely on time series data that encompass vehicle/passenger position, velocity, and acceleration. These models leverage the strength of deep neural networks, mainly RNNs and Transformers, which have proven effective in predicting trajectories for various entities, including vehicles and pedestrians, while also capturing their behavioral patterns [161]. The FL framework has been shown to be effective in learning spatio-temporal features with the Transformer model [124] (or the LSTM model [162]) while also protecting user privacy. FL coupled with a One-Class Support Vector Machine (OC-SVM) has been used to detect anomalous trajectories at traffic intersections [163]. The reported findings indicate that the federated approach improves both the overall accuracy of anomaly detection and the benefit to individual data owners. FL has been reported to perform similarly to centralized learning [121, 164, 165]. Centralized learning requires that all data from the private vehicle be transferred to the central server for training, whereas the data are kept locally in the vehicle in the case of FL.
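The following is a minimal sketch of an RNN-based trajectory predictor of the kind used as a local model in these studies, assuming PyTorch; the input features, layer sizes, and prediction horizon are illustrative.

```python
import torch
import torch.nn as nn

class TrajectoryLSTM(nn.Module):
    """Encode a past motion history and regress future (x, y) offsets."""
    def __init__(self, in_dim=4, hidden=64, horizon=10):
        super().__init__()
        self.encoder = nn.LSTM(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizon * 2)   # future (x, y) per step
        self.horizon = horizon

    def forward(self, history):                      # (B, T, 4): x, y, v, a
        _, (h, _) = self.encoder(history)
        out = self.head(h[-1])                       # last hidden state
        return out.view(-1, self.horizon, 2)         # (B, horizon, 2)

model = TrajectoryLSTM()
pred = model(torch.randn(8, 20, 4))                  # 8 vehicles, 20 past steps
```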
### _Object Detection_
Object detection is one of the main functions of the visual perception system of CAVs, intended to detect and localize objects using sensor data such as LiDAR and high-resolution images/video. These data are large in size and sensitive from a privacy point of view. As a result, there are limitations to deploying robust detection models in a traditional centralized learning approach due to privacy concerns and communication overhead. These concerns can be mitigated by using an FL-based approach for CAVs. FL can effectively help CAVs detect various objects in different driving scenarios, road types, traffic conditions, and weather types. FL enables the CAV framework to learn efficiently with low communication overhead, which is particularly advantageous when the volume of data is much larger than the size of the ML model, while also ensuring the privacy of the data.
FL has already been used in computer vision-related tasks, such as developing safety hazard warning solutions in smart
city applications [166]. Object detection accuracy generally struggles under adverse weather conditions such as snow and rain. FL frameworks have been shown to improve detection accuracy [167] and perform better than the centralized and gossip-decentralized models [64]. Recently, there have been numerous studies to improve the performance of FL on complex tasks such as object detection [168]. In [47], it has been shown that with multistage resource allocation and appropriate vehicle selection, FL performance improved significantly compared to traditional centralized learning and baseline FL approaches. In [120], a decentralized FL method is used for object classification using LiDAR on CAVs. The parameters of the ML model (PointNet [169]) are communicated through V2V networks. It has been experimentally confirmed that FL is highly effective compared to self-learning approaches.
Another important application of FL is the recognition and detection of license plates. It is used in ITS for applications such as traffic safety and violations, traffic monitoring, illegal/overtime parking detection, and parking access authentication. ML techniques have been shown to be highly efficient in detecting objects and recognizing license plates [170, 171, 172, 173]. However, due to the large size of the data from all vehicles, it is not feasible to aggregate and train on them in real time on an edge device. FL techniques offer numerous advantages to license plate detection and recognition systems, namely: higher robustness, privacy protection, enabling collaborative learning, and reduced network bandwidth requirements. These benefits of using FL contribute to increased effectiveness and adaptability of such systems in real-world scenarios [174, 175].
### _Motion Control_
The motion controller of the vehicle executes the desired trajectory by determining the optimal control values of the throttle (longitudinal acceleration motion), the steering of the vehicle (lateral motion), and the brake (longitudinal deceleration motion) [176, 177]. FL enables CAVs to train and optimize controller parameters collaboratively. Potential benefits of FL include enabling CAVs to adapt to unseen routes, traffic scenarios, or operating conditions by leveraging data previously collected by other CAVs, e.g., when accelerating on a ramp, driving in congestion, or handling higher vehicle speeds [178]. FL enables CAVs to adapt to different driving scenarios, including unfamiliar and unvisited roads, cities, and countries. Furthermore, FL may allow CAVs to adjust driving styles based on different driving habits, climates, scenarios, and cultural norms.
FL has been used to dynamically update the controller parameters, resulting in improved achievement of the target speed with enhanced driver comfort and safety [39]. Additionally, FL finds application in collaborative optimization of control parameters between multiple vehicles at traffic intersections, resulting in the avoidance of collisions and improved driving comfort [179, 180]. In [181], FL is utilized to improve braking performance under different driving conditions and environments by determining optimal road friction coefficients. This approach ensures the privacy of the driver while optimizing the braking system. In [39], an FL framework is proposed to optimize the controller design for CAVs with variable vehicle participation in the FL training process.
The RL approach has been widely applied for motion control in vehicles due to its ability to train in complex scenarios with dynamic environments. RL enables CAVs to learn control policies for the required objectives from user feedback and sensor measurements [182, 183, 184, 185]. There are open research problems in the motion control of CAVs that could be addressed by FL, such as platooning, lane changing, merging on ramps, and signalized and unsignalized intersections. A review of existing CAV control methods is provided in [186, 187, 188], while applications of ML to CAV control are reported in [189, 190, 191, 192, 193].
### _Traffic Flow Prediction_
Traffic flow prediction is one of the critical components of an ITS for efficient traffic control, safety, and management. Accurate predictions that use historical data to forecast future traffic conditions enable congestion-reducing measures such as optimal route recommendation and variable road signal timing. Predicting traffic flow can also allow timely notification of authorities about events such as accidents. ML techniques, such as CNNs and RNNs, have shown promising results in predicting traffic flow [194, 195, 196].
FL has been used to predict traffic flow with improved accuracy while ensuring privacy and scalability. Sources for model training include data from CAV, RSUs, and traffic sensors. The predictions could be in real-time or for future time intervals, and the model can be trained to predict traffic patterns and improve the accuracy of traffic flow predictions. FL allows CAVs to collaboratively learn from their data while addressing privacy concerns.
In [114], an accurate Gated Recurrent Unit (GRU) network is trained using FL to predict traffic flow. Experimental evaluations on a real-world data set show that the FL-based approach can achieve predictions comparable with traditional centralized approaches. In [197], an FL-based Spatial-Temporal Networks (FedSTN) algorithm was proposed to predict traffic flow. The algorithm employs various components, such as a Recurrent Long-term Capture Network, an Attentive Mechanism Federated Network, and a Semantic Capture Network (SCN), to learn spatial-temporal and semantic information. It is reported that the FedSTN algorithm achieves higher prediction accuracy than existing baselines such as Auto-Regressive Integrated Moving Average (ARIMA), eXtreme Gradient Boosting (XGBoost), FedGRU, and ST-ResNet [198]. In [199], a Long Short-Term Memory (LSTM) network is trained in an FL framework for traffic flow prediction, along with RL used for resource optimization.
In [122], an LSTM model has been trained in an FL framework on a real Vehicular Ad hoc NETwork (VANET) data set based on V2V and V2R communication for the prediction of network traffic. The above results show the benefits of using FL for complex tasks such as traffic flow prediction.
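A back-of-the-envelope comparison helps quantify the bandwidth argument made throughout this section. All numbers below are illustrative assumptions rather than measurements from [114, 197, 199]: a per-vehicle sensing rate and record size for the raw-data upload, and a small GRU-scale forecaster for the FL traffic.

```python
def human(n):
    for unit in ["B", "KB", "MB", "GB"]:
        if n < 1024:
            return f"{n:.1f} {unit}"
        n /= 1024
    return f"{n:.1f} TB"

# Illustrative assumptions for one vehicle over one day.
samples_per_day = 10 * 3600 * 24      # 10 Hz multi-sensor traffic records
bytes_per_sample = 2048               # timestamp, position, speed, context
model_params = 100_000                # small GRU/LSTM forecaster
rounds_per_day = 4                    # FL rounds per day
bytes_per_param = 4                   # float32 weights

centralized = samples_per_day * bytes_per_sample
federated = rounds_per_day * 2 * model_params * bytes_per_param  # up + down

print("raw data upload  :", human(centralized))   # ~1.6 GB
print("FL model traffic :", human(federated))     # ~3.1 MB
```

Under these assumptions the model updates are roughly three orders of magnitude smaller than the raw data, which is the regime in which FL's communication advantage holds.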
### _Vehicular Cyber-Physical Systems_
Vehicular Cyber-Physical Systems (VCPS) encompass the integration of physical systems, cyber systems, and vehicular communication networks [200]. Physical systems comprise vehicles, roads, and telematics/edge devices, while cyber systems include data centers, central servers (i.e., cloud), and traffic management systems. Vehicular networks, namely Cellular Vehicle-to-Everything (C-V2X) and V2X communication networks, play a key role in facilitating information sharing to improve driving comfort, safety, and traffic management. VCPS utilizes various technologies to enhance the vehicular network and enable seamless and robust communication between vehicles and systems.
FL plays a critical role in VCPS by enhancing data privacy and addressing resource constraints. FL uses a collaborative and distributed learning framework that captures data heterogeneity while eliminating the need to transfer local data from vehicles. This enables VCPS to benefit from FL's ability to preserve data privacy and facilitate efficient learning without compromising resource limitations.
In [201], an FL framework is proposed to detect and mitigate data leakage in VCPS while enhancing data privacy. The proposed scheme achieves good accuracy, efficiency, and high security based on simulations of a real-world data set. In [202], an FL framework (OES-Fed) is proposed for outlier detection and noise filtering in vehicular networks. In [203], extreme value theory (EVT) and personalized FL are proposed to model anomalous events caused by the non-heterogeneous data distribution among vehicles in vehicular networks. In [204], an efficient and secure FL framework is combined with the Deep Q-Network (DQN) to ensure an efficient and secure scheme to reduce the latency of vehicular data sharing in vehicular networks.
FL has gained significant popularity for enhancing the resilience and robustness of VCPS networks against adversarial attacks. This is achieved through the integration of FL with techniques such as differential privacy [205] and blockchain-based approaches [112, 206]. These combinations have shown promising results in improving the security and reliability of the VCPS network.
### _Vehicle-to-Everything Communication_
An efficient and robust V2X communication such as V2V and V2I is a crucial step towards achieving an ITS. V2X communication plays a pivotal role in improving traffic management and enhancing driving comfort. As ITS continues to progress, we encounter a substantial increase in data transmission due to a large number of vehicles. This surge in data poses challenges in terms of communication and energy consumption. Moreover, given the private and sensitive nature of the data, ensuring its security is essential. Therefore, it is crucial to address these issues by adopting energy-efficient approaches and establishing low-latency transmission in V2X communication [207]. This will help to address the demands of data-intensive systems while safeguarding data privacy and optimizing resource utilization.
FL offers a promising solution for learning parameters with minimal latency and data transmission due to its decentralized training framework. It ensures data security while enabling efficient client/server selection during the training process [208, 209, 210] and resource management [211, 212]. These approaches have demonstrated an effective reduction in communication overhead, addressing a significant challenge in FL implementations.
In [213, 214], the authors utilized extreme value theory in conjunction with an FL framework to model anomalous events, specifically large queue lengths. They also incorporated Lyapunov optimization for power allocation, which contributed to improving system performance.
## V Challenges and Future Directions
In this section, we review various challenges in using FL4CAV and potential future research directions.
### _Resource Limitations and Utilization_
#### V-A1 Collaboration capabilities and management in massively parallel CAVs
Significant participation of CAVs in FL could increase training time and memory utilization, and therefore increases the computational demand of a global model update. In particular, vision- and LiDAR-related perception tasks are characterized by large data sets that lead to high communication costs. Decentralized FL and clustered FL [215, 216, 217] are being explored to reduce communication overhead.
The high communication demands and low reliability of 5G networks call for the development of 6G-V2X systems. Integrating 6G, V2X, and multi-access edge computing (MEC) powered by ML techniques creates the potential to achieve efficient and collaborative processing at the network edge. This approach aims to overcome the limitations of current 5G systems and pave the way for improved performance and reliability in future networks [218].
#### V-A2 Challenges due to lack of sufficient real-world datasets, simulators, and pre-trained base models
There is a need for more real-world datasets (different weather conditions and traffic scenarios), realistic high-fidelity FL4CAV simulators for seamless FL integration [219, 220, 221], and good pre-trained models.
#### V-A3 Low model accuracy
FL often struggles with a trade-off between the accuracy achieved through model personalization and the high computational requirements imposed on edge devices during learning. Split learning is one potential solution that enables efficient inference in resource-constrained edge clients while capturing both generalization and personalization capabilities [222].
#### V-A4 Inefficient resource utilization
Issues of FL related to resource optimization include idling of powerful edge devices, underutilized network infrastructure, neglected edge devices without proper network connectivity, and discouraged sharing of parameters from edge devices with diverse privacy requirements [223]. Therefore, there is a need for a robust FL framework that jointly utilizes and optimizes the resources of the devices, the server, and the network infrastructure.
Cooperative FL is a promising solution that overcomes these shortcomings and has been shown to be feasible and beneficial for learning processes, leading to improved ML performance and resource efficiency [224]. In a related study [225], a cooperative architecture and an FL scheme combined with an RL-based algorithm are proposed for the allocation of resources in CAV networks.
### _Digital Ethics Issues_
#### V-B1 Privacy and security issues
Massive amounts of data also lead to privacy and security concerns. These must be addressed in order to train the ML model efficiently without compromising the model's accuracy or introducing redundancy.
#### V-B2 Fairness and incentives
Appropriate reward policies and incentive mechanisms are needed for CAVs to share the high-quality data required for efficient model training.
### _Imperfect Methodology_
#### V-C1 Lack of methods for efficient vehicle selection and resource allocation
Currently, there are no efficient methods that can filter useful data from CAVs to minimize network loading. There are ongoing efforts to develop reliable methods to optimally select vehicles and resource allocation schemes for efficient model training and communication [226, 227, 228]. In [229], the overall training process was demonstrated to be efficient when incorporating a client selection model: the setup assesses the resource availability of the clients and then determines which clients are eligible to take part in learning the FL global model. In [164], it is demonstrated that model performance improved with CAVs selected by trust-based deep RL.
#### V-C2 Catastrophic forgetting
CAVs cannot keep all user data due to storage capacity limitations, and new data are continuously generated between training iterations. Therefore, when the FL framework is updated on new data in each iteration, the global model might forget previously acquired knowledge, which may lead to catastrophic forgetting. This is an open research problem in FL4CAV.
#### V-C3 System heterogeneity in FL4CAV
Poor performance of the FL model (longer training time and a larger number of communication rounds) is generally caused by poor connectivity and slower devices (straggler devices). In traditional FL, a communication round is not complete until the updates from all the chosen devices are available. Hence, various adaptive strategies have been proposed to minimize the impact of stragglers and also eliminate them if possible [230, 231].
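One simple adaptive strategy of this kind can be sketched as follows; the deadline policy and renormalized weighting are illustrative assumptions rather than the specific schemes of [230, 231].

```python
def aggregate_with_deadline(pending, deadline, min_clients=2):
    """pending: list of (client_id, arrival_time, weights, n_samples),
    where weights is a dict of parameter tensors. Updates arriving after
    the deadline are dropped so stragglers cannot stall the round."""
    on_time = [(w, n) for _, t, w, n in pending if t <= deadline]
    if len(on_time) < min_clients:
        return None                       # too few updates; extend the round
    total = sum(n for _, n in on_time)
    # Re-normalize FedAvg weights over the surviving clients only.
    return {k: sum(w[k] * (n / total) for w, n in on_time)
            for k in on_time[0][0]}
```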
### _Inadequate Evaluation Criteria_
#### V-D1 FL suitability evaluation for new users
It is often difficult for a newcomer vehicle to make informed decisions about participating in FL. In [164], a trust-aware deep RL model is proposed to assist new vehicles in making superior trajectory and motion planning decisions.
#### V-D2 Need for high capability diagnostics
There are several noise factors that could influence the decisions of FL, such as faulty sensors in a visual perception case and incorrect imputation of missing data. The development of robust diagnostics that can identify and eliminate the updates from these vehicles is needed.

Fig. 5: Illustration of challenges and future directions of federated learning for connected and automated vehicles.
## VI Conclusions
This survey paper reviews FL algorithms, data modalities, and model security, and provides a list of critical applications and challenges of FL4CAV. FL4CAV currently presents unique challenges, such as ensuring data integrity, addressing communication latency, managing heterogeneous data sources, and maintaining model synchronization across different vehicles. However, with proper design and implementation, FL can offer significant advantages in terms of privacy preservation, network efficiency, and collaborative intelligence for CAVs.
Further promising applications of FL lie in areas such as privacy-preserving driver behavior modeling, anomaly detection, and predictive maintenance. With the advent of cloud infrastructure, 6G, V2X technology, and flying cars, the adoption of FL models is expected to provide significant breakthroughs.
|
2301.04179 | Scattering Amplitude from Quantum Computing with Reduction Formula | Utilizing the Lehmann-Symanzik-Zimmermann reduction formula, we present a new
general framework for computing scattering amplitudes in quantum field theory
with quantum computers in a fully nonperturbative way. In this framework, one
only has to construct one-particle states of zero momentum, and no wave packets
of incoming particles are needed. The framework is able to incorporate
scatterings of bound states, and is ideal for scatterings involving a small
number of particles. We expect this framework to have particular advantages
when applied to exclusive hadron scatterings. As a proof of concept, by
simulations on classical hardware, we demonstrate that in the one-flavor
Gross-Neveu model, the fermion propagator, the connected fermion four-point
function, and the propagator of a fermion-antifermion bound state obtained from
our proposed quantum algorithm have the desired pole structure crucial to the
implementation of the Lehmann-Symanzik-Zimmermann reduction formula. | Tianyin Li, Wai Kin Lai, Enke Wang, Hongxi Xing | 2023-01-10T19:16:56Z | http://arxiv.org/abs/2301.04179v2 | # Scattering Amplitude from Quantum Computing with Reduction Formula
###### Abstract
Utilizing the Lehmann-Symanzik-Zimmermann (LSZ) reduction formula, we present a new general framework for computing scattering amplitudes in quantum field theory with quantum computers in a fully nonperturbative way. In this framework, one only has to construct one-particle states of zero momentum, and no wave packets of incoming particles are needed. The framework is able to incorporate scatterings of bound states, and is ideal for scatterings involving a small number of particles. We expect this framework to have particular advantages when applied to exclusive hadron scatterings. As a proof of concept, by simulations on classical hardware, we demonstrate that the two-point function in the 1+1-dimensional Nambu-Jona-Lasinio (NJL) model obtained from our proposed quantum algorithm has the desired pole structure crucial to the implementation of the LSZ reduction formula.
_Introduction_. The calculation of scattering amplitudes in quantum field theory (QFT) has long been a core topic in theoretical particle physics [1; 2; 3; 4; 5]. All tests of theories against experiments in particle accelerators entail theoretical predictions of scattering amplitudes. Despite the huge success of the perturbative approach to the calculation of scattering amplitudes [6; 7; 8], there are still circumstances in which the perturbative framework does not work, namely the cases where the coupling constants are large, as is the case for quantum chromodynamics at low energies for instance. To date, first-principles nonperturbative calculations of scattering amplitudes in quantum field theory are not available. The main obstacle is that real-time dynamics cannot be simulated in traditional path-integral lattice QFT [9], while simulating real-time Hamiltonian evolutions in quantum field theory requires an unbearable computational cost on a classical computer. It has been proposed in Refs. [10; 11] that, with the help of quantum computers, simulations of Hamiltonian evolutions of scattering processes in quantum field theory can be achieved with affordable computational cost on the lattice, making nonperturbative evaluations of scattering amplitudes possible.
Although the quantum-computational framework developed in Ref. [10] is fully general, it may encounter difficulties in practice. One major difficulty is that one has to prepare spatially well-separated wave packets of incoming particles in the initial state. The central values of the 4-momenta of these wave packets \(p_{i}\), as well as their Lorentz-invariant products \(p_{i}\cdot p_{j}\), set a constraint
\[a\ll 1/|p_{i}^{\mu}|,1/\sqrt{|p_{i}\cdot p_{j}|}\ll L \tag{1}\]
on the lattice spacing \(a\) and the lattice size \(L\); while the spatial separation distances \(d_{ij}\) between the initial-state wave packets set a constraint
\[L\gg d_{ij} \tag{2}\]
on the lattice size. In addition, the separations \(d_{ij}\) have to be wide, meaning that \(d_{ij}\gg 1/\Delta p_{i}^{\mu}\), where \(\Delta p_{i}^{\mu}\) is the uncertainty of the wave packet of the \(i\)th incoming particle in momentum space, and we require \(\Delta p_{i}^{\mu}\ll|p_{i}^{\mu}|\) so that the wave packets are narrow enough to mimic a scattering process of definite incoming momenta. Constraint Eq. (1) is required for reliable simulations of incoming particles with definite 4-momenta on the lattice, and can be potentially improved by introducing factorization theorems using the method of effective field theory [12]. Constraint Eq. (2) is due to the introduction of wave packets, which generally implies a larger lattice size than required by Eq. (1). Another feature of the method developed in Ref. [10] is that the wave packets are first prepared with the coupling constant turned off. Therefore, the incoming particles cannot be bound states. The coupling constant is subsequently adiabatically turned on before the scattering occurs and adiabatically turned off after the scattering occurs. To ensure adiabaticity, a long time span of evolution is required. In addition, one has to insert backward evolutions in order to eliminate unwanted broadening of wave packets during the adiabatic turn on and turn off of the coupling constant, thus increasing the time complexity. In fact, in the strong coupling regime, most theoretical uncertainties come from the adiabatic turn on and turn off of the coupling constant [10].
We note that, in the conventional perturbative approach, scattering amplitudes are computed using the Lehmann-Symanzik-Zimmermann (LSZ) reduction formula [13], which relates scattering amplitudes to \(n\)-point correlation functions, which in turn can be expanded as
a power series in the coupling constant using the Feynman diagram technique. The LSZ reduction formula, being a nonperturbative relation, is a natural alternative starting point for the evaluation of scattering amplitudes with quantum computers in a fully nonperturbative way. In this approach, in order to evaluate scattering amplitudes, one calculates \(n\)-point correlation functions on a quantum computer. We will see that this approach is ideal for scattering processes involving a small number of particles, and will have potential applications in exclusive strong-interaction processes such as two-to-two scatterings of pions or nucleons.
In the following, we first give a short review of the LSZ reduction formula. We then propose a quantum algorithm which utilizes the LSZ reduction formula to compute scattering amplitudes, and discuss its features and advantages. As a concrete example, we describe schematically how to apply our formalism to the case of pion-pion elastic scattering. After that, as a proof of concept, we simulate the fermion propagator in the 1+1-dimensional Nambu-Jona-Lasinio (NJL) model with our proposed quantum algorithm on classical hardware. We give a conclusion at the end.
_LSZ reduction formula_. The LSZ reduction formula [13] relates the scattering amplitude of a given scattering process to correlation functions of fields in the vacuum. For instance, consider the scattering process \(h(\mathbf{k}_{1})+\cdots+h(\mathbf{k}_{n_{\text{in}}})\to h(\mathbf{p}_{1})+\cdots+h(\mathbf{p }_{n_{\text{out}}})\), where \(h\) is some spin-0 particle with mass \(m\) annihilated by a scalar field \(\phi\). Using the LSZ reduction formula, the scattering amplitude \(\mathcal{M}\) can be written as
\[i\mathcal{M} =R^{n/2}\,\,\lim_{\begin{subarray}{c}p_{i}^{2}\to m^{2}\\ k_{j}^{2}\to m^{2}\end{subarray}}\,G(\{p_{i}\},\{k_{j}\})\] \[\times\left(\prod_{r=1}^{n_{\text{out}}}K^{-1}(p_{r})\right) \left(\prod_{s=1}^{n_{\text{in}}}K^{-1}(k_{s})\right)\,, \tag{3}\]
where \(n=n_{\text{in}}+n_{\text{out}}\). The \(G(\{p_{i}\},\{k_{j}\})\) is the \(n\)-point function in momentum space, given by
\[G(\{p_{i}\},\{k_{j}\})\] \[=\left(\prod_{i=1}^{n_{\text{out}}}\int d^{4}x_{i}\,e^{ip_{i}\cdot x _{i}}\right)\left(\prod_{j=1}^{n_{\text{in}}-1}\int d^{4}y_{j}\,e^{-ik_{j} \cdot y_{j}}\right)\] \[\times\,\langle\Omega|T\big{\{}\phi(x_{1})\cdots\phi(x_{n_{\text {out}}})\phi^{\dagger}(y_{1})\cdots\phi^{\dagger}(y_{n_{\text{in}}-1})\phi^{ \dagger}(0)\big{\}}\,|\Omega\rangle\,, \tag{4}\]
where \(T\) denotes time-ordering and \(|\Omega\rangle\) is the vacuum, i.e. the ground state. The \(K(p)\) is the two-point function in momentum space, also called the propagator, given by
\[K(p)=\int d^{4}x\,e^{ip\cdot x}\langle\Omega|T\{\phi(x)\phi^{\dagger}(0)\}| \Omega\rangle\,. \tag{5}\]
The factor \(R\) is the field normalization, defined by
\[R=|\langle\Omega|\phi(0)|h(\mathbf{p}=0)\rangle|^{2}\,, \tag{6}\]
where \(|h(\mathbf{p}=0)\rangle\) denotes the state with a single particle \(h\) with zero spatial momentum. The generalization of Eq. (3) to cases which involve multiple types of massive particles with arbitrary spin is trivial, with suitable inclusions of polarization tensors and spinors on the right-hand side of Eq. (3). In essence, the LSZ reduction formula Eq. (3) says that the scattering amplitude is simply an \(n\)-point function in momentum space with momenta put on-shell, with external-leg propagators amputated. The field normalization factors \(\sqrt{R}\) on the right-hand side of Eq. (3) ensure that the scattering amplitude, as a physical observable, is independent of the normalization of the field operators which create or annihilate the external particles. It should be noted that the \(n\)-point function \(G(\{p_{i}\},\{k_{j}\})\) has simple poles at \(p_{i}^{2},k_{j}^{2}=m^{2}\), and so is divergent when the momenta are put on-shell. On the other hand, the propagator \(K(p)\) also has a simple pole at \(p^{2}=m^{2}\), namely
\[K(p)\stackrel{{ p^{2}\to m^{2}}}{{\longrightarrow}}\frac{iR}{p^{2 }-m^{2}+i\epsilon}\,. \tag{7}\]
Therefore, in Eq. (3), the pole singularities in \(G(\{p_{i}\},\{k_{j}\})\) cancel with those in the \(K(p)\) factors, giving a finite scattering amplitude. In practice, when the continuum theory is approximated by a theory on the lattice, these singularities are tamed and the pole structure \(\frac{1}{p^{2}-m^{2}+i\epsilon}\) is approximated by some bounded function of \(p^{2}\) which approaches it in the continuum and infinite-volume limits.
According to Eq. (3), the computation of the scattering amplitude is broken down to the computation of three objects: the \(n\)-point function \(G(\{p_{i}\},\{k_{j}\})\), the propagator \(K(p)\), and the field normalization \(R\). Implementing the computation of these objects on a quantum computer will involve three steps: (1) the spatial dimensions are discretized into a lattice, (2) the field degrees of freedom are mapped to qubits, (3) a suitable quantum algorithm is constructed to evaluate the three objects individually. For gauge theories, Step (1) can be achieved in the standard way under the Kogut-Susskind Hamiltonian formalism [14; 15], and alternative approaches have been proposed [16; 17; 18]. Step (2) can be done straightforwardly for fermionic degrees of freedom [19; 20; 21], while for bosonic degrees of freedom and in particular gauge bosons considerable progress has been made [17; 18; 22; 23; 24; 25; 26; 27; 28; 29]. In this work, we will focus on Step (3), assuming that Steps (1) and (2) have been achieved. It should be remarked that, in Step (3), the three objects to be calculated could be ultraviolet-divergent, meaning that their individual values blow up in the continuum limit. However, the scattering amplitude, as a physical observable, remains a finite constant when the continuum limit is taken. The large cancellation in the continuum limit among the components in the LSZ reduction formula could potentially cause problems on numerical stability in practical calculations. We leave the detailed study of the approach to
the continuum limit of the LSZ reduction formula for the future.
_The quantum algorithm_. Here we propose a quantum algorithm to compute the three objects involved in the LSZ reduction formula Eq. (3), namely the \(n\)-point function, the propagator, and the field normalization. According to Eq. (6), the field normalization \(R\) involves the field operator \(\phi\) evaluated at \(x=0\) sandwiched between the vacuum and a single-particle state with zero spatial momentum (\(\mathbf{p}=0\)). Since no time evolution of the field operator is involved, the value of \(R\) can be readily determined once the vacuum and the single-particle state are obtained. To obtain the vacuum and the single-particle state, one can employ the quantum algorithm proposed in Ref. [30], which shows that both the vacuum and the single-particle state can be obtained efficiently with the quantum alternating operator ansatz (QAOA) and the quantum-number-resolving variational quantum eigensolver (VQE). It should be noted that only states with zero spatial momentum are involved in our formalism.1 Since these states are translational-invariant, the QAOA can be applied easily: one simply uses input reference states and alternating operators which are constructed to be translational-invariant. Next, we need to compute the \(n\)-point function and the propagator, for which again we can use the quantum algorithm proposed in Ref. [30] developed for the evaluation of parton distribution functions (PDFs), based on the general method introduced in Ref. [31]. In Ref. [30], with simulations on classical hardware, it is shown that with such a quantum algorithm the PDF of the 1+1-dimensional 1-flavor Nambu-Jona-Lasinio (NJL) model can be obtained with good accuracy with only 18 qubits.
Footnote 1: In this work, we only consider massive particles, for which a rest frame exists.
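As a schematic illustration of the variational step (not the quantum-number-resolving algorithm of [30]), the sketch below minimizes the energy of a toy two-qubit Hamiltonian over a hardware-efficient ansatz, using exact state vectors in NumPy. On quantum hardware the expectation value would instead be estimated from repeated measurements; the Hamiltonian and circuit here are assumptions chosen only for illustration.

```python
import numpy as np
from scipy.optimize import minimize

I2, X, Z = np.eye(2), np.array([[0., 1.], [1., 0.]]), np.diag([1., -1.])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Toy two-qubit Hamiltonian standing in for a discretized field theory.
H = -kron(Z, Z) + 0.5 * (kron(X, I2) + kron(I2, X))

def ry(a):  # single-qubit rotation about the y-axis
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]])

CNOT = np.array([[1., 0., 0., 0.], [0., 1., 0., 0.],
                 [0., 0., 0., 1.], [0., 0., 1., 0.]])

def energy(theta):
    """<psi(theta)|H|psi(theta)> for a hardware-efficient ansatz."""
    psi = kron(ry(theta[0]), ry(theta[1])) @ np.array([1., 0., 0., 0.])
    psi = kron(ry(theta[2]), ry(theta[3])) @ (CNOT @ psi)
    return float(psi @ H @ psi)

res = minimize(energy, x0=0.1 * np.ones(4), method="COBYLA")
print("variational energy:", res.fun)
print("exact ground state:", np.linalg.eigvalsh(H)[0])
```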
Our approach to the quantum computation of scattering amplitudes differs in many ways from the one introduced in Ref. [10]. The essential difference is that the approach in Ref. [10] is a direct Hamiltonian simulation of the scattering process, for which the outgoing particle states are unknown; while in our approach we specify the ingoing and outgoing states, and aim at calculating the amplitude of the specified scattering process by evaluating relevant correlation functions. In this regard, computational cost is reduced in our approach in two ways. First, in our framework, the constraint Eq. (2) is relaxed and a smaller lattice is allowed. Second, since there is no adiabatic turn on or turn off of the coupling constant in our formalism, there is no associated extra time evolution and corrections to broadening of wave packets, and thus the associated theoretical errors can be avoided.
It should be noted that, in the approach of direct simulation in Ref. [10], the outgoing state is unknown, and measurements of momentum-space occupation numbers are performed on the final state in order to extract information on the outgoing particles. The scattering amplitude of a specific scattering process is obtained only after enough statistics is obtained for the specific process. In our approach, the outgoing state is known. However, this does not necessarily mean that our approach involves a fewer number of gates. In fact, according to Eq. (4), we have to evaluate the position-space \(n\)-point function at every spacetime point and then perform a Fourier transform. We can estimate the scaling of the complexity with \(n\) in our approach as follows. Suppose we have \(N\) lattice sites and \(T\) temporal sites. Since each evaluation of the position-space \(n\)-point function has complexity \(\mathcal{O}(n)\)[31], the complexity for evaluating the position-space \(n\)-point function at all spacetime points is \(\mathcal{O}(n(NT)^{n})\). The subsequent Fourier transform can be done efficiently using the quantum Fourier transform with complexity \(\mathcal{O}(\log(n)\log(\log(n)))\)[32]. On the other hand, evaluating the \(n\) propagators in Eq. (3) has complexity \(\mathcal{O}(n)\). Consequently, the overall scaling of the complexity in our approach is exponential in \(n\). In the approach of Ref. [10], it was shown that the scaling of the complexity with \(n\) is polynomial in \(n\). Therefore, our approach is ideal only when the number of external particles is small, e.g. \(2\to 2\) scatterings.
An important feature of our approach using the LSZ reduction formula is that bound states are allowed as either incoming or outgoing particles. This is because the interaction coupling constant is never turned off in our formalism, as opposed to the method of direct simulation in Ref. [10]. In Eqs. (4)-(6), the field operator \(\phi\) is not necessarily a fundamental field of the theory. In fact, any operator which has the same quantum numbers as the external particle \(h\) can be used. For instance, in a theory with only a spin-1/2 fundamental field \(\psi\), there might exist a spin-0 scalar bound state \(h\) made of a fundamental fermion and its antiparticle. One can then simply take the composite operator \(\phi=\bar{\psi}\psi\) as the operator which annihilates \(h\) in the LSZ reduction formula for scattering processes involving \(h\) as external particles. This is an ideal feature of our formalism, since in the most interesting potential application of quantum computing in particle physics, namely quantum chromodynamics (QCD), all incoming and outgoing particles are bound states owing to quark confinement. Our framework is therefore ideal for scattering processes involving a small number of composite particles in a strongly coupled theory, such as \(2\to 2\) scatterings of pions or nucleons in QCD.
_Example 1: pion-pion elastic scattering_. As a concrete example, let us take a look at how our proposed formalism works schematically for the case of elastic scatterings of two pions \(\pi\pi\to\pi\pi\). These scattering processes have been studied intensively on both the experimental and theoretical sides [33]. However, it is impossible to calculate the scattering amplitudes of these processes
from first principles in QCD with current analytic techniques, owing to the strongly coupled nature of QCD at low energies. Interestingly, amplitudes of elastic scatterings of hadrons at energies below the inelastic threshold can be extracted from the finite-volume two-particle spectrum in traditional lattice QCD [34; 35; 36]. Here we will describe how the pion-pion elastic scattering amplitude can be computed using our framework of quantum computing. We emphasize that our method is valid in all kinematic regions, and works also for the inelastic channels.
As a first step, we map the pure \(SU(3)_{c}\) Yang-Mills Hamiltonian to standard qubit Pauli operators as described in Ref. [22]. The required number of qubits is \(\sim Ndn_{\rm trunc}^{\rm gauge}\), where \(N\) is the number of lattice sites, \(d=3\) the number of spatial dimensions, and \(n_{\rm trunc}^{\rm gauge}\) the number of qubits to record the quantum state of a single link. The general state of a single link is a superposition of irreducible representations of \(SU(3)_{c}\), with each irreducible representation \((p,q)\) consisting of \([(p+1)(q+1)(p+q+2)/2]^{2}\) states. 2 The numbers \(p\) and \(q\) give the eigenvalue of the chromoelectric field squared:
\[\sum_{a}(\mathbf{E}^{a})^{2}|p,q\rangle\propto\frac{1}{3}\left[p^{2}+q^{2}+pq+3(p+q)\right]|p,q\rangle\,. \tag{8}\]

Footnote 2: The number is squared because one has to consider color indices at both the beginning and the end of a link.
Therefore, the maximum values of \(p\) and \(q\) scale as the maximum value of the chromoelectric field, which is roughly the square of the hardest scale in the scattering process \(\Lambda_{\rm max}\). As a result, we can estimate \(n_{\rm trunc}^{\rm gauge}\) as \(n_{\rm trunc}^{\rm gauge}\sim\log\Lambda_{\rm max}\). The fermion part of the Hamiltonian can be mapped to qubit Pauli operators with the standard Jordan-Wigner transformation [21], for which the number of qubits required is \(4Nn_{f}\), where \(n_{f}\) is the number of quark flavors. We thus see that the total number of qubits required scales linearly with the number of lattice sites and logarithmically with the scattering energies.
Since pions are composite particles, we have to choose appropriate composite operators of quark fields to represent them in the LSZ reduction formula Eq. (3). An obvious choice is the pseudoscalar operators \(O_{\pi}^{\beta}=\frac{1}{2}\bar{\psi}_{i}^{a}(\tau^{\beta})_{ij}\gamma_{5}\psi _{j}^{a}\), where \(a\) and \(i\) are the color and flavor indices respectively, and \(\tau^{\beta}\) are the isospin Pauli matrices, with \(\tau^{\beta}=\tau^{+},\tau^{-},\tau^{3}\) corresponding to \(\pi^{+},\pi^{-},\pi^{0}\). It should be noted that the choice of operators is not unique. For instance, we can alternatively choose the operators \(O_{\pi}^{\beta\mu}=\frac{1}{2}\bar{\psi}_{i}^{a}(\tau^{\beta})_{ij}\gamma^{\mu }\gamma_{5}\psi_{j}^{a}\). For this choice of operators, the propagator near the mass shell reads
\[K_{\beta\gamma}^{\mu\nu}(p)=\int d^{4}x\,e^{ip\cdot x}\langle\Omega|T\{O_{\pi} ^{\beta\mu}(x)O_{\pi}^{\gamma\nu\dagger}(0)\}|\Omega\rangle\]
\[\stackrel{{ p^{2}\to m_{\pi}^{2}}}{{\longrightarrow}}\frac{if_{ \pi}^{2}p^{\mu}p^{\nu}\delta_{\beta\gamma}}{p^{2}-m_{\pi}^{2}+i\epsilon}\,, \tag{9}\]
where \(f_{\pi}\) is the pion decay constant defined by
\[\langle\Omega|O_{\pi}^{\beta\mu}(0)|\pi^{\gamma}(p)\rangle=if_{\pi}p^{\mu} \delta_{\beta\gamma}\,. \tag{10}\]
The LSZ reduction formula for the scattering amplitude of the process \(\pi^{\eta}(k_{1})\pi^{\delta}(k_{2})\to\pi^{\beta}(p_{1})\pi^{\gamma}(p_{2})\) then reads
\[i\mathcal{M}_{\beta\gamma\eta\delta} \stackrel{{ p_{i}^{2},k_{2}^{2}\to m_{\pi}^{2}}}{{ \longrightarrow}}p_{1\mu}p_{2\nu}k_{1\rho}k_{2\sigma}G_{\beta\gamma\eta\delta }^{\mu\nu\rho\sigma}(p_{1},p_{2},k_{1}) \tag{11}\] \[\times\frac{1}{m_{\pi}^{8}f_{\pi}^{4}}(p_{1}^{2}-m_{\pi}^{2}+i \epsilon)(p_{2}^{2}-m_{\pi}^{2}+i\epsilon)\] \[\times(k_{1}^{2}-m_{\pi}^{2}+i\epsilon)(k_{2}^{2}-m_{\pi}^{2}+i \epsilon)\,,\]
where
\[G_{\beta\gamma\eta\delta}^{\mu\nu\rho\sigma}(p_{1},p_{2},k_{1})\] \[= \int d^{4}x_{1}\int d^{4}x_{2}\int d^{4}y_{1}\,e^{i(p_{1}\cdot x_{ 1}+p_{2}\cdot x_{2}-k_{1}\cdot y_{1})}\] \[\times\langle\Omega|T\left\{O_{\pi}^{\beta\mu}(x_{1})O_{\pi}^{ \gamma\nu}(x_{2})O_{\pi}^{\eta\rho\dagger}(y_{1})O_{\pi}^{\delta\sigma\dagger }(0)\right\}|\Omega\rangle\,. \tag{12}\]
The pion decay constant \(f_{\pi}\) can be calculated by evaluating the left hand side of Eq. (10) using our proposed quantum algorithm, with the pion state at zero spatial momentum. It should be noted that the pion decay constant, as a static quantity, can be calculated with traditional path-integral lattice QCD [37]. The main advantage of the quantum algorithm is therefore on the evaluation of the 4-point Green's function \(G_{\beta\gamma\eta\delta}^{\mu\nu\rho\sigma}(p_{1},p_{2},k_{1})\). The freedom of the choice of operators to represent the external particles allows for cross checking among calculations with different representative operators, as well as optimizing the cost of computation by choosing a specific set of operators which requires the least number of quantum gates. In addition, as we have seen in the above example, the field normalizations are sometimes readily available via traditional lattice QCD calculations, and therefore even the preparation of states with a quantum algorithm is not always necessary.
_Example 2: polology in the NJL model._ With simulations of the proposed quantum algorithm on classical hardware, we can demonstrate the pole structure of the propagator in a simple model, the 1+1-dimensional 1-flavor Nambu-Jona-Lasinio (NJL) model [38; 39]. The Lagrangian of this model is given by
\[\mathcal{L}=\bar{\psi}(i\gamma^{\mu}\partial_{\mu}-m_{q})\psi+g(\bar{\psi}\psi) ^{2}\,, \tag{13}\]
where \(\psi\) is a Dirac field in 1+1 dimensions, which we will refer to as the quark field, and \(m_{q}\) and \(g\) are the bare quark mass and the bare coupling constant, respectively. Both \(m_{q}\) and \(g\) are free parameters. We choose their values in such a way that the particle states \(h\) we
consider below have masses \(m_{h}\) satisfying \(\frac{\pi}{L}<m_{h}<\frac{\pi}{a}\), where \(a\) and \(L\) are the lattice spacing and the lattice size respectively.
Consider the propagator of the quark field,
\[K(p)=\int d^{2}x\,e^{ip\cdot x}\langle\Omega|T\{\psi(x)\bar{\psi}(0)\}|\Omega \rangle\,. \tag{14}\]
Similar to Eq. (7), near the mass shell of a particle state \(h\) which has the same quantum numbers as the quark field, we have
\[K(p)\stackrel{{ p^{2}\to m_{h}^{2}}}{{\longrightarrow}}\frac{iR _{h}\left(\not{p}+m_{h}\right)}{p^{2}-m_{h}^{2}+i\epsilon}\,, \tag{15}\]
where \(R_{h}\) is the field normalization defined by
\[\langle\Omega|\psi(0)|h(p)\rangle=\sqrt{R_{h}}\,u(p)\,, \tag{16}\]
where \(u(p)\) is the positive-energy solution to the free Dirac equation \((\not{p}-m_{h})u(p)=0\), normalized to \(\bar{u}(p)u(p)=2m_{h}\). The field normalization \(R_{h}\) can be calculated using our proposed method by taking \(\mathbf{p}=0\) in Eq. (16).
When we take \(p_{1}=0\) and treat \(K(p)\) as a function of \(p_{0}\), \(K(p)\) should have poles at \(p_{0}=\pm m_{h}\). We evaluate \(K(p)\) as a function of \(p_{0}\), with \(p_{1}\) taken to be zero, with the proposed quantum algorithm using classical hardware. The calculation is performed on a desktop workstation with 16 cores, using the open-source packages QuSpin [40] and projectQ [41], with 22 qubits (11 lattice sites). We follow the mapping of the NJL model onto qubits and the method to evaluate the correlation function with a quantum algorithm as discussed in Ref. [30]. To obtain \(K(p)\), we first calculate the integrand in Eq. (14) in position space and then implement a discrete Fourier transform to perform the Fourier integral. Figure 1 shows the real part of \(\mathrm{Tr}\,K(p)\) as a function of \(p_{0}a\). The peaks at \(p_{0}a=\pm 0.97\) and \(\pm 2.61\) correspond to the poles from the two lowest-lying states with the same quantum numbers as the quark field, as is verified by solving for the mass spectrum with direct numerical diagonalization of the discretized Hamiltonian. The peaks at \(p_{0}a=\pm 0.97\) can be interpreted as a quark, 3 and the peaks at \(p_{0}a=\pm 2.61\) can be interpreted as a bound state made up of two quarks and one antiquark. In the continuum limit, a pole corresponds to a peak of infinite height, while in the discretized model we consider here the peaks have finite height. This simple example shows that the quantum algorithm succeeds in recovering the expected pole structure of the propagator, which is crucial to the implementation of the LSZ reduction formula.
Footnote 3: Note that in this model there is no quark confinement.
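The classical post-processing step, Fourier transforming the position-space correlator and locating the peaks, can be illustrated with the short sketch below. The correlator is mocked up as a damped two-mode signal whose frequencies play the role of the masses extracted above; the signal form and the regulator are assumptions for illustration, not the actual QuSpin/projectQ output.

```python
import numpy as np

a, N = 1.0, 512                          # lattice spacing and temporal sites
t = a * np.arange(N)
m1, m2, eps = 0.97, 2.61, 0.05           # mock masses and regulator

# Mock position-space correlator with one damped oscillating mode per
# state h, standing in for the quantum-computed integrand of Eq. (14).
C = (np.cos(m1 * t) + 0.3 * np.cos(m2 * t)) * np.exp(-eps * t)

p0 = 2 * np.pi * np.fft.fftfreq(N, d=a)  # discrete momenta p_0
K = a * N * np.fft.ifft(C)               # discrete Fourier integral, cf. Eq. (14)

# |K(p0)| shows finite-height peaks at p0*a ~ +-0.97 and +-2.61, mimicking
# Fig. 1; the peaks diverge only in the continuum limit eps -> 0.
mag = np.abs(K)
is_peak = (mag > np.roll(mag, 1)) & (mag > np.roll(mag, -1))
print("peak locations p0*a:", np.round(np.sort(p0[is_peak] * a), 2))
```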
_Conclusions_. In this work, we proposed a new framework for evaluating scattering amplitudes in quantum field theory on quantum computers in a fully nonperturbative way. The framework is based on the LSZ reduction formula, which relates scattering amplitudes to correlation functions. In this framework, as opposed to a direct Hamiltonian simulation of the scattering process, no preparation of wave packets of incoming particles is required, and one only has to prepare one-particle states of zero momentum. The framework is capable of incorporating scatterings of bound-state particles, and is ideal for scatterings which involve a small number of particles. This framework is expected to have potential applications in exclusive processes in a strongly coupled theory, such as two-to-two scatterings of pions or nucleons. As a proof of concept, in a simple model, the 1+1-dimensional NJL model, we demonstrated by simulations on classical hardware that the two-point function obtained from the quantum algorithm has the desired pole structure crucial to the implementation of the LSZ reduction formula.
_Acknowledgements_. This work is supported by the National Natural Science Foundation of China (NSFC) under Grants No. 12022512 and No. 12035007, and by the Guangdong Major Project of Basic and Applied Basic Research No. 2020B0301030008. W. K. L. acknowledges support by the UC Southern California Hub, with funding from the UC National Laboratories division of the University of California Office of the President.
|
2302.02058 | Toric orbit spaces which are manifolds | We characterize the actions of compact tori on smooth manifolds for which the
orbit space is a topological manifold (either closed or with boundary). For
closed manifolds the result was originally proved by Styrt in 2009. We give a
new proof for closed manifolds which is also applicable to manifolds with
boundary. In our arguments we use the result of Provan and Billera who
characterized matroid complexes which are pseudomanifolds. We study the
combinatorial structure of torus actions whose orbit spaces are manifolds. In
two appendix sections we give an overview of two theories related to our work.
The first one is the combinatorial theory of Leontief substitution systems from
mathematical economics. The second one is the topological Kaluza--Klein model
of Dirac's monopole studied by Atiyah. The aim of these sections is to draw
some bridges between disciplines and motivate further studies in toric
topology. | Anton Ayzenberg, Vladimir Gorchakov | 2023-02-04T01:45:52Z | http://arxiv.org/abs/2302.02058v1 | # Toric orbit spaces which are manifolds
###### Abstract.
We characterize the actions of compact tori on smooth manifolds for which the orbit space is a topological manifold (either closed or with boundary). For closed manifolds the result was originally proved by Styrt in 2009. We give a new proof for closed manifolds which is also applicable to manifolds with boundary. In our arguments we use the result of Provan and Billera who characterized matroid complexes which are pseudomanifolds. We study the combinatorial structure of torus actions whose orbit spaces are manifolds. In two appendix sections we give an overview of two theories related to our work. The first one is the combinatorial theory of Leontief substitution systems from mathematical economics. The second one is the topological Kaluza-Klein model of Dirac's monopole studied by Atiyah. The aim of these sections is to draw some bridges between disciplines and motivate further studies in toric topology.
Key words and phrases:torus representation, orbit space, matroid, pseudomanifold, Hopf bundle, Kaluza-Klein model, Dirac monopole, Leontief substitution system 2020 Mathematics Subject Classification: Primary: 57S12, 55R55, 57R91, 54B15, 20C35, Secondary: 06A07, 52B12, 52B40, 05B35, 57M60, 57R18 The article was prepared within the framework of the HSE University Basic Research Program.
###### Abstract.
We consider the problem of the following problem:
**Problem 1.1**.: _A linear operator \(\mathcal{F}\) is a linear operator \(\mathcal{F}\
## 2. Definitions and results
In the paper \(T^{k}\) denotes the compact \(k\)-dimensional torus considered as a Lie group. The lattice \(N=\operatorname{Hom}(T^{k},S^{1})\cong\mathbb{Z}^{k}\) is called _the weight lattice_, and its dual lattice \(N^{*}=\operatorname{Hom}(S^{1},T^{k})\) is called _the lattice of 1-dimensional subgroups_.
Consider a representation of \(T=T^{k}\) on \(V=\mathbb{R}^{m}\). It decomposes into a direct sum of irreducible representations. An irreducible representation of the abelian group \(T\) has real dimension 1 or 2. One-dimensional representations are trivial (the group acts trivially). A 2-dimensional real representation \(V(\alpha)\) is determined by a nonzero weight \(\alpha\in N\), so that
\[V(\alpha)\cong\mathbb{C}\cong\mathbb{R}^{2},\quad tz=\alpha(t)\cdot z.\]
Since neither a complex structure nor an orientation is fixed on \(\mathbb{R}^{2}\), the weight \(\alpha\) is determined uniquely up to sign. Therefore, an arbitrary representation \(V\) of a torus decomposes into the sum
\[V\cong V(\alpha_{1})\oplus\ldots\oplus V(\alpha_{r})\oplus\mathbb{R}^{m-2r}, \tag{2.1}\]
where the torus action is trivial on \(\mathbb{R}^{m-2r}\), and \(\alpha_{1},\ldots,\alpha_{r}\in N\) is a collection of nonzero vectors of the weight lattice \(N\), defined up to sign.
In the following we consider weights as rational vectors in the vector space \(N_{\mathbb{Q}}=N\otimes\mathbb{Q}\cong\mathbb{Q}^{k}\). Moreover, since weights are defined up to sign, we can treat them as rational lines (one-dimensional vector subspaces). Although a rational line contains infinitely many nonzero integral vectors, the choice of a representative is nonessential for our arguments. It follows that any torus representation is completely characterized by a multiset \(\alpha=\{\alpha_{1},\ldots,\alpha_{r}\}\) of vectors (or lines) in \(N_{\mathbb{Q}}\).
Notice that the summand \(\mathbb{R}^{m-2r}\) in (2.1) is the fixed point subspace of the representation. Thus, the representation has an isolated fixed point if and only if \(m=2r\).
In the following it is assumed that torus representations are effective, which means their weights span the vector space \(N_{\mathbb{Q}}\).
**Definition 2.1**.: Consider a representation of \(T=T^{k}\) on \(V=\mathbb{R}^{m}\) with the weight system \(\alpha=\{\alpha_{1},\ldots,\alpha_{r}\}\). The number \(r-k=|\alpha|-\operatorname{rk}\alpha\) is called _the complexity of the representation_.
Notice that complexity is always a nonnegative number and does not depend on the dimension of the trivial summand of the representation. If a representation \(V\) has an isolated fixed point, its complexity equals \(\frac{1}{2}\dim V-\dim T\).
Change of coordinates in a torus motivates the definition of weakly equivalent representations.
**Definition 2.2**.: Two representations \(V\) and \(W\) of \(T\) are called weakly equivalent, if there is an automorphism \(\psi\colon T\to T\) and an isomorphism \(g\colon V\to W\) such that \(g(tv)=\psi(t)g(v)\).
**Example 2.3**.: A representation of complexity 0 takes the form
\[V(\alpha_{1})\oplus\cdots\oplus V(\alpha_{n})\oplus\mathbb{R}^{m-2n},\]
where \(\alpha=\{\alpha_{1},\ldots,\alpha_{n}\}\subset N=\operatorname{Hom}(T^{n},T^{1})\) is a rational basis of \(N_{\mathbb{Q}}\cong\mathbb{Q}^{n}\). Using Smith normal form over \(\mathbb{Z}\), we can change coordinates in \(T^{n}\) (or equivalently in \(N\)). Therefore, up to weak equivalence, we have \(\alpha_{i}=d_{i}e_{i}\), where \(\{e_{1},\ldots,e_{n}\}\) is the basis of the lattice \(N\), and \(d_{i}\)'s are nonzero integers satisfying \(d_{1}\mid d_{2}\mid\cdots\mid d_{n}\). Assuming there is no trivial component, a complexity zero action takes the form
\[(t_{1},\ldots,t_{n})(z_{1},\ldots,z_{n})=(t_{1}^{d_{1}}z_{1},\ldots,t_{n}^{d_{ n}}z_{n}),\]
for \(z_{i}\in\mathbb{C}\). If \(d_{i}=1\) for any \(i\), the representation is called _standard_. This class of representations is well studied and widely used in toric topology.
However, even for general \(d_{i}\)'s, the orbit space of the complexity zero representation (without trivial component) is a nonnegative cone \(\mathbb{R}^{n}_{\geq 0}\). As a topological space it is homeomorphic to the halfspace \(\mathbb{R}_{\geq 0}\times\mathbb{R}^{n-1}\).
**Definition 2.4**.: A representation of \(T=T^{n-1}\) on \(V\cong\mathbb{R}^{2n}\) is called _a complexity one representation in general position_ if its trivial part vanishes, and any \(n-1\) of the weights \(\alpha=\{\alpha_{1},\ldots,\alpha_{n}\}\subset N_{\mathbb{Q}}\cong\mathbb{Q}^{ n-1}\) are linearly independent over \(\mathbb{Q}\).
For a complexity one representation in general position, the collection \(\alpha=\{\alpha_{1},\ldots,\alpha_{n}\}\subset N\) determines a group homomorphism
\[A=\prod_{i=1}^{n}\alpha_{i}\colon T^{n-1}\to T^{n}.\]
Since \(\alpha\) spans \(N_{\mathbb{Q}}\), the kernel \(\operatorname{Ker}A\) is a finite abelian group. The image of \(A\) is a codimension 1 toric subgroup \(\{(t_{1},\ldots,t_{n})\in T^{n}\mid\prod_{i=1}^{n}t_{i}^{c_{i}}=1\}\) where \(c_{1}\alpha_{1}+\cdots+c_{n}\alpha_{n}=0\) is a unique (up to multiplier) linear relation on \(n\) vectors \(\alpha_{i}\) in \(N_{\mathbb{Q}}\cong\mathbb{Q}^{n-1}\), and \(c_{i}\)'s don't have a nontrivial common divisor. The condition that any \(n-1\) of \(\alpha_{i}\)'s are independent is equivalent to \(c_{i}\neq 0\) for any \(i\).
Since \(\alpha_{i}\) are only defined up to sign, we can assume that \(c_{i}\)'s are natural numbers. The orbit space for the original representation naturally coincides with that of the image of \(A\), which implies the following observation.
**Lemma 2.5**.: _The orbit space of a complexity one representation of \(T^{n-1}\) in general position on \(V\cong\mathbb{R}^{2n}\) is homeomorphic to the orbit space of the action of the subgroup \(H=\{(t_{1},\ldots,t_{n})\in T^{n}\mid\prod_{i=1}^{n}t_{i}^{c_{i}}=1\}\), where \(c_{i}>0\), induced by the standard action of \(T^{n}\) on \(\mathbb{C}^{n}\cong\mathbb{R}^{2n}\)._
Notice that the stabilizer subgroups of the original action and those of \(H\) do not necessarily coincide: these depend on the finite group \(\operatorname{Ker}A\) described above. For the orbit space, however, one can use the particular model action of \(H\) to prove the following result.
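For a concrete weight system, the exponents \(c_{i}\) in Lemma 2.5 can be read off from the one-dimensional kernel of the weight matrix. A small sketch (the choice of weights is an illustrative assumption):

```python
from sympy import Matrix

# Rows are the weights alpha_1, ..., alpha_n in N_Q = Q^(n-1); here n = 4,
# and any three of the four weights are linearly independent.
alpha = Matrix([[1, 0, 0],
                [0, 1, 0],
                [0, 0, 1],
                [1, 2, 3]])

# The unique (up to sign and scale) relation sum_i c_i alpha_i = 0 spans
# the kernel of alpha^T, whose columns are the weights.
c = alpha.T.nullspace()[0]
print(list(c))   # [-1, -2, -3, 1]
# Since the weights are defined only up to sign, we may flip signs to get
# c = (1, 2, 3, 1) > 0; all c_i nonzero confirms general position, and
# H = {(t_1,...,t_4) : t_1 t_2^2 t_3^3 t_4 = 1} as in Lemma 2.5.
```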
**Proposition 2.6** ([2, Lm.2.11] or [25, Thm.3.6]).: _For a representation of \(T^{n-1}\) on \(\mathbb{C}^{n}\) of complexity one in general position we have a homeomorphism \(\mathbb{C}^{n}/T^{n-1}\cong\mathbb{R}^{n+1}\)._
**Definition 2.7**.: Consider a collection of complexity one representations in general position \(T^{n_{i}-1}\to\operatorname{GL}(\mathbb{C}^{n_{i}})\), for \(i\in\{1,\ldots,s\}\), a complexity zero representation \(T^{d}\to\operatorname{GL}(\mathbb{C}^{d})\), and a trivial representation on \(\mathbb{R}^{l}\). Then the product representation of
\(\prod_{i=1}^{s}T^{n_{i}-1}\) on \(V=\mathbb{R}^{l}\times\mathbb{C}^{d}\times\prod_{i=1}^{s}\mathbb{C}^{n_{i}}\) is called _a Leontief representation_. It is called _totally Leontief_ if \(d=0\).
The reason for the chosen name of the term is explained in Appendix A. For a totally Leontief representation we have
\[V/T=\mathbb{R}^{l}\times\prod_{i=1}^{s}\mathbb{C}^{n_{i}}/T^{n_{i}-1}=\mathbb{ R}^{l}\times\prod_{i=1}^{s}\mathbb{R}^{n_{i}+1},\]
so the orbit space is a topological manifold. Similarly, the orbit space of any non-totally Leontief representation is a half-space, that is a manifold with boundary.
Theorems 1.1 and 1.2 stated in the introduction assert that Leontief representations provide an exhaustive list of representations whose orbit spaces are manifolds (either open or bounded). The following is a reformulation of these theorems.
**Theorem 2.8**.:
1. _Assume that the orbit space_ \(V/T\) _of a representation_ \(T\to\operatorname{GL}(V)\) _is homeomorphic to_ \(\mathbb{R}^{l}\)_. Then the representation is weakly equivalent to a totally Leontief representation._
2. _Assume that the orbit space_ \(V/T\) _of a representation_ \(T\to\operatorname{GL}(V)\) _is homeomorphic to a half-space_ \(\mathbb{R}_{\geqslant 0}\times\mathbb{R}^{l-1}\)_. Then the representation is weakly equivalent to a non-totally Leontief representation._
The first result was proved in [25, Thm.1.3] in much greater generality: Styrt characterized all representations of \(G\subset\operatorname{GL}(V)\) with orbit spaces homeomorphic to \(\mathbb{R}^{d}\), under the assumption that the connected component of \(G\) is a compact torus. Our proof of the first part of Theorem 2.8 essentially follows the lines of the proof in [25] for the particular case of \(G=T\); however, we simplify the argument by referring to some known results about matroids. A similar technique is applied to prove item 2 of Theorem 2.8.
## 3. Proofs
Recall that _a(n abstract) simplicial complex_ on a vertex set \(A\) is a collection \(K\subseteq 2^{A}\) of subsets of \(A\), such that (1) \(\varnothing\in K\); (2) if \(I\in K\) and \(J\subset I\), then \(J\in K\). The elements of \(A\) are called _vertices_, the elements of \(K\) are called _simplices_. The value \(\dim I=|I|-1\) is called the dimension of a simplex \(I\). The maximal dimension of simplices is called the dimension of \(K\). A simplex \(I\) is called _maximal_ (or _a facet_), if there is no \(J\in K\) which strictly contains \(I\). A simplicial complex is called _pure_ if all facets have the same dimension. In a pure simplicial complex, a simplex \(J\) is called _a ridge_, if \(\dim J=\dim K-1\). An element \(i\in A\) is called a ghost vertex of \(K\) if \(\{i\}\notin K\).
If \(K_{1},K_{2}\) are simplicial complexes on the vertex sets \(A_{1},A_{2}\) respectively, then the join \(K_{1}*K_{2}\) is the simplicial complex \(\{I_{1}\sqcup I_{2}\subset A_{1}\sqcup A_{2}\mid I_{1}\in K_{1},I_{2}\in K_{2}\}\). The full simplex on a set \(A\) is the simplicial complex \(\Delta_{A}=2^{A}\) of all subsets of \(A\). The boundary of a simplex on a set \(A\) is the simplicial complex \(\partial\Delta_{A}=2^{A}\backslash\{A\}\) of all proper subsets of \(A\). The ghost complex on a set \(A\) is the simplicial complex \(o_{A}=\{\varnothing\}\), in which all vertices are ghost.
**Construction 3.1**.: For a multiset \(\alpha=\{\alpha_{1},\ldots,\alpha_{r}\}\) of vectors in a rational vector space \(N_{\mathbb{Q}}\) consider a simplicial complex \(K(\alpha)\) on the vertex set \([r]=\{1,\ldots,r\}\) whose simplices are the linearly independent subsets of vectors:
\[\{i_{1},\ldots,i_{l}\}\in K(\alpha)\Leftrightarrow\alpha_{i_{1}},\ldots,\alpha_ {i_{l}}\text{ are linearly independent.}\]
By definition, \(K(\alpha)\) is the independence complex of the linear matroid determined by the collection \(\alpha\). Since \(\alpha\) spans \(N_{\mathbb{Q}}\cong\mathbb{Q}^{k}\), the complex \(K(\alpha)\) is pure of dimension \(k-1\).
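As an illustration, the following minimal Python sketch (our own; it uses sympy for rank computations, and the function name is hypothetical) builds \(K(\alpha)\) for the weights \((1,0),(0,1),(1,1)\) of a complexity one representation of \(T^{2}\) on \(\mathbb{C}^{3}\) in general position, and verifies that every ridge lies in exactly two facets (the pseudomanifold condition of Definition 3.3 below).

```python
from itertools import combinations
from sympy import Matrix

def independence_complex(alpha):
    """Return all linearly independent subsets of the vectors in alpha,
    i.e. the simplices of K(alpha) from Construction 3.1."""
    K = [frozenset()]
    for size in range(1, len(alpha) + 1):
        for I in combinations(range(len(alpha)), size):
            if Matrix([alpha[i] for i in I]).rank() == size:
                K.append(frozenset(I))
    return K

# Complexity one weights of T^2 on C^3 in general position:
alpha = [(1, 0), (0, 1), (1, 1)]
K = independence_complex(alpha)
facets = [I for I in K if len(I) == 2]   # K is pure of dimension 1
ridges = [I for I in K if len(I) == 1]
for J in ridges:
    print(sorted(J), "is contained in",
          sum(1 for F in facets if J <= F), "facets")  # 2 each time
```

Here \(K(\alpha)\) comes out as the boundary of a triangle, in accordance with the discussion of complexity one representations above.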
**Remark 3.2**.: As proved by Björner [9], the independence complex of any matroid is shellable, hence homotopically Cohen-Macaulay. This implies that the geometric realization \(|K(\alpha)|\) is homotopy equivalent to a wedge of \((k-1)\)-dimensional spheres.
Recall a classical notion from combinatorial topology.
**Definition 3.3**.: A pure simplicial complex \(K\) is called _a (closed) pseudomanifold_ if any ridge is contained in exactly two facets. A pure simplicial complex \(K\) is called _a pseudomanifold with boundary_ if any ridge is contained in one or two facets.
**Remark 3.4**.: If a ridge is contained in one facet, it is called _a boundary ridge_. When we use the term pseudomanifold with boundary we assume that there exists at least one boundary ridge. So a pseudomanifold without boundary is not considered a pseudomanifold with boundary.
Our proof of Theorem 2.8 is essentially based on the next statement. For convenience we call the assumption of item 1 in Theorem 2.8 _the manifold case_, and the assumption of item 2 _the boundary case_. The corresponding representations are called respectively _representations of manifold type_ and _representations of boundary type_.
**Proposition 3.5**.: _Consider a representation of a torus \(T=T^{k}\to\operatorname{GL}(V)\), and let \(\alpha=\{\alpha_{1},\ldots,\alpha_{r}\}\subset N_{\mathbb{Q}}\) be the defining multiset of weights. Then the following hold true._
1. _In the manifold case, the simplicial complex_ \(K(\alpha)\) _is a pseudomanifold._
2. _In the boundary case, the simplicial complex_ \(K(\alpha)\) _is a pseudomanifold with boundary._
It will be more convenient for us to work with homology manifolds instead of topological manifolds. For a space \(Q\), the relative homology modules \(H_{*}(Q,Q\backslash\{x\};R)\) are called _the local homology modules_ at a point \(x\in Q\) with coefficients in an abelian group \(R\). Recall that a locally compact space \(Q\) is called _a (closed) \(d\)-dimensional homology manifold_ (over \(R\)) if, for any \(x\in Q\), the local homology modules are isomorphic to those of \(\mathbb{R}^{d}\):
\[H_{s}(Q,Q\backslash\{x\};R)\cong H_{s}(\mathbb{R}^{d},\mathbb{R}^{d}\backslash \{0\};R)\begin{cases}\cong R,&\text{if $s=d$};\\ =0,&\text{otherwise}.\end{cases} \tag{3.1}\]
A space \(Q\) is called _a \(d\)-dimensional homology manifold with boundary_ if its local homology modules are isomorphic to those of \(\mathbb{R}_{\geqslant 0}\times\mathbb{R}^{d-1}\): they either vanish (for boundary points) or satisfy (3.1) (for interior points). The next statement is a direct consequence of the Künneth formula for relative homology groups.
**Lemma 3.6**.: _Let us fix a ring \(R\) of coefficients._
1. _If_ \(Q\) _is a closed homology manifold, then so is_ \(Q\times\mathbb{R}^{s}\)_._
2. _If_ \(Q\) _is a homology manifold with boundary, then so is_ \(Q\times\mathbb{R}^{s}\)_._
3. _If_ \(Q\) _is not a homology manifold (with or without boundary), then neither is_ \(Q\times\mathbb{R}^{s}\)_._
The next technical lemma is needed for the proof of Proposition 3.5.
**Lemma 3.7**.: _Consider a \(T^{1}\)-representation on \(V\cong\mathbb{R}^{2n}\), \(n\geq 1\), with no trivial component. Then exactly one of the following alternatives holds._
1. \(n=1\)_,_ \(\mathbb{C}^{1}/T^{1}\) _is homeomorphic to_ \(\mathbb{R}_{\geq 0}\)_._
2. \(n=2\)_,_ \(\mathbb{C}^{2}/T^{1}\) _is homeomorphic to_ \(\mathbb{R}^{3}\)_._
3. \(n\geq 3\)_,_ \(\mathbb{C}^{n}/T^{1}\) _is not a homology manifold (neither closed nor a homology manifold with boundary) over any_ \(R\)_._
Proof.: Item (1) is straightforward, see Example 2.3. Item (2) follows from Proposition 2.6; additional details concerning item (2) are provided in Appendix B. We concentrate on item (3).
Since there is no trivial component, we have \(V\cong V(\alpha_{1})\oplus\cdots\oplus V(\alpha_{n})\), where \(\alpha_{1},\ldots,\alpha_{n}\in\operatorname{Hom}(T^{1},T^{1})\) is a collection of nonzero integers. In the complex coordinates associated with the irreducible summands \(V(\alpha_{i})\), the representation takes the form
\[t(z_{1},\ldots,z_{n})=(t^{\alpha_{1}}z_{1},\ldots,t^{\alpha_{n}}z_{n}).\]
Restricting this action to the unit sphere \(S^{2n-1}=\{\sum_{i=1}^{n}|z_{i}|^{2}=1\}\) we get the weighted projective space \(\mathbb{C}P^{n-1}(\alpha)=\mathbb{C}P^{n-1}(\alpha_{1},\ldots,\alpha_{n})\) as the orbit space. Therefore \(\mathbb{C}^{n}/T^{1}\) is an open cone \(\operatorname{Cone}\mathbb{C}P^{n-1}(\alpha)\) with an apex denoted by \(0\). We have
\[H_{j}(\operatorname{Cone}\mathbb{C}P^{n-1}(\alpha),\operatorname{Cone}\mathbb{ C}P^{n-1}(\alpha)\backslash\{0\};R)\cong H_{j-1}(\mathbb{C}P^{n-1}(\alpha);R).\]
The weighted projective space \(\mathbb{C}P^{n-1}(\alpha)\) has the same homology as an ordinary projective space \(\mathbb{C}P^{n-1}\), over any \(R\)[18]. Therefore we have a nonvanishing local homology module \(H_{3}(\mathbb{C}^{n}/T,(\mathbb{C}^{n}/T)\backslash\{0\};R)\cong H_{2}( \mathbb{C}P^{n-1})\cong R\) which is an obstruction for the \((2n-1)\)-dimensional space \(\mathbb{C}^{n}/T\) to be a homology manifold when \(n\geq 3\).
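For concreteness, the smallest case of item (3) reads as follows (a worked instance of the computation just made, written out by us for illustration):

```latex
% n = 3: the orbit space C^3/T^1 = Cone CP^2(alpha) is 5-dimensional, yet
\[
H_{3}\bigl(\operatorname{Cone}\mathbb{C}P^{2}(\alpha),
          \operatorname{Cone}\mathbb{C}P^{2}(\alpha)\backslash\{0\};R\bigr)
\cong H_{2}(\mathbb{C}P^{2}(\alpha);R)\cong H_{2}(\mathbb{C}P^{2};R)\cong R\neq 0,
\]
% while (3.1) requires the local homology of a 5-dimensional homology
% manifold to be concentrated in degree 5, and a boundary point of a
% homology manifold with boundary would need vanishing local homology.
```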
Now we can prove Proposition 3.5 by reducing it to the circle case.
Proof.: Since the trivial component of the action does not affect the statement we assume for simplicity that there is no trivial component.
Consider any ridge \(J=\{j_{1},\ldots,j_{k-1}\}\in K(\alpha)\). Recall that the multiset \(\alpha=\{\alpha_{1},\ldots,\alpha_{r}\}\) linearly spans \(N_{\mathbb{Q}}\cong\mathbb{Q}^{k}\). Let \(\Pi_{J}\subset N_{\mathbb{Q}}\) be the rational hyperplane spanned by \(\alpha_{j_{1}},\ldots,\alpha_{j_{k-1}}\). We partition all weights' indices into two disjoint classes: \([r]=A_{J}\sqcup B_{J}\), one for the weights lying in \(\Pi_{J}\), and another for the weights transversal to \(\Pi_{J}\):
\[A_{J}=\{j\in[r]\mid\alpha_{j}\in\Pi_{J}\},\qquad B_{J}=[r]\backslash A_{J}.\]
Notice that the set \(B_{J}\) consists of all indices \(i\) such that the set \(\{\alpha_{j}\mid j\in\{i\}\sqcup J\}\) is linearly independent, hence of rank \(k\). Therefore \(B_{J}\) parameterizes the ways to complete the ridge \(J\) to a facet of \(K(\alpha)\). Hence
\[|B_{J}|\text{ equals the number of facets containing }J. \tag{3.2}\]
Consider the decomposition of \(V=V_{A}\oplus V_{B}\) into two summands corresponding to the partition of the weights:
\[V_{A}=\bigoplus_{i\in A_{J}}V(\alpha_{i})\cong\mathbb{C}^{|A_{J}|},\qquad V_{B}=\bigoplus_{i\in B_{J}}V(\alpha_{i})\cong\mathbb{C}^{|B_{J}|}.\]
Notice that \(V_{A}\) is the fixed point set of the \(1\)-dimensional toric subgroup
\[G=\operatorname{Ker}\Bigl(\prod_{j\in J}\alpha_{j}\colon T^{k}\to T^{k-1}\Bigr)=\operatorname{Ker}\Bigl(\prod_{j\in A_{J}}\alpha_{j}\colon T^{k}\to T^{|A_{J}|}\Bigr)\]
(more precisely, we take the connected component of \(1\) in these kernels to avoid disconnected groups). A general fact is described in Section 5: the flats of the linear matroid \(\alpha\) are in bijective correspondence with the fixed point sets of toric subgroups acting on \(V\). Here we apply this correspondence to the flat \(A_{J}\) of the matroid of weights.
Now we can summarize the idea of the proof as follows. If we take a generic point \(x\) in \(V_{A}\), its tangent space decomposes as the sum of the tangent and normal components (parallel to \(V_{A}\) and \(V_{B}\) respectively). Then, informally, the \(T^{k}\)-action in the vicinity of \(x\) splits into "the product" of the \(T^{k}/G\)-action on the tangent component and the \(G\)-action on the normal component. Since \(x\) is generic in \(V_{A}\), the action of \(T^{k}/G\) is free on the tangent component, so the orbit space of the tangent component is a manifold, and does not affect the local topology of the orbit space by Lemma 3.6. The \(G\)-action on the normal component is a circle representation on \(\mathbb{C}^{|B_{J}|}\). We are in a position to apply Lemma 3.7: in the manifold case, this lemma implies \(|B_{J}|=2\), and in the boundary case it implies \(|B_{J}|=1\), which proves the required statement according to (3.2).
In order to justify this argument, the Slice Theorem should be applied. Let \(x\) be a point in \(V_{A}\subset V\) such that all its coordinates in this subspace are nonzero. For example, one can take
\[x=(\underbrace{1,\ldots,1}_{A_{J}},\underbrace{0,\ldots,0}_{B_{J}}).\]
Let \(\tau_{x}V\), \(\tau_{x}V_{A}\), and \(\nu_{x}\) be respectively the tangent space to \(V\), the tangent space to \(V_{A}\), and the normal space of the embedding \(V_{A}\subset V\) taken at the point \(x\). Obviously, \(\tau_{x}V=\tau_{x}V_{A}\oplus\nu_{x}\), \(\tau_{x}V_{A}\cong\mathbb{C}^{A_{J}}\) and \(\nu_{x}\cong\mathbb{C}^{B_{J}}\). The stabilizer \(T_{x}\) of the point \(x\) is the circle \(G\) introduced above, so the orbit \(T^{k}x\) is \((k-1)\)-dimensional. The Slice Theorem states that the orbit \(T^{k}x\) has a \(T^{k}\)-invariant neighborhood \(U\) equivariantly diffeomorphic to
\[T^{k}\times_{G}(\tau_{x}V/\tau_{x}T^{k}x)\]
Let \([x]\in V/T^{k}\) denote the class of the point \(x\) in the orbit space. Then \([x]\) has an open neighborhood in \(V/T^{k}\) equal to
\[U/T^{k}\cong(T^{k}\times_{G}(\tau_{x}V/\tau_{x}T^{k}x))/T^{k}=(\tau_{x}V/\tau_{ x}T^{k}x)/G. \tag{3.3}\]
Notice that the whole orbit \(T^{k}x\) lies inside \(V_{A}\), so \(\tau_{x}T^{k}x\subset\tau_{x}V_{A}\). Moreover, since \(G\) is the stabilizer of \(V_{A}\), the \(G\)-action on the whole subspace \(\tau_{x}V_{A}\) is trivial. Therefore the \(G\)-action on \(\tau_{x}V/\tau_{x}T^{k}x\) has the same weights as the \(G\)-action on \(\tau_{x}V/\tau_{x}V_{A}=\nu_{x}\). On the other hand, the \(G\)-action on \(\nu_{x}\cong\mathbb{C}^{|B_{J}|}\) is nontrivial (its weights are the projections of \(\{\alpha_{j}\mid j\in B_{J}\}\) under the induced map \(N=\operatorname{Hom}(T^{k},T^{1})\to\operatorname{Hom}(G,T^{1})\cong\mathbb{Z}\), and these projections are nonzero by the construction of \(B_{J}\)). Therefore, applying Lemma 3.7 to the representation in (3.3), we see that the manifold case implies \(|B_{J}|=2\) and the boundary case implies \(|B_{J}|=1\), as desired.
By Remark 3.2, any independence complex \(K(\alpha)\) is homotopy equivalent to a wedge of spheres. If, moreover, it is a pseudomanifold, there is a fundamental class in the top homology group, so there is necessarily exactly one sphere in the wedge. If \(K(\alpha)\) is a pseudomanifold with boundary, then the wedge contains no spheres at all, hence in this case \(|K(\alpha)|\) is contractible. It happens that the condition of being both a matroid complex and a pseudomanifold puts an even stronger restriction on the combinatorics of a complex, as proved by Provan and Billera in [8].
**Proposition 3.8** ([8]).:
1. _If_ \(K\) _is an independence complex of a matroid and, at the same time, a closed pseudomanifold, then_ \(K\) _is isomorphic to a join of boundaries of simplices, and, possibly, a ghost complex._
2. _If_ \(K\) _is an independence complex of a matroid and, at the same time, a pseudomanifold with boundary, then_ \(K\) _is isomorphic to a join of a simplex, boundaries of simplices, and, possibly, a ghost complex._
_Remark 3.9_.: Note that the ghost complex on one vertex can be formally considered as the boundary of a \(0\)-dimensional simplex. So it is not a mistake to remove the mention of ghost complexes from the formulation of Proposition 3.8.
To finalize the proof of Theorem 2.8 it remains to make a simple terminological observation.
_Remark 3.10_.: Recall that any collection of vectors (weights) \(\alpha\) gives rise to the independence complex \(K(\alpha)\). Properties of simplicial complexes translate to weight systems as follows.
1. There is an operation of direct sum of matroids. If \(\alpha\subset\mathbb{Q}^{k}\) and \(\beta\subset\mathbb{Q}^{l}\), then the direct sum is defined as \(\alpha\sqcup\beta\subset\mathbb{Q}^{k}\times\mathbb{Q}^{l}\cong\mathbb{Q}^{k+l}\), where \(\alpha\) sits in the first summand and \(\beta\) sits in the second. Then \(K(\alpha\sqcup\beta)=K(\alpha)*K(\beta)\). Vice versa, if \(K(\alpha)\) splits as the join of two independence complexes, then the weights of \(\alpha\) split into two groups lying in transversal vector subspaces, corresponding to the join factors. Recalling that the ambient vector spaces in our considerations are \(N_{\mathbb{Q}}=\operatorname{Hom}(T,T^{1})\otimes\mathbb{Q}\), we see that the join operation on simplicial complexes corresponds to the direct product of torus representations.
2. A simplex \(\Delta_{A}\) is an independence complex of a linearly independent set in \(N_{\mathbb{Q}}\). This situation corresponds to representations of complexity zero, see Example 2.3.
3. The boundary of a simplex \(\partial\Delta_{A}\) is the independence complex of a weight system \(\alpha_{1},\dots,\alpha_{|A|}\) in which any \(|A|-1\) vectors are independent, but the whole system is not. These are the weights of complexity one representations in general position by Definition 2.4.
4. Ghost vertices correspond to loops of a matroid. They correspond to zero weights, in other words, to the trivial component of the action. In accordance with Remark 3.9, a trivial torus action on \(\mathbb{C}\) (or \(\mathbb{R}\)) can be considered as a degenerate case of a complexity one torus action in general position.
Theorem 2.8 now follows from Propositions 3.5 and 3.8 and Remark 3.10.
## 4. Torus actions
**Construction 4.1**.: Consider a smooth action of a torus \(T\) on a connected smooth manifold \(X\). If \(H\subset T\) is a connected subgroup, any connected component \(Y\) of the fixed point submanifold \(X^{H}\) is called _an invariant submanifold_ of the action. Since \(T\) is commutative, invariant submanifolds are stable under \(T\)-action. The dimension of the generic toric orbit on \(Y\) is called _the rank_ of an invariant submanifold \(Y\). If \(Y\cap X^{T}\neq\varnothing\) (i.e. \(Y\) contains a \(T\)-fixed point), then \(Y\) is called _a face submanifold_ of the torus action.
The collection of all face submanifolds in \(X\) is a poset (graded by the ranks) which we denote by \(S(X)\). The poset \(S(X)\) has the greatest element, the manifold \(X\) itself. All minimal elements have rank \(0\), these are the connected components of the fixed point set \(X^{T}\).
The orbit space \(Y/T\) of a face submanifold \(Y\) is called _a face_ of the action. Faces are subspaces of the orbit space \(X/T\). Obviously, they are partially ordered by inclusion, and the poset of faces is naturally identified with \(S(X)\). The poset \(S(X)\) of faces carries a lot of useful information about the torus action as evidenced by the next examples.
**Example 4.2**.: If \(X\) is a smooth complete toric variety, \(S(X)\) is isomorphic to the poset of cones of its fan ordered by reversed inclusion. In particular, the Betti numbers of \(X\) are determined by the combinatorics of \(S(X)\), since they coincide with the \(h\)-numbers of the corresponding simplicial sphere. A similar statement holds for topological generalizations of toric varieties: quasitoric manifolds [13] and equivariantly formal torus manifolds [23].
**Example 4.3**.: In [2, 3] the combinatorics and topology of the poset \(S(X)\) were described for torus actions of complexity one in general position with isolated fixed points. In particular, it was proved in [3] that the Betti numbers of equivariantly formal manifolds with the listed properties are determined by the poset \(S(X)\).
**Remark 4.4**.: If a torus action on \(X\) is equivariantly formal and has isolated fixed points, we do not know if the poset \(S(X)\) determines the Betti numbers of \(X\) in general.
**Remark 4.5**.: The study of general properties of the face posets of torus actions with isolated fixed points was initiated by the first author in [4, 5]. Nontrivial examples of such posets related to regular semisimple Hessenberg varieties appeared in [6].
The assumption that a face submanifold should intersect the fixed point set allows us to localize the study of orbit spaces to the vicinity of fixed points, where the action can be linearized and reduced to the study of torus representations. Under appropriate assumptions about fixed points, we can prove a smooth version of Theorem 2.8. For convenience we introduce the following notion.
**Definition 4.6**.: A \(T\)-action on a smooth manifold \(X\) is called _a Leontief (totally Leontief) action_, if, for any fixed point \(x\in X^{T}\), the tangent representation \(\tau_{x}T\) is a Leontief (resp. totally Leontief) representation.
The action is called _non-totally Leontief_ if it is Leontief but not totally Leontief. Equivalently, all fixed points have Leontief tangent representations but at least one of these tangent representations is not totally Leontief.
**Proposition 4.7**.: _Let a torus \(T\) act smoothly on a connected closed smooth manifold \(X\). Assume that each invariant submanifold of \(X\) is a face submanifold, in other words, each invariant submanifold contains a fixed point. Then the following statements hold._
1. _The action is totally Leontief if and only if_ \(X/T\) _is a closed topological manifold._
2. _The action is non-totally Leontief if and only if_ \(X/T\) _is a topological manifold with boundary._
3. _The action is non-Leontief if and only if_ \(X/T\) _is not a topological manifold._
Proof.: The proof repeats [2, Thm.2.10], so we only sketch the general idea. In the vicinity of a fixed point \(x\), the orbit space is homeomorphic to \(\tau_{x}X/T\), so the statement follows from Theorem 2.8. If \(x^{\prime}\) is any other point, then \(x^{\prime}\) lies in a principal orbit of some invariant submanifold \(Y\). Since \(Y\) contains some fixed point \(x\), we can continuously move \(x^{\prime}\) until we get into the vicinity of \(x\). Since \([x]\) has a neighborhood in \(X/T\) homeomorphic to an open disc (or a half-space), the same holds for the orbit class \([x^{\prime}]\).
**Remark 4.8**.: The assumption that each invariant submanifold is a face submanifold may seem complicated and hard to check in practice. However, most actions automatically satisfy this property. All actions with \(H^{\rm odd}(X)=0\) have this property as follows from [23, Lm.2.2]. In particular, equivariantly formal torus actions with isolated fixed points have the property. Algebraic torus actions on smooth projective varieties satisfy this assumption according to Bialynicki-Birula theory, see details in [2].
## 5. Faces of Leontief representations
If a torus representation \(T\to{\rm GL}(V)\) is given, all invariant submanifolds of \(V\), in the sense of Construction 4.1, are \(T\)-invariant vector subspaces of \(V\). All of them are face submanifolds, since they necessarily contain the fixed point \(0\in V\). It is not very difficult to describe the combinatorial condition for a \(T\)-invariant vector subspace of \(V\) to be a face submanifold. Let us recall the notion of flats of a linear matroid.
**Construction 5.1**.: Let \(\alpha=\{\alpha_{1},\ldots,\alpha_{m}\}\) be a linear matroid, that is a multiset of vectors in some vector space \(W\). A subset \(\{i_{1},\ldots,i_{s}\}\subseteq[m]\), or the corresponding submultiset \(A=\{\alpha_{i_{1}},\ldots,\alpha_{i_{s}}\}\), is called _a flat_ of the linear matroid if \(A\) is an intersection of \(\alpha\) with some vector subspace \(\Pi\subset W\). The dimension of the linear span of \(A\) is called the rank of a flat \(A\). The flats of the matroid \(\alpha\) are partially ordered by inclusion, they form a graded poset, which is called _the geometric lattice_ of the matroid \(\alpha\) and is denoted \({\rm Flats}(\alpha)\).
In [4] we observed the following
**Proposition 5.2**.: _Let \(V=\bigoplus_{i=1}^{r}V(\alpha_{i})\oplus\mathbb{R}^{m-2r}\) be a representation of the torus with the weights \(\alpha\). Then all face submanifolds of \(V\) have the form_
\[\bigoplus_{\alpha_{i}\in A}V(\alpha_{i})\oplus\mathbb{R}^{m-2r}\]
_where \(A\) is a flat of the rational matroid of weights \(\alpha\subset N_{\mathbb{Q}}\). Therefore, in particular, the poset \(S(V)\) is isomorphic to the geometric lattice \(\operatorname{Flats}(\alpha)\)._
This statement was proved in the work [4] under the assumption that the trivial component \(\mathbb{R}^{m-2r}\) vanishes, however, the proof follows the same lines in the general case.
**Example 5.3**.: If the representation of \(T^{n}\) on \(V\cong\mathbb{C}^{n}\) is a representation of complexity zero, then \(\alpha=\{\alpha_{1},\ldots,\alpha_{n}\}\) is a basis of \(N_{\mathbb{Q}}\cong\mathbb{Q}^{n}\). In this case every subset of \(\alpha\) is a flat, so the poset \(S(V)\cong\operatorname{Flats}(\alpha)\) is the boolean lattice \(\mathbf{B}_{n}\).
**Example 5.4**.: If the representation of \(T^{n-1}\) on \(V=\mathbb{C}^{n}\) is a representation of complexity one in general position, then we have a collection of \(n\) weights \(\alpha=\{\alpha_{1},\ldots,\alpha_{n}\}\) in \(N_{\mathbb{Q}}\cong\mathbb{Q}^{n-1}\). Every subset \(A\subseteq\alpha\) is a flat unless \(|A|=n-1\). Let us denote the resulting poset by \(\mathbf{Sp}_{n-1}\):
\[\mathbf{Sp}_{n-1}=\{A\subseteq[n]\mid|A|\neq n-1\}.\]
This poset is isomorphic to the boolean lattice \(\mathbf{B}_{n}\) with all coatoms removed.
Recall from Definition 2.7 that the product representation of \(T^{d}\times\prod_{i=1}^{s}T^{n_{i}-1}\) on \(V=\mathbb{R}^{l}\times\mathbb{C}^{d}\times\prod_{i=1}^{s}\mathbb{C}^{n_{i}}\) is called a Leontief representation. We call it a Leontief representation of type \((d,\underline{n},l)=(d,\{n_{1},\ldots,n_{s}\},l)\). Since the product of matroids induces the product of the corresponding geometric lattices, we get the following consequence of Examples 5.3 and 5.4.
**Proposition 5.5**.: _For a Leontief representation \(V\) of type \((d,\underline{n},l)\), the face poset \(S(V)\) is isomorphic to_
\[\mathbf{B}_{d}\times\prod_{i=1}^{s}\mathbf{Sp}_{n_{i}-1}.\]
_Let \(D,N_{1},\ldots,N_{s}\) be disjoint sets of cardinalities \(d,n_{1},\ldots,n_{s}\) respectively. Then each face submanifold of \(V\) is encoded by a string \((A_{0},A_{1},\ldots,A_{s})\), where_
\[A_{0}\subseteq D,\text{ and for all }i\in[s]=\{1,\ldots,s\}\text{ we have }A_{i}\subseteq N_{i},|A_{i}|\neq n_{i}-1.\]
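To make Proposition 5.5 concrete, the following Python sketch (our own; the function names are hypothetical) enumerates the strings \((A_{0},A_{1},\ldots,A_{s})\) for a small type and confirms the resulting count \(|S(V)|=2^{d}\prod_{i=1}^{s}(2^{n_{i}}-n_{i})\), which holds since \(\mathbf{B}_{d}\) has \(2^{d}\) elements and \(\mathbf{Sp}_{n-1}\) has \(2^{n}-n\) elements.

```python
from itertools import chain, combinations

def subsets(S):
    """All subsets of an iterable, as tuples."""
    S = list(S)
    return chain.from_iterable(combinations(S, k) for k in range(len(S) + 1))

def face_strings(d, ns):
    """Strings (A_0, A_1, ..., A_s) of Proposition 5.5 for a Leontief
    representation of type (d, {n_1, ..., n_s}, l)."""
    pools = [list(subsets(range(d)))]
    for n in ns:
        pools.append([A for A in subsets(range(n)) if len(A) != n - 1])
    strings = [()]
    for pool in pools:
        strings = [s + (A,) for s in strings for A in pool]
    return strings

# Type (d, n, l) = (1, {3}, 0): |S(V)| = 2^1 * (2^3 - 3) = 10.
print(len(face_strings(1, [3])))   # 10
```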
In toric topology, the structure of the induced torus action on face submanifolds sometimes plays an important role. If an effective action of \(T\) on \(X\) has rank \(k\), and \(Y\subset X\) is a face submanifold of rank \(l\), then the induced action of \(T\) on \(Y\) has a noneffective kernel of dimension \(k-l\). We may quotient out this noneffective kernel.
It should be noted that the class of Leontief representations is closed under taking faces.
**Lemma 5.6**.: _Consider a Leontief representation \(V\) of type \((d,\{n_{1},\ldots,n_{s}\},l)\), and let \(U\) be a face submanifold of \(V\) corresponding to the string \((A_{0},A_{1},\ldots,A_{s})\) as in Proposition 5.5. Let \(\mathcal{M}=\{i\in[s]\mid|A_{i}|=n_{i}\}\). Then \(U\) is a Leontief representation of type
\((d^{\prime},\underline{n^{\prime}},l)\) where_
\[\underline{n^{\prime}}=\{n_{i}\mid i\in\mathcal{M}\},\text{ and }d^{\prime}=|A_{0}|+\sum_{i\in[s] \setminus\mathcal{M}}|A_{i}|.\]
In other words, the complexity one component \(A_{i}\), \(i=1,\ldots,s\), of the string either contributes to a complexity one component (if \(|A_{i}|=n_{i}\)), or contributes to the complexity zero component (if \(|A_{i}|\leq n_{i}-2\)). The lemma is proved by a straightforward examination of flats in the weight matroid of a Leontief representation.
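The type computation of Lemma 5.6 is easy to put in code; here is a minimal sketch (ours, with a hypothetical function name and tuple encoding of strings):

```python
def induced_type(d, ns, l, string):
    """Compute the type (d', n', l) of the face submanifold encoded by the
    string (A_0, A_1, ..., A_s), following Lemma 5.6."""
    A0, As = string[0], string[1:]
    n_new = [n for n, A in zip(ns, As) if len(A) == n]              # i in M
    d_new = len(A0) + sum(len(A) for n, A in zip(ns, As) if len(A) != n)
    return d_new, n_new, l

# For type (1, {3}, 0): the string (A_0, A_1) = ({0}, {0}) has |A_1| = 1
# <= n_1 - 2, so A_1 contributes to the complexity zero part:
print(induced_type(1, [3], 0, ((0,), (0,))))   # (2, [], 0)
```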
Lemma 5.6 immediately implies
**Corollary 5.7**.: _The induced action on a face submanifold of a Leontief action is Leontief._
Recall that a face of an action is the orbit space of a face submanifold. Corollary 5.7, Theorem 2.8, and Proposition 4.7 imply the following result.
**Proposition 5.8**.: _Let a torus \(T\) act smoothly on a closed smooth manifold \(X\). Assume that each invariant submanifold of \(X\) is a face submanifold. If the orbit space \(X/T\) is a topological manifold (either closed or with boundary), then each face of the action is also a topological manifold (either closed or with boundary)._
**Example 5.9**.: For actions of complexity zero (which are particular cases of Leontief actions), the orbit space is a manifold with boundary. All its faces are also manifolds with boundary except for the minimal elements of \(S(X)\). These minimal elements are the connected components of \(X^{T}\): these are closed manifolds1. Note that an isolated point is considered a closed manifold, not a manifold with boundary.
Footnote 1: Abusing notation, we identify \(X^{T}\) with \(X^{T}/T\).
**Example 5.10**.: For actions of complexity one in general position, all proper face submanifolds (all except \(X\) itself) have complexity \(0\). The local structure of faces in the vicinity of a fixed point was described in [2] and axiomatized in the notion of _a sponge_.
Finally, we make a simple observation which relates Leontief actions to the work of Cherepanov [12] on complexity one actions in non-general position.
**Lemma 5.11**.: _Every representation of complexity one is a Leontief representation._
Proof.: A representation of complexity one is characterized by \(n\) weights \(\alpha_{1},\ldots,\alpha_{n}\) in the \((n-1)\)-dimensional vector space \(N_{\mathbb{Q}}\cong\mathbb{Q}^{n-1}\). Hence there is a unique (up to multiplier) linear relation \(c_{1}\alpha_{1}+\cdots+c_{n}\alpha_{n}=0\). The weights' subset \(\{\alpha_{i}\mid c_{i}\neq 0\}\) corresponds to a complexity one action in general position, while the remaining weights correspond to an action of complexity \(0\).
**Corollary 5.12**.: _For an action of complexity one in non-general position, the orbit space is a topological manifold with boundary. All its faces are topological manifolds, either closed or with boundary._
## Appendix A Leontief substitution systems
In this section we explain the term Leontief representation by surveying Leontief substitution systems and drawing some analogies between the combinatorial theory that appeared in mathematical economics and the representations described in Section 2. The section essentially follows the results of [8]; however, we provide some details which may be of use for researchers in toric topology.
Let \(A\) be a real matrix with \(g\) rows and \(f\) columns, and \(b\in\mathbb{R}^{g}\) be a column vector. Consider the convex polyhedron \(P\) determined by the system
(A.1) \[Ax=b,\quad x\geq 0.\]
**Definition A.1**.: System (A.1) (and the corresponding polyhedron \(P\)) is called _a Leontief substitution system_ if \(b\geq 0\), each column of \(A\) contains at most one positive entry, and \(P\) is nonempty. If, moreover, the polyhedron \(P\) is bounded, then it is called _a totally Leontief substitution system_.
**Remark A.2**.: Leontief's work strongly influenced the field of mathematical economics. In particular, the general task of linear programming was, to a large extent, motivated by Leontief models. Originally [22] Leontief systems were introduced to model the following setting.
Assume that we have \(g\) commodities ("goods") and \(f\) production sites ("factories"). In a production cycle each factory consumes some commodities and either produces a new commodity or does not produce anything at all. The production cycle of the \(j\)-th factory is therefore characterized by some column vector \((a_{1,j},a_{2,j},\ldots,a_{g,j})^{t}\), where \(a_{i,j}\) is the output of the \(i\)-th resource during a cycle. Since all resources, possibly except one, are consumed by the factory rather than produced, all entries \(a_{i,j}\), possibly except one, are nonpositive. Then the system (A.1) with the \(g\times f\)-matrix \(A=(a_{i,j})\) is a Leontief system. It solves the problem of finding the necessary amount of production cycles \(x=(x_{1},\ldots,x_{f})^{t}\geq 0\) for each factory in order to get the prescribed vector of goods \(b=(b_{1},\ldots,b_{g})^{t}\geq 0\). The term "substitution" in the definition of Leontief systems refers to the fact that one resource can be produced by several factories: the production of this resource can be substituted (in the economic sense rather than the mathematical one). This means that a positive entry can appear in the same position in several columns of \(A\).
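As a minimal computational illustration (our own sketch, not from [8] or [22]; it uses scipy's `linprog` only to test nonemptiness of \(P\)), the conditions of Definition A.1 can be checked directly:

```python
import numpy as np
from scipy.optimize import linprog

def is_leontief(A, b):
    """Check Definition A.1: b >= 0, each column of A has at most one
    positive entry, and P = {x >= 0 : Ax = b} is nonempty."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    if (b < 0).any() or ((A > 0).sum(axis=0) > 1).any():
        return False
    # Minimize the zero functional; linprog uses x >= 0 by default.
    res = linprog(c=np.zeros(A.shape[1]), A_eq=A, b_eq=b)
    return res.status == 0

# Two factories, each consuming half a unit of the other's good:
A = [[1.0, -0.5],
     [-0.5, 1.0]]
print(is_leontief(A, [1.0, 1.0]))   # True, with solution x = (2, 2)
```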
The polyhedron \(P\), which is the set of all solutions to (A.1), may be further used in linear programming if one needs to minimize a total cost of production given by a linear function. Therefore, the combinatorial structure of the polyhedron \(P\) of a Leontief system is of particular importance: for example, it allows one to estimate the complexity of the simplex method for the optimization task.
System (A.1) is called _nondegenerate_ if the polyhedron \(P\) of its solutions is simple.
**Example A.3**.: The system \(Ax=b\) where \(b=(1,\ldots,1)^{t}\) and \(A\) is given by
[The displayed matrix of Example A.3, the statement of Proposition A.4, and the surrounding discussion were garbled beyond recovery in extraction; only the following lemma, whose first part Proposition A.4 reformulates, survives.]

**Lemma A.5** ([8]).:
1. _For a nondegenerate totally Leontief system, the simplicial complex_ \(K_{P}\) _is isomorphic to a join_ \(\Delta_{\mathcal{A}}=\partial\Delta_{A_{1}}*\cdots*\partial\Delta_{A_{s}}\) _of boundaries of simplices. In this case_ \(K_{P}\) _is a matroid complex._
2. _For a nondegenerate non-totally Leontief system, the simplicial complex_ \(K_{P}\) _is isomorphic to a subcomplex of_ \(\Delta_{\mathcal{A}}\) _which is a PL-ball of the same dimension as_ \(\Delta_{\mathcal{A}}\)_._
Proposition A.4 is a reformulation of the first part of this lemma.
**Remark A.6**.: Leontief (simplicial) complexes were introduced in [8] as an abstraction of the nerve-complexes of nondegenerate Leontief substitution systems. We do not give the definition here; however, we note that among all Leontief simplicial complexes, only \(\Delta_{\mathcal{A}}=\partial\Delta_{A_{1}}*\cdots*\partial\Delta_{A_{s}}\) and \(\Delta_{\mathcal{A}}*\Delta^{d-1}\) are the independence complexes of a matroid, see [8, Thm.3.4]. Therefore, if a torus representation \(T\to\operatorname{GL}(V)\) has weights \(\alpha\), then the following statements are equivalent:
1. \(K(\alpha)\) is a Leontief complex;
2. \(V\) is a Leontief representation.
This explains the name proposed for such representations in the current paper. Notice that both statements above are equivalent to \(V/T\) being a manifold (with or without boundary) as stated in Theorem 2.8.
## Appendix B Dirac monopoles and torus actions
Proposition 2.6 is well-known and extremely important for \(n=2\). According to Lemma 2.5, we may restrict ourselves to a circle action on \(\mathbb{C}^{2}\cong\mathbb{R}^{4}\) given by
\[T^{1}=\{(t_{1},t_{2})\in T^{2}\mid t_{1}^{c_{1}}t_{2}^{c_{2}}=1\}\text{ acts by }(t_{1},t_{2})(z_{1},z_{2})=(t_{1}z_{1},t_{2}z_{2}).\]
with \(c_{1},c_{2}\) both nonzero and coprime. This \(T^{1}\)-representation can be rewritten in a more convenient and familiar form
(B.1) \[t(z_{1},z_{2})=(t^{k}z_{1},t^{l}z_{2})\]
where \(k,l\) are nonzero and coprime (it is easily seen that \(k=c_{2}\), \(l=c_{1}\)). Thus we get a particular example of the circle representations described in Lemma 3.7. Here we fix the orientation of \(\mathbb{R}^{4}\) (and of the irreducible summands) compatible with the chosen complex structure.
Restricting (B.1) to the unit sphere \(S^{3}\subset\mathbb{C}^{2}\) and taking the quotient by \(T^{1}\), we get the weighted projective line \(\mathbb{C}P^{1}(k,l)\). The latter is a (real) 2-dimensional orbifold with two isolated singularities having the isotropy groups \(\mathbb{Z}_{k}\) and \(\mathbb{Z}_{l}\). The weighted projective line \(\mathbb{C}P^{1}(k,l)\) is also the quotient of \(\mathbb{C}^{2}\backslash\{0\}\) by the action of the algebraic torus \(\mathbb{C}^{\times}\) given by the same formula (B.1), so it is also an algebraic variety. The underlying topological space of \(\mathbb{C}P^{1}(k,l)\) is homeomorphic to \(S^{2}\). It is also isomorphic to \(\mathbb{C}P^{1}\) as a variety. We refer to [7] for a good survey of the topology of weighted projective spaces. The quotient map takes the form
(B.2) \[p_{k,l}\colon S^{3}\to S^{3}/T^{1}=\mathbb{C}P^{1}(k,l)\cong S^{2}.\]
Of particular importance are the cases \((k,l)=(1,1)\) (the Hopf bundle), and \((k,l)=(1,-1)\) (inverse Hopf bundle).
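For \((k,l)=(1,1)\), the quotient map admits the classical explicit formula (a standard description of the Hopf bundle, included here for illustration):

```latex
% The Hopf bundle p_{1,1} in explicit coordinates: for (z_1, z_2) in S^3,
% i.e. |z_1|^2 + |z_2|^2 = 1,
\[
p_{1,1}(z_{1},z_{2})
  =\bigl(2\,z_{1}\bar{z}_{2},\;|z_{1}|^{2}-|z_{2}|^{2}\bigr)
  \in\mathbb{C}\times\mathbb{R}\cong\mathbb{R}^{3},
\]
% which takes values in the unit sphere S^2 and is constant precisely on
% the orbits (z_1, z_2) ~ (t z_1, t z_2), |t| = 1.
```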
Remark B.1.: The Hopf bundle \(p_{1,1}\) is the generator of \(\pi_{3}(S^{2})\cong\mathbb{Z}\). In this homotopy group, the following identity holds \([p_{k,l}]=kl[p_{1,1}]\in\pi_{3}(S^{2})\). This can be seen by composing \(p_{1,1}\) with the map \(S^{3}\to S^{3}\), \((z_{1},z_{2})\mapsto(z_{1}^{k},z_{2}^{l})\) having degree \(kl\).
Taking the open cone of the map \(p_{k,l}\), we get the map
\[\operatorname{Cone}p_{k,l}\colon\mathbb{R}^{4}=\operatorname{Cone}S^{3}\to \mathbb{R}^{4}/T^{1}=\operatorname{Cone}\mathbb{C}P^{1}(k,l)\cong\mathbb{R}^{3},\]
which, in particular, justifies Proposition 2.6 for \(n=2\).
The maps \(\operatorname{Cone}p_{1,1},\operatorname{Cone}p_{1,-1}\colon\mathbb{R}^{4}\to\mathbb{R}^{3}\) serve as the local topological models of Dirac magnetic monopoles in Kaluza-Klein theory. We give a brief overview of this theory below, and refer to [1, p.5] and references therein for the importance of Hopf bundles in theoretical physics.
**Construction B.2**.: Consider a smooth \(T^{1}\)-action on an orientable \(4\)-manifold \(X\) such that all its stabilizer subgroups are connected and the fixed points of the action are isolated. Then the orbit space \(Q=X/T\) is an orientable (topological) \(3\)-manifold. The projection map \(p\colon X\to Q\) is called _(a simplified topological) Kaluza-Klein model_. The fixed points of the action are called _magnetic monopoles_.
Let \(x\in X^{T^{1}}\) be a fixed point. The tangent representation \(\tau_{x}X\) is a complexity one \(T^{1}\)-representation in general position with some nonzero weights \(k,l\in\operatorname{Hom}(T^{1},T^{1})\). Since \(\tau_{x}X\) is oriented, the weights are defined up to simultaneous change of sign. If either \(|k|>1\) or \(|l|>1\), the representation (and hence the action on \(X\)) has disconnected stabilizers, see details in [2]. Therefore we either have \((k,l)=(1,1)\) or \((1,-1)\), meaning that each tangent representation is the cone over either the Hopf bundle or the inverse Hopf bundle. We say that the magnetic monopole \(x\) has charge \(+1\) in the first case, and \(-1\) in the second case.
**Remark B.3**.: The physical motivation for this construction goes as follows. The \(3\)-manifold \(Q\) is interpreted as the physical space, while the acting torus \(T^{1}=U(1)\) is the gauge group of electromagnetism. Away from magnetic monopoles, the \(T^{1}\)-action is free, so it gives rise to a principal \(T^{1}\)-bundle over the physical space, resulting in a gauge theory. Historically, the Kaluza-Klein model was a precursor of the more general Yang-Mills theory.
The quantum theory of the magnetic monopole proposed by Dirac [14], reformulated in topological terms, is based on the observation that there exist nontrivial \(T^{1}\)-bundles over \(\mathbb{R}^{3}\backslash\{0\}\simeq S^{2}\), where \(0\) is the monopole. Principal \(T^{1}\)-bundles over \(S^{2}\) are classified by the homotopy classes
\[[S^{2},BT^{1}]=[S^{2},K(\mathbb{Z},2)]\cong H^{2}(S^{2};\mathbb{Z})\cong \mathbb{Z}\]
therefore the charge gets quantized. If the charge is \(+1\) or \(-1\), one gets the Hopf or the inverse Hopf bundle, respectively. In this case it is possible to compactify both the base \(\mathbb{R}^{3}\backslash\{0\}\) and the total space of the fibration and get a well-defined map \(\mathbb{R}^{4}\to\mathbb{R}^{3}\), which is either \(\operatorname{Cone}p_{1,1}\) or \(\operatorname{Cone}p_{1,-1}\). This observation elegantly embeds Dirac quantum monopoles into the Kaluza-Klein model, at least on the topological level.
Notice, however, that this observation is not suitable for Dirac monopoles with charges \(q\) different from \(\pm 1\). If \(|q|\geq 2\), the total space of the corresponding \(S^{1}\)-fibration on \(S^{2}\) is homeomorphic to the lens space \(L(q;1)\), hence compactifying it at the monopole point produces a singularity of type \(\operatorname{Cone}L(q;1)\) in the total space. In this case, \(X\) is not a manifold anymore.
In the terminology of Construction B.2, the following statement holds.
**Proposition B.4**.: _In a closed Kaluza-Klein model the charges of all monopoles sum to zero._
A more general statement is proved in the seminal works of Fintushel [15, 16], who classified circle actions on 4-manifolds in terms of their orbit data. There is also a resemblance between the global topological properties of \(X\) and \(Q\).
**Proposition B.5**.: _Assume that a Kaluza-Klein model \(X\to X/T^{1}=Q\) is closed and there is at least one monopole. Then_
1. \(\pi_{1}(X)=1\) _if and only if_ \(Q\) _is homeomorphic to_ \(S^{3}\)_;_
2. _the_ \(T^{1}\)_-action on_ \(X\) _is equivariantly formal if and only if_ \(Q\) _is a homology 3-sphere._
Item (1) follows from [15] and the Poincaré conjecture. Item (2) is the homological version of the first item: under the assumption of isolated fixed points, equivariant formality is equivalent to the condition \(H^{\mathrm{odd}}(X;\mathbb{Z})=0\). For oriented 4-folds this condition further simplifies to \(H_{1}(X;\mathbb{Z})=0\). This homological statement was proved in [3] in a more general context of complexity one actions in general position. In higher dimensions we still have a correspondence between equivariant formality of a manifold and the fact that its orbit space is a homology sphere.
**Remark B.6**.: It should be noted that the classical Kaluza-Klein theory considers a \(T^{1}\)-bundle not over a 3-dimensional manifold, but over a 4-dimensional curved space-time, so the whole theory is 5-dimensional. The aim of this theory is to incorporate both the Maxwell equations and the Einstein field equations uniformly as the Euler-Lagrange equations of a certain functional defined over a \(T^{1}\)-bundle on a space-time.
With time added into the model, magnetic monopoles become world lines: they are represented by 1-dimensional curves. In this case, the local model of the monopole is a circle action on \(\mathbb{R}^{5}\) which is the product of a complexity one representation in general position on \(\mathbb{C}^{2}\) and the trivial action on \(\mathbb{R}^{1}\).
Globally these world lines may be treated as the components of the fixed point set \(X^{T^{1}}\) for a \(T^{1}\)-action on a 5-manifold \(X\). Assume that the worldline of a monopole is oriented. Then the trivial component of the tangent representation gets oriented, and the nontrivial transversal component becomes canonically oriented as well; so there is still a difference between the Hopf bundle and its inverse, and we can assign the charge \(\pm 1\) to the worldline depending on which one is the case. Changing the orientation of the worldline switches its charge.
Usually worldlines are assumed timelike, so we can choose their orientation canonically to agree with the causality in the ambient space-time \(Y=X/T^{1}\). However, allowing the curves \(X^{T^{1}}\) to be spacelike at some points, the situation when a pair of oppositely charged monopoles is born or dies becomes naturally incorporated into the model, see Fig. 1.
The previous remark motivates further topological study of complexity one \(T^{1}\)-actions in general position on \(5\)-folds. In particular, it may be instructive to find the correct analogue of Proposition B.5 for the \(5\)-dimensional Kaluza-Klein theory with monopoles.
**Remark B.7**.: The important feature of the Kaluza-Klein model (either \(4\)- or \(5\)-dimensional) is that it is a torus action whose orbit space is a manifold without boundary, interpreted as the observed universe. If one restricts to models where the gauge group is a compact torus, the analogue of the Kaluza-Klein model should be a smooth torus action on a manifold whose orbit space is also a manifold. According to Proposition 4.7, such actions are precisely the totally Leontief actions (under some assumptions on the fixed points, which can be weakened in a natural way). Therefore, totally Leontief torus actions give the broadest class of topological models generalizing the Kaluza-Klein model.
|
2307.10282 | The manifestly gauge-invariant spectrum of the Minimal Supersymmetric
Standard Model | Formal field theory requires, even in the presence of a Brout-Englert-Higgs
effect, to maintain manifest non-perturbative gauge invariance. The
Fr\"ohlich-Morchio-Strocchi mechanism allows nonetheless an augmented
perturbative treatment. We perform such an augmented tree-level analysis for
the minimal supersymmetric standard model. We find that, as for the standard
model, corrections to standard perturbation theory are only sub-leading. | Axel Maas, Philipp Schreiner | 2023-07-18T11:01:32Z | http://arxiv.org/abs/2307.10282v1 | # The manifestly gauge-invariant spectrum of the Minimal Supersymmetric Standard Model
###### Abstract
Formal field theory requires, even in the presence of a Brout-Englert-Higgs effect, to maintain manifest non-perturbative gauge invariance. The Frohlich-Morchio-Strocchi mechanism allows nonetheless an augmented perturbative treatment. We perform such an augmented tree-level analysis for the minimal supersymmetric standard model. We find that, as for the standard model, corrections to standard perturbation theory are only sub-leading.
## I Introduction
Gauge symmetry cannot break spontaneously; this is a consequence of Elitzur's theorem [1]. Hence, the Brout-Englert-Higgs (BEH) effect can really only be considered a particular gauge fixing [2; 3; 4], which therefore cannot affect physical observables. It follows that any physical observable needs to be described in terms of manifestly, non-perturbatively gauge-invariant operators1[3; 4; 8].
Footnote 1: We note in passing that perturbative BRST is insufficient for this purpose, as it is broken [5] by the Gribov-Singer ambiguity [6; 7] in any non-Abelian gauge theory.
The success of phenomenology [9] based on perturbative treatments (PT), which ignore these requirements, can be understood in terms of the Fröhlich-Morchio-Strocchi (FMS) mechanism [3; 4]. Utilizing the BEH effect, the FMS mechanism can be used to construct an augmented perturbation theory (APT) [10; 11; 12], which can be applied to the manifestly gauge-invariant operators. This explicitly shows that in the standard model (SM) the result is dominated by PT, explaining the success of PT. The approach of APT is briefly introduced in section II for a general theory. See [13; 10] for a review of both APT calculations and lattice studies.
This does not imply that there are no differences between PT and APT in the SM, but their size depends strongly on kinematical details, and so far appears to be below experimental sensitivity [11; 12; 13; 14; 15; 16; 17]. However, this is expected to change in the future.
Beyond the SM (BSM), this may no longer be the case. Indeed, a large class of theories has been found to show qualitative differences between PT and APT already at tree-level [13; 18; 19; 20; 10]. Yet some theories, like 2-Higgs doublet models (2HDM) [21], show only small quantitative differences as the SM. So far, no general criterion has been found under which condition the differences are quantitative or qualitative.
We therefore investigate here one of the major candidates for BSM physics, supersymmetry (SUSY) [22; 23; 24]. Specifically, we perform a tree-level APT determination of the spectrum of the minimal supersymmetric standard model (MSSM). We find that the MSSM is dominated by its 2HDM-like subsector, and thus the situation is as in the SM and the 2HDM. In particular, we find that the superpartner sector does not actively affect the FMS mechanism.
We will build up the analysis starting from the supersymmetric electroweak sector in section IV, adding leptons in section V, and finally adding the remainder of the MSSM in section VI. We will pay special attention to the lightest supersymmetric particle (LSP), due to its central role in the low-energy dynamics of the MSSM and dark matter [22; 23]. We will not provide a thorough introduction to the MSSM, due to its complexity, but only give a brief collection of basic formulas in section III. We refer to [22; 25] for such an introduction, whose conventions we also follow. More technical details can also be found in [26].
Finally, we summarize our findings in section VII. There, we will also comment on possible impacts due to non-perturbative aspects of the strong subsector, and on implications for supersymmetry in more general contexts, especially for supersymmetry-breaking sectors. In the bulk of the text, we keep with the spirit of the MSSM [22] that SUSY breaking is parametrized. We also add a brief appendix A, in which we give some general arguments on why SUSY does not affect the FMS mechanism.
## II Augmented Perturbation Theory
APT is a straightforward extension of PT [10; 11; 12; 13; 27; 28; 29]. APT starts, in contrast to PT, by formulating the desired matrix element in terms of manifestly and non-perturbatively gauge-invariant operators. In a non-Abelian gauge theory, local operators of this type are necessarily composite. Such operators can only carry global quantum numbers.
Consider an uncharged scalar in the SM. The simplest operator will be built from the Higgs doublet field \(\phi\), e. g. \(\phi^{\dagger}(x)\phi(x)\). The simplest matrix element will be the connected 2-point correlation function, the propagator.
The next step in APT is, just as in PT, to choose a suitable gauge. Suitable in the case of a theory with BEH effect is a gauge with non-zero Higgs vacuum expectation value (vev). After that, the FMS mechanism is applied. This happens by rewriting the matrix element of the gauge-invariant, composite operator by explicitly splitting the Higgs field in its vev and the fluctuations, i. e. \(\phi=vn+\eta\), where \(v\) is the vev, \(n\) a unit vector fixed by the gauge choice, and \(\eta\) the fluctuation field. The FMS mechanism thus yields the identity
\[\left\langle(\phi^{\dagger}\phi)(x)(\phi^{\dagger}\phi)(y)\right\rangle=v^{2} \left\langle(n^{\dagger}\eta(x))(n^{\dagger}\eta(y))\right\rangle+v\left\langle (n^{\dagger}\eta(x))(\eta^{\dagger}\eta)(y)+x\longleftrightarrow y\right\rangle +\left\langle(\eta^{\dagger}\eta)(x)(\eta^{\dagger}\eta)(y)\right\rangle \tag{1}\]
The first term is the same matrix element as appears in PT, while the other terms are new in APT. They ensure explicit gauge invariance of the left-hand side to all orders, which the PT term alone cannot provide beyond tree-level [11; 12]. Neglecting the other terms and expanding the first term perturbatively yields that the composite propagator is given by PT to all orders. Especially, the pole position, and thus mass and width, coincide to all orders in PT. This can be extended by including the terms of lower order in \(v\), which in PT do not alter the mass or width, and yield only small changes otherwise. The correct treatment of the appearing composite operators requires further adjustments [29], but these do not yield any conceptual changes. In the context of this paper, which works at tree-level, this does not play a role anyway. Furthermore, we will only consider propagators, which have only a single energy scale, the four-momentum \(p\). Hence, on dimensional grounds, the expression (1) can also be considered to be ordered by powers of \(p^{2}/v^{2}\). On these grounds, terms with fewer powers of \(v\) can be considered sub-leading, and for the purpose of the present work an ordering in powers of \(v\) is feasible. This will in general no longer hold true beyond propagators [29].
In the same way, PT is reproduced for all matrix elements in the SM, but not in general theories [10; 13]. There, even the terms with highest powers in \(v\) can differ qualitatively [18; 30; 31], and thus APT and PT disagree qualitatively. This originates from group-theoretical structures and is thus independent of the size of any couplings. In these cases, the difference already arises in the tree-level result for the matrix element with the highest power in \(v\). This is thus the level at which we will work here.
The validity of APT has been confirmed in lattice calculations, and in principle the success of PT in the SM is also evidence for it. See [10; 13] for a review.
## III The MSSM
We will be considering the \(R\)-parity conserving MSSM as discussed in [22; 25]. It is defined by the particle content as listed in Tab. 1, the superpotential
\[\begin{split} W_{\text{MSSM}}&=\widetilde{\bar{u}}\mathbf{y}_{u}\widetilde{Q}\cdot H_{u}-\widetilde{\bar{d}}\mathbf{y}_{d}\widetilde{Q}\cdot H_{d}\\ &-\widetilde{\bar{e}}\mathbf{y}_{e}\widetilde{L}\cdot H_{d}+\mu H_{u}\cdot H_{d},\end{split} \tag{2}\]
and the soft SUSY breaking terms
\[\begin{split}&\mathcal{L}^{\text{soft}}_{\text{MSSM}}=-\frac{1}{2}\left[M_{3}\widetilde{g}\widetilde{g}+M_{2}\widetilde{W}\widetilde{W}+M_{1}\widetilde{B}\widetilde{B}+h.c.\right]\\ &-\left[\widetilde{\bar{u}}\mathbf{a}_{u}\widetilde{Q}\cdot H_{u}-\widetilde{\bar{d}}\mathbf{a}_{d}\widetilde{Q}\cdot H_{d}-\widetilde{\bar{e}}\mathbf{a}_{e}\widetilde{L}\cdot H_{d}+h.c.\right]\\ &-\widetilde{Q}^{\dagger}\mathbf{m}_{Q}^{2}\widetilde{Q}-\widetilde{L}^{\dagger}\mathbf{m}_{L}^{2}\widetilde{L}-\widetilde{\bar{u}}\mathbf{m}_{u}^{2}\widetilde{\bar{u}}^{\dagger}-\widetilde{\bar{d}}\mathbf{m}_{d}^{2}\widetilde{\bar{d}}^{\dagger}-\widetilde{\bar{e}}\mathbf{m}_{e}^{2}\widetilde{\bar{e}}^{\dagger}\\ &-m_{u}^{2}H_{u}^{\dagger}H_{u}-m_{d}^{2}H_{d}^{\dagger}H_{d}-\left[m_{ud}^{2}H_{u}\cdot H_{d}+h.c.\right].\end{split} \tag{3}\]
All fermions are expressed as left-handed Weyl spinors; their naming follows the convention of the SM, and their superpartners are denoted by a tilde above their name. The dot product is the \(SU(2)\) invariant product
\[X\cdot Y\equiv X^{T}(i\sigma^{2})Y=\epsilon^{ab}X_{a}Y_{b}, \tag{4}\]
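For concreteness, written out in components this reads
\[X\cdot Y=X_{1}Y_{2}-X_{2}Y_{1},\]
so that, e.g., with the component assignments of Tab. 1 the \(\mu\)-term of (2) is \(\mu H_{u}\cdot H_{d}=\mu\big{(}H_{u}^{+}H_{d}^{-}-H_{u}^{0}H_{d}^{0}\big{)}\).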
\(\mathbf{y}\) are the SM Yukawa matrices and \(\mathbf{m}\) and \(\mathbf{a}\) are \(3\times 3\) matrices in flavor space as well. This information is sufficient to uniquely derive the MSSM's Lagrangian [25] starting from the general form
\[\begin{split}\mathcal{L}&=-\frac{1}{4}F_{\mu\nu}^{a }F_{a}^{\mu\nu}+i\widetilde{A}^{\dagger a}\bar{\sigma}^{\mu}(D_{\mu}\widetilde {A})_{a}\\ &+(D_{\mu}\phi)_{i}^{\dagger}(D^{\mu}\phi)_{i}+i\widetilde{\phi}_ {i}^{\dagger}\bar{\sigma}^{\mu}D_{\mu}\widetilde{\phi}_{i}\\ &-\sum_{i}\left|\frac{\partial W}{\partial\phi_{i}}\right|^{2}- \frac{g^{2}}{2}(\phi_{i}^{\dagger}T^{a}\phi_{i})(\phi_{j}^{\dagger}T_{a}\phi_{ j})\\ &-\frac{1}{2}\left[\left(\frac{\partial^{2}W}{\partial\phi_{i} \partial\phi_{j}}\right)\widetilde{\phi}_{i}\widetilde{\phi}_{j}+\text{h.c.} \right]\\ &-\sqrt{2}g\left[(\phi_{i}^{\dagger}T^{a}\widetilde{\phi}_{i}) \widetilde{A}_{a}+\widetilde{A}^{\dagger a}(\widetilde{\phi}_{i}^{\dagger}T_{a} \phi_{i})\right]\\ &+\mathcal{L}_{\text{soft}}.\end{split} \tag{5}\]
As \(SU(2)\) generators we use the scaled Pauli matrices \(T^{a}=\sigma^{a}/2\).
## IV (Electro)weak-Higgs(ino) sector
### Setup
We now consider the MSSM subsector consisting only of the two Higgs doublets \(H_{u,d}\) and their superpartners \(\widetilde{H}_{u,d}\), and the \(SU(2)_{L}\times U(1)_{Y}\) gauge supermultiplets of the \(W\) bosons and the \(B\) boson together with the winos \(\widetilde{W}\) and
bino \(\widetilde{B}\). Thus the superpotential (2) contains only the last term. This yields the scalar potential part
\[V(H_{d},H_{u})=(|\mu|^{2}+m_{u}^{2})H_{u}^{\dagger}H_{u}+(|\mu|^{2} +m_{d}^{2})H_{d}^{\dagger}H_{d}\] \[\quad+\Big{(}m_{ud}^{2}H_{u}\cdot H_{d}+h.c.\Big{)} \tag{6}\] \[\quad+\frac{g^{2}+g^{\prime 2}}{8}(H_{d}^{\dagger}H_{d}-H_{u}^{ \dagger}H_{u})^{2}+\frac{g^{2}}{2}(H_{d}^{\dagger}H_{u})(H_{u}^{\dagger}H_{d}).\]
The parameter \(m_{ud}^{2}\) can always be chosen to be real by redefining either of the Higgs fields to absorb its phase [32]. The parameters \(\mu\) and \(M_{2}\) are chosen real to avoid additional CP violations [22; 25].
The scalar potential (6) can be reexpressed in terms of the bidoublet [33]
\[H\equiv(H_{u},-H_{d})=\begin{pmatrix}H_{u}^{+}&-H_{d}^{0}\\ H_{u}^{0}&-H_{d}^{-}\end{pmatrix}, \tag{7}\]
containing both Higgs doublets. Similarly, the \(\widetilde{H}_{u}\) and \(\widetilde{H}_{d}\) are combined into a bidoublet \(\widetilde{H}\). The Higgs potential then reads
\[V(H)= \text{tr}\ H^{\dagger}HM-2m_{ud}^{2}\ \text{Re}\det H^{\dagger}- \frac{g^{2}}{2}\det H^{\dagger}H\] \[\quad+\frac{g^{2}}{8}(\text{tr}\ H^{\dagger}H)^{2}+\frac{g^{\prime 2 }}{8}(\text{tr}\ H^{\dagger}H\sigma^{3})^{2}, \tag{8}\]
with the mass matrix \(M=\text{diag}(m_{u}^{2}+\mu^{2},m_{d}^{2}+\mu^{2})\). From this form we can read off that, so long as \(m_{u}^{2}=m_{d}^{2}\equiv m^{2}\) and \(g^{\prime}=0\), the potential \(V(H)\) is not only invariant under \(SU(2)_{L}\) gauge transformations but also exhibits a global \(SU(2)_{\text{\scriptsize{G}}}\) symmetry. This symmetry will play a central role later on, similarly to a corresponding global symmetry in the SM [10].
In total, \(V(H)\) is then invariant under
\[H\to H^{\prime}=L(x)HR^{\dagger}\quad L(x)\in SU(2)_{L},\ R\in SU(2)_{\text{ \scriptsize{G}}}, \tag{9}\]
where the application from the right acts like a Higgs flavor symmetry. For \(g^{\prime}\neq 0\), \(SU(2)_{\text{\scriptsize{G}}}\) breaks down to the \(U(1)\) group of those transformations which leave \(\sigma^{3}\) in the last term of (8) invariant. This group is not further broken by allowing \(m_{u}^{2}\neq m_{d}^{2}\) because the diagonal matrix \(M\) can always be written in terms of \(\sigma^{3}\) and unity.
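As a quick consistency check, spelled out here for convenience: under the transformation (9), cyclicity of the trace and \(\det L=\det R=1\) give
\[{\rm tr}\ H^{\prime\dagger}H^{\prime}={\rm tr}\ RH^{\dagger}L^{\dagger}LHR^{\dagger}={\rm tr}\ H^{\dagger}H,\qquad\det H^{\prime}=\det L\,\det H\,\det R^{\dagger}=\det H,\]
so all terms in (8) built only from \({\rm tr}\ H^{\dagger}H\) and determinants are invariant under the full \(SU(2)_{L}\times SU(2)_{\text{\scriptsize{G}}}\). By contrast, \({\rm tr}\ H^{\prime\dagger}H^{\prime}\sigma^{3}={\rm tr}\ H^{\dagger}HR^{\dagger}\sigma^{3}R\) is invariant only for those \(R\) which commute with \(\sigma^{3}\), which is precisely the residual \(U(1)\) just described.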
If we also rewrite the kinetic Higgs term using the bidoublet,
\[\begin{split}\mathcal{L}_{\text{kin}}^{\text{Higgs}}=\frac{1}{2 }\text{tr}\ \left(\partial^{\mu}H-igW_{a}^{\mu}\frac{\sigma^{a}}{2}H-ig^{\prime}B^{\mu}H \frac{\sigma^{3}}{2}\right)^{\dagger}\\ \left(\partial_{\mu}H-igW_{\mu}^{a}\frac{\sigma^{a}}{2}H-ig^{ \prime}B_{\mu}H\frac{\sigma^{3}}{2}\right),\end{split} \tag{10}\]
we further see that this remaining \(U(1)\) group actually corresponds to hypercharge transformations \(H_{u,d}\rightarrow\exp(\pm i\alpha/2)H_{u,d}\), which in the bidoublet form reads
\[H\to H^{\prime}=H\exp\!\left(i\alpha\frac{\sigma^{3}}{2}\right). \tag{11}\]
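This identification can be verified in one step: \(\exp(i\alpha\sigma^{3}/2)={\rm diag}(e^{i\alpha/2},e^{-i\alpha/2})\), so acting on the columns of (7),
\[H^{\prime}=(H_{u},-H_{d})\,{\rm diag}\big{(}e^{i\alpha/2},e^{-i\alpha/2}\big{)}=(e^{i\alpha/2}H_{u},-e^{-i\alpha/2}H_{d}),\]
which is precisely \(H_{u,d}\to\exp(\pm i\alpha/2)H_{u,d}\).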
Hypercharge is hence a subgroup of the global symmetry group, as in the SM [34; 10]. Until section VI, \(SU(2)_{\text{\scriptsize{G}}}\) will be kept intact to simplify calculations.
However, even for \(g^{\prime}=0\), in general two different vevs emerge from the Higgs potential, breaking \(SU(2)_{\text{\scriptsize{G}}}\). To avoid this, we choose \(m_{ud}^{2}=\mu^{2}+m^{2}\). This choice creates flat directions and thus does not yield the correct phenomenology. However, as this restriction will be lifted later, it serves for the moment as a technical auxiliary.
The Lagrangian of the custodial symmetry preserving
\begin{table}
\begin{tabular}{c c c c} \hline \hline Names & Boson & Fermion & \([SU(3)_{c},SU(2)_{L},U(1)_{Y}]\) \\ \hline l.h. (s)quarks & \(\widetilde{Q}=(\widetilde{u},\widetilde{d})\) & \(Q=(u,d)\) & [**3**,**2**, \(\frac{1}{3}\)] \\ r.h. up (s)quark & \(\widetilde{\bar{u}}\) & \(\bar{u}\) & [\(\bar{\mathbf{3}}\),**1**, -\(\frac{4}{3}\)] \\ r.h. down (s)quark & \(\widetilde{\bar{d}}\) & \(\bar{d}\) & [\(\bar{\mathbf{3}}\),**1**, \(\frac{2}{3}\)] \\ \hline l.h. (s)leptons & \(\widetilde{L}=(\widetilde{\nu},\widetilde{e})\) & \(L=(\nu,e)\) & [**1**,**2**, -1] \\ r.h. (s)electron & \(\widetilde{\bar{e}}\) & \(\bar{e}\) & [**1**,**1**, \(2\)] \\ \hline Higgs(inos) & \(H_{u}=(H_{u}^{+},H_{u}^{0})\) & \(\widetilde{H}_{u}=(\widetilde{H}_{u}^{+},\widetilde{H}_{u}^{0})\) & [**1**,**2**, 1] \\ & \(H_{d}=(H_{d}^{0},H_{d}^{-})\) & \(\widetilde{H}_{d}=(\widetilde{H}_{d}^{0},\widetilde{H}_{d}^{-})\) & [**1**,**2**, -1] \\ \hline gluons, gluinos & \(g\) & \(\widetilde{g}\) & [**8**,**1**, \(0\)] \\ W bosons, winos & \(W^{\pm},W^{0}\) & \(\widetilde{W}^{\pm},\widetilde{W}^{0}\) & [**1**,**3**, \(0\)] \\ B boson, bino & \(B\) & \(\widetilde{B}\) & [**1**,**1**, \(0\)] \\ \hline \hline \end{tabular}
\end{table}
Table 1: Field content of the MSSM. Notice that only the first family of quarks and leptons is listed explicitly. The charge assignments follow the ones used in [22].
MSSM weak-Higgs(ino) sector is then
\[\mathcal{L}_{\rm WH} =-\frac{1}{4}W^{a}_{\mu\nu}W^{\mu\nu}_{a}+i\widetilde{W}^{\dagger a}\bar{\sigma}^{\mu}(D_{\mu}\widetilde{W})_{a}\] \[+{\rm tr}\ (D_{\mu}H)^{\dagger}(D^{\mu}H)+{\rm tr}\ i\widetilde{H}^{\dagger}\bar{\sigma}^{\mu}D_{\mu}\widetilde{H}\] \[-\frac{g}{\sqrt{2}}\left[{\rm tr}\ H^{\dagger}\sigma^{a}\widetilde{H}\widetilde{W}_{a}+\widetilde{W}^{\dagger a}{\rm tr}\ \widetilde{H}^{\dagger}\sigma^{a}H\right] \tag{12}\] \[+\mu\left[\det\widetilde{H}+{\rm h.c.}\right]-\frac{M_{2}}{2}\left[\widetilde{W}^{a}\widetilde{W}^{a}+h.c.\right]-V(H)\] \[V(H) =(\mu^{2}+m^{2})\left[{\rm tr}\ H^{\dagger}H-2\,{\rm Re}\det H^{\dagger}\right]\] \[+\frac{g^{2}}{8}({\rm tr}\ H^{\dagger}H)^{2}-\frac{g^{2}}{2}\det H^{\dagger}H. \tag{13}\]
### Tree-level Spectrum
In a suitable gauge, here 't Hooft gauge
\[C^{a}=\partial_{\mu}W^{\mu}_{a}+g\xi\,{\rm Im}\,{\rm tr}\ V^{\dagger}\sigma^{a}H, \tag{14}\]
the neutral components of the Higgs acquire the same (real) vev \(v\). Employing APT, we utilize the split of \(H\) in vev \(V\) and fluctuation field \(\eta\)
\[H\to V+\eta,\qquad V=-v(i\sigma^{2}). \tag{15}\]
As noted, this yields a flat direction, and thus the value of \(v\) is not fixed. It is therefore treated as a free parameter until section VI. The vev \(V\) is not invariant under the full \(SU(2)_{L}\times SU(2)_{\rm G}\) group but only under the diagonal subgroup \(L_{\rm R}VR^{\dagger}=V\) with \(L_{\rm R}=(-i\sigma^{2})R(i\sigma^{2})\).
All fields will fall into multiplets of this diagonal subgroup, which will be denoted \(SU(2)_{m}\). To make this process as transparent as possible it is useful to introduce the basis \(b_{i}=\sigma^{i}(i\sigma^{2})\), \((i=0,1,2,3)\), which is orthonormal with respect to the scalar product \(\langle x,y\rangle\equiv\frac{1}{2}{\rm tr}\ x^{\dagger}y\). Any bidoublet \(Y\) can then be expressed in terms of this basis and the field bilinears \(y^{i}\) via
\[\begin{split} Y&=\begin{pmatrix}Y^{+}_{2}&-Y^{0}_{1} \\ Y^{0}_{2}&-Y^{-}_{1}\end{pmatrix}=y^{i}b_{i}\\ y&=-\frac{1}{2}\left(\begin{smallmatrix}Y^{0}_{1}+Y^{0}_{2}\\ Y^{-}_{1}+Y^{+}_{2}\\ i\left(-Y^{-}_{1}+Y^{+}_{2}\right)\\ Y^{0}_{1}-Y^{0}_{2}\end{smallmatrix}\right).\end{split} \tag{16}\]
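To spell out the orthonormality used here (a one-line check): since \((i\sigma^{2})^{\dagger}(i\sigma^{2})\) is the unit matrix and the trace is cyclic,
\[\langle b_{i},b_{j}\rangle=\frac{1}{2}{\rm tr}\ (i\sigma^{2})^{\dagger}\sigma^{i}\sigma^{j}(i\sigma^{2})=\frac{1}{2}{\rm tr}\ \sigma^{i}\sigma^{j}=\delta_{ij},\]
with \(\sigma^{0}\) the unit matrix. The coefficients in (16) thus follow by projection, \(y^{i}=\langle b_{i},Y\rangle=\frac{1}{2}{\rm tr}\ b_{i}^{\dagger}Y\).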
The 0-component of such a vector transforms as an \(SU(2)_{m}\) singlet, \(y^{0}\to y^{\prime 0}=y^{0}\), and the remaining three as a triplet, \(y^{a}\to y^{\prime a}=T(L_{R})^{ab}y^{b}\). Here, \(T(L_{R})\) is the adjoint \(SU(2)\) matrix induced by the rotation \(L_{R}\).
In particular, we can write the Higgs fluctuation field \(\eta=\zeta^{i}b_{i}\) and the Higgsino bidoublet \(\widetilde{H}=\widetilde{\zeta}^{i}b_{i}\) in this basis. Inserting this into the Lagrangian (12) yields a mass \(m_{W}^{2}\equiv g^{2}v^{2}\) for the gauge bosons as well as the mass terms
\[\mathcal{L} \supset-\frac{1}{2}\left[2(\mu^{2}+m^{2})+m_{W}^{2}\right](2\,{\rm Re}\,\zeta^{a})^{2}\] \[\quad-\frac{2(\mu^{2}+m^{2})}{2}(2\,{\rm Im}\,\zeta^{0})^{2}-\xi\frac{m_{W}^{2}}{2}(2\,{\rm Im}\,\zeta^{a})^{2} \tag{17}\] \[\quad+\sqrt{2}gv\widetilde{\zeta}^{a}\widetilde{W}_{a}+\mu\left[\widetilde{\zeta}^{0}\widetilde{\zeta}^{0}-\widetilde{\zeta}^{a}\widetilde{\zeta}^{a}\right]-\frac{M_{2}}{2}\widetilde{W}^{a}\widetilde{W}^{a}+h.c.\]
Thus, the spectrum contains a massless scalar field \(h^{0}\equiv 2\,{\rm Re}\,\zeta^{0}\), a pseudoscalar \(A^{0}\equiv 2\,{\rm Im}\,\zeta^{0}\) of mass \(m_{A^{0}}^{2}\equiv 2(\mu^{2}+m^{2})\) and a mass-degenerate scalar triplet \(H^{a}\equiv 2\,{\rm Re}\,\zeta^{a}\) of mass \(m_{H}^{2}=m_{A^{0}}^{2}+m_{W}^{2}\). The remaining fields are the would-be Goldstone bosons \(G^{a}\equiv 2\,{\rm Im}\,\zeta^{a}\) which are also an \(SU(2)_{m}\) triplet and have the gauge-parameter dependent mass \(m_{G}^{2}=\xi m_{W}^{2}\). We can further build linear combinations of the members of each triplet to get eigenstates of the \((T^{3})^{ab}=i\epsilon^{3ab}\) operator. We find that \(H^{\pm}\equiv(H^{2}\pm iH^{1})/\sqrt{2}\) and \(H^{0}\equiv H^{3}\) are eigenstates of definite \(T^{3}=\pm 1,0\) and analogously for \(G^{a}\) and \(W^{a}\).
So far, the results are essentially analogous to the 2HDM [35]. For the superpartners we find a singlet2\(\widetilde{N}^{0}_{3}\equiv\sqrt{2}i\widetilde{\zeta}^{0}\) of mass \(\mu\) and two triplets \(\widetilde{\chi}^{a}_{1,2}\) with different masses. The latter are linear combinations of \(\sqrt{2}\widetilde{\zeta}^{a}\) and \(\sqrt{2}\widetilde{W}^{a}\), i. e. they are mixtures of higgsino and wino degrees of freedom. Again, \(T^{3}\) eigenstates can be formed giving two more neutralinos \(\widetilde{N}^{0}_{1,2}\) and two charginos \(\widetilde{C}^{\pm}_{1,2}\). The results are summarized in Tab. 2.
Footnote 2: This is one of the neutralinos [25]. At \(g^{\prime}=0\), \(B_{\mu}\) is a massless eigenstate and \(\widetilde{B}\) has its soft breaking mass \(M_{1}\). This will change at \(g^{\prime}\neq 0\). Then \(B_{\mu}\) mixes with the \(W^{a}_{\mu}\) to create the photon and the \(Z\) boson, and the bino
\begin{table}
\begin{tabular}{c c c c} \hline \hline literature names [25] & content (bilinear basis) & \(SU(2)_{m}\) & (squared) mass \\ \hline \(W^{0,\pm}_{\mu}\) & \(W^{a}_{\mu}\) & \(\mathbf{3}\) & \(m_{W}^{2}\) \\ \(h^{0}\) & \({\rm Re}\,\zeta^{0}\) & \(\mathbf{1}\) & \(0\) \\ \(A^{0}\) & \({\rm Im}\,\zeta^{0}\) & \(\mathbf{1}\) & \(m_{A^{0}}^{2}\) \\ \(H^{0,\pm}\) & \({\rm Re}\,\zeta^{a}\) & \(\mathbf{3}\) & \(m_{A^{0}}^{2}+m_{W}^{2}\) \\ \(G^{0,\pm}\) & \({\rm Im}\,\zeta^{a}\) & \(\mathbf{3}\) & \(\xi m_{W}^{2}\) \\ \(\widetilde{N}^{0}_{3}\) & \(\widetilde{\zeta}^{0}\) & \(\mathbf{1}\) & \(\mu\) \\ \(\widetilde{N}^{0}_{1,2},\widetilde{C}^{\pm}_{1,2}\) & \(\widetilde{\zeta}^{a}\), \(\widetilde{W}^{a}\) & \(2\times\mathbf{3}\) & two different \\ \(\widetilde{N}^{0}_{4}\) & \(\widetilde{B}\) & \(\mathbf{1}\) & \(M_{1}\) \\ \(B_{\mu}\) & \(B_{\mu}\) & \(\mathbf{1}\) & \(0\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Perturbative tree-level spectrum of the (electro)weak-Higgsino sector with intact symmetry \(SU(2)_{m}\). The first column contains the usual names of the mass eigenstates, the second column denotes their field content in terms of our conveniently chosen basis (where applicable).
mixes with the neutralinos to form a fourth neutralino \(\widetilde{N}^{0}_{4}\).
### Gauge-invariant Operators
Switching to APT allows us to investigate the physical spectrum. At first, we have to construct gauge-invariant operators, which is straightforward using the bidoublet formulation. These are distinguished only by their \(J^{PC}\) assignment, the \(SU(2)_{\rm G}\) quantum number, and \(R\)-parity. In Tab. 3 suitable operators with minimal field content up to spin \(J=1\) are listed.
There are both scalar and fermionic \(SU(2)_{\rm G}\) singlets and triplets. Applying APT yields at highest power of \(v\) the expressions listed in the last column of Tab. 3. To illustrate the process, consider \({\rm tr}\ H^{\dagger}H\). Tree-level APT yields
\[{\rm tr}\ H^{\dagger}H\supset-2v\ {\rm tr}\ \sigma^{i}\,{\rm Re}\,\zeta^{i}=-4v \,{\rm Re}\,\zeta^{0},\]
i.e. it reduces to the elementary scalar singlet. Likewise, the fermionic singlet operator reduces to \(\widetilde{\zeta}^{0}\). The matrix \(c^{Aa}\) maps gauge indices \(a\) to custodial (physical) indices \(A\). Because of its special form
\[c^{Aa}={\rm diag}(1,-1,1) \tag{18}\]
this mapping is one-to-one but, in contrast to the SM [10], not trivial. This shows that the nonphysical \(SU(2)_{\rm m}\) triplets map to \(SU(2)_{\rm G}\) triplets, and their mass degeneracy carries over.
Notice that the mapping to \(SU(2)_{m}\) eigenstates is sufficient as at tree-level the FMS mechanism is transparent to linear combinations. This allows to construct mass eigenstates, in case they differ. E.g. the charged Higgs can be constructed as3
Footnote 3: Notice that the \(\mp\) between the Pauli matrices is due to the special structure of (18) and opposite to our definition of \(SU(2)_{m}\ T^{3}\) eigenstates in Sec. IV.2, like \(H^{\pm}=(H^{2}\pm iH^{1})/\sqrt{2}\) for example.
\[{\rm tr}\ H^{\dagger}H(\sigma^{2}\mp i\sigma^{1})\supset-v(4\,{\rm Re}\,\zeta^ {2}\pm 4i\,{\rm Re}\,\zeta^{1})\sim vH^{\pm}.\]
It is important to realize that there is no gauge-invariant operator mapping to the (unphysical) would-be Goldstone bosons \(G^{0,\pm}\sim{\rm Im}\,\zeta^{a}\). They are 'projected' out of the physical spectrum automatically. This replaces their usual removal from the physical spectrum by a BRST construction.
Finally, the bino is already gauge-invariant, and the \(B\) boson needs to be treated like the photon in QED. In total, all the new Higgs, charginos and neutralinos predicted by PT have a gauge-invariant counterpart. In particular, this holds for the lightest of the neutralinos, the LSP.
## V Leptons
### Setup
The next step is to include one lepton generation, i.e. a left-handed lepton \(L\) with its superpartner \(\widetilde{L}\) and the right-handed (s)electron \(\bar{e}\) (\(\widetilde{\bar{e}}\)). As announced, for simplicity a right-handed neutrino and its superpartner are added, \(\bar{\nu}\) and \(\widetilde{\bar{\nu}}\), respectively. The superpotential is then
\[W=\mu H_{u}\cdot H_{d}-y_{e}\widetilde{\bar{e}}\,\widetilde{L}\cdot H_{d}+y_{\nu}\widetilde{\bar{\nu}}\,\widetilde{L}\cdot H_{u}, \tag{19}\]
and the soft breaking Lagrangian includes
\[\begin{split}\mathcal{L}_{\rm soft}\supset& a_{e}\widetilde{\bar{e}}\widetilde{L}\cdot H_{d}-a_{\nu}\widetilde{\bar{\nu}}\widetilde{L}\cdot H_{u}\\ &-m_{L}^{2}\widetilde{L}^{\dagger}\widetilde{L}-m_{\widetilde{e}}^{2}\widetilde{\bar{e}}^{\dagger}\widetilde{\bar{e}}-m_{\widetilde{\nu}}^{2}\widetilde{\bar{\nu}}^{\dagger}\widetilde{\bar{\nu}},\end{split} \tag{20}\]
in addition to the Higgs and Wino contributions from Sec. IV.
For technical simplicity, we assume
\[\begin{split} y_{e}=y_{\nu}&\equiv y\\ a_{e}=a_{\nu}&\equiv a\\ m_{\widetilde{e}}=m_{\widetilde{\nu}}&\equiv m_{\widetilde{\lambda}},\end{split} \tag{21}\]
such that the \(SU(2)_{\rm G}\) of the weak-Higgs(ino) sector does not get entirely broken yet.
The \(\bar{e}\) and \(\bar{\nu}\) (as well as their superpartners) can then be put into \(SU(2)_{F}\) flavor doublets \(\bar{\lambda}\equiv(\bar{\nu},\bar{e})^{T}\) and \(\widetilde{\bar{\lambda}}\equiv\)
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Operator & Spin & \(SU(2)_{\rm G}\) & \(P_{R}\) & FMS \\ \hline \({\rm tr}\ H^{\dagger}H\) & 0 & **1** & +1 & \(v\,{\rm Re}\,\zeta^{0}\) \\ \({\rm Im}\det H\) & 0 & **1** & +1 & \(v\,{\rm Im}\,\zeta^{0}\) \\ \({\rm tr}\ H^{\dagger}H\sigma^{A}\) & 0 & **3** & +1 & \(vc^{Aa}\,{\rm Re}\,\zeta^{a}\) \\ \({\rm tr}\ H^{\dagger}\widetilde{H}\) & \(\frac{1}{2}\) & **1** & -1 & \(v\widetilde{\zeta}^{0}\) \\ \({\rm tr}\ H^{\dagger}\widetilde{H}\sigma^{A}\) & \(\frac{1}{2}\) & **3** & -1 & \(vc^{Aa}\widetilde{\zeta}^{a}\) \\ \({\rm tr}\ H^{\dagger}\sigma^{a}H\sigma^{A}\widetilde{W}_{a}\) & \(\frac{1}{2}\) & **3** & -1 & \(v^{2}c^{Aa}\widetilde{W}^{a}\) \\ \({\rm tr}\ H^{\dagger}D_{\mu}H\sigma^{A}\) & 1 & **3** & +1 & \(v^{2}c^{Aa}W_{\mu}^{a}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Gauge-invariant bound state operators with minimal field content up to spin 1 for the \(SU(2)_{\rm G}\)-symmetric special case of the MSSM weak-Higgs(ino) sector. The last column contains the corresponding leading order FMS contributions.
\((\widetilde{\bar{\nu}},\widetilde{\bar{e}})^{T}\), which yields for the Lagrangian
\[\begin{split}\mathcal{L}&=\mathcal{L}_{\text{WH}}+(D_{\mu}\widetilde{L})^{\dagger}(D^{\mu}\widetilde{L})+iL^{\dagger}\bar{\sigma}^{\mu}D_{\mu}L\\ &\quad+(\partial_{\mu}\widetilde{\bar{\lambda}})^{\dagger}(\partial^{\mu}\widetilde{\bar{\lambda}})+i\bar{\lambda}^{\dagger}\bar{\sigma}^{\mu}\partial_{\mu}\bar{\lambda}\\ &\quad-\frac{g^{2}}{8}\left[(\widetilde{L}^{\dagger}\sigma^{a}\widetilde{L})^{2}+(\widetilde{L}^{\dagger}\sigma^{a}\widetilde{L})\text{ tr }H^{\dagger}\sigma^{a}H\right]\\ &\quad-\frac{g}{\sqrt{2}}\left[(\widetilde{L}^{\dagger}\sigma^{a}L)\widetilde{W}_{a}+h.c.\right]\\ &\quad-|y|^{2}(\widetilde{\bar{\lambda}}^{\dagger}\widetilde{\bar{\lambda}})(\widetilde{L}^{\dagger}\widetilde{L})-\left[\mu^{*}y\widetilde{\bar{\lambda}}\cdot H^{\dagger}\widetilde{L}+h.c.\right]\\ &\quad-|y|^{2}\left(H\widetilde{\bar{\lambda}}\right)^{\dagger}\left(H\widetilde{\bar{\lambda}}\right)-|y|^{2}(\widetilde{L}\cdot H)(\widetilde{L}\cdot H)^{\dagger}\\ &\quad-\left[y(L\cdot\widetilde{H})\widetilde{\bar{\lambda}}+y(\widetilde{L}\cdot\widetilde{H})\bar{\lambda}+y(L\cdot H)\bar{\lambda}+h.c.\right]\\ &\quad-\left[a(\widetilde{L}\cdot H)\widetilde{\bar{\lambda}}+h.c.\right]-m_{L}^{2}\widetilde{L}^{\dagger}\widetilde{L}-m_{\widetilde{\lambda}}^{2}\widetilde{\bar{\lambda}}^{\dagger}\widetilde{\bar{\lambda}}.\end{split} \tag{22}\]
When the (degenerate) Yukawa coupling \(y\) and/or the soft breaking parameter \(a\) are non-zero, the \(SU(2)_{\text{G}}\) and the flavor symmetry break down to a diagonal subgroup, \(SU(2)_{\text{G}}\times SU(2)_{F}\to SU(2)_{f}\).
### Tree-level Spectrum
After the Higgs acquires its vev, \(H=v(-i\sigma^{2})+\eta\), the relevant parts of the Lagrangian are
\[\mathcal{L}\supset -\left(\widetilde{L}^{T}\ \widetilde{\bar{\lambda}}^{\dagger} \right)\begin{pmatrix}v^{2}y^{2}+m_{L}^{2}&v(a-y\mu)\\ v(a-y\mu)&v^{2}y^{2}+m_{\widetilde{\lambda}}^{2}\end{pmatrix}\begin{pmatrix} \widetilde{L}^{\dagger T}\\ \widetilde{\bar{\lambda}}\end{pmatrix} \tag{23}\] \[-vy\left[(\nu\bar{\nu}+e\bar{e})+h.c.\right].\]
The last bracket contains Dirac mass terms of the electron and neutrino. We therefore combine their left-handed and right-handed components into Dirac spinors \(\psi^{e}\equiv(e,\bar{e}^{\dagger})^{T}\) and \(\psi^{\nu}\equiv(\nu,\bar{\nu}^{\dagger})^{T}\), and, since they have the same mass \(vy\), we collect them into a doublet \(\psi\equiv(\psi^{\nu},\psi^{e})^{T}\). A straightforward calculation yields the mass terms
\[\begin{split}\mathcal{L}&\supset-m_{\phi_{1}}^{2}\phi_{1}^{ \dagger}\phi_{1}-m_{\phi_{2}}^{2}\phi_{2}^{\dagger}\phi_{2}-m_{\psi}\bar{\psi} \psi\\ m_{\phi_{1,2}}^{2}&=\frac{1}{2}\left(m_{L}^{2}+m_{ \widetilde{\lambda}}^{2}+2v^{2}y^{2}\right.\\ &\qquad\left.\pm\sqrt{\left(m_{L}^{2}-m_{\widetilde{\lambda}}^{2} \right)^{2}+4v^{2}(a-y\mu)^{2}}\right)\\ m_{\psi}&=vy.\end{split} \tag{24}\]
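The masses \(m_{\phi_{1,2}}^{2}\) are simply the eigenvalues of the \(2\times 2\) slepton mass matrix in (23). To make this step explicit (a standard diagonalization, spelled out here for convenience): a real symmetric matrix with diagonal entries \(A\), \(C\) and off-diagonal entry \(B\) has eigenvalues
\[\lambda_{\pm}=\frac{A+C}{2}\pm\sqrt{\frac{(A-C)^{2}}{4}+B^{2}},\]
and inserting \(A=v^{2}y^{2}+m_{L}^{2}\), \(C=v^{2}y^{2}+m_{\widetilde{\lambda}}^{2}\) and \(B=v(a-y\mu)\) reproduces \(m_{\phi_{1,2}}^{2}\) in (24).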
The fields \(\phi_{1,2}\) are linear combinations of \(\widetilde{L}^{\dagger T}\) and \(\widetilde{\bar{\lambda}}\), i.e. they still contain selectrons and sneutrinos which form mass-degenerate doublets. From that we can read off that our mass spectrum consists of two scalar doublets of mass \(m_{\phi_{1,2}}^{2}\) and a doublet of Dirac fermions of mass \(m_{\psi}=vy\). However, these are 'doublets' with respect to neither of the original symmetries alone; they actually mix \(SU(2)_{L}\) and \(SU(2)_{F}\) rotations. E.g. \(\widetilde{L}\) transforms under \(SU(2)_{L}\) while \(\widetilde{\bar{\lambda}}\) transforms under \(SU(2)_{F}\), and \(\phi_{1,2}\) transform under neither of them individually.
### Gauge-invariant Operators
The right-handed leptons are already gauge-invariant with respect to the weak interactions. The left-handed lepton doublets are not. A gauge-invariant operator can be constructed as a composite operator of a left-handed lepton and the Higgs bidoublet.
In Tab. 4 we state the possible operators with minimal field content and both the \(SU(2)_{\text{G}}\) and \(SU(2)_{F}\) multiplet structure to emphasize the point that both are important and their interplay is crucial. Nonetheless, the only remaining (global) symmetry is the diagonal symmetry \(SU(2)_{f}\) and all states listed in the table are doublets with respect to that group. Both \(H^{\dagger}L\) and \(\bar{\lambda}\) are Weyl spinors and can readily be combined into Dirac spinors
\[\Psi^{e}\equiv\begin{pmatrix}(H^{\dagger}L)_{1}\\ v(\bar{\lambda}^{c})_{1}\end{pmatrix},\quad\Psi^{\nu}\equiv\begin{pmatrix}(H^ {\dagger}L)_{2}\\ v(\bar{\lambda}^{c})_{2}\end{pmatrix}. \tag{25}\]
Notice that the charge conjugation acts on the doublet of Weyl spinors as \(\bar{\lambda}^{c}=i\sigma^{2}(\bar{\nu}^{\dagger},\bar{e}^{\dagger})^{T}=(\bar{e}^{\dagger},-\bar{\nu}^{\dagger})^{T}\) and that \(\bar{\lambda}^{c}\) transforms identically to \(\bar{\lambda}\) under \(SU(2)_{F}\), resp. \(SU(2)_{f}\).
We can now use those elements to build a gauge-invariant lepton doublet4
Footnote 4: Notice that \((H^{\dagger}L)_{1,2}=H_{u,d}^{\dagger}L\ \stackrel{{\text{FMS}}}{{ \rightsquigarrow}}e,\nu,\) i. e. that depending on which Higgs the left-handed elementary lepton is dressed with, it either results in a physical electron or a physical neutrino.
\[\begin{split}\Psi&=\begin{pmatrix}\Psi^{\nu}\\ \Psi^{e}\end{pmatrix}=\begin{pmatrix}[(H^{\dagger}L)_{2},-v\bar{\nu}^{\dagger}]^{T }\\ [(H^{\dagger}L)_{1},\ v\bar{e}^{\dagger}]^{T}\end{pmatrix}\\ \stackrel{{\text{FMS}}}{{\rightsquigarrow}}&v\begin{pmatrix}-[ \nu,\ \bar{\nu}^{\dagger}]^{T}\\ [e,\ \bar{e}^{\dagger}]^{T}\end{pmatrix}=v\begin{pmatrix}-\psi^{\nu}\\ \psi^{e}\end{pmatrix}\end{split} \tag{26}\]
which transforms like a doublet under \(SU(2)_{f}\) and reduces to the elementary lepton 'doublet' in tree-level APT.
The scalar partners of the leptons will also form a doublet. They are readily constructed from the operators in
\begin{table}
\begin{tabular}{c c c c c c} Operator & Spin & \(SU(2)_{\rm G}\) & \(SU(2)_{F}\) & \(SU(2)_{f}\) & \(P_{R}\) \\ \hline \(H^{\dagger}L\) & \(\frac{1}{2}\) & \(\mathbf{2}\) & \(\mathbf{1}\) & \(\mathbf{2}\) & \(+1\) \\ \(H^{\dagger}\widetilde{L}\) & \(0\) & \(\mathbf{2}\) & \(\mathbf{1}\) & \(\mathbf{2}\) & \(-1\) \\ \(\bar{\lambda}\) & \(\frac{1}{2}\) & \(\mathbf{1}\) & \(\mathbf{2}\) & \(\mathbf{2}\) & \(+1\) \\ \(\widetilde{\bar{\lambda}}\) & \(0\) & \(\mathbf{1}\) & \(\mathbf{2}\) & \(\mathbf{2}\) & \(-1\) \\ \end{tabular}
\end{table}
Table 4: Gauge-invariant operators in the lepton toy model and their quantum numbers.
the table as
\[\begin{split}\Phi_{1,2}&\equiv\alpha_{1,2}i\sigma^{2}(H^{\dagger}\widetilde{L})^{*}+\beta_{1,2}v\widetilde{\bar{\lambda}}\\ &\stackrel{{\text{FMS}}}{{\sim}}\ v\left(\alpha_{1,2}\widetilde{L}^{\dagger T}+\beta_{1,2}\widetilde{\bar{\lambda}}\right)=v\phi_{1,2}\end{split} \tag{27}\]
where \(\alpha\) and \(\beta\) encode the relative phases obtained by diagonalizing the mass matrix in Eq. (23). \(\Phi_{1,2}\) now truly transform as doublets, but under \(SU(2)_{f}\). Hence, we once again observe that the perturbative 'doublet' is obtained from a map of composite physical operators. In total, we have shown that the physical spectrum indeed contains a doublet of Dirac fermions \(\Psi\) as well as two scalar doublets \(\Phi_{1,2}\), the masses of which are the same as in PT. In the totally symmetric case the members of the doublets are again mass degenerate.
Next, we will see how those degeneracies are lifted once we turn to the more realistic case of \(v_{d}\neq v_{u}\), \(y_{e}\neq y_{\nu}\), \(a_{e}\neq a_{\nu}\) and \(m_{\tilde{e}}\neq m_{\tilde{\nu}}\). All those changes break both the \(SU(2)_{\text{G}}\) as well as the flavor symmetry from before. Consequently, \(SU(2)_{f}\) will not be an exact symmetry of the theory anymore and the degeneracies of the bound states will be lifted. Via APT, this in turn explains how the elementary 'doublets' obtain different masses in a completely gauge-invariant fashion.
## VI Generalization to the whole MSSM
Up until now, we intentionally kept \(SU(2)_{\text{G}}\) intact and many parameters degenerate in order to make the calculations of the tree-level spectra easier and to see the structure behind how the different gauge-invariant operator multiplets map to the gauge-dependent multiplets. Lifting these restrictions is straightforward in the sense that the calculations done so far generalize. In Sec. VI.1 we will see how explicitly breaking \(SU(2)_{\text{G}}\) symmetry and having non-degenerate Yukawa couplings and soft breaking terms mixes both the elementary spectra as well as the corresponding bound state operators. Including hypercharge into the description is analogous to the SM (c.f. [10]) and discussed in Sec. VI.2. Finally, Sec. VI.3 discusses the extension to quarks and hadrons as well as multiple fermion generations.
### Broken Global Symmetry
So far, the two Higgs fields acquired the same vacuum expectation value. This will change now. Consider first the pure weak-Higgs(ino) sector. Introducing separate soft breaking masses for the two Higgs doublets, making their off-diagonal mass an independent parameter, and letting them acquire different vevs explicitly breaks the \(SU(2)_{\text{G}}\), as can be seen from Eq. (8). We therefore expect the previously found mass eigenstates to mix with each other. A detailed discussion of this potential, including the calculation of the scalar masses, can, e.g., be found in [22].
However, we want to take a slightly different route here. Minimizing the potential leads to two independent vevs, \(v_{u}\) and \(v_{d}\). We define \(v^{\pm}\equiv v_{u}\pm v_{d}\) and align their directions in the bidoublet language as
\[H=\begin{pmatrix}0&-v_{d}\\ v_{u}&0\end{pmatrix}+\eta=\frac{v^{+}}{2}(-i\sigma^{2})+\frac{v^{-}}{2}\sigma ^{1}+\eta. \tag{28}\]
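Expanding the right-hand side confirms this decomposition:
\[\frac{v^{+}}{2}(-i\sigma^{2})+\frac{v^{-}}{2}\sigma^{1}=\frac{1}{2}\begin{pmatrix}0&-v^{+}+v^{-}\\ v^{+}+v^{-}&0\end{pmatrix}=\begin{pmatrix}0&-v_{d}\\ v_{u}&0\end{pmatrix},\]
using \(v^{\pm}=v_{u}\pm v_{d}\).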
Without loss of generality, we choose \(v_{u}>v_{d}\) and we define \(\tan\beta\equiv v_{u}/v_{d}\).
After inserting the split (28) into the Lagrangian we find the gauge-boson mass \(m_{W}^{2}=\frac{g^{2}}{2}(v_{u}^{2}+v_{d}^{2})\). After expressing the fluctuation fields again via the bilinears defined in Eq. (16), i.e. \(\eta=\zeta^{i}b_{i}\), and replacing them by their literature names afterwards (c.f. Tab. 2), we can write the scalar mass matrix in block-diagonal form
\[\mathcal{L}\supset\frac{1}{2}\left(h^{0}\ \ H^{0}\right) \Lambda_{1}\begin{pmatrix}h^{0}\\ H^{0}\end{pmatrix}+\frac{1}{2}\left(A^{0}\ \ G^{0}\right)\Lambda_{2} \begin{pmatrix}A^{0}\\ G^{0}\end{pmatrix}\] \[\ \ \ \ \ \ \ +\left(H^{+}\ \ G^{+}\right)\Lambda_{3}\begin{pmatrix}H^ {-}\\ G^{-}\end{pmatrix}. \tag{29}\]
We see that, as compared to the previous (symmetric) case, the fields now mix, but only pair-wise.
Solving the eigenvalue problem yields the explicit mixing for the pseudoscalar and charged scalars
\[A^{0^{\prime}} =\frac{1}{\sqrt{2}}\left[(\cos\beta+\sin\beta)A^{0}+(\sin\beta- \cos\beta)G^{0}\right]\] \[H^{-^{\prime}} =\frac{1}{\sqrt{2}}\left[i(\cos\beta+\sin\beta)H^{-}+(\sin\beta- \cos\beta)G^{-}\right].\]
Primed fields correspond to mass eigenstates of the non-\(SU(2)_{\text{G}}\)-symmetric case. For \(v_{d}=v_{u}\), or \(\cos\beta=\sin\beta\), the relations reduce to the previous pseudoscalar \(A^{0}\) and charged scalars \(H^{-}\), even though the entire model does not approach the fully symmetric case in that limit. This becomes apparent when looking at the neutral Higgs scalars \(h^{\prime}\) and \(H^{0^{\prime}}\), where the mixing arises not just because \(v_{u}\) and \(v_{d}\) are different but also due to the newly introduced parameters: \(m_{u}^{2}\), \(m_{d}^{2}\), and \(m_{ud}^{2}\) are now all independent. Nevertheless, we know from (29) that they will be linear combinations of the previously found fields \(h^{0}\) and \(H^{0}\).
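The stated limit can be checked directly: for \(v_{u}=v_{d}\), i.e. \(\cos\beta=\sin\beta=1/\sqrt{2}\),
\[A^{0^{\prime}}=\frac{1}{\sqrt{2}}\left[\sqrt{2}\,A^{0}+0\cdot G^{0}\right]=A^{0},\qquad H^{-^{\prime}}=\frac{1}{\sqrt{2}}\left[i\sqrt{2}\,H^{-}\right]=iH^{-},\]
so the primed fields indeed reduce to \(A^{0}\) and, up to an irrelevant phase, \(H^{-}\).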
For the gauge-invariant operators we find that the ones corresponding to \(A^{0}\) and \(H^{\pm}\) from before (c.f. Tab. 3) now automatically reduce to the new mass eigenstates, i.e.
\[\text{Im}\det H \stackrel{{\text{FMS}}}{{\sim}}\ v^{+}A^{0}+v^{-}G^{0} \sim A^{0^{\prime}}\] \[\text{tr}\ H^{\dagger}H(\sigma^{2}\mp i\sigma^{1}) \stackrel{{\text{FMS}}}{{\sim}}\ v^{+}H^{\pm}\pm iv^{-}G^{\pm} \sim H^{\pm^{\prime}}.\]
As already mentioned, this is not the case for \(h^{0}\) and \(H^{0}\). However, we can express \(h^{0}\) and \(H^{0}\) via the operators
\[v^{+}\text{tr}\ H^{\dagger}H-v^{-}\text{tr}\ H^{\dagger}H\sigma^{3 }\ \stackrel{{\text{FMS}}}{{\sim}}\ v_{d}v_{u}h^{0}\] \[v^{-}\text{tr}\ H^{\dagger}H-v^{+}\text{tr}\ H^{\dagger}H\sigma^{3 }\ \stackrel{{\text{FMS}}}{{\sim}}\ v_{d}v_{u}H^{0}\]
which in leading order correspond to the elementary fields of the fully symmetric case. Appropriate linear combinations then reduce to the corresponding primed fields.
Next, we turn to the higgsinos and winos: The relevant parts of the Lagrangian are unchanged; however, the new vev introduces additional mixing terms
\[\mathcal{L} \supset\frac{g}{\sqrt{2}}v^{+}\left[\widetilde{\zeta}^{a}\widetilde{W}_{a}+\widetilde{W}^{a\dagger}\widetilde{\zeta}^{a\dagger}\right]\] \[\quad-\frac{g}{\sqrt{2}}v^{-}\left[\widetilde{\zeta}^{0}\widetilde{W}^{3}+i\widetilde{\zeta}^{2}\widetilde{W}^{1}-i\widetilde{\zeta}^{1}\widetilde{W}^{2}+h.c.\right]\] \[\quad+\mu\left[\widetilde{\zeta}^{0}\widetilde{\zeta}^{0}+\widetilde{\zeta}^{0\dagger}\widetilde{\zeta}^{0\dagger}-\widetilde{\zeta}^{a}\widetilde{\zeta}^{a}-\widetilde{\zeta}^{a\dagger}\widetilde{\zeta}^{a\dagger}\right].\]
This can again be brought into block diagonal form in the basis of Tab. 2
\[\mathcal{L} \supset\begin{pmatrix}\widetilde{N}_{1}^{0}&\widetilde{N}_{2}^{0}&\widetilde{N}_{3}^{0}\end{pmatrix}\begin{pmatrix}\cdot&\ast&\ast\\ \ast&\cdot&\ast\\ \ast&\ast&\cdot\end{pmatrix}\begin{pmatrix}\widetilde{N}_{1}^{0}\\ \widetilde{N}_{2}^{0}\\ \widetilde{N}_{3}^{0}\end{pmatrix}\] \[\quad+\begin{pmatrix}\widetilde{C}_{1}^{-}&\widetilde{C}_{2}^{-}\end{pmatrix}\begin{pmatrix}\cdot&\ast\\ \ast&\cdot\end{pmatrix}\begin{pmatrix}\widetilde{C}_{1}^{+}\\ \widetilde{C}_{2}^{+}\end{pmatrix}+h.c.\]
where the off-diagonal elements \(\ast\) are terms of the form \((v_{u}^{2}+v_{d}^{2})^{1/2}-(v_{u}+v_{d})/\sqrt{2}\) or \((v_{d}-v_{u})\) and therefore vanish when \(v_{u}=v_{d}\). We have thus demonstrated that the neutral (charged) fermions mix once \(SU(2)_{\text{G}}\) symmetry is violated. The new mass eigenstates approach the previous ones when the symmetry is restored. Once again, appropriate linear combinations of the operators found in Sec. IV can be used to build gauge-invariant bound states. Those in turn reduce to the elementary mass eigenstates in tree-level APT. In particular, an operator which augments the lightest of the uncharged fermions can be constructed, i.e. the LSP remains part of the physical spectrum.
At last, we investigate the case of \(v_{d}\neq v_{u}\) in the (s)lepton sector. Additionally, we set now \(y_{e}\neq y_{\nu}\), \(a_{e}\neq a_{\nu}\) and \(m_{\widetilde{e}}^{2}\neq m_{\widetilde{\nu}}^{2}\) but still assume that they are real. Inserting the new split (28) into the lepton Lagrangian yields
\[\mathcal{L} \supset-\xi_{1}^{\dagger}X_{1}\xi_{1}-\xi_{2}^{\dagger}X_{2}\xi_{2 }-v_{d}y_{e}\bar{\psi}^{e}\psi^{e}-v_{u}y_{\nu}\bar{\psi}^{\nu}\psi^{\nu}.\]
We immediately see that the lepton doublet splits, with masses proportional to the different vevs. The slepton masses are currently written in the basis \(\xi_{1}=(\widetilde{\nu},\widetilde{\bar{\nu}}^{\dagger})^{T}\), \(\xi_{2}=(\widetilde{e},\widetilde{\bar{e}}^{\dagger})^{T}\) with the matrices
\[X_{1} =\begin{pmatrix}\left(y_{\nu}^{2}-\frac{g^{2}}{8}\right)v_{u}^{2} +\frac{g^{2}}{8}v_{d}^{2}+m_{L}^{2}&v_{u}a_{\nu}-\mu v_{d}y_{\nu}\\ v_{u}a_{\nu}-\mu v_{d}y_{\nu}&v_{u}^{2}y_{\nu}^{2}+m_{\widetilde{\nu}}^{2}\\ \end{pmatrix}\] \[X_{2} =\begin{pmatrix}\left(y_{e}^{2}-\frac{g^{2}}{8}\right)v_{d}^{2} +\frac{g^{2}}{8}v_{u}^{2}+m_{L}^{2}&v_{d}a_{e}-\mu v_{u}y_{e}\\ v_{d}a_{e}-\mu v_{u}y_{e}&v_{d}^{2}y_{e}^{2}+m_{\widetilde{e}}^{2}\\ \end{pmatrix}\]
which are yet to be diagonalized. It is straightforward to do so but adds nothing new apart from four different slepton mass eigenstates. We notice, however, that for the case of equal vevs and degenerate \(y,a,m_{\widetilde{\lambda}}\), both mass matrices reduce to the ones found in the fully symmetric case, which restores the mass-degenerate doublets.
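For instance, setting \(v_{u}=v_{d}=v\), \(y_{\nu}=y\), \(a_{\nu}=a\) and \(m_{\widetilde{\nu}}=m_{\widetilde{\lambda}}\) in \(X_{1}\), the \(g^{2}\)-terms cancel and
\[X_{1}=\begin{pmatrix}v^{2}y^{2}+m_{L}^{2}&v(a-y\mu)\\ v(a-y\mu)&v^{2}y^{2}+m_{\widetilde{\lambda}}^{2}\end{pmatrix},\]
which is exactly the matrix of (23); the same happens for \(X_{2}\).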
For the leptons, we can immediately write down composite operators
\[\Psi^{e} =\begin{pmatrix}(H^{\dagger}L)_{1}\\ v_{u}(\widetilde{\lambda}^{c})_{1}\end{pmatrix}\begin{array}{ccc} \stackrel{{\text{FMS}}}{{\longrightarrow}}&v_{u}\psi^{e}\\ \end{pmatrix} \tag{30}\] \[\Psi^{\nu} =\begin{pmatrix}(H^{\dagger}L)_{2}\\ v_{d}(\widetilde{\lambda}^{c})_{2}\end{pmatrix}\begin{array}{ccc}\stackrel{{ \text{FMS}}}{{\longrightarrow}}&v_{d}\psi^{\nu}\end{array}\]
which are essentially the lepton operators found in the fully symmetric case, Eq. (25). Now, they merely expand with the different vevs. The slepton mass eigenstates will be linear combinations of \(\widetilde{\nu}\) and \(\widetilde{\widetilde{\nu}}^{\dagger}\) (\(\widetilde{e}\) and \(\widetilde{\widetilde{e}}^{\dagger}\), respectively) which is why it is sufficient to know that
\[(H^{\dagger}\widetilde{L})_{1} \stackrel{{\text{FMS}}}{{\longrightarrow}} v_{u}\widetilde{e}\qquad(H^{\dagger}\widetilde{L})_{2} \stackrel{{\text{FMS}}}{{\longrightarrow}} v_{d}\widetilde{\nu}\] \[v_{u}(\widetilde{\bar{\lambda}}^{\dagger})_{2} \stackrel{{\text{FMS}}}{{\longrightarrow}} v_{u}\widetilde{\bar{e}}^{\dagger}\qquad v_{d}(\widetilde{\bar{\lambda}}^{\dagger})_{1} \stackrel{{\text{FMS}}}{{\longrightarrow}} v_{d}\widetilde{\bar{\nu}}^{\dagger}.\]
Those operators are all gauge-invariant and can be combined such that they match whatever form the explicit mass eigenstates have5.
Footnote 5: Note that the linear combinations are formed between \((H^{\dagger}\widetilde{L})_{1}\) and \((\widetilde{\bar{\lambda}}^{\dagger})_{2}\), i.e. with the components reversed. This is not a mistake and also present in the fully symmetric case, Eq. (27), where this ‘mixing’ is slightly hidden by \(i\sigma^{2}\). Furthermore, the sneutrino expands with \(v_{d}\) whereas the selectron expands with \(v_{u}\). This is indeed opposite to their masses, which are proportional to \(v_{u}\) and \(v_{d}\), respectively.
We conclude that a mapping between perturbative mass eigenstates and physical composite states is possible, even for different Higgs vevs and non-degenerate couplings. The mixing of the perturbative mass eigenstates is completely parallel to the mixing of the physical composite state operators.
### Electric Charge and QED
As already mentioned, \(U(1)_{Y}\) is a subgroup of \(SU(2)_{\text{G}}\). As a result, some fields that have no explicit hypercharge assignment in the elementary field description (e.g. \(W_{\mu}^{a}\)) nevertheless acquire a non-zero electric charge in the composite operator language. We should therefore check whether the operators we constructed carry the same electric charge as their elementary counterparts. For that it is sufficient to investigate what effect the (global) hypercharge transformations (c.f. Tab. 1)
\[H \to H^{\prime}=H\ \exp\!\left(i\alpha\frac{\sigma^{3}}{2}\right)\] \[L \to L^{\prime}=e^{-i\alpha/2}L,\quad\bar{e}\to\bar{e}^{\prime}=e^{i\alpha}\bar{e}\]
of the elementary fields have on the composite operators.
The scalar and pseudo-scalar singlet operators \(\mathrm{tr}\ H^{\dagger}H\) and \(\mathrm{Im}\,\mathrm{det}\ H\) are invariant under such transformation because of the properties of trace and determinant. Hence, they are charge neutral just like their elementary counterparts \(h^{0}\) and \(A^{0}\). Likewise, the LSP operator \(\mathrm{tr}\ H^{\dagger}\widetilde{H}\) is charge neutral. The Higgs triplet transforms as
\[\begin{pmatrix}\mathcal{O}_{H^{+}}\\ \mathcal{O}_{H^{-}}\\ \mathcal{O}_{H^{0}}\end{pmatrix} =\begin{pmatrix}\mathrm{tr}\ H^{\dagger}H(\sigma^{2}-i\sigma^{1} )\\ \mathrm{tr}\ H^{\dagger}H(\sigma^{2}+i\sigma^{1})\\ \mathrm{tr}\ H^{\dagger}H\sigma^{3}\end{pmatrix}\] \[\to\begin{pmatrix}\mathcal{O}^{\prime}_{H^{+}}\\ \mathcal{O}^{\prime}_{H^{-}}\\ \mathcal{O}^{\prime}_{H^{0}}\end{pmatrix} =\begin{pmatrix}e^{i\alpha}\\ &e^{-i\alpha}\\ &&1\end{pmatrix}\begin{pmatrix}\mathcal{O}_{H^{+}}\\ \mathcal{O}_{H^{-}}\\ \mathcal{O}_{H^{0}}\end{pmatrix},\]
which confirms that they indeed carry electric charges of \(0,\pm 1\), just like the corresponding \(H^{0,\pm}\). The same can be done for the remaining triplet operators of the pure weak-Higgs sector as all of them boil down to the same rotation.
The left-handed (s)leptonic operators transform as
\[\begin{pmatrix}\mathcal{O}_{e}\\ \mathcal{O}_{\nu}\end{pmatrix} =H^{\dagger}L\to\begin{pmatrix}\mathcal{O}^{\prime}_{e}\\ \mathcal{O}^{\prime}_{\nu}\end{pmatrix}=\begin{pmatrix}e^{-i\alpha}\\ &1\end{pmatrix}\begin{pmatrix}\mathcal{O}_{e}\\ \mathcal{O}_{\nu}\end{pmatrix}\] \[\begin{pmatrix}\mathcal{O}_{\widetilde{e}}\\ \mathcal{O}_{\widetilde{\nu}}\end{pmatrix} =H^{\dagger}\widetilde{L}\to\begin{pmatrix}\mathcal{O}^{\prime}_{\widetilde{e}}\\ \mathcal{O}^{\prime}_{\widetilde{\nu}}\end{pmatrix}=\begin{pmatrix}e^{-i\alpha}\\ &1\end{pmatrix}\begin{pmatrix}\mathcal{O}_{\widetilde{e}}\\ \mathcal{O}_{\widetilde{\nu}}\end{pmatrix}\]
and therefore their charge assignment is correct as well. Notice that right-handed fields automatically adopt their hypercharge as electric charge.
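To make the first of these transformations explicit (a one-line computation): with \(H\to H\exp(i\alpha\sigma^{3}/2)\) and \(L\to e^{-i\alpha/2}L\),
\[H^{\dagger}L\to e^{-i\alpha\sigma^{3}/2}H^{\dagger}\,e^{-i\alpha/2}L={\rm diag}\big{(}e^{-i\alpha},1\big{)}H^{\dagger}L,\]
which reproduces the electric charges \(-1\) and \(0\) of \(\mathcal{O}_{e}\) and \(\mathcal{O}_{\nu}\).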
If we now gauge hypercharge, by including \(B_{\mu}\) in the covariant derivative and \(g^{\prime}\neq 0\), the diagonal subgroup \(SU(2)_{f}\) breaks down to its \(U(1)\) subgroup and becomes local. In this way a theory which is locally \(U(1)_{\mathrm{EM}}\) symmetric is obtained.
Since \(U(1)_{\mathrm{EM}}\) is a local symmetry, we again have to discuss gauge-invariance. However, as this is an Abelian gauge group, this operates differently. Because every representation is one-dimensional, this can be handled as in QED [36; 37], and is therefore no different than in the standard model [37], and especially transparent to the FMS mechanism in the weak sector [10]. It will therefore not be detailed here. Note that the gaugino does not carry electric charge and is thus \(U(1)_{\mathrm{EM}}\) gauge-invariant.
Due to the introduction of \(B_{\mu}\) and the breaking of the \(W^{a}_{\mu}\) triplet we now have two fields in the neutral vector singlet channel which mix to create the \(Z\) boson and the photon. This holds both at the elementary level and at the composite level. For the superpartners, the situation is similar: Introducing the bino \(\widetilde{B}\) does not affect the charginos and merely mixes with the neutralinos. This implies that there are now two poles in the corresponding channels. Matching the new mass eigenstates with composite operators again reduces to a task of finding a suitable linear combination.
### Multiple Generations and Quarks
Including all three lepton generations substantially increases the complexity but changes nothing about our construction. Intergeneration mixing is completely transparent to our composite operator construction as one could just introduce operators for each generation (c.f. Tab. 4)
\[\begin{array}{ccc}H^{\dagger}L_{e},\ H^{\dagger}L_{\mu},\ H^{\dagger}L_{\tau}& \bar{\lambda}_{e},\ \bar{\lambda}_{\mu},\ \bar{\lambda}_{\tau}\\ H^{\dagger}\widetilde{L}_{e},\ H^{\dagger}\widetilde{L}_{\mu},\ H^{\dagger} \widetilde{L}_{\tau}&\widetilde{\lambda}_{e},\ \widetilde{\bar{\lambda}}_{\mu},\ \widetilde{\bar{ \lambda}}_{\tau}.\end{array}\]
Both components of these operators are inherently gauge-invariant and can be linearly combined and rotated in generation space to augment all resulting mass eigenstates.
In the context of (S)QCD, the low energy description is all about objects which are built from elementary quarks and gluons to form color neutral bound states, i. e. gauge-invariant with respect to \(SU(3)_{c}\). Nevertheless, fields like the pion still carry \(SU(2)_{L}\) charge which has to be taken care of. Luckily, in terms of electroweak and Higgs physics, the description of quarks is completely analogous to leptons [27; 10], i.e. we can readily write down
\[\begin{array}{ccc}\Psi^{d}&=\begin{pmatrix}(H^{\dagger}Q)_{1}\\ v_{u}\bar{d}^{\dagger}\end{pmatrix}&\stackrel{{\mathrm{FMS}}}{{ \sim}}&v_{u}\begin{pmatrix}d\\ \bar{d}^{\dagger}\end{pmatrix}=v_{u}\psi^{d}\\ \Psi^{u}&=\begin{pmatrix}(H^{\dagger}Q)_{2}\\ v_{d}\bar{u}^{\dagger}\end{pmatrix}&\stackrel{{\mathrm{FMS}}}{{ \sim}}&v_{d}\psi^{u},\end{array}\]
in analogy to Eq. (30) for leptons. The only difference is that \(\Psi^{u,d}\) are not yet physical, as they still carry color charge. Even though these 'quark-Higgs bound states' cannot exist in isolation, such contractions are still important when building the usual color singlets. An inherently gauge-invariant operator for \(\pi^{+}\) would e.g. be [10]
\[\begin{array}{ccc}\Pi^{+}\equiv\bar{\Psi}^{d}\Psi^{u}=\begin{pmatrix}v_{u} \bar{d}&(H^{\dagger}Q)_{1}^{\dagger}\end{pmatrix}\begin{pmatrix}(H^{\dagger}Q) _{2}\\ v_{d}\bar{u}^{\dagger}\end{pmatrix}\\ \stackrel{{\mathrm{FMS}}}{{\sim}}&v_{u}v_{d}\left(\bar{d}u+d^{ \dagger}\bar{u}^{\dagger}\right)\sim v_{u}v_{d}\pi^{+}.\end{array} \tag{31}\]
Notice that the expression on the right hand side is a color singlet but not an \(SU(2)_{L}\) singlet which makes
it apparent that using composite operators is also important in the (S)QCD subsector. Finally, just like before, we can use the \(U(1)\) subgroup of \(SU(2)_{\rm G}\) as well as the hypercharge assignments of the quarks in the SM to find that \(\Psi^{d,u}\) carry electric charges \(-1/3\) and \(2/3\), respectively. Consequently, \(\Pi^{+}\) carries the correct electric charge of \(+1\).
Squarks and gluinos are not color-gauge-invariant fields either and have to be treated in a bound state language with respect to \(SU(3)_{c}\), too. Nevertheless, this has no effect on our construction. Gluinos carry no weak charge and are therefore trivial. 'Left-handed' squarks get an appropriate Higgs dressing, just like in the leptonic case.
## VII Summary
We have shown that the MSSM, like the SM [3; 4; 10; 27] and the 2HDM [21], does not experience any changes in its spectrum once physical composite states are used rather than the elementary ones. As in the SM [11], it is unlikely that this will change beyond our tree-level calculations. As we performed the analysis keeping the explicit SUSY breaking terms in a suitable way, it follows immediately that SUSY, and \(R\)-parity, is transparent to APT. However, this is neither manifest nor trivial, see appendix A.
While we cannot offer a proof at the current time, the observed structure allows us to conjecture that for a non-Abelian theory the question whether the physical spectrum and the tree-level spectrum are mapped onto each other in a one-to-one fashion has the same answer in the original theory and in its supersymmetrized version. This should also hold when SUSY breaks, as long as the SUSY breaking does not tamper with the gauge symmetry, as is the case, e. g., in gauge-mediated SUSY-breaking scenarios. These cases will require further scrutiny. In the case of gauged SUSY, the situation can be expected to be very different, and in fact SUSY may be completely absent from the physical spectrum [13].
What goes into this conjecture is that the weakly-charged elementary spectrum is not changed by the presence of SUSY. Beyond the weak sector, the strong sector, with its possibility for new hybridization and other non-perturbative alterations of the spectrum of supersymmetric multiplets, needs to be checked as well. Though it appears unlikely that this could happen without impact on the experimentally accessible low-energy spectrum of the MSSM, this is a question of principle.
In total, the MSSM fits into the pattern observed so far that manifest gauge invariance does not lead to qualitative changes in the spectrum if the global group carried by the Higgs is at least as large as the gauge group. This is good news for phenomenology, as it implies that, up to the additional (sub-leading) terms in APT, predictions for experiment in the MSSM remain valid.
**Acknowledgements**
This work was done within the scope of the FCC Feasibility Study.
## Appendix A Manifest SUSY
Attempting to use APT in a manifestly SUSY-invariant way leads to the following problem. The introduction of a Higgs vev is a gauge choice, but necessarily one which makes supersymmetry not manifest. This follows as the Higgs field is part of a superspin multiplet. Introducing a condensate in only one component necessarily hides the supermultiplet structure. Consider the Higgs superspin multiplet \(\Sigma=(\psi,\phi,\overline{\psi})\), which is built from two chiral supermultiplets to give a fundamental representation of the weak \(SU(2)\). As has been seen in the main text, this structure of two chiral supermultiplets carries the \(SU(2)_{\rm G}\) symmetry.
A manifestly gauge-invariant and SUSY-invariant composite scalar6 operator would be \(\Sigma^{\dagger}(x)\Sigma(x)\). Fixing a gauge with BEH effect and corresponding splitting of the Higgs field would yield for the operator
Footnote 6: Note that any manifest SUSY-invariant operator is necessarily a superspin scalar.
\[\Sigma^{\dagger}(x)\Sigma(x)=v\left(n^{\dagger}\eta(x)+\eta^{\dagger}(x)n \right)+\eta^{\dagger}(x)\eta(x)+\psi^{\dagger}\psi+\overline{\psi}^{\dagger} \overline{\psi}\]
Thus, the conventional APT reduction of section II will only work for the Higgs-component, and it will look as if there was no other pole in this channel. However, applying SUSY before gauge-fixing to the same composite operator necessarily implies that all components have the same mass. Thus, the two two-fermion operators need to carry the same pole, and thus are meson-like bound states, yielding the correct number of degenerate states forming the SUSY multiplet.
The situation would be similar for other supermultiplets. E. g., for a mass multiplet for leptons \(\Omega\), a suitable operator would be \(\Omega^{\dagger}\Sigma+{\rm cc}\), where again only one component will have a mass pole from tree-level APT, and the other components will have a mass pole by virtue of SUSY as non-trivial bound states.
While this is not in contradiction to APT, it invalidates the simplicity of just using tree-level APT. On the other hand, if SUSY is explicitly broken, as in the MSSM, this is no longer necessary, and the situation simplifies as in the main part of the paper. Alternatively, a gauge-dependent diagonal subgroup of the gauge symmetry and the supersymmetry could be used instead, similar to the use of the diagonal subgroup of \(SU(2)_{\rm G}\) and the gauge symmetry in the main text. Also, a formulation without manifest SUSY likewise works. Conversely, if one
would like to keep global supersymmetry manifest at the gauge-fixed level, this forbids a gauge choice implementing a BEH effect. This is not surprising, given the links between both symmetries already in PT [22; 23].
A common treatment of SUSY and the BEH effect will only be possible if both symmetries are of the same nature. If both are global, this is the usual situation well known in standard approaches, and APT will be neither applicable nor necessary. On the other hand, making SUSY a gauge symmetry puts both again on the same footing. This is then supergravity with an additional local gauge symmetry [38]. In that case, just as in ordinary quantum gravity [39], APT appears to be possible [13]. But in that case there will be two BEH effects, one on the level of the Higgs field, and one on the level of the metric. Then both symmetries are no longer manifest, and again APT is directly applicable for both, as is already the case in ordinary quantum gravity [39].
|
2305.06299 | Summarizing, Simplifying, and Synthesizing Medical Evidence Using GPT-3
(with Varying Success) | Large language models, particularly GPT-3, are able to produce high quality
summaries of general domain news articles in few- and zero-shot settings.
However, it is unclear if such models are similarly capable in more
specialized, high-stakes domains such as biomedicine. In this paper, we enlist
domain experts (individuals with medical training) to evaluate summaries of
biomedical articles generated by GPT-3, given zero supervision. We consider
both single- and multi-document settings. In the former, GPT-3 is tasked with
generating regular and plain-language summaries of articles describing
randomized controlled trials; in the latter, we assess the degree to which
GPT-3 is able to \emph{synthesize} evidence reported across a collection of
articles. We design an annotation scheme for evaluating model outputs, with an
emphasis on assessing the factual accuracy of generated summaries. We find that
while GPT-3 is able to summarize and simplify single biomedical articles
faithfully, it struggles to provide accurate aggregations of findings over
multiple documents. We release all data and annotations used in this work. | Chantal Shaib, Millicent L. Li, Sebastian Joseph, Iain J. Marshall, Junyi Jessy Li, Byron C. Wallace | 2023-05-10T16:40:37Z | http://arxiv.org/abs/2305.06299v2 | # Summarizing, Simplifying, and Synthesizing Medical Evidence Using GPT-3 (with Varying Success)
###### Abstract
Large language models, particularly GPT-3, are able to produce high quality summaries of general domain news articles in few- and zero-shot settings However, it is unclear if such models are similarly capable in more specialized, high-stakes domains such as biomedicine. In this paper, we enlist domain experts (individuals with medical training) to evaluate summaries of biomedical articles generated by GPT-3, given zero supervision. We consider both single- and multi-document settings. In the former, GPT-3 is tasked with generating regular and plain-language summaries of articles describing randomized controlled trials; in the latter, we assess the degree to which GPT-3 is able to _synthesize_ evidence reported across a collection of articles. We design an annotation scheme for evaluating model outputs, with an emphasis on assessing the factual accuracy of generated summaries. We find that while GPT-3 is able to summarize and simplify single biomedical articles faithfully, it struggles to provide accurate aggregations of findings over multiple documents. We release all data and annotations used in this work.1
Footnote 1: [https://github.com/cshaib/sumarizing-medical-evidence](https://github.com/cshaib/sumarizing-medical-evidence)
## 1 Introduction
Large language models have been shown to be capable of producing high-quality and reasonably accurate summaries in _zero-shot_ settings (Goyal et al., 2022; Liang et al., 2022), with GPT-3 testing fully supervised models in generic news summarization, according to human judgments (Goyal et al., 2022). In this work we evaluate if such models are similarly able to summarize medical literature, a high-stakes domain that demands factual accuracy.
Specifically, we use the newest iteration of GPT-3 (text-davinci-003; GPT3-D3 from here) to generate summaries of (a) individual articles describing individual randomized controlled trials (RCTs) evaluating the efficacy of interventions, and, (b) collections of such articles that describe several trials addressing the same underlying clinical question (e.g., evaluating the same medication). These constitute single- and multi-document summarization tasks, respectively. In the single-document case, we also evaluate the ability of GPT3-D3 to summarize in _plain language_. We enlist domain experts (with medical training) to annotate model outputs, and seek to address the following questions.
**RQ1** Does GPT3-D3 produce _faithful_ summaries of medical articles?
**RQ2** Can GPT3-D3 accurately _simplify_ while also summarizing such texts?
**RQ3** Can GPT3-D3 _synthesize_--aggregate the findings presented in--multiple input articles in a way that accurately reflects the totality of the evidence?
**RQ4** What sort of factual mistakes does GPT3-D3 make when performing these tasks (if any), and what are the risks implied by such errors?
Overall, we find that GPT3-D3 performs single-document summarization and simplification with reasonably good accuracy. However, it is less able to accurately synthesize evidence reported in _collections_ of trials (in the multi-document case). We
Figure 1: We enlist domain experts to evaluate the factual accuracy of summaries and simplifications of medical articles describing clinical trials. We consider both single- and multi-document settings.
release all model outputs and accompanying annotations to facilitate additional work on this topic.
## 2 Single Document Summarization
**Data.** We sample 100 articles describing randomized controlled trials (RCTs) indexed in the Trialstreamer database Marshall et al. (2020), which also provides automatically extracted "key results"2 alongside titles and abstracts. We search for trials published after November 28, 2022, following the release date of GPT3-D3, to ensure the model has not seen any of the studies during pre-training.
Footnote 2: Extracted sentence communicating the main findings.
**Experimental Setup.** Using the RCT data described above, we evaluate the ability of GPT3-D3 to faithfully summarize and simplify biomedical texts in a zero-shot setting. We also compare GPT3-D3 summaries to summaries generated using Flan-T5 (Wei et al., 2021), but qualitatively find that GPT3-D3 summaries are of much higher quality. We provide results of this comparison in Appendix F.3. Specifically, we prompt GPT3-D3 to separately produce (i) a technical summary and (ii) a plain-language summary (August et al., 2022). See Appendix C for all prompts.
**Study Design.** We designed an evaluation scheme that captures the sensitivity of medical information. To assess factuality, we collect annotations about omissions and errors with respect to main results and key components of the trials, including populations, interventions, and outcomes ("PICO" elements; Richardson et al. 1995). Where appropriate, we ask annotators to highlight spans of generated text that are inconsistent with the input--these might be "new" concepts introduced or spans that directly contradict the input. To gauge overall linguistic quality, we solicit assessments of the fluency and usefulness of a summary on a Likert scale (Likert, 1932). We include additional questions about the simplification of technical terms for the plain-language summaries. We provide a complete taxonomy of the survey in Appendix H.
**Annotations.** We recruited 3 domain experts with medical training on the Upwork platform,3 and tasked them each with annotating 100 samples. In total, we collect 300 annotations (3 annotations per sample). We use Label Studio4 as our interface.
Footnote 3: [https://www.upwork.com](https://www.upwork.com)
Footnote 4: [https://labelstud.io/](https://labelstud.io/)
## 3 Multiple Document Summarization and Evidence Synthesis
**Data.** For multi-document summarization, we download meta-analyses from the Cochrane Library (these are reviews of medical evidence, usually RCTs).5 Our final sample contains 50 multi-document studies comprising meta-review titles, reference abstracts (inputs), and target conclusions (target summaries) written by domain experts, 10 of which were published post-GPT3-D3 release.6
Footnote 5: [https://www.ochranelibrary.com/](https://www.ochranelibrary.com/)
**Experimental Setup.** Because inputs comprise multiple abstracts, these (together with generated tokens) often exceed the token capacity of GPT3-D3. In our dataset, about 41% of the samples exceeded this upper bound. We report information about our data, including average length, in Appendix B. To work within this limit, we adopt a simple two-phase strategy for multi-document summarization. First, we generate independent summaries for each abstract, using the single-document summarization prompt described in Section 2. Then, we include all the generated single-document summaries in our multi-document synthesis prompt7 (examples in Appendix C).
Footnote 6: At the time of retrieval we were only able to extract 18 samples post-GPT3-D3 release. We excluded any updates (meta-analyses with \(\leq 1\) reference abstract). There was no discernible difference in performance; however, more data is needed to evaluate this effect.
Footnote 7: Note that we have yet to see prior work systematically investigate a strategy for zero-shot multi-document summarization; due to the prompt-sensitive nature of LLMs Liang et al. (2022), we do not guarantee that we obtained the best prompt despite fairly extensive trials.
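For concreteness, the two-phase strategy can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: `call_llm` is a hypothetical stand-in for a GPT3-D3 completion call, and the prompt wording is invented (the real prompts are in Appendix C).

```python
# Hedged sketch of the two-phase multi-document strategy described above.
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a GPT3-D3 (text-davinci-003) completion call.
    raise NotImplementedError("plug in an actual LLM API call here")

def summarize_single(abstract: str) -> str:
    # Phase 1: compress each abstract independently so that the final
    # synthesis prompt fits within the model's token limit.
    return call_llm(f"Summarize the key findings of this trial report:\n\n{abstract}")

def synthesize(review_title: str, abstracts: list[str]) -> str:
    # Phase 2: feed all single-document summaries into one synthesis prompt.
    singles = [summarize_single(a) for a in abstracts]
    joined = "\n\n".join(f"Study {i + 1}: {s}" for i, s in enumerate(singles))
    return call_llm(
        f"Question: {review_title}\n\nStudy summaries:\n{joined}\n\n"
        "Synthesize the overall conclusion supported by these studies:"
    )
```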
**Study Design.** Our evaluation rubric asks for assessments of generated outputs as compared to (a) the inputs and (b) the target summaries. Specifically, we ask if generated summaries are supported by the _summaries_ provided as inputs in the multi-document case, and to what extent they agree with target (reference) summaries. We also ask annotators to highlight spans of text in generated outputs that disagree with paired target summaries. We reproduce the full rubric in Appendix H.
With respect to annotators, we use the same procedure described in Section 2; we recruited 3 new medical experts and tasked them each with annotating 50 samples, for a total of 150 annotations.
## 4 Results
**RQ1: Does GPT3-D3 produce faithful summaries of medical articles?** In the single-document setting, we find that GPT3-D3 generates summaries of biomedical abstracts that are fairly high quality. Figure 2 (a) shows that annotators rated a majority of the summaries as being coherent, useful, and capturing "key results".
When GPT3-D3 does err, it tends to make minor mistakes or omit details. The latter is more common than the former, as shown in Figure 3 (a).
**RQ2: Can GPT3-D3 accurately simplify while summarizing medical texts?** As shown in Figure 2 (b), GPT3-D3 produces simplified summaries that are similarly deemed to be coherent and useful, and which appear to contain key results. Simplified outputs are scored highly in terms of readability, indicating that these summaries would be understood by someone without medical training.
In comparison to the technical summaries, Figure 3 (b) shows that there are fewer omissions but a slightly higher number of errors. These may be problematic, but -- importantly -- some omissions are expected in a simplified summary, as certain details that are important for an accurate summary for a technical audience may not be necessary to convey key information to a general audience.
**RQ3: Can GPT3-D3 _synthesize_ findings presented in multiple input articles in a way that accurately reflects the totality of the evidence?** We now evaluate GPT3-D3's performance on multi-document summarization, i.e., its ability to synthesize evidence (Wang et al., 2022). Figure 4 shows that most summaries generated by GPT3-D3 in this setting are supported by the inputs. This is consistent with our findings in **RQ1**: GPT3-D3 is able to summarize faithfully with respect to given input. However, we find that generated summaries do not consistently agree with the target summaries. Indeed, Figure 4 shows that generated summaries disagree with the targets in over half of cases. This discrepancy suggests that human-written summaries in the biomedical domain require a level of synthesis that is not captured by GPT3-D3.
**RQ4: What sort of factual mistakes does GPT3-D3 make and what are the risks?** In RQ1, we reported that GPT3-D3 sometimes omits key information. Figure 5 characterizes the types of omissions and errors made, with respect to PICO elements. GPT3-D3 tends to underspecify elements in the summary more often than it generates inaccuracies. Appendix F provides further details regarding underspecification. In the simplification task, GPT3-D3 capably simplifies most technical terms in the generated output (Figure 6).
Regarding RQ3, we showed that there are often discrepancies between generated and target summaries, despite the former being supported by the inputs. Human-written summaries of trials may be
Figure 4: Proportion of summaries that reflect the target summary and are supported by the input summaries in the multi-document setting. While most summaries follow from the input, less than half are rated as agreeing with the target summary.
Figure 3: Average number of errors and omissions made in the generated (a) regular and (b) simplified summaries. Most mistakes made in both cases are minor, and omissions are more frequent than errors.
Figure 2: Average scores for assessing overall faithfulness, coherence, and usefulness of generated (a) regular summaries and (b) simplified summaries. GPT3-D3 produces high-quality regular and simplified summaries.
more cautious in their conclusions. We measure the evidence strength and direction of both the target and generated summaries, and find that GPT3-D3 tends to recommend marginal or substantive beneficial effects regarding interventions in the majority of the summaries (Figure 7).
Overall, we find that GPT3-D3 copies frequently from inputs. This results in summaries that are often faithful to the input. It may also be one reason that summaries tend to have more omissions (rather than errors) in the single document case, and it may also explain how summaries in the multi-document case often disagree with the reference synopsis while also being supported by (some subset of) the inputs. We calculate the degree of overlap and similarity between inputs and generated summaries from GPT3-D3 for both single-document and multi-document summarization at the sentence level (Figure 8). GPT3-D3 often copies sentences verbatim. In other cases, it changes phrasings but only very slightly (see Appendix F for examples).
Further, Figure 8 shows how many sentences in each summary have a BLEU score of \(\geq 30\), which indicates that the sentences are highly aligned. Over 70% of the summaries have at least a quarter of their sentences copied from the input. Appendix F shows some examples of highly similar summaries and sentence pairs.
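A sketch of this copy analysis is below; it assumes the `sacrebleu` package and uses naive period-based sentence splitting, whereas the paper's exact tokenization and splitting may differ.

```python
# Fraction of summary sentences whose best sentence-level BLEU against any
# input sentence is >= 30 (the "highly aligned" threshold used above).
import sacrebleu

def copied_fraction(summary: str, source: str, threshold: float = 30.0) -> float:
    summ_sents = [s.strip() for s in summary.split(".") if s.strip()]
    src_sents = [s.strip() for s in source.split(".") if s.strip()]
    if not summ_sents or not src_sents:
        return 0.0
    hits = sum(
        1
        for hyp in summ_sents
        if max(sacrebleu.sentence_bleu(hyp, [ref]).score for ref in src_sents)
        >= threshold
    )
    return hits / len(summ_sents)
```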
## 5 Related Work
More broadly in summarization, several efforts have called for increased emphasis on human (rather than automated) evaluation of generated texts, increased deployment of human-centered systems for text generation evaluation Khashabi et al. (2021), and greater focus on building benchmarks that incorporate human preferences Liang et al. (2022); Fabbri et al. (2021). And indeed, Goyal et al. (2022) find that summaries produced by GPT3-D3 are often preferred by humans over alternative model outputs even when automated metrics disagree. Such findings have motivated the manual analysis we conduct for this work. As far as we know, there has not been any work that assesses the degree to which GPT-3 is proficient at summarizing biomedical and clinical data in both single-document and multi-document cases.
Our analysis of summarization in the biomedical space complements recent work analyzing the question answering capabilities of such models in this domain Singhal et al. (2022); Lievin et al. (2022) and the degree to which they encode medical knowledge implicitly Sung et al. (2021). Other work has considered using summarization
Figure 5: Granular omissions and errors annotated in (a) technical and (b) simplified summaries. Most omissions come from underspecifying key components.
Figure 8: Percentage of sentences in the generated summaries with a BLEU score of 30 or higher, which indicates high similarity.
Figure 6: In the simplification case, the model usually replaces complex terms with simpler ones.
Figure 7: Proportion of summaries that are reported as beneficial in the generated summaries and the target summaries. The generated summaries tend to report beneficial effects in most of the summaries.
of biomedical texts as assistive tools for reading (August et al., 2022).
## 6 Conclusions
We evaluate the ability of GPT3-D3 to faithfully summarize and simplify medical literature. The expert annotations we collect indicate that GPT3-D3 performs single-document tasks quite well, but struggles with multi-document summarization. This highlights the ability to aggregate across documents as a direction for future work. We release all data and annotations to facilitate such work in the medical space going forward.
## Limitations
This evaluation focused on expert manual assessments of model outputs and their factual accuracy. Domain expertise (in medicine) was invaluable for this task, but it is also expensive and therefore limited the scale of our evaluation. Consequently, all findings are derived from a modest sample (hundreds) of triple-annotated instances.
Another limitation here is that we have considered only articles describing _randomized controlled trials (RCTs)_. We focused on such articles because RCTs are the most reliable means of assessing medical interventions, and therefore inform the practice of evidence-based medicine; summarizing such articles is therefore critical to help physicians stay on top of the evidence. Moreover, RCTs provide a natural grounding with respect to factuality, given that all such trials will investigate the relative efficacy of an intervention for a particular condition (i.e., on a specific population of patients) and with respect to an outcome of interest. That said, this is restrictive by design, and our analysis has therefore excluded large swaths of other types of medical texts.
## Ethical Considerations
In Appendix D, we note the costs of hiring domain experts for annotation.
Large language models (such as GPT3-D3) have been shown capable of generating concise and fluent summaries. But these often contain factual inaccuracies. This poses unique risks in the domain of medicine, where inaccurate summaries of published evidence have the potential to (mis-)inform patient care. This work has attempted to empirically assess the tendency of models to introduce inaccuracies into summaries of medical literature by enlisting domain experts to identify and characterize omissions and errors in model generated summaries. Understanding such issues is a first step toward designing methods to mitigate them.
While we found that GPT3-D3 appears to produce summaries of single biomedical article abstracts that are reasonably factual, relying on such outputs still poses risks, and even in this setting we would caution against trusting model outputs without further verification at present. Moreover, we found that in the multi-document case--i.e., on the task of synthesizing evidence reported across multiple clinical trials--GPT3-D3 struggles to provide synopses that agree with reference (expert written) summaries. In sum, despite their ability to produce consistently plausible outputs, our view is that summaries of medical literature produced by LLMs should not yet be used to directly inform care given the risks of factual inaccuracies. More research is needed to better characterize the kinds of mistakes such models make, and ultimately to mitigate them.
## Acknowledgements
This research was partially supported by National Science Foundation (NSF) grants IIS-2145479 and RI-2211954, and by the National Institutes of Health (NIH) under the National Library of Medicine (NLM) grant 2R01LM012086.
|
2304.09283 | Sliding Block Hashing (Slick) -- Basic Algorithmic Ideas | We present {\bf Sli}ding Blo{\bf ck} Hashing (Slick), a simple hash table
data structure that combines high performance with very good space efficiency.
This preliminary report outlines avenues for analysis and implementation that
we intend to pursue. | Hans-Peter Lehmann, Peter Sanders, Stefan Walzer | 2023-04-18T20:44:56Z | http://arxiv.org/abs/2304.09283v1 | # Sliding Block Hashing (Slick)
###### Abstract
We present **Sli**ding Blo**ck** Hashing (Slick), a simple hash table data structure that combines high performance with very good space efficiency. This preliminary report outlines avenues for analysis and implementation that we intend to pursue.

## 1 Introduction

[The body of the introduction did not survive extraction; only its closing outline sentence remains below.]
After discussing related work in Section 5, we conclude in Section 6 with some possible avenues for future work.
## 2 Preliminaries
A hash table stores a set \(S\subseteq E=K\times V\) of \(n=|S|\) key-value pairs for arbitrary universes \(K\) and \(V\). The pairs are also called elements. Every key may only appear in one element, i.e. \(S\) is a functional relation. Closed hashing stores these elements in an array \(T[0..m-1]\) providing space for \(m\) elements.1 A hash function \(h\) applied to the keys helps finding these elements. In the following, we assume that \(h\) behaves like a truly random hash function.
Footnote 1: In this paper, \(a..b\) is a shorthand for \(\{a,\ldots,b\}\).
Our model of computation is the standard RAM model [33] allowing constant time operations on operands of size \(\mathcal{O}(\log n)\) where \(n\) is the input size (see e.g. [30, Section 2.2] for details).
## 3 Basic Sliding Block Hashing
In this section, we first introduce the basic data structure and the find operation in Section 3.1. Then, we explain the insertion operation in Section 3.2 and the bulk construction in Section 3.3. Finally, we give details on the deletion operation in Section 3.4.
### The Data Structure and Operation find
The basic idea behind Slick is very simple. We try to store most elements in table \(T\) as in closed hashing. The main hash function \(h\) maps elements to the range \(0..m/B-1\), i.e., to _blocks_ for which \(T\) has an average capacity of \(B\) available. Ideally (and unrealistically), block \(b_{i}\) would contain up to \(B\) elements and it would be stored in table entries \(T[iB..iB+B-1]\). Slick makes this realistic by storing _metadata_ that indicates the deviation from this ideal situation.
How to implement this precisely, opens a large design space. We now describe a simple solution with a number of tuning parameters. The elements of \(S\) mapped to block \(b_{i}\) are stored contiguously in a range of table entries starting at position \(iB+o_{i}\) where \(o_{i}\) is the _offset_ of \(b_{i}\) - blocks are allowed to _slide_. After this block, there may be a _gap_ of size \(g_{i}\) of unused table cells before the next block starts. Metadata explicitly stores the gap size.2 This has the added benefit, that, in contrast to previous closed hashing schemes, there is no need to explicitly represent an empty element.
Footnote 2: This can be implemented using very little additional space: We only need a single code-word for the metadata of a block to indicate that this block has a nonzero gap. In that case, we have an entire unused table cell available to store the gap size and the remaining metadata for that block. In particular, if we have \(k\)-bit thresholds, we can use \(2^{k}+1\) threshold values and set \(\hat{o}=2^{k}-2\). We then have \(|0..\hat{t}\times 0..\hat{o}|=2^{2k}-1\) leaving one code word for encoding a nonempty gap.
Metadata should use very little space. We therefore limit the maximum offset to a value \(\hat{o}\). We also want to support fast search and therefore limit the block size by parameter \(\hat{B}\). With these constraints on position and size of blocks, it may not be possible to store all elements of the input set \(S\) in table \(T\). We therefore allow some elements to be _bumped_ to a _backyard_ \(T^{\prime}\).3 The backyard can be any hash table. Since, hopefully, few elements will be bumped, space and time requirements of the backyard are of secondary concern for now. We
adopt the approach from the BuRR retrieval data structure [10] to base bumping decisions on _thresholds_: A threshold hash function \(\delta(k)\) maps keys to the range \(0..\hat{t}-1\). Metadata stores a threshold \(t_{i}\in 0..\hat{t}\) for block \(b_{i}\) such that elements with \(\delta(k)<t_{i}\) are bumped to \(T^{\prime}\). We also use the observation from BuRR that _overloading_ the table, i.e., choosing \(m<n\) helps to arrive at tables with very few empty cells.
The pseudocode in Figure 2 summarizes a possible representation of the above scheme and gives the resulting search operation. Figure 1 illustrates the data structure. The metadata array \(M\) contains an additional slot \(M[m/B]\) with a sentinel element helping to ensure that block ends can always be calculated and that no elements outside \(T\) are ever accessed.4 This implementation has tuning parameters \(m\), \(B\), \(\hat{B}\), \(\hat{o}\), and \(\hat{t}\). We expect that values in \(\Theta(B)\) will be good choices for \(\hat{B}\), \(\hat{o}\), and \(\hat{t}\). Concretely, we could for example choose \(\hat{B}=2B\) and \(\hat{o}=\hat{t}=B\). This leads to space overhead of \(\mathcal{O}(\log B)\) bits for the metadata of each block which can be amortized over \(B\) elements.
Footnote 4: We could also wrap these blocks back to the beginning of \(T\) as in other closed hashing schemes or extend \(T\) to accommodate slid blocks.
### Insertion
To insert an element \(e\) with key \(k\) into a Slick hash table, we first find the block \(i=h(k)\) to which \(e\) is mapped. Then we check whether the threshold for block \(i\) implies that \(e\) is bumped. In this case, \(e\) must be inserted into the backyard \(T^{\prime}\). Otherwise, the data structure invariants give us quite some flexibility how to insert an element. This implies some case distinctions but also allows quite efficient insertions even when the table is already almost full.
A natural goal is to insert \(e\) into block \(b_{i}\) if this is possible without bumping other elements. The pseudocode in Figure 3 describes one way to do this. This goal is unachievable when \(b_{i}\) already contains the maximal number of \(\hat{B}\) elements. In that case, we must bump some elements to make room for \(e\).5 Once more there are many ways to achieve this. We describe a fast and simple variant.6 We look for the smallest increase in the threshold of \(b_{i}\) that bumps at least one element from that block (including the new element \(e\) itself). We set
Figure 1: Illustration of the data structure. Input objects are colored by their hash function value. \(B=4,\hat{B}=5,m=16,n=15\).
the threshold accordingly and bump the elements implied by this change. Now, either \(e\) is bumped and we are done with the insertion or a free slot after \(b_{i}\) is available.
If block \(b_{i}\) is not filled to capacity \(\hat{B}\), we try to insert \(e\) there. However, this may not be directly possible because the gap behind \(b_{i}\) could be empty. In that case we can try to slide neighboring blocks to open such a gap. Algorithm 3 does that by first trying to slide \(b_{i}\) and some of its left neighbors by one position to the left. This may fail because before finding a nonempty gap, a block with offset \(0\) may be encountered that cannot be slid to the left.7 If sliding left failed, function \(\mathsf{slideGapFromRight}\) tries to slide blocks to the right of \(i\) by one position to the right. Once more, this may fail because blocks that already have maximum offset cannot be slid to the right. If there is a range of slidable blocks starting a \(b_{i+1}\) and ending at a block followed by a nonempty gap, the actual sliding can be done efficiently by moving only one element per block. Figure 4 gives an example. The first element is appended to the block, filling the first gap element and growing the gap of the previous block.
Footnote 7: Indeed, this will always fail when we use no deletions. Hence, the attempt to slide left can be omitted in that situation.
If neither sliding left nor sliding right can open a gap after block \(b_{i}\), the same bumping procedure described for full blocks is used. If sliding was successful (including the case of an empty right-slide for the case that \(b_{i}\) already had a nonempty gap), the gap after \(b_{i}\) is nonempty and element \(e\) can be appended to \(b_{i}\).
```
Class\(\mathsf{SlickHash}(m,B,\hat{B},\hat{o},\hat{t}:\mathbb{N}_{+},\,h:E\to 0..m/B-1)\)
  Class\(\mathsf{MetaData}=\overbrace{o:0..\hat{o}}^{\mathsf{offset}}\times\overbrace{g:0..\hat{B}}^{\mathsf{gap}}\times\overbrace{t:0..\hat{t}}^{\mathsf{threshold}}\)
  \(T:\mathsf{Array}\;[0..m-1]\) of \(E\) // main table
  \(M=(0,B,0)^{m/B}\circ(0,0,0):\mathsf{Array}\;[0..m/B]\) of \(\mathsf{MetaData}\)
  \(T^{\prime}\): HashTable // backyard
  Function\(\mathsf{blockStart}(i:\mathbb{N})\) return \(Bi+o_{i}\)
  Function\(\mathsf{blockEnd}(i:\mathbb{N})\) return \(Bi+B+o_{i+1}-g_{i}-1\)
  Function\(\mathsf{blockRange}(i:\mathbb{N})\) return \(\mathsf{blockStart}(i)..\mathsf{blockEnd}(i)\)
  // locate an element with key \(k\) and return a reference in \(e\)
  Function\(\mathsf{find}(k:\,K,\,e:\,E):\,\{\mathsf{true},\mathsf{false}\}\)
    \(i:=h(k)\)
    if \(\delta(k)<t_{i}\) then return \(T^{\prime}.\mathsf{find}(k,e)\) // bumped?
    if \(\exists j\in\mathsf{blockRange}(h(k)):\mathsf{key}(T[j])=k\) then \(e:=T[j];\) return true // found
    else return false // not found
```
**Figure 2** Pseudocode for a simple representation of Slick and the operation \(\mathsf{find}\).
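To make the layout concrete, here is a direct Python transcription of Figure 2, written as a sketch: metadata is kept in plain integer lists, the backyard is a dictionary, and the hash functions \(h\) and \(\delta\) are supplied by the caller.

```python
# Python sketch of the Slick representation and find (mirrors Figure 2).
class Slick:
    def __init__(self, m, B, h, delta):
        self.m, self.B, self.h, self.delta = m, B, h, delta
        nb = m // B
        self.T = [None] * m              # main table of (key, value) pairs
        self.o = [0] * (nb + 1)          # offsets, including sentinel o[nb]
        self.g = [B] * nb + [0]          # gaps; initially every block is empty
        self.t = [0] * (nb + 1)          # bumping thresholds
        self.backyard = {}               # stand-in for an arbitrary hash table

    def block_start(self, i):
        return self.B * i + self.o[i]

    def block_end(self, i):              # inclusive index of the last element
        return self.B * i + self.B + self.o[i + 1] - self.g[i] - 1

    def find(self, k):
        i = self.h(k)
        if self.delta(k) < self.t[i]:    # bumped?
            return self.backyard.get(k)
        for j in range(self.block_start(i), self.block_end(i) + 1):
            if self.T[j] is not None and self.T[j][0] == k:
                return self.T[j][1]
        return None                      # not found
```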
**Procedure**\(\text{insert}(e\): \(E)\)// insert element \(e\), noop if already present
\(k:=\mathsf{key}(e);\)\(i:=h(k)\)// current block
**if**\(\delta(k)<t_{i}\)**then**\(T^{\prime}.\text{insert}(e);\)**return**// \(e\) is already bumped
**if**\(\exists j\in\mathsf{blockRange}(h(k)):\mathsf{key}(T[j])=k\)**then**return**// already present
// remaining cases: "block too large" (bump elements from \(b_{i}\)) and "no empty slot usable" (try slideGapFromLeft/slideGapFromRight, else bump); the detailed pseudocode for these branches was lost to extraction garbling

**Figure 3** Pseudocode for insertion into Slick hash tables.

A tuning parameter \(\hat{s}\) can be used to limit the maximum
number of blocks to slide. Corollary 1 implies that \(\hat{s}=\mathcal{O}(\hat{B})\) might be a good choice.
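The right-slide can be sketched as follows; this follows the description above rather than the garbled figure, assumes the `Slick` fields from the earlier sketch, and precomputes block spans before mutating anything so that each moved element lands in a cell that has already been vacated.

```python
# Sketch of slideGapFromRight: find the first nonempty gap to the right of
# block i whose intervening blocks can all still slide, then open a gap after
# block i by moving one element per block, rightmost block first.
def slide_gap_from_right(s, i, o_max):
    nb = len(s.o) - 1                        # number of blocks (last entry is a sentinel)
    j = i + 1
    while j < nb and s.o[j] < o_max and s.g[j] == 0:
        j += 1
    if j >= nb or s.o[j] >= o_max or s.g[j] == 0:
        return False                         # no slidable range ending in a gap
    spans = [(s.block_start(b), s.block_end(b)) for b in range(i + 1, j + 1)]
    for start, end in reversed(spans):       # rightmost block moves first
        if end >= start:                     # skip empty blocks
            s.T[end + 1] = s.T[start]        # first element hops to the block's end
            s.T[start] = None
    for b in range(i + 1, j + 1):
        s.o[b] += 1
    s.g[j] -= 1                              # gap after block j shrinks ...
    s.g[i] += 1                              # ... and reappears after block i
    return True
```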
We can also consider a more sophisticated insertion routine supported by additional metadata. When routine insert from Figure 3 fails to find a gap to the left or the right, it has identified a _blocked cluster (bluster)_ of full blocks starting with a block with offset \(0\) and ending with a block with offset \(\hat{o}\). When the table is almost full, blusters can be large and they can persist over many insertions.8 Thus, it might help to mark blocks inside blusters. An insertion into a bluster can then immediately bump. If blocking flags are represented by a separate bit array, they can be updated in a bit parallel way.
Footnote 8: In the absence of deletions, blusters can only be broken when bumping happens to bump at least two elements.
### Bulk Construction
Figure 5 gives pseudocode for a simple greedy algorithm for building a Slick hash table. The elements are sorted by block (and threshold) and then processed block by block. The algorithm tries to fit as many elements into the block as permitted by the constraints that a block must contain at most \(\hat{B}\) elements, that the offset must not exceed \(\hat{o}\), and that the last available table cell is \(T[m-1]\). Violations of these constraints are repaired by bumping a minimal number of elements from the current block.
**Theorem 2**: _Construction of a Slick hash table using_ greedyBuild _can be implemented to run in deterministic time \(\mathcal{O}(|S|)\) plus the time for constructing the backyard. As an external memory algorithm,_ greedyBuild _has the same I/O complexity as sorting._
Proof.: Using LSD-radix-sort (e.g., see [30, Section 5.10]), \(S\) can be sorted in linear time.9
Figure 5: Pseudocode for greedy bulk construction of Slick hash tables.
All other parts of the algorithm are simple to implement in linear time.
For the external memory variant, we observe that sorting the input can pipeline its sorted output into the construction process, requiring internal memory only for one block at a time.
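Since only the caption of Figure 5 survived extraction, here is a hedged Python sketch of greedyBuild assembled from the description above; the data layout reuses the fields of the earlier `Slick` sketch.

```python
# Sketch of greedyBuild: place elements block by block, bumping the
# lowest-threshold elements whenever a constraint would be violated.
def greedy_build(elems, m, B, B_hat, o_hat, h, delta):
    nb = m // B
    buckets = [[] for _ in range(nb)]
    for key, val in elems:
        buckets[h(key)].append((key, val))
    T, backyard = [None] * m, {}
    o, g, t = [0] * (nb + 1), [0] * (nb + 1), [0] * (nb + 1)
    pos = 0                                   # next free table cell
    for i in range(nb):
        o[i] = pos - B * i                    # offset forced by earlier blocks
        blk = sorted(buckets[i], key=lambda e: delta(e[0]))
        # capacity allowed by block size, next block's max offset, table end
        cap = min(B_hat, B * (i + 1) + o_hat - pos, m - pos)
        need_bump = len(blk) - cap
        if need_bump > 0:
            # smallest threshold bumping at least need_bump elements;
            # ties below the threshold are bumped consistently
            t[i] = delta(blk[need_bump - 1][0]) + 1
            bumped = [e for e in blk if delta(e[0]) < t[i]]
            blk = blk[len(bumped):]
            for key, val in bumped:
                backyard[key] = val
        for e in blk:
            T[pos] = e
            pos += 1
        if pos < B * (i + 1):                 # leftover cells become the gap
            g[i] = B * (i + 1) - pos
            pos = B * (i + 1)
    return T, o, g, t, backyard
```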
greedyBuild seems to be a good heuristic for bumping few elements. We are also optimistic that we can prove that using slight overloading, the number of empty cells can be made very small (\(me^{-\Omega(\hat{o})}+o(m)\) assuming \(\hat{o}=\mathcal{O}(B)\) using an analysis similar to [10]). Moreover, for fixed values of the tuning parameters, \(B\), \(\hat{B}\), and \(\hat{o}\), we can derive the expected number of empty cells for large \(n\) using Markov chains. This allows us to study the impact of different loads \(n/m\) analytically.
However, greedyBuild is not optimal. For example, suppose we have \(B=\hat{o}=4\), \(o_{2}=3\), \(|b_{2}|=6\), and all elements in block \(b_{2}\) have the same threshold value. Then we have to bump all elements from \(b_{2}\) because otherwise \(o_{3}=3+6-4=5>4\), violating the constraint on the offset of \(b_{3}\). This leaves an empty cell after \(b_{2}\). However, it might have been possible to bump one additional element from \(b_{1}\), which would have resulted in \(o_{2}=2\) so that no elements would have to be bumped from \(b_{2}\). We plan to try heuristics that avoid empty cells in a block when they arise by going backwards and bumping elements from previous blocks. We can also compute an optimal placement in time \(\mathcal{O}(n\hat{o})\) using dynamic programming10.
Footnote 10: Roughly, we consider the blocks from left to right and compute, for each possible offset value \(o\), the least number of elements that need to be bumped to achieve an offset of at most \(o\).
### Deletion and Backyard Cleaning
Deleting an element is almost as simple as finding it. We just overwrite it with the last element of the block and then increment the gap size. Figure 6 gives pseudocode.
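Under the same sketched representation, deletion is only a few lines; as in Figure 6, the hole is filled with the block's last element and the gap grows by one cell.

```python
# Sketch of deletion (cf. Figure 6), using the Slick fields sketched earlier.
def delete(s, k):
    i = s.h(k)
    if s.delta(k) < s.t[i]:                  # bumped elements live in the backyard
        return s.backyard.pop(k, None) is not None
    start, end = s.block_start(i), s.block_end(i)
    for j in range(start, end + 1):
        if s.T[j] is not None and s.T[j][0] == k:
            s.T[j] = s.T[end]                # overwrite with the last element
            s.T[end] = None
            s.g[i] += 1                      # the gap after block i grows
            return True
    return False                             # key not present
```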
A problem that we have to tackle in applications with many insertions and deletions is that the routines presented so far bump elements but never unbump them. In a situation with basically stable \(|S|\), the backyard \(T^{\prime}\) will therefore keep growing. This effect can be countered by _backyard cleaning_. When there are enough empty cells in the primary table \(T\) to accommodate \(T^{\prime}\), we can reinsert all elements from \(T^{\prime}\) into \(T\). For each block \(b_{i}\), its threshold can be reset to \(0\) unless insertion for a backyard element causes bumping. An important observation here is that we never have to bump an element that was in \(T\) before backyard cleaning and that we do not have to look at threshold values of those elements. This implies that the \(\mathcal{O}(\hat{B})\) term from Corollary 1 can be replaced by the number of backyard elements inserted into the affected block. We also expect that we can design a cleaning operation that works more efficiently than inserting backyard elements one-by-one. For example, this can
Figure 6: Pseudocode for deletion from Slick Hash Tables.
be done by sorting the backyard similar to the build operation and then merging backyard and main table in a single sweep.
Now suppose the backyard has size \(\mathcal{O}(m/B)\). Then backyard cleaning can be implemented to run in expected time \(\mathcal{O}(m/B)\). If we choose the table size in such a way that \(\mathcal{O}(m)\) insert or delete operations cause only \(\mathcal{O}(m/B)\) bumps, then the backyard and its management will incur only a factor \(\mathcal{O}(1/B)\) overhead in space or time.
## 4 Variants and Advanced Features
We begin with two variants of Slick that may be of independent interest. Linear cuckoo hashing described in Section 4.1 is almost a special case of Slick that has advantages with respect to the memory hierarchy. Bumped Robin Hood hashing from Section 4.3 is conceptually even simpler than Slick and may allow even faster search at the price of slower insertions.
Section 4.4 is a key result of this section showing how to further reduce space consumption up to the point that the data structure gets succinct. Using bit parallelism allows that while maintaining constant operation times. Section 4.5 briefly discusses how Slick can be parallelized.
### Linear Cuckoo (Luckoo) Hashing
**Linear Cuckoo** (Luckoo) Hashing is closely related to Slick hashing but more rigidly binds the elements to blocks. Luckoo hashing subdivides the table \(T\) into blocks of size \(B\) and maintains the invariant that each unbumped element \(e\in S\) is either stored in block \(h(e)\) or in block \(h(e)+1\).11 This can be implemented as a special case of Slick hashing with \(\hat{o}=B\), \(\hat{B}=2B\), and the additional constraint that \(o_{i}+|b_{i}|\leq 2B\). The main advantage over general Slick hashing is that we can now profit from interleaving metadata with table entries in physical memory blocks of the machine. This way, a find-operation incurs at most two cache faults.12 Another potential advantage is that storage of offset and gap metadata is now optional as the data structure invariant already defines which \(2B\) table cells contain a sought element. This might help with SIMD-parallel implementations.
Footnote 11: A generalization could look at \(k\) consecutive blocks.
Footnote 12: Note that hardware prefetchers may help to execute two contiguous memory accesses more efficiently than two random ones.
Luckoo is also useful in the context of truly external memory hash tables, e.g., when used for hard disks or SSDs. We obtain a dynamic hash table that is able to almost completely fill the table and where find and delete operations access only 2 consecutive blocks. Insertions look at a consecutive range of blocks. No internal memory metadata is needed. Most external memory hash tables strive to support operations that look at only a single physical block most of the time. We can approximate that by subdividing a physical block into \(k\) Slick blocks. That way, find will only access \(1+\frac{1}{k}\) physical blocks in expectation.13
Footnote 13: For example, let us consider the case of an SSD with physical blocks of size 4096, \(k=8\), \(B=8\), and 63-bit elements. Then we have enough space left for 8 bits of metadata per block. On average, one in 8 find operations will have to access two physical blocks.
### Nonbumped Slick - Blocked Robin Hood Hashing (BioRoHo)
Robin Hood hashing [7] is a variant of linear probing that reduces the cost of unsuccessful searches by keeping elements sorted by their hash function value. Slick without bumping can
be seen as a variant of Robin Hood hashing, i.e., elements are sorted by \(h(k)\) and metadata tells us exactly where these elements are. We get expected search time \(\mathcal{O}(B)\) and expected insertion time bounded by \(\mathcal{O}(B+T_{\mathrm{RH}}/B)\) where \(T_{\mathrm{RH}}\) is the expected insertion time of Robin Hood hashing. For sufficiently filled tables and not too large \(B\), this is faster than basic Robin Hood hashing. Since the bumped version of Slick cannot be asymptotically slower, this also gives us an upper bound on the insertion cost of general Slick.
Actually implementing BloRoHo saves the cost and complications of bumping but pays with giving up worst-case constant find. It also has to take care that metadata appropriately represents all offsets which can get large in the worst case. Rehashing or other special case treatments may be needed.
### Bumped Robin Hood Hashing (BuRoHo)
We get another variant of Robin Hood hashing if we start with basic Robin Hood hashing without offset or gap metadata but allow bumping. We can then enforce the invariant that any unbumped element \(e\in S\) is stored in \(T[h(e)..h(e)+B-1]\) for a tuning parameter \(B\). As in linear probing, empty cells are indicated by a special element \(\perp\). Bumping information could be stored in various ways but the most simple way is to store one bit with every table cell \(i\) whether elements \(e\) with \(h(\mathsf{key}(e))=i\) are bumped.
Compared to Slick, BuRoHo more directly controls the range of possible table cells that can contain an element and it obviates the need to store offset and gap metadata. Searches will likely be faster than in Slick. However, insertions are considerably more expensive as we have to move around many elements and evaluate their hash functions. In contrast, Slick can skip entire blocks in constant time using the available metadata.
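A lookup sketch for BuRoHo follows, under two assumptions not spelled out above: the table carries \(B\) extra slack cells at its end, and a set bumped bit at cell \(p\) means all elements hashing to \(p\) are in the backyard.

```python
# Sketch of a BuRoHo lookup: an unbumped element with hash position p must lie
# in T[p .. p+B-1]; bumped_bit[p] says elements hashing to p were bumped.
def buroho_find(T, bumped_bit, backyard, B, h, k):
    p = h(k)
    if bumped_bit[p]:
        return backyard.get(k)
    for j in range(p, p + B):                # assumes len(T) >= p + B (slack)
        if T[j] is not None and T[j][0] == k:
            return T[j][1]
    return None
```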
### Succinct Slick
#### Quotienting
Cleary [8] describes a variant of linear probing that infers \(\log m\) bits of a key \(k\) from \(h(k)\). To adapt to displacements of elements, this requires a constant number of metadata bits per element. In Slick we can infer \(\log\frac{m}{B}\) key bits even more easily from \(h(k)\) since the metadata we store anyway already tells us where elements of a block are stored.
To make this work, the keys are represented in a randomly permuted way, i.e., rather than representing a key \(k\) directly, we represent \(\pi(k)\). We assume that \(\pi\) and its inverse \(\pi^{-1}\) can be evaluated in constant time and that \(\pi:K\to 0..|K|-1\) behaves like a random permutation. Using a chain of Feistel permutations [20, 24, 1], this assumption is at least as realistic as the assumption that a hash function behaves like a random function. Now it suffices to store \(k^{\prime}=\pi(k)\bmod m/B\) in block \(\pi(k)\)**div**\((m/B)\). To reconstruct a key \(k\) stored as \(k^{\prime}\) in block \(i\), we compute \(k=\pi^{-1}(im/B+k^{\prime})\).
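The permutation and the quotienting arithmetic can be sketched as follows. The two-halves Feistel construction is standard; the round function chosen here is an arbitrary illustrative hash, not a recommendation, and the division/modulus convention simply follows the formulas above.

```python
# Toy Feistel permutation on W-bit keys plus the quotienting arithmetic.
import hashlib

W = 32                       # key width in bits (assumed even)
HALF = W // 2
MASK = (1 << HALF) - 1

def _round(x, r):
    # Illustrative round function: a hash of (round index, half-key),
    # truncated to HALF bits.
    d = hashlib.blake2b(bytes([r]) + x.to_bytes(4, "big"), digest_size=4).digest()
    return int.from_bytes(d, "big") & MASK

def pi(k, rounds=4):
    L, R = k >> HALF, k & MASK
    for r in range(rounds):
        L, R = R, L ^ _round(R, r)
    return (L << HALF) | R

def pi_inv(y, rounds=4):
    L, R = y >> HALF, y & MASK
    for r in reversed(range(rounds)):
        L, R = R ^ _round(L, r), L
    return (L << HALF) | R

def quotient(k, q):          # q plays the role of m/B in the text
    p = pi(k)
    return p // q, p % q     # (block index, remainder actually stored)

def reconstruct(i, k_rem, q):
    return pi_inv(i * q + k_rem)

assert all(pi_inv(pi(k)) == k for k in range(1000))   # sanity check
```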
With this optimization, the table entries now are stored succinctly except for
1. \(\approx\log B\) bits per element that are lost because we only use bucket indices rather than table indices to infer information on the keys.
2. On top of this come \(\mathcal{O}(\log(B)/B)\) bits per element of metadata.
3. Space lost due to empty cells in the table.
4. Space for the backyard.
Items 1. and 2. can be hidden in \(o(1)\) allowed for succinct data structures. Items 3. and 4. become lower order terms when \(B\in\omega(1)\). Below, we will see how to do that while still having constant time operations.
#### Bit Parallelism
With the randomly permuted storage of keys, any subset of \(f\) key bits can be used as a fingerprint for the key. If we take \(\geq\log\hat{B}\) such fingerprint bits, the expected number of fingerprint collisions will be constant. Moreover, for \(f=\mathcal{O}(\log\hat{B})\), and \(\hat{B}=\mathcal{O}(\log(n)/\log\log n)\), we can use bit parallelism to process all fingerprints of a block in constant time. To also allow bit parallel access to the right fingerprints, we have to store the fingerprints separately from the remaining bits of the elements.14
Footnote 14: For a RAM model implementation of Slick, it seems most simple to have a separate fingerprint array that is manipulated analogously to the element array. Since this can cause additional cache faults in practice, we can also use the Luckoo variant from Section 4.1 with a layout where a physical (sub)block contains first metadata then fingerprints and finally the remaining data for exactly \(B\) elements. A compromise implements general Slick with separately stored metadata (hopefully fitting in cache) but stores \(B\) fingerprints and \(B\) elements in each physical block.
In algorithm theory, bit parallelism can do many things by just arguing that lookup tables solve the problem. However, this is often impractical since these tables incur a lot of space overhead and cache faults while processing only rather small inputs. However, the operations needed for bit parallel Slick are very simple and even supported by SIMD-units of modern microprocessors.
Specifically, find and delete need to replicate the fingerprint of the sought key \(k\) and then do a bit parallel comparison with the fingerprints of block \(h(k)\). Operation insert additionally needs bit parallelism for bumping. By choosing fingerprints large enough to fully encode the thresholds \(\delta(\cdot)\), we can determine the elements with minimal fingerprint in a block using appropriate vector-min and vector-compare operations. The elements with minimal fingerprint then have to be bumped one at a time, which is possible in constant expected time as the expected number of minima will be a constant for \(f\geq\log\hat{B}\).
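For illustration, the zero-byte detection underlying such a bit-parallel probe can be written with a classic SWAR trick, here with 8-bit fingerprints packed into one 64-bit word; a production version would use SIMD compare instructions instead.

```python
# SWAR sketch: find which of 8 packed 8-bit fingerprints equal the query.
LO = 0x0101010101010101
LO7 = 0x7f7f7f7f7f7f7f7f
M64 = (1 << 64) - 1

def match_mask(packed, fp):
    x = packed ^ (fp * LO)                   # a zero byte marks a match
    # exact zero-byte detection: 0x80 in every matching byte, no false hits
    return ~(((x & LO7) + LO7) | x | LO7) & M64

def matching_slots(packed, fp):
    m, slots = match_mask(packed, fp), []
    while m:
        low = m & -m                         # isolate lowest set bit
        slots.append((low.bit_length() - 1) // 8)
        m ^= low
    return slots

# Example: fingerprint 0xAB sits in byte slots 0 and 5 of this word.
word = 0xAB | (0x17 << 16) | (0xAB << 40)
assert matching_slots(word, 0xAB) == [0, 5]
```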
We can now extend Corollary 1:
For \(\hat{B}=\mathcal{O}(\log(n)/\log\log n)\) and \(\log|E|=\mathcal{O}(\log n)\), operations find and delete can be implemented to work in constant expected time. The same holds for operation insert if a variant is chosen that slides a constant number of blocks in expectation.
We believe that we can show that constant time operations can be maintained, for example by choosing \(m=n+\Theta(n/B)\) when in expectation \(\mathcal{O}(n/B)\) elements will be bumped. In this situation, we can afford to choose a nonsuccinct representation for the backyard. Overall, for \(B=\mathcal{O}(\log(n)/\log\log n)\) (and \(\hat{B}=2B\), \(\hat{o}=\Theta(B)\), \(\hat{t}>\log B\)) we will get
\[\log\binom{|E|}{n}+\mathcal{O}\left(n\left(\overbrace{\log B}^{1.}+\overbrace{\frac{\log B}{B}}^{2.}+\overbrace{\frac{\log n}{B}}^{3.}+\overbrace{\frac{\log n}{B}}^{4.}\right)\right)=\log\binom{|E|}{n}+\mathcal{O}(n\log\log n)\]
bits of space consumption. Deletion and backyard cleaning can be done as described in Section 3.4.
Even lower space consumption seems possible if we overload the table (perhaps \(m=n-\Theta(B)\)) and use a succinct table for the backyard (perhaps using non-overloaded Slick). To maintain constant expected insertion time, we can scan the metadata in a bitparallel way in functions slideGapFromLeft/Right. In addition, we limit the search radius to \(\mathcal{O}(B)\) blocks. Only if we are successful, we incur a nonconstant cost of \(\mathcal{O}(B)\) for actually sliding blocks. Now consider an insert-only scenario. The first \(n-\mathcal{O}(n/B)\) elements can be inserted in
constant expected time as in the non-overloaded scenario. The remaining \(\mathcal{O}(n/B)\) elements will incur insertion cost \(\mathcal{O}(B)\), i.e., \(\mathcal{O}(n)\) in total.
### Parallel Processing
Many find operations can concurrently access a Slick hash table. Operations insert and delete require some kind of locking as do many other hash table data structures (but, notably, not linear probing; e.g., see [30, Section 4.6]). Often locking is implemented by subdividing the table into an array of segments that are controlled by one lock variable.
We can parallelize operation build and bulk insert/delete operations by subdividing the table into segments so that different threads perform operations on separate segments of the table. A simple implementation could enforce independent segments by bumping data that would otherwise be slid across segment boundaries. A more sophisticated implementation could use a postprocessing phase that avoids some of this bumping (perhaps once more in parallel).
## 5 More Related Work
There is a huge amount of work on hash tables. We do not attempt to give a complete overview but concentrate on approaches that are direct competitors or have overlapping ideas.
Compared to linear probing [28, 16, 31], Slick promises faster operations when the table is almost full. In particular, unsuccessful search, insertion and deletion should profit. Search and delete also offer worst case deterministic performance guarantees when an appropriate backyard is used, in contrast to the expected bounds of linear probing.15 An advantage for robust library implementations of hashing is that Slick does not require a special empty element. Advantages of linear probing over Slick are simpler implementation, lockfree concurrent operation, and likely higher speed when ample space is available.
Footnote 15: There are high-probability logarithmic bounds for linear probing that require fairly strong hash functions though [34].
_Robin Hood hashing_[7] improves unsuccessful search performance of linear probing at the expense of slower insertion. Slick and our bumped Robin Hood variant described in Section 4.3 go one step further - bumping obviates the need to scan large clusters of elements during search and metadata allows skipping them block-wise during insertion.
_Hopscotch hashing_[14] and related approaches [32] store per-cell metadata to accelerate search of linear probing. We see Slick as an improvement over these techniques as it stores much less metadata with better effect - once more because, thanks to bumping, clusters are not only managed but their negative effect on search performance is removed.
_Cuckoo hashing_[26, 12, 9, 21] is a simple and elegant way to achieve very high space efficiency and worst case constant find-operations. Its governing invariant is that an element must be located in one of two (or more) blocks determined by individual hash functions. Slick and linear cuckoo hashing (Luckoo) described in Section 4.1 achieve a similar effect by mostly only accessing a single range of table cells and thus improve locality and allow faster insertion.
_Bumping and backyards_ have been used in many forms previously. _Multilevel adaptive hashing_[6] and _filter hashing_[12] bump elements along a hierarchy of shrinking layers. These approaches do not maintain explicit bumping information which implies that find
has to search all levels. In contrast, _filtered retrieval_[23] stores several bits of bumping information per element which is fast and simple but requires more space than the per-block bumping information of Slick. In some sense most similar to Slick is _bumped ribbon retrieval (BuRR)_[10] which is not a hash table, but a static retrieval data structure whose construction relies on solving linear equation systems.
_Separator hashing_[19, 13, 18] is an approach to external memory hashing that stores per bucket bumping information similar to the thresholds of Slick. However, by rehashing bumped elements into the main table, overloading cannot be used. Also, Slick's approach of allowing blocks to slide allows a more flexible use of available storage.
_Backyard cuckoo hashing_ hashes elements to statically allocated blocks (called bins there) and bumps some elements to a backyard when these blocks overflow. There is no bumping metadata. The backyard is a cuckoo hash table whose insertion routine is modified. It tries to terminate cuckoo insertion by moving elements back to the primary table. When plugging in concrete values, the space efficiency of this approach suffers from a large number of empty table entries. For example, to achieve space overhead below 10 %, this approach uses blocks of size at least 333.
_Iceberg hashing_ also hashes elements to statically allocated blocks. Metadata counts overflowing elements but this implies that find still has to search both main table and backyard for overflowing blocks. Iceberg hashing also needs much larger blocks than Slick since no sliding or other balancing approaches besides bumping are used. For example, a practical variant of iceberg hashing [27] uses \(B=64\) and still has 15 % empty cells. A theoretical variant that achieves succinctness [3] uses blocks of size \(\mathcal{O}\big{(}\log^{2}n\big{)}\) and uses complex metadata inside blocks to allow constant time search.
There has been intensive further theoretical work on achieving succinctness and constant time operations. We refer to [3, 5] for recent overviews. Slick does not achieve all the features mentioned in these results, e.g., with respect to stability or high probability bounds. However, Slick is not only simpler and likely more practical than these approaches, but may also improve some of the crucial bounds. For example, the main result in [5] is a tradeoff between query time proportional to a parameter \(k\) and per element space overhead of \(\mathcal{O}(\log^{(k)}n)\) bits, where Slick achieves a similar effect by simply choosing an appropriate block size without direct impact on query time. Future (theoretical) work on Slick could perhaps achieve \(\mathcal{O}(1)\) bits of overhead per element by exploring the remaining flexibility in arranging elements within a block, which can encode \(\log B-\mathcal{O}(1)\) bits of information per element using a standard trick of implicit algorithms.
_PaCHash_[17] is a static hash table that allows storage of the elements without any gaps using a constant number of metadata bits per block. This even works for objects of variable size. It seems difficult though to make PaCHash dynamic and PaCHash-find is likely to be slower as it needs predecessor search in an Elias-Fano encoded monotone sequence of integers in addition to scanning a block of memory.
A technique resembling the sliding approach in Slick is sparse tables used for representing sorted sequences [15]. Here a sorted array is made dynamic by allowing for empty cells in the array. Insertion rearranges the elements to keep the gaps well distributed. Slick can slide faster as it is not bound to keep the elements sorted, and bumping further reduces the high reorganization overhead of sparse tables.
## 6 Conclusions and Future Work
With Slick (and its variants Luckoo and blocked/bumped Robin Hood), we have described an approach to obtain hash tables that are simple, fast and space efficient. We are in the process of implementing and analyzing Slick. This report already outlines a partial analysis but we need to get a more concrete grip on how the insertion time depends on the number of empty slots.
Several further algorithmic improvements and applications suggest themselves. In particular, we believe that Slick can be adapted to be space efficient also when the final size of the table is not known in advance. We believe that Slick can be used to implement a space efficient approximate membership query filter (AMQ, aka Bloom filter). Concretely, space efficient AMQs can be represented as a set of hash values (this can also be viewed as a _compressed single shot Bloom filter_) [29]. The successful dynamic _quotient filter_ AMQ [25, 4, 22] can be viewed as an implementation of this idea using Cleary's compact hashing [8]. Doing this with Slick instead promises a better space-performance tradeoff.
On the practical side, an interesting question is whether Slick could help to achieve better hardware hash tables, as its simple find function could in large parts be mapped to hardware (perhaps with a software handled FAIL for bumped elements). Existing hardware hash tables (e.g., [11]) seem to use a more rigid non-slidable block structure.
Acknowledgements. The authors would like to thank Peter Dillinger for early discussions eventually leading to this paper. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 882500). Stefan Walzer is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), grant 465963632.
|
2303.14513 | Shadows in dyonic Kerr-Sen black holes | Black holes with dyonic charges in Einstein-Maxwell-dilaton-axion supergravity theory are revisited in the context of black hole shadows. We consider static as well as rotating (namely the dyonic Kerr-Sen) black holes. The matter stress-energy tensor components, sourced by the Maxwell, axion and dilaton fields satisfy the standard energy conditions. The analytical expressions for the horizon and the shadow radius of the static spacetimes demonstrate their dependence on $P^2+Q^2$ ($P$, $Q$ the magnetic and electric charges, respectively) and the mass parameter $M$. The shadow radius lies in the range $2M <R_{shadow}<3\sqrt{3} M$ and there is no stable photon orbit outside the horizon. Further, shadows cast by the rotating dyonic Kerr-Sen black holes are also studied and compared graphically with their Kerr-Newman and Kerr-Sen counterparts. Deviation of the shadow boundary is prominent with the variation of the magnetic charge, for the relatively slowly rotating dyonic Kerr-Sen spacetimes. We test any possible presence of a magnetic monopole charge in the backdrop of recent EHT observations for the supermassive black holes M87$^*$ and Sgr A$^*$. Deviation from circularity of the shadow boundary ($\Delta C$) and deviation of the average shadow radius from the Schwarzschild shadow radius (quantified as the fractional deviation parameter $\delta$) are the two observables used here. Observational bound on $\Delta C$ (available only for M87$^*$) is satisfied for all theoretically allowed regions of parameter space and thus cannot constrain the parameters. The observational bound on $\delta$ available for Sgr A$^*$ translates into an upper limit on any possible magnetic monopole charge linked to Sgr A$^*$ and is given as $P\lesssim 0.873\, M$. Such a constraint on $P$ is however expected to be far more stringent for other astrophysical tests. | Soumya Jana, Sayan Kar | 2023-03-25T16:38:08Z | http://arxiv.org/abs/2303.14513v3 |
# Shadows in dyonic Kerr-Sen black holes
###### Abstract
Black holes with dyonic charges in Einstein-Maxwell-dilaton-axion supergravity theory are revisited in the context of black hole shadows. We consider static as well as rotating (namely the dyonic Kerr-Sen) black holes. The matter stress-energy tensor components, sourced by the Maxwell, axion and dilaton fields satisfy the standard energy conditions. The analytical expressions for the horizon and the shadow radius of the static spacetimes demonstrate their dependence on \(P^{2}+Q^{2}\) (\(P\), \(Q\) the magnetic and electric charges, respectively) and the mass parameter \(M\). The shadow radius lies in the range \(2M<R_{shadow}<3\sqrt{3}M\) and there is no stable photon orbit outside the horizon. Further, shadows cast by the rotating dyonic Kerr-Sen black holes are also studied and compared graphically with their Kerr-Newman and Kerr-Sen counterparts. Deviation of the shadow boundary is prominent with the variation of the magnetic charge, for the relatively slowly rotating dyonic Kerr-Sen spacetimes. We test any possible presence of a magnetic monopole charge in the backdrop of recent EHT observations for the supermassive black holes M87\({}^{*}\) and Sgr A\({}^{*}\). Deviation from circularity of the shadow boundary (\(\Delta C\)) and deviation of the average shadow radius from the Schwarzschild shadow radius (quantified as the fractional deviation parameter \(\delta\)) are the two observables used here. Observational bound on \(\Delta C\) (available only for M87\({}^{*}\)) is satisfied for all theoretically allowed regions of parameter space and thus cannot constrain the parameters. The observational bound on \(\delta\) available for Sgr A\({}^{*}\) translates into an upper limit on any possible magnetic monopole charge linked to Sgr A\({}^{*}\) and is given as \(P\lesssim 0.873\,M\). Such a constraint on \(P\) is however expected to be far more stringent for other astrophysical tests.
## I Introduction
The Reissner-Nordstrom (RN) geometry representing the gravitational field due to a charged massive object is among the earliest known exact solutions in General Relativity coupled to electromagnetism, i.e. the Einstein-Maxwell theory. A straightforward generalisation of this solution is the dyonic RN spacetime [1] which can be written down by just replacing the \(Q^{2}\) in RN spacetime with \(P^{2}+Q^{2}\), where \(P\) represents the 'magnetic' charge and \(Q\) is its 'electric' counterpart. However, for the dyonic solution, the definition of the electromagnetic potential \(A_{i}\) is a little tricky: one needs, as expected for a dyon, a two-patch definition, one for the northern hemisphere and the other for the southern. The standard electric-magnetic duality which arises when both magnetic and electric charges are present keeps the solution unchanged. The horizons and other features for the dyonic RN spacetime resemble those for the usual RN geometry modulo the presence of the extra magnetic charge.
It is also known that additional matter fields (other than Maxwell) such as the dilaton and/or the axion appear in the context of supergravity theories or in low energy effective actions which emerge out of full string theory [2; 3; 4]. In such scenarios too one expects dyonic solutions representing gravitational fields of such objects. Among various known solutions [2; 3; 4; 5; 6; 7] there are static, spherically symmetric ones as well as stationary spacetimes wherein rotation is present. The purpose of this article is to revisit such known solutions without and with rotation. Our primary aim is to learn how the various theory parameters (e.g. electric, magnetic as well as other charges) control the nature and profile of the shadow/silhouette created by the gravitational field representing such solutions. We also try to see if any meaningful constraint can be placed on the various charges, using the known shadow observations for the supermassive compact object present in M87\({}^{*}\)[8; 9; 10] and for Sgr A* [11; 12]. Though dyonic scenarios presently have little to do with observations in other contexts, we will see how one may place bounds on their viability through shadow observations. In other words, we try to show that what is seen in the images may also be explained using hypothetical constructs which by no means can be ruled out altogether, unless other observations imply mismatches and contradictions.
Shadows in Kerr-Sen black holes [4] have already been studied by several authors [13; 14; 15]. The rotating version of the dyonic black holes in Einstein-Maxwell-dilaton theory [2; 16] and its shadows were studied in [17]. In [18], the authors investigated shadows
of regular (Bardeen) black holes having magnetic monopole charge sourced by nonlinear electrodynamics coupled to GR. The presence of axionic hair or the Kalb-Ramond field and its influence on the shadow of M87\({}^{*}\) was investigated in [19]. In [20], the authors investigated the effect of QED on the shadows of static black holes with magnetic monopoles. There are also several other studies on the shadows of hairy black holes. For example, the authors of [21] studied the shadows cast by rotating black holes with anisotropic matter fields which could describe an extra \(U(1)\) field as well as diverse dark matters. Studies on the shadows of braneworld black holes such as in [22; 23] are among other examples. For a more recent study on the shadows of black holes in extended or alternative gravity theories, in the light of the observations of Sgr A\({}^{*}\), see [24].
Our article is organised as follows. In Section II we recall the static black hole solutions and discuss the energy conditions in Einstein-Maxwell-dilaton-axion (EMDA) supergravity theory. Section III provides a summary of the corresponding stationary solutions (dyonic) which include rotation. Shadow calculations and related details are presented in Section IV and connections/comparisons with observations are outlined in Section V. The final Section VI is a summary with concluding remarks.
## II Black holes in Einstein-Maxwell-Dilaton-Axion Supergravity Theory
The Einstein-Maxwell-Dilaton-Axion (EMDA) supergravity theory is described by the action [6]
\[S_{\rm EMDA}=\int{\rm d}^{4}x\sqrt{-g}\left[R-\frac{1}{2}(\partial\Phi)^{2}-\frac{1}{2}e^{2\Phi}(\partial\xi)^{2}-e^{-\Phi}F^{2}+\xi F_{\mu\nu}\tilde{F}^{\mu\nu}\right] \tag{2.1}\]
where \(\Phi\) and \(\xi\) are the dilaton and axion fields, \(F_{\mu\nu}\) is the usual electromagnetic field tensor, \(F^{2}=F_{\mu\nu}F^{\mu\nu}\), and \(\tilde{F}^{\mu\nu}=\frac{1}{2\sqrt{-g}}\epsilon^{\mu\nu\alpha\beta}F_{\alpha\beta}\) is the dual electromagnetic field tensor. The equations of motion of the dilaton and axion fields are obtained as
\[\Box\Phi-e^{2\Phi}(\partial\xi)^{2}+e^{-\Phi}F^{2}=0, \tag{2.2}\]
and
\[\Box\xi+2\nabla^{\mu}\Phi\nabla_{\mu}\xi+e^{-2\Phi}F_{\mu\nu}\tilde{F}^{\mu\nu}=0. \tag{2.3}\]
The equation of motion for the electromagnetic vector potential \(A_{\mu}\) (where \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\)) is obtained as
\[\nabla_{\mu}\left(-e^{-\Phi}F^{\mu\nu}+\xi\tilde{F}^{\mu\nu}\right)=0, \tag{2.4}\]
along with the usual Bianchi identity,
\[\nabla_{\mu}\tilde{F}^{\mu\nu}=0. \tag{2.5}\]
The equation of motion for the metric tensor \(g_{\mu\nu}\) is obtained by varying the action (1) with respect to \(g_{\mu\nu}\). We get
\[R_{\mu\nu}=\frac{1}{2}\nabla_{\mu}\nabla_{\nu}\Phi+\frac{1}{2}e^{2\Phi}\nabla_{\mu}\xi\nabla_{\nu}\xi+2e^{-\Phi}F_{\mu\alpha}F_{\nu}{}^{\alpha}-\frac{1}{2}g_{\mu\nu}e^{-\Phi}F^{2}, \tag{2.6}\]
where \(R_{\mu\nu}\) are the Ricci tensor components.
### Static Black Hole solution
The static black hole solution in such a system has already been obtained in [6], where the authors used symmetry transformations on the axion and dilaton fields to obtain its form. In this section, we first outline the derivation of the same solution by directly solving the Einstein field equations. Thereafter, we analyze the structure of the black hole solution.
We consider the ansatz for the spherically symmetric static line element
\[\mathrm{d}s^{2}_{static}=-\Delta^{2}(R)\mathrm{d}t^{2}+\frac{\psi^{2}(R)}{\Delta^{2}(R)}\mathrm{d}R^{2}+R^{2}\left(\mathrm{d}\theta^{2}+\sin^{2}\theta\mathrm{d}\phi^{2}\right). \tag{2.7}\]
We also assume nonvanishing components of the electromagnetic field tensor
\[F_{01}=-F_{10}=\mathcal{E}(R),\ \ \ \ F_{23}=-F_{32}=\mathcal{B}(R)\sin\theta. \tag{2.8}\]
Then Eq. (2.4) becomes
\[\left(\frac{e^{-\Phi}\mathcal{E}R^{2}}{\psi}+\xi\mathcal{B}\right)^{\prime}=0, \tag{2.9}\]
where the prime (\({}^{\prime}\)) denotes the derivative with respect to the radial coordinate \(R\). The Bianchi identity is satisfied for \(\mathcal{B}(R)=P\) (a constant) and \(P\) is therefore identified as the magnetic charge since
\[\tilde{F}^{10}=-\frac{P}{R^{2}\psi}. \tag{2.10}\]
Integrating Eq. (2.9), we get
\[{\cal E}=\psi e^{\Phi}\left(\frac{Q-\xi P}{R^{2}}\right), \tag{2.11}\]
where the integration constant \(Q\) is identified as the electric charge, since, at large \(R\), \({\cal E}\sim Q/R^{2}\). The equations of motion [Eqs.(2.2)-(2.3)] for the dilaton and the axion field become
\[\left(\frac{\Delta^{2}R^{2}\Phi^{\prime}}{\psi}\right)^{\prime}=2e^{-\Phi}\psi R ^{2}\left(\frac{{\cal E}^{2}}{\psi^{2}}-\frac{P^{2}}{R^{4}}\right)+e^{2\Phi} \frac{R^{2}\Delta^{2}}{\psi}\xi^{\prime 2}, \tag{2.12}\]
and
\[\left(\frac{\Delta^{2}R^{2}\xi^{\prime}}{\psi}\right)^{\prime}=-2\frac{\Delta ^{2}R^{2}}{\psi}\Phi^{\prime}\xi^{\prime}-4e^{-2\Phi}P{\cal E}. \tag{2.13}\]
Note from Eq. (2.13) that \(\xi=0\) requires \(P=0\) or \(Q=0\). From Eq. (2.6), we get three non-vanishing components, which are
\[R_{00}=\frac{\Delta^{4}}{\psi^{2}}\left[\frac{{\Delta^{\prime}}^ {2}}{\Delta^{2}}-\frac{\Delta^{\prime}\psi^{\prime}}{\Delta\psi}+\frac{( \Delta^{\prime}R^{2})^{\prime}}{\Delta R^{2}}\right]=e^{-\Phi}\Delta^{2}\left( \frac{{\cal E}^{2}}{\psi^{2}}+\frac{P^{2}}{R^{4}}\right), \tag{2.14}\] \[R_{11}=\frac{2\psi^{\prime}}{R\psi}-\frac{{\Delta^{\prime}}^{2} }{\Delta^{2}}+\frac{\Delta^{\prime}\psi^{\prime}}{\Delta\psi}-\frac{(\Delta^{ \prime}R^{2})^{\prime}}{\Delta R^{2}}=\frac{1}{2}{\Phi^{\prime}}^{2}+\frac{1}{ 2}e^{2\Phi}\xi^{\prime 2}-e^{-\Phi}\frac{\psi^{2}}{\Delta^{2}}\left(\frac{{\cal E}^{2}}{ \psi^{2}}+\frac{P^{2}}{R^{4}}\right),\] (2.15) \[R_{22}=1-2R\frac{\Delta\Delta^{\prime}}{\psi^{2}}+\frac{\Delta^ {2}R^{2}}{\psi^{3}}\left(\frac{\psi}{R}\right)^{\prime}=R^{2}e^{-\Phi}\left( \frac{{\cal E}^{2}}{\psi^{2}}+\frac{P^{2}}{R^{4}}\right). \tag{2.16}\]
Using Eqs. (2.14) and (2.15), we get
\[\frac{2}{R}\frac{\psi^{\prime}}{\psi}=\frac{1}{2}\Phi^{\prime 2}+\frac{1}{2}e^ {2\Phi}\xi^{\prime 2}. \tag{2.17}\]
Demanding proper asymptotic behaviour, i.e. \(\Phi\to 0\) and \(\xi\to 0\), \(\psi\to 1\) for \(R\rightarrow\infty\), we assume,
\[\frac{\psi^{\prime}}{\psi}=\frac{\sigma^{2}}{R\left(R^{2}+\sigma^{2}\right)}, \hskip 14.226378pt\mbox{or,}\hskip 28.452756pt\psi^{2}=\frac{R^{2}}{R^{2}+ \sigma^{2}}, \tag{2.18}\]
where \(\sigma^{2}\) is a constant. From Eqs. (2.14) and (2.16), we get
\[\left[\frac{1}{2}\frac{\left(\Delta^{2}R^{2}\right)^{\prime}}{\psi}\right]^{ \prime}=\psi. \tag{2.19}\]
Solving this equation with the assumption on \(\psi\) given in Eq. (2.18), we obtain the solution for \(\Delta(R)\) as
\[\Delta^{2}(R)=1-\frac{2M\sqrt{R^{2}+\sigma^{2}}}{R^{2}}+\frac{P^{2}+Q^{2}}{R^ {2}}, \tag{2.20}\]
where the integration constants are identified as the mass \(M\) and the sum of the squares of the charges \((P^{2}+Q^{2})\). The black hole resembles the Reissner-Nordstrom black hole asymptotically. Using Eqs. (2.18) and (2.20), the solutions of the equations of motion for the dilaton and axion fields are
\[e^{\Phi}=1+\frac{2d}{R^{2}}\sqrt{R^{2}+k^{2}+d^{2}}+\frac{2(k^{2}+d^{2})}{R^{2}}, \tag{2.21}\] \[\xi=\frac{2k\sqrt{R^{2}+k^{2}+d^{2}}}{R^{2}+2d\sqrt{R^{2}+k^{2}+d^{2}}+2(k^{2}+d^{2})}, \tag{2.22}\]
where \(d=\frac{(P^{2}-Q^{2})}{2M}\) and \(k=\frac{PQ}{M}\) are the dilaton and axion charges, respectively, and \(\sigma^{2}=k^{2}+d^{2}\).
In the absence of both electric and magnetic charges (i.e. \(P=Q=0\)), the dilaton and axion charges also vanish, i.e. \(k=d=0\), and we recover the Schwarzschild black hole. For any non-zero \(P\) and/or \(Q\), the line element does not resemble the Reissner-Nordstrom black holes. This signifies that these black holes are hairy. Another distinguishing feature of these black holes is that they have a single horizon, unlike the Reissner-Nordstrom case. Using the relation \(k^{2}+d^{2}=(P^{2}+Q^{2})^{2}/4M^{2}\) in \(\Delta^{2}(R_{hz})=0\), we get the horizon radius as
\[R_{hz}=2M\sqrt{1-\frac{P^{2}+Q^{2}}{2M^{2}}}. \tag{2.23}\]
This feature also differs from the Reissner-Nordstrom and the dyonic dilaton black holes, where one notices a double horizon in general and an extremal horizon in a special situation. However, the static version of the Kerr-Sen black hole shares the similar feature of a single horizon. For black holes,
\[M\geq\sqrt{\frac{P^{2}+Q^{2}}{2}}, \tag{2.24}\]
otherwise, we have naked singularities. This is illustrated in Fig. 1.
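As a quick numerical illustration, the following Python sketch evaluates the horizon radius of Eq. (2.23) and classifies parameter points using the black hole condition of Eq. (2.24); the parameter values below are illustrative choices, not taken from the text.

```python
# Illustrative check of Eqs. (2.23) and (2.24); geometrized units G = c = 1.
import numpy as np

def horizon_radius(M, P, Q):
    """Event-horizon radius of the static dyonic black hole, Eq. (2.23).
    Returns NaN when P^2 + Q^2 > 2 M^2, i.e. for a naked singularity."""
    arg = 1.0 - (P**2 + Q**2) / (2.0 * M**2)
    return 2.0 * M * np.sqrt(arg) if arg >= 0.0 else np.nan

M = 1.0
for P, Q in [(0.0, 0.0), (0.6, 0.6), (1.2, 0.9)]:
    Rhz = horizon_radius(M, P, Q)
    kind = "black hole" if np.isfinite(Rhz) else "naked singularity"
    print(f"P={P}, Q={Q}: R_hz = {Rhz:.4f} ({kind})")
# P = Q = 0 recovers the Schwarzschild value R_hz = 2M, while the last
# point violates Eq. (2.24) and lies outside the shaded disk of Fig. 1.
```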
### Energy conditions
By identifying the non-zero components of the stress-energy tensor as \(T^{0}{}_{0}=-\rho\), \(T^{1}{}_{1}=\tau\), and \(T^{2}{}_{2}=T^{3}{}_{3}=p\), where \(\rho\) (energy density), \(\tau\) (radial pressure), and \(p\) (tangential pressure) are defined in the orthonormal frame basis, and using the Einstein field equations (\(T_{\mu\nu}=G_{\mu\nu}/8\pi G\)), we analyze all the energy conditions. We find that:
\((a)\) The Null Energy Conditions (NEC), i.e. \(\rho+\tau\geq 0\) and \(\rho+p\geq 0\) are satisfied when \(\Delta^{2}(R)\geq 0\). This implies that the NEC is satisfied on and outside the horizon of the black hole. For a naked singularity, the NEC is satisfied for all \(R\).
(\(b\)) The Weak Energy Condition (WEC) implies \(\rho\geq 0\) in addition to the NEC. Using the Einstein field equations,
\[\rho=\frac{1}{8\pi G}\left[\frac{1}{R^{2}}-\frac{(\Delta^{2})^{\prime}}{R\psi^{2}}-\frac{\Delta^{2}}{R^{2}\psi^{2}}+2\frac{\Delta^{2}\psi^{\prime}}{R\psi^{3}}\right]. \tag{2.25}\]
One can check that \(\rho\geq 0\) for \(R\geq R_{hz}/\sqrt{3}\) for black holes, and for all \(R\) in the case of naked singularities (a numerical check is sketched after this list). Thus, the WEC is also satisfied on and outside the horizon of the black holes.
(\(c\)) The Strong Energy Condition (SEC), i.e. \(\rho+\tau+2p\geq 0\) is satisfied for all \(R\) irrespective of black holes or naked singularities.
(\(d\)) The Dominant Energy Conditions (DEC), i.e. \(\rho\geq 0\), \(\rho\geq|\tau|\), and \(\rho\geq|p|\) are satisfied on and outside of the black hole horizon, and for all \(R\) in case of naked singularities.
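The WEC statement in item (\(b\)) can be probed numerically. The sketch below, with assumed illustrative parameter values \(M=1\) and \(P^{2}+Q^{2}=M^{2}\), assembles \(8\pi G\rho\) from Eqs. (2.18), (2.20) and (2.25) with sympy and scans its sign on \(R\geq R_{hz}/\sqrt{3}\).

```python
# Numerical scan of the energy density, Eq. (2.25), for an assumed
# parameter point; only the sign of 8*pi*G*rho matters here.
import numpy as np
import sympy as sp

R, M, Zc2 = sp.symbols('R M Zc2', positive=True)
sigma2 = Zc2**2 / (4 * M**2)                    # sigma^2 = (P^2+Q^2)^2 / 4M^2
Delta2 = 1 - 2*M*sp.sqrt(R**2 + sigma2)/R**2 + Zc2/R**2   # Eq. (2.20)
psi = R / sp.sqrt(R**2 + sigma2)                # Eq. (2.18)

rho8piG = (1/R**2 - sp.diff(Delta2, R)/(R*psi**2)
           - Delta2/(R**2*psi**2)
           + 2*Delta2*sp.diff(psi, R)/(R*psi**3))          # Eq. (2.25)
f = sp.lambdify((R, M, Zc2), rho8piG, 'numpy')

Mval, Zc2val = 1.0, 1.0                          # assumed: P^2 + Q^2 = M^2
Rhz = 2*Mval*np.sqrt(1 - Zc2val/(2*Mval**2))     # Eq. (2.23)
Rgrid = np.linspace(Rhz/np.sqrt(3), 10*Mval, 2000)
print("min of 8*pi*G*rho on R >= R_hz/sqrt(3):", f(Rgrid, Mval, Zc2val).min())
# A non-negative minimum is consistent with the WEC claim in item (b).
```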
It was conjectured [25; 26] that "a violation of either the dominant or the strong energy condition is a necessary condition for the existence of an anti-photon sphere outside a regular black hole horizon". Thus, according to our analysis of the energy conditions, the static dyonic black holes in the EMDA theory do not admit an anti-photon sphere or stable photon orbits.
Figure 1: \(\frac{P}{M}\) vs. \(\frac{Q}{M}\) parameter space is plotted. The shaded circular region indicates the allowed parameter space for black holes and the exterior region in parameter space corresponds to naked singularities.
## III Dyonic Kerr-Sen Black Holes
Dyonic Kerr-Sen black holes are the rotating versions of the static black holes. The Newman-Janis (NJ) algorithm [27; 28] can be applied to obtain such rotating black holes. After introducing a new radial coordinate \(r\) such that the squared area radius \(R^{2}=r^{2}-2dr-k^{2}\) or \(r=\sqrt{R^{2}+k^{2}+d^{2}}+d\), the static line element [Eqs. (2.7), (2.18), (2.20)] becomes
\[ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{g(r)}+h(r)\left(d\theta^{2}+\sin^{2}\theta d \phi^{2}\right), \tag{3.1}\]
where
\[\begin{split} f(r)=g(r)&=1-\frac{2M(r-d)-P^{2}-Q^{ 2}}{r^{2}-2dr-k^{2}}\\ &=\left(1-\frac{2(d+M)}{r}+\frac{2P^{2}-k^{2}}{r^{2}}\right) \left(1-\frac{2d}{r}-\frac{k^{2}}{r^{2}}\right)^{-1},\end{split} \tag{3.2}\]
and
\[\begin{split} h(r)=R^{2}(r)&=r^{2}-2dr-k^{2}\\ &=r^{2}\left(1-\frac{2d}{r}-\frac{k^{2}}{r^{2}}\right).\end{split} \tag{3.3}\]
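The algebraic step between the two forms of \(f(r)\) in Eq. (3.2) can be confirmed with a short symbolic computation (a sketch using sympy; nothing here goes beyond the definitions \(d=(P^{2}-Q^{2})/2M\) and \(k=PQ/M\)).

```python
# Symbolic check that the two expressions for f(r) in Eq. (3.2) coincide.
import sympy as sp

r, M, P, Q = sp.symbols('r M P Q', positive=True)
d = (P**2 - Q**2) / (2*M)       # dilaton charge
k = P*Q / M                     # axion charge

f1 = 1 - (2*M*(r - d) - P**2 - Q**2) / (r**2 - 2*d*r - k**2)
f2 = ((1 - 2*(d + M)/r + (2*P**2 - k**2)/r**2)
      / (1 - 2*d/r - k**2/r**2))

print(sp.simplify(f1 - f2))     # expected output: 0
```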
In terms of the advanced Eddington-Finkelstein coordinates \((u,r,\theta,\phi)\), where \(du=dt-dr/f(r)\), Eq. (3.1) is written as
\[ds^{2}=-f(r)du^{2}-2dudr+h(r)\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right). \tag{3.4}\]
The inverse metric components of the line element (3.4) can be decomposed using the null tetrad \(Z^{\mu}{}_{\alpha}=(l^{\mu},n^{\mu},m^{\mu},\bar{m}^{\mu})\) as
\[g^{\mu\nu}=-l^{\mu}n^{\nu}-l^{\nu}n^{\mu}+m^{\mu}\bar{m}^{\nu}+m^{\nu}\bar{m}^ {\mu}, \tag{3.5}\]
where
\[l^{\mu}=\delta^{\mu}{}_{r},\hskip 14.226378ptn^{\mu}=\delta^{\mu}_{u}-\frac{f}{ 2}\delta^{\mu}_{r},\hskip 14.226378ptm^{\mu}=\frac{1}{\sqrt{2h}}\left(\delta^{ \mu}_{\theta}+\frac{i}{\sin\theta}\delta^{\mu}_{\phi}\right). \tag{3.6}\]
Using the complex transformation
\[r\to r^{\prime}=r+ia\cos\theta,\hskip 14.226378ptu\to u^{\prime}=u-ia\cos\theta, \tag{3.7}\]
where \(a\) is the rotation parameter, and replacing the terms \(r^{2}\) by \(\hat{\rho}^{2}=r^{\prime}r^{\prime*}=r^{2}+a^{2}\cos^{2}\theta\) and \(\frac{2}{r}\) by \((\frac{1}{r^{\prime}}+\frac{1}{r^{\prime*}})=\frac{2r}{\hat{\rho}^{2}}\), we get the new metric in the Eddington-Finkelstein coordinates
[17; 29]
\[\begin{split} ds^{2}&=-F(r,\theta)du^{2}-2dudr+2a\sin^{ 2}\theta\left[F(r,\theta)-1\right]dud\phi+2a\sin^{2}\theta drd\phi\\ &\qquad+H(r,\theta)d\theta^{2}+\sin^{2}\theta\left[H(r,\theta)+a^ {2}\sin^{2}\theta(2-F)\right]d\phi^{2},\end{split} \tag{3.8}\]
where \(F(r,\theta)\) and \(H(r,\theta)\) are complexified forms of \(f(r)\) and \(h(r)\) respectively. In our case using Eqs. (3.2), (3.3) we get
\[f(r)\to F(r,\theta) =\left(1-\frac{2(d+M)r}{\hat{\rho}^{2}}+\frac{2P^{2}-k^{2}}{\hat{ \rho}^{2}}\right)\left(1-\frac{2dr}{\hat{\rho}^{2}}-\frac{k^{2}}{\hat{\rho}^{ 2}}\right)^{-1}, \tag{3.9}\] \[h(r)\to H(r,\theta) =\hat{\rho}^{2}\left(1-\frac{2dr}{\hat{\rho}^{2}}-\frac{k^{2}}{ \hat{\rho}^{2}}\right). \tag{3.10}\]
In Boyer-Lindquist coordinates, the new metric Eq. (3.8) for the rotating black hole finally takes the form [17]
\[\begin{split} ds^{2}&=-Fdt^{2}-2a(1-F)\sin^{2} \theta dtd\phi+\frac{H}{FH+a^{2}\sin^{2}\theta}dr^{2}+Hd\theta^{2}\\ &\qquad+\sin^{2}\theta\left[H+a^{2}\sin^{2}\theta(2-F)\right]d \phi^{2}.\end{split} \tag{3.11}\]
After simplification using the explicit forms of \(F(r,\theta)\) and \(H(r,\theta)\) in our case, we arrive at the line element for a rotating dyonic black hole in Boyer-Lindquist coordinates
\[\begin{split}\mathrm{d}s^{2}=&-\left(1-\frac{2M(r -d)-P^{2}-Q^{2}}{\hat{\Sigma}}\right)\mathrm{d}t^{2}-\frac{2a\sin^{2}\theta}{ \hat{\Sigma}}(2M(r-d)-P^{2}-Q^{2})\mathrm{d}t\mathrm{d}\phi\\ &+\left(r^{2}-2dr-k^{2}+a^{2}+\frac{a^{2}\sin^{2}\theta}{\hat{ \Sigma}}\left(2M(r-d)-P^{2}-Q^{2}\right)\right)\sin^{2}\theta\mathrm{d}\phi^{ 2}+\frac{\hat{\Sigma}}{\hat{\Delta}}\mathrm{d}r^{2}+\hat{\Sigma}\mathrm{d} \theta^{2}\end{split} \tag{3.12}\]
where the functions \(\hat{\Delta}(r)\) and \(\hat{\Sigma}(r,\theta)\) are
\[\hat{\Delta}=r^{2}-2dr-2M(r-d)-k^{2}+a^{2}+P^{2}+Q^{2}, \tag{3.13}\]
and
\[\hat{\Sigma}=r^{2}-2dr-k^{2}+a^{2}\cos^{2}\theta. \tag{3.14}\]
This metric was already derived in [30; 31] using a different method. This is known as the dyonic Kerr-Sen black hole spacetime. Here \(M\) and \(a\) are the mass and rotation parameters of the black hole. \(Q\) and \(P\) are the electric and magnetic charges, respectively. \(d=(P^{2}-Q^{2})/2M\) and \(k=PQ/M\) are the dilaton charge and axion charge, respectively. If the magnetic charge of the black hole vanishes, i.e. \(P=0\), then it reduces to the Kerr-Sen black hole. For
the special case, \(P=Q\), the dilaton charge vanishes, i.e. \(d=0\), but the axion charge \(k\neq 0\). This is a distinguishing feature of dyonic Kerr-Sen black holes.
There is a curvature singularity at \(r=0\), covered by the horizons located at
\[r_{\pm}=M+\frac{P^{2}-Q^{2}}{2M}\pm\sqrt{\left(M-\frac{P^{2}+Q^{2}}{2M}\right)^ {2}-a^{2}}. \tag{3.15}\]
The corresponding event horizon and Cauchy horizon are given by \(R_{+}=\sqrt{r_{+}^{2}-2dr_{+}-k^{2}}\) and \(R_{-}=\sqrt{r_{-}^{2}-2dr_{-}-k^{2}}\) respectively. The horizon can exist only for
\[\left(1-\frac{Z_{c}^{2}}{2M^{2}}\right)^{2}\geq\frac{a^{2}}{M^{2}}, \tag{3.16}\]
where \(Z_{c}^{2}=P^{2}+Q^{2}\). Otherwise, the spacetime describes a naked singularity. This is illustrated in Fig. 2.
Figure 2: In (a) the shaded region in the \(a/M\)- \(Z_{c}\) parameter space corresponds to black holes. In (b) the plot is extended into the full 3-D parameter space of \(a/M\), \(P/M\), and \(Q/M\). Note that \(Z_{c}^{2}=P^{2}+Q^{2}\).
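As a consistency sketch with illustrative parameter values, one can check numerically that in the limit \(a\to 0\) the outer horizon of Eq. (3.15), mapped back to the area radius via \(R_{+}=\sqrt{r_{+}^{2}-2dr_{+}-k^{2}}\), reproduces the static horizon radius of Eq. (2.23).

```python
# Non-rotating limit of the dyonic Kerr-Sen horizon vs. the static result.
import numpy as np

def static_Rhz(M, P, Q):
    return 2*M*np.sqrt(1 - (P**2 + Q**2)/(2*M**2))       # Eq. (2.23)

def rotating_Rplus(M, P, Q, a):
    d, k = (P**2 - Q**2)/(2*M), P*Q/M
    disc = (M - (P**2 + Q**2)/(2*M))**2 - a**2           # root in Eq. (3.15)
    rp = M + (P**2 - Q**2)/(2*M) + np.sqrt(disc)
    return np.sqrt(rp**2 - 2*d*rp - k**2)                # back to area radius

M, P, Q = 1.0, 0.5, 0.7                                  # illustrative values
print(static_Rhz(M, P, Q))            # static horizon radius
print(rotating_Rplus(M, P, Q, 1e-8))  # a -> 0; the two numbers should agree
```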
## IV Black hole shadows
In this section we study the shadow cast by the black holes (both rotating and non-rotating) on the observer's sky. In order to separate the radial (\(r\)) and angular (\(\theta\)) equations of motion for photons, we use the Hamilton-Jacobi method. For the rotating case we use the Boyer-Lindquist coordinates, while for the non-rotating case Schwarzschild coordinates are used.
### Shadows of (rotating) dyonic Kerr-Sen black holes
The Hamilton-Jacobi (HJ) equation for photon trajectories is given by
\[H(x^{\mu},p_{\mu})+\frac{\partial S}{\partial\lambda}=0 \tag{4.1}\]
where \(S(x^{\mu},\lambda)\) is the Jacobi action, \(\lambda\) is the affine parameter, and \(H(x^{\mu},p_{\mu})\) is the Hamiltonian corresponding to the Lagrangian for null trajectories, given by \(\mathcal{L}=\frac{1}{2}g_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu}=0\), where the 'dot' represents the derivative with respect to the affine parameter \(\lambda\). The conjugate momentum is \(p_{\mu}=\frac{\partial S}{\partial x^{\mu}}=\frac{\partial\mathcal{L}}{\partial\dot{x}^{\mu}}\).
We can write \(S\), using separation of variables, as
\[S=-Et+L\phi+S^{r}(r)+S^{\theta}(\theta), \tag{4.2}\]
where \(E=-p_{t}\) and \(L=p_{\phi}\) are the constants of motion. As \(S\) does not depend explicitly on \(\lambda\), the HJ equation becomes \(H=\frac{1}{2}g^{\mu\nu}p_{\mu}p_{\nu}=0\). Using \(p_{r}=\frac{\partial S}{\partial r}=\frac{\mathrm{d}S^{r}}{\mathrm{d}r}\) and \(p_{\theta}=\frac{\partial S}{\partial\theta}=\frac{\mathrm{d}S^{\theta}}{ \mathrm{d}\theta}\) and the inverse metric components
\[g^{tt}=-\frac{\left(r^{2}-2dr-k^{2}+a^{2}\right)^{2}-\hat{\Delta}a^{2}\sin^{2}\theta}{\hat{\Sigma}\hat{\Delta}}, \tag{4.3}\] \[g^{t\phi}=g^{\phi t}=-\frac{a}{\hat{\Sigma}\hat{\Delta}}\left(2M(r-d)-P^{2}-Q^{2}\right), \tag{4.4}\] \[g^{\phi\phi}=\frac{\hat{\Delta}-a^{2}\sin^{2}\theta}{\hat{\Sigma}\hat{\Delta}\sin^{2}\theta}, \tag{4.5}\] \[g^{rr}=\frac{\hat{\Delta}}{\hat{\Sigma}},\hskip 14.226378ptg^{\theta\theta}=\frac{1}{\hat{\Sigma}}, \tag{4.6}\]
we expand the HJ equation and obtain the separated angular and radial equations of motion of photons. The angular equation of motion is
\[\begin{split}\frac{\mathrm{d}S^{\theta}}{\mathrm{d}\theta}=&E\sqrt{\Theta(\theta)}\\ =&E\sqrt{\chi-l^{2}\cot^{2}\theta+a^{2}\cos^{2}\theta},\end{split} \tag{4.7}\]
where \(\chi=C/E^{2}\) (\(C\) is the Carter constant) and \(l=L/E\). The radial equation is
\[\frac{\mathrm{d}S^{r}}{\mathrm{d}r}=E\sqrt{-V(r)}, \tag{4.8}\]
where the effective potential
\[V(r)=\frac{(l-a)^{2}+\chi}{\hat{\Delta}}-\frac{(r^{2}-2dr-k^{2}+a^{2}-al)^{2}}{ \hat{\Delta}^{2}}. \tag{4.9}\]
For unstable photon orbits \(V(r_{ph})=V^{\prime}(r_{ph})=0\). After simplification, we obtain \(l(r_{ph})\) and \(\chi(r_{ph})\) as
\[l(r_{ph}) =\frac{1}{a}\left(r_{ph}^{2}+a^{2}-2dr_{ph}-k^{2}-4(r_{ph}-d) \frac{\hat{\Delta}(r_{ph})}{\hat{\Delta}^{\prime}(r_{ph})}\right), \tag{4.10}\] \[\chi(r_{ph}) =\frac{16(r_{ph}-d)^{2}\hat{\Delta}(r_{ph})}{\hat{\Delta}^{ \prime 2}(r_{ph})}-\frac{1}{a^{2}}\left(r_{ph}^{2}-2dr_{ph}-k^{2}-4(r_{ph}-d) \frac{\hat{\Delta}(r_{ph})}{\hat{\Delta}^{\prime}(r_{ph})}\right)^{2}, \tag{4.11}\]
where \(\hat{\Delta}(r_{ph})\) and \(\hat{\Delta}^{\prime}(r_{ph})\) can be obtained from Eq. (3.13). The celestial coordinates for the observer's sky are
\[\alpha=-\frac{l}{\sin\theta_{0}},\ \ \ \ \beta=\pm\sqrt{\Theta(\theta_{0})}, \tag{4.12}\]
where \(\theta_{0}\) is the inclination angle. The parametric plot \(\alpha(r_{ph})\) versus \(\beta(r_{ph})\) using the Eqs. (4.10), (4.11), and (4.12) gives the shadow profile.
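A minimal numerical sketch of this construction is given below; it traces the shadow boundary from Eqs. (4.10)-(4.12) for one illustrative parameter set (\(a/M=0.5\), \(Q/M=0.85\), \(P/M=0.3\), \(\theta_{0}=90^{o}\)), similar to the cases shown in Fig. 3.

```python
# Shadow boundary of a dyonic Kerr-Sen black hole, Eqs. (4.10)-(4.12).
import numpy as np

M, a, P, Q = 1.0, 0.5, 0.3, 0.85          # illustrative parameters
theta0 = np.pi/2                          # inclination angle
d, k = (P**2 - Q**2)/(2*M), P*Q/M

def Dhat(r):                              # Eq. (3.13)
    return r**2 - 2*d*r - 2*M*(r - d) - k**2 + a**2 + P**2 + Q**2

def Dhat_p(r):                            # d(Delta-hat)/dr
    return 2*r - 2*d - 2*M

# photon orbits lie outside the outer horizon, Eq. (3.15)
rplus = M + d + np.sqrt((M - (P**2 + Q**2)/(2*M))**2 - a**2)

r_ph = np.linspace(rplus*1.0001, 6*M, 4000)
A = 4*(r_ph - d)*Dhat(r_ph)/Dhat_p(r_ph)
l   = (r_ph**2 + a**2 - 2*d*r_ph - k**2 - A)/a                 # Eq. (4.10)
chi = (16*(r_ph - d)**2*Dhat(r_ph)/Dhat_p(r_ph)**2
       - (r_ph**2 - 2*d*r_ph - k**2 - A)**2/a**2)              # Eq. (4.11)

Theta = chi - l**2/np.tan(theta0)**2 + a**2*np.cos(theta0)**2
ok = Theta >= 0                            # r_ph values that reach the sky
alpha, beta = -l[ok]/np.sin(theta0), np.sqrt(Theta[ok])        # Eq. (4.12)
# (alpha, +/-beta) traces the closed shadow curve in the observer's sky
print(f"{ok.sum()} boundary points; alpha in "
      f"[{alpha.min():.3f}, {alpha.max():.3f}] M")
```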
In Fig. 3, the comparison between the shadow profiles for different values of \(a/M\), \(Q/M\), and \(P/M\) is shown for the Kerr-Newman, Kerr-Sen, and dyonic Kerr-Sen black holes. We note that as we increase the \(P/M\) values, the deviations from the Kerr-Sen and the Kerr-Newman black holes become more prominent. However, as the \(a/M\) value (rotation parameter) is increased, the maximum deviation from the Kerr-Sen black holes is found to decrease. This is more prominent in Fig. 4, where all the shadow boundaries approach the outermost black solid curve for the Kerr black holes, as the rotation parameter value is increased.
### Shadows of static black holes
Using the line element [Eq. (2.7)] for static black holes, the equations of motion for the photon trajectories are
\[\frac{\mathrm{d}S^{\theta}}{\mathrm{d}\theta}=E\sqrt{\chi-l^{2} \cot^{2}\theta}, \tag{4.13}\] \[\frac{\mathrm{d}S^{R}}{\mathrm{d}R}=E\sqrt{-V(R)}, \tag{4.14}\]
Figure 3: Shadow boundaries are plotted in the observer’s sky, i.e. \(\frac{\alpha}{M}\) vs \(\frac{\beta}{M}\) space. The black dashed and the solid blue curves in each figure denote the shadow boundaries of the Kerr-Newman and the Kerr-Sen black holes respectively. The solid red curve denotes the dyonic Kerr-Sen black holes. In the top panel, \(a/M=0.5\) and \(Q/M=0.85\) for all three figures (\((a)\), \((b)\), \((c)\)) but \(P/M\) increases for the dyonic Kerr-Sen black holes as we go from \((a)\) to \((c)\). In the bottom panel, the \(a/M\) value is increased to \(a/M=0.65\) for the figures \((d)\) to \((f)\). The \(Q/M\) value is also fixed at \(Q/M=0.75\) but \(P/M\) values are increased for dyonic Kerr-Sen black holes as in the top panel figures. The inclination angle \(\theta_{0}=90\) degrees for all the figures.
where \(\chi=C/E^{2}\), \(l=L/E\), \(C\) is the Carter constant, and \(L\) is the angular momentum. The effective potential for the radial equation of motion is
\[V(R)=\frac{\psi^{2}}{\Delta^{2}}\left[\frac{\chi+l^{2}}{R^{2}}-\frac{1}{\Delta^{ 2}}\right]. \tag{4.15}\]
For the photon sphere radius (corresponding to the unstable orbits), \(V(R_{ph})=V^{\prime}(R_{ph})=0\), which leads to the relation
\[R_{ph}=\frac{\Delta(R_{ph})}{\Delta^{\prime}(R_{ph})}. \tag{4.16}\]
Using the expression of \(\Delta(R)\) [Eq. (2.20)], we obtain a quadratic equation
\[x^{2}+bx+c= 0,\] \[\text{where, }\quad b= \frac{\tilde{Z}_{c}^{4}}{4}+4\tilde{Z}_{c}^{2}-9, \tag{4.17}\] \[\text{and, }\quad c= \tilde{Z}_{c}^{6}-2\tilde{Z}_{c}^{4},\]
Figure 4: Shadow boundaries are plotted in the observer’s sky, i.e. \(\frac{\alpha}{M}\) vs \(\frac{\beta}{M}\) space. The black dashed and the solid blue curves in each figure denote the shadow boundaries of the Kerr-Newman and the Kerr-Sen black holes respectively. The solid red curve denotes the dyonic Kerr-Sen black holes. The outer black solid curve is for Kerr black holes with the corresponding rotation parameter (\(a/M\)) value. The inclination angle \(\theta_{0}=90\) degrees for all the figures.
where \(x=R_{ph}^{2}/M^{2}\) and \(\tilde{Z}_{c}^{2}=(P^{2}+Q^{2})/M^{2}\). Then we obtain the photon radius from the root of the quadratic equation, i.e.
\[x=\frac{R_{ph}^{2}}{M^{2}}=\frac{-b+\sqrt{b^{2}-4c}}{2}. \tag{4.18}\]
For the celestial coordinates \(\alpha\) and \(\beta\) as defined earlier [Eq. (4.12)] with \(a=0\), we obtain
\[\alpha^{2}+\beta^{2}=\chi+l^{2}=\frac{R_{ph}^{2}}{\Delta^{2}(R_{ph})}=\frac{1} {\Delta^{\prime 2}(R_{ph})}. \tag{4.19}\]
Therefore the shadow radius for static black holes is
\[\begin{split} R_{shadow}=&\frac{R_{ph}}{\Delta(R_{ph})}\\ =& M\left[\frac{x^{2}}{x-2\sqrt{x+\frac{\tilde{Z}_{c}^{4}}{4}}+\tilde{Z}_{c}^{2}}\right]^{1/2},\end{split} \tag{4.20}\]
where \(x\) is given by Eq. (4.18). For the critical value \(\tilde{Z}_{c}^{2}=2\) the photon-sphere radius vanishes (\(R_{ph}=0\)) but the shadow radius does not vanish (\(R_{shadow}=2M\)). Thus, the shadow does not exist for naked singularities. For \(P=Q=0\), i.e. \(\tilde{Z}_{c}=0\), \(R_{shadow}=3\sqrt{3}M\), which is the case for the Schwarzschild black hole.
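These limiting values are easy to confirm numerically; the short sketch below implements Eqs. (4.17), (4.18) and (4.20) and evaluates the Schwarzschild and near-extremal limits.

```python
# Static shadow radius from Eqs. (4.17)-(4.20); units of M.
import numpy as np

def shadow_radius(Zt2, M=1.0):
    """R_shadow for static black holes; Zt2 = (P^2 + Q^2)/M^2 < 2."""
    b = Zt2**2/4 + 4*Zt2 - 9                    # Eq. (4.17)
    c = Zt2**3 - 2*Zt2**2
    x = (-b + np.sqrt(b**2 - 4*c))/2            # Eq. (4.18), x = R_ph^2/M^2
    return M*np.sqrt(x**2/(x - 2*np.sqrt(x + Zt2**2/4) + Zt2))  # Eq. (4.20)

print(shadow_radius(0.0), 3*np.sqrt(3))   # Schwarzschild limit, 3*sqrt(3) M
print(shadow_radius(1.999999))            # -> 2M as Zc^2 -> 2M^2
```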
## V Observational Bound on Rotating Black Holes
We can test the possible existence of rotating dyonic Kerr-Sen black holes using the observations of the black hole shadows of M87\({}^{*}\) and Sgr A\({}^{*}\). To do this, we may define two observational quantities, which are: (\(i\)) the deviation from circularity (\(\Delta C\)) [32] and (\(ii\)) the fractional deviation parameter (\(\delta\)) related to the average shadow diameter [33; 12; 34]. These two quantities are described as follows.
We note that the shadow profile is symmetric about \(\beta=0\), i.e. the \(\alpha\)-axis. The geometric centre of the shadow image on the \(\alpha\)-axis is obtained by taking its mean. Therefore, the centre of the shadow profile is
\[\alpha_{c}=\frac{\int\alpha\,\mathrm{d}A}{\int\mathrm{d}A},\ \ \ \ \beta_{c}=0, \tag{5.1}\]
where \(\mathrm{d}A=2\beta d\alpha\) is the area element on the shadow image. From the geometric centre of the shadow image, the radial distance \(\ell(\phi)\) to any point on the shadow boundary, making
an angle \(\phi\) with respect to the \(\alpha\)-axis, can be expressed as
\[\ell(\phi)=\sqrt{\left(\alpha(\phi)-\alpha_{c}\right)^{2}+\beta(\phi)^{2}}. \tag{5.2}\]
Then the average shadow radius can be defined as the root mean squared radius, i.e.
\[R_{avg}^{2}=\frac{1}{2\pi}\int_{0}^{2\pi}\mathrm{d}\phi\ell^{2}(\phi). \tag{5.3}\]
Finally, the deviation from circularity is defined as [32]
\[\Delta C=\frac{1}{R_{avg}}\sqrt{\frac{1}{2\pi}\int_{0}^{2\pi}\mathrm{d}\phi \left(\ell(\phi)-R_{avg}\right)^{2}}. \tag{5.4}\]
For the computation, it is more convenient to use \(r_{ph}\) as the parameter instead of \(\phi\). Then, we can express \(R_{avg}\) and \(\Delta C\) as
\[R_{avg}^{2} =\frac{1}{\pi}\int_{r_{ph+}}^{r_{ph-}}\left(\beta^{\prime}(\alpha -\alpha_{c})-\beta\alpha^{\prime}\right)\mathrm{d}r_{ph}, \tag{5.5}\] \[\Delta C =\frac{1}{R_{avg}}\sqrt{\frac{1}{\pi}\int_{r_{ph+}}^{r_{ph-}} \left(\beta^{\prime}(\alpha-\alpha_{c})-\beta\alpha^{\prime}\right)\left(1- \frac{R_{avg}}{\ell}\right)^{2}\mathrm{d}r_{ph}}, \tag{5.6}\]
where \(\beta^{\prime}=\frac{\mathrm{d}\beta}{\mathrm{d}r_{ph}}\) and \(\alpha^{\prime}=\frac{\mathrm{d}\alpha}{\mathrm{d}r_{ph}}\). Here \(r_{ph+}\) and \(r_{ph-}\) are obtained from the roots of \(\beta(r_{ph})=0\), i.e. the values of \(r_{ph}\) for which the shadow boundary cuts the \(\alpha\)-axis. In other words, \(\phi(r_{ph+})=0\) and \(\phi(r_{ph-})=\pi\). The geometric centre of the shadow (\(\alpha_{c}\), \(\beta_{c}\)) can also be expressed in terms of the parameter \(r_{ph}\) as
\[\alpha_{c}=\frac{\int_{r_{ph+}}^{r_{ph-}}\alpha\beta\alpha^{\prime}\mathrm{d}r _{ph}}{\int_{r_{ph+}}^{r_{ph-}}\beta\alpha^{\prime}\mathrm{d}r_{ph}},\ \ \ \ \beta_{c}=0. \tag{5.7}\]
Note that \(\alpha(r_{ph})\), \(\beta(r_{ph})\) are obtained from the Eqs. (4.10), (4.11), and (4.12).
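As an illustration of these observables, the sketch below estimates \(R_{avg}\) and \(\Delta C\) for the Kerr limit (\(d=k=P=Q=0\)) at \(a/M=0.9\) and \(\theta_{0}=17^{o}\), to be compared with the \(\Delta C\lesssim 0.005\) quoted below for that inclination. For simplicity it integrates Eqs. (5.1)-(5.4) directly over the angle \(\phi\) rather than using the \(r_{ph}\)-parametrized forms (5.5)-(5.7); the two routes should agree.

```python
# Kerr-limit estimate of the shadow observables of Eqs. (5.1)-(5.4).
import numpy as np

M, a, th0 = 1.0, 0.9, np.radians(17.0)
rp = M + np.sqrt(M**2 - a**2)
r = np.linspace(rp*1.0001, 5*M, 20000)
D, Dp = r**2 - 2*M*r + a**2, 2*r - 2*M      # Eq. (3.13) with d = k = P = Q = 0
l   = (r**2 + a**2 - 4*r*D/Dp)/a            # Eq. (4.10)
chi = 16*r**2*D/Dp**2 - (r**2 - 4*r*D/Dp)**2/a**2   # Eq. (4.11)
Th  = chi - l**2/np.tan(th0)**2 + a**2*np.cos(th0)**2
m = Th >= 0
al, be = -l[m]/np.sin(th0), np.sqrt(Th[m])  # Eq. (4.12), upper half of curve

ac = np.trapz(al*2*be, al)/np.trapz(2*be, al)   # centre, Eq. (5.1), dA = 2*beta*d(alpha)

phi = np.arctan2(be, al - ac)               # Eq. (5.2) in polar form
ell = np.hypot(al - ac, be)
srt = np.argsort(phi)
phi, ell = phi[srt], ell[srt]
Ravg = np.sqrt(np.trapz(ell**2, phi)/np.pi)               # Eq. (5.3), by symmetry
dC = np.sqrt(np.trapz((ell - Ravg)**2, phi)/np.pi)/Ravg   # Eq. (5.4)
print(f"R_avg = {Ravg:.4f} M,  Delta C = {dC:.5f}")
```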
From the observation of the shadow of M87\({}^{*}\), the EHT collaboration has given a bound \(\Delta C\lesssim 0.1\) for an inclination angle \(\theta_{0}=17^{o}\)[8; 9; 10]. However, a bound on \(\Delta C\) from SgrA\({}^{*}\) is not available. From Fig. 5(a), we note that for Kerr black holes the maximum value is \(\Delta C\lesssim 0.07\) for all inclination angles (\(\theta_{0}\)). Considering the orientation of the observed relativistic jets from M87\({}^{*}\), the inclination angle is estimated to be \(\theta_{0}=17^{o}\)[35]. In Fig. 5(b), the variation of \(\Delta C\) is shown as a function of \(a/M\). Note that the maximum value of \(\Delta C\lesssim 0.005\) for the inclination angle \(\theta_{0}=17^{o}\). For the same inclination angle, we scanned the parameter space \(a/M-Z_{c}/M\) for the dyonic Kerr-Sen black holes. From Fig. 6(a) we note that \(\Delta C\lesssim 0.00534\). As \(\Delta C\) increases with the inclination
angle \(\theta_{0}\), we have also scanned the parameter space for the inclination angle \(\theta_{0}=90^{o}\) in Fig. 6(b). The maximum possible deviation is \(\Delta C\lesssim 0.072\). Therefore, we conclude that all black hole parameters are allowed and the present observational bound on the deviation from circularity cannot constrain the parameter space of the dyonic Kerr-Sen black holes.
The recent EHT papers on SgrA\({}^{*}\) observations have used the fractional deviation parameter \(\delta\) to constrain the spacetime geometries different from the Schwarzschild or Kerr black holes. The definition of \(\delta\) is as follows
\[\delta=\frac{d_{sh}}{d_{sh,Sch}}-1=\frac{R_{avg}}{3\sqrt{3}M}-1, \tag{5.8}\]
where \(d_{sh}=2R_{avg}\) is the average diameter of the shadow. Using the observations of the shadow of SgrA\({}^{*}\) and two separate sets of prior values of the mass and distance of SgrA\({}^{*}\) from the VLTI and Keck observations, the EHT collaboration provided the bound on \(\delta\)[11; 12] as
\[\delta=\begin{cases}-0.08^{+0.09}_{-0.09}&\text{(VLTI)}\\ -0.04^{+0.09}_{-0.10}&\text{(Keck)}\end{cases} \tag{5.9}\]
Therefore, combining both these bounds, we have \(-0.14<\delta<0.01\).
Figure 5: \((a)\) The contour plots for different values of \(\Delta C\) are shown over the \(a/M\) vs. \(\theta_{0}\) parameter space, for Kerr black holes. The black dashed line corresponds to \(\theta_{0}=17^{o}\). \((b)\) \(\Delta C\) is plotted as a function of \(a/M\) for the inclination angle \(\theta_{0}=17^{o}\).
In Fig. 7, we scan the parameter space \(a/M-\theta_{0}\) with the contours labelled by different values of \(\delta\) for the Kerr black holes. It is noted that \(-0.0704\lesssim\delta<0\) for all parameter values. Thus, the Kerr black hole parameters are unconstrained by the observational
Figure 6: The contour plots for different values of \(\Delta C\) are shown over the \(a/M\) vs. \(Z_{c}/M\) parameter space for dyonic Kerr-Sen black holes. For plot \((a)\) the inclination angle is \(\theta_{0}=17^{o}\) and for plot \((b)\) \(\theta_{0}=90^{o}\). In the plots, \(Z_{c}=\sqrt{P^{2}+Q^{2}}\). The white excluded region of the parameter space is for naked singularities.
Figure 7: The contour plots for different values of \(\delta\) are shown over the \(a/M\) vs. \(\theta_{0}\) parameter space, for Kerr black holes. The black dashed line corresponds to \(\theta_{0}=50^{o}\).
bound on \(\delta\) from SgrA\({}^{*}\). Further, we note that the variation in \(\delta\) is relatively insensitive to the variation of \(\theta_{0}\). Moreover, in the observation of SgrA\({}^{*}\), inclination angles greater than \(50^{o}\) are disfavored.
Therefore, we choose \(\theta_{0}=50^{o}\) in Fig. 8(a), where we scan the parameter space \(a/M-Z_{c}/M\) with contours of different \(\delta\) values for the dyonic Kerr-Sen black holes. In the blue shaded region of the plot \(\delta<-0.14\), which means those parameter values \((a/M,Z_{c}/M)\) are not allowed by the observations of SgrA\({}^{*}\). Further, we note that for a \(Z_{c}/M\) parameter value greater than the critical limit \(0.873\), no \(a/M\) parameter value satisfies the observational constraint. This critical value of the \(Z_{c}/M\) parameter is independent of the inclination angle \(\theta_{0}\), as the critical limit corresponds to \(a=0\), i.e. the static black holes. In Fig. 8(b), we show the variation of \(\delta\) as a function of \(Z_{c}/M\) for static black holes using the analytical expression for the shadow radius given in Eq. (4.20). There, we explicitly show the critical limit of \(Z_{c}/M\). We conclude that \(Z_{c}/M\lesssim 0.873\) and for any
Figure 8: \((a)\) The contour plots for different values of \(\delta\) are shown over the \(a/M\) vs. \(Z_{c}/M\) parameter space for dyonic Kerr-Sen black holes. The inclination angle is \(\theta_{0}=50^{o}\). The blue shaded region is observationally disfavored as there \(\delta<-0.14\). \((b)\) The black solid line is the plot of \(\delta\) as a function of \(Z_{c}/M\) for static black holes, i.e. \(a=0\). The red dashed line corresponds to \(\delta=-0.14\) (the observational limit). It intersects the black solid curve at \(Z_{c}/M=0.873\), meaning \(Z_{c}/M\lesssim 0.873\) to satisfy the observational constraint.
allowed \(Z_{c}/M\) parameter value, the corresponding range of \(a/M\) must lie in the allowed region of parameter space shown in Fig. 8(a).
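The quoted critical value can be reproduced with a few lines of code; the sketch below solves \(\delta(Z_{c}/M)=-0.14\) in the static limit, combining Eqs. (4.17)-(4.20) with Eq. (5.8) (for \(a=0\), \(R_{avg}=R_{shadow}\)).

```python
# Static-limit bound: solve delta(Zc/M) = -0.14 for the critical Zc/M.
import numpy as np
from scipy.optimize import brentq

def shadow_radius(Zt2, M=1.0):                 # Eqs. (4.17)-(4.20)
    b = Zt2**2/4 + 4*Zt2 - 9
    c = Zt2**3 - 2*Zt2**2
    x = (-b + np.sqrt(b**2 - 4*c))/2
    return M*np.sqrt(x**2/(x - 2*np.sqrt(x + Zt2**2/4) + Zt2))

def delta(Zc):                                 # Eq. (5.8) with a = 0
    return shadow_radius(Zc**2)/(3*np.sqrt(3)) - 1

Zc_crit = brentq(lambda z: delta(z) + 0.14, 0.1, 1.4)
print(f"critical Z_c/M = {Zc_crit:.3f}")       # ~0.873, as in Fig. 8(b)
```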
## VI Discussion and Conclusions
In this paper we have revisited black holes with dyonic charges in Einstein-Maxwell-Dilaton-Axion (EMDA) theory in the context of the observations on shadows of the supermassive black holes M87\({}^{*}\) and SgrA\({}^{*}\).
First we outlined the derivation of the static black hole solution by direct integration of the field equations of EMDA theory. Further, the rotating version (the dyonic Kerr-Sen black hole) was obtained from the static solution by applying the Newman-Janis algorithm. Thereafter, we analyzed the structure of the static black holes in detail. The differences with the standard Reissner-Nordstrom (RN) black holes are parametrized by the quantity \(Z_{c}^{2}=P^{2}+Q^{2}\). These static black holes have a single horizon (the event horizon) unlike non-extremal RN black holes. For either \(P=0\) or \(Q=0\) the axion field vanishes. However, the dilaton field vanishes only when both \(P=0\) and \(Q=0\), which is the case for the Schwarzschild black hole. Thus there is no solution with only the axion and Maxwell's electromagnetic field but without the dilaton field. The reason lies in the structure of the EMDA action, where the axion field is linearly coupled to \(F_{\mu\nu}\tilde{F}^{\mu\nu}\). The static dyonic black holes satisfy all energy conditions outside the event horizon. There is also no stable photon orbit outside the event horizon of these static dyonic black holes.
For the rotating case (\(a\neq 0\)) the axion field is non-zero even if the axion charge is zero (\(k=PQ/M=0\)) which is the case for the Kerr-Sen black hole (\(P=0\), \(Q\neq 0\), and \(a\neq 0\)). In the static limit of the Kerr-Sen black holes (i.e. setting \(a=0\)), the axion field vanishes. This is an interesting difference between the Kerr-Sen black holes and the dyonic Kerr-Sen black holes, in general.
We study the shadow profiles for both rotating and static black holes. We have obtained the exact expression for the shadow radius for static black holes. The parameter satisfies \(Z_{c}<\sqrt{2}M\), and the shadow radius \(R_{shadow}\) obeys \(2M<R_{shadow}<3\sqrt{3}M\). The effect of the magnetic charge/monopole on the shadow profile of rotating black holes has been investigated graphically by comparing the dyonic Kerr-Sen black holes with the Kerr-Newman and the Kerr-Sen black holes (Fig. 3). The deviation increases with the increase in magnetic
charge. However, the deviation is not so prominent for higher rotation parameters (Fig. 4).
Finally, we test whether the known supermassive black holes at galactic centres could be modeled as such dyonic Kerr-Sen black holes. In other words, we look for metric parameter values for which the shadow features match with those in the observed shadow images of M87\({}^{*}\) and SgrA\({}^{*}\). We have used two observational quantities related to black hole shadows: (\(i\)) the deviation from circularity of the observed shadow boundary (\(\Delta C\)) and (\(ii\)) the fractional deviation parameter (\(\delta\)) representing the deviation of the observed average shadow diameter from that for a Schwarzschild black hole. The observational bound on \(\Delta C\) for M87\({}^{*}\) is satisfied for all parameter values for the dyonic black holes. Thus, the parameters cannot be constrained by it. For SgrA\({}^{*}\), the EHT collaboration provided a bound on \(\delta\) which gives a constraint on the black hole parameter \(Z_{c}=\sqrt{P^{2}+Q^{2}}\lesssim 0.873M\). Thus we get an upper bound on the magnetic monopole charge (if any) for SgrA\({}^{*}\) as \(P\lesssim 0.873M\) where \(M=4.154\times 10^{6}M_{\odot}\) for SgrA\({}^{*}\). In natural units, a magnetic charge \(Q_{m}=P/m_{P}\) where \(m_{P}=G^{-1/2}=1.22\times 10^{19}\)\(GeV\) is the Planck mass. In these units, the obtained bound on the magnetic charge of SgrA\({}^{*}\) is \(Q_{m}\lesssim 6.955\times 10^{44}\).
Magnetic monopoles are a natural outcome of grand-unified theories (GUTs) [36] and were an original motivation for cosmic inflation [37; 38]. They have been searched for in various experiments [39; 40; 41]. Primordial black holes with magnetic charges, which reached the extremality condition in due course of cosmic evolution and do not Hawking radiate anymore, can be a possible dark matter candidate [42]. They are termed extremal Magnetic Black Holes (EMBHs). Various astrophysical limits on such EMBHs were discussed in [42]. However, the authors put constraints on black holes in a mass range with the upper limit \(M<10^{33}\,g\). Therefore these black holes cannot be modeled as the supermassive black holes like M87\({}^{*}\) and SgrA\({}^{*}\) and thus cannot be directly compared with the bound that we obtained from the observations of shadow images. However, we can have some idea of what type of physical process can bound the magnetic charge of the supermassive black hole at the center of our galaxy. For example, from the observed temperature of clouds of Warm Ionized Medium (WIM) in the Milky Way [43], we can put an upper bound on the magnetic charge of SgrA*. Following the calculations of [42], we get a rough estimate of the upper bound on the magnetic charge of SgrA\({}^{*}\) to be \(P\lesssim 10^{-13}M\). Another astrophysical constraint comes from the Parker bound [44], which is based upon the survival of today's galactic magnetic field, as the field energy is drained out by the magnetic monopoles while
moving through the field. This puts an upper limit on the flux of magnetic monopoles. From the Parker bound our rough estimate on the magnetic charge of SgrA\({}^{*}\) is \(P\lesssim 359\,M_{\odot}\) and thus \(P/M\lesssim 10^{-4}\). Clearly, the other astrophysical constraints are expected to be far more stringent by several orders of magnitude.
Thus, any possible observational relevance of our results, at least in the context of shadows and the viable presence of magnetic charges, seems quite a distant dream. However, the theoretical analysis we have carried out may be of use once more imaging observations of the shadows of black holes are carried out and presented in the future.
## Acknowledgements
Research of SJ is partially supported by the SERB, DST, Govt. of India, through a TARE fellowship grant no. TAR/2021/000354, hosted by the Department of Physics, Indian Institute of Technology Kharagpur.
|
2309.01623 | Post-reionization HI 21-cm signal: A probe of negative cosmological constant | In this study, we investigate a cosmological model involving a negative cosmological constant (AdS vacua in the dark energy sector). We consider a quintessence field on top of a negative cosmological constant and study its impact on cosmological evolution and structure formation. We use the power spectrum of the redshifted HI 21 cm brightness temperature maps from the post-reionization epoch as a cosmological probe. The signature of baryon acoustic oscillations (BAO) on the multipoles of the power spectrum is used to extract measurements of the angular diameter distance $D_A(z)$ and the Hubble parameter $H(z)$. The projected errors on these are then subsequently employed to forecast the constraints on the model parameters ($\Omega_\Lambda, w_0, w_a$) using Markov Chain Monte Carlo techniques. We find that a negative cosmological constant with a phantom dark energy equation of state (EoS) and a higher value of $H_0$ is viable from BAO distance measurements data derived from galaxy samples. We also find that BAO imprints on the 21cm power spectrum obtained from a futuristic SKA-mid like experiment yield a $1-\sigma$ error on a negative cosmological constant and the quintessence dark energy EoS parameters to be $\Omega_\Lambda=-1.030^{0.589}_{-1.712}$ and $w_0=-1.023^{0.043}_{-0.060}$, $w_a=-0.141^{0.478}_{-0.409}$ respectively. | Chandrachud B. V. Dash, Tapomoy Guha Sarkar, Anjan A. Sen | 2023-09-04T14:15:05Z | http://arxiv.org/abs/2309.01623v2 |
# Post-reionization H i 21-cm signal: A probe of negative cosmological constant
###### Abstract
In this study, we investigate a cosmological model involving a negative cosmological constant (AdS vacua in the dark energy sector). We consider a quintessence field on top of a negative cosmological constant and study its impact on cosmological evolution and structure formation. We use the power spectrum of the redshifted HI 21 cm brightness temperature maps from the post-reionization epoch as a cosmological probe. The signature of baryon acoustic oscillations (BAO) on the multipoles of the power spectrum is used to extract measurements of the angular diameter distance \(D_{A}(z)\) and the Hubble parameter \(H(z)\). The projected errors on these are then subsequently employed to forecast the constraints on the model parameters \((\Omega_{\Lambda},w_{0},w_{a})\) using Markov Chain Monte Carlo techniques. We find that a negative cosmological constant with a phantom dark energy equation of state (EoS) and a higher value of \(H_{0}\) is viable from BAO distance measurements data derived from galaxy samples. We also find that BAO imprints on the 21cm power spectrum obtained from a futuristic SKA-mid like experiment yield a \(1-\sigma\) error on a negative cosmological constant and the quintessence dark energy EoS parameters to be \(\Omega_{\Lambda}=-0.883^{0.978}_{-2.987}\) and \(w_{0}=-1.030^{0.023}_{-0.082}\), \(w_{a}=-0.088^{0.162}_{-0.343}\) respectively, which is competitive with other probes reported in the literature.
keywords: Dark energy, 21-cm cosmology
## 1 Introduction
One of the most significant discoveries of the twenty-first century was the fact that the expansion of the Universe is accelerated (Amendola & Tsujikawa, 2010). Several independent observations confirm the counter-intuitive phenomenon of dark energy (Riess et al., 1998; Perlmutter et al., 2003; McDonald & Eisenstein, 2007; Scranton et al., 2003; Eisenstein et al., 2005). Observations indicate that \(\sim 64\%\) of the universe's total energy budget is made up of dark energy, which has a large negative pressure and acts as a repulsive force against gravity (Padmanabhan, 2003; Ratra & Peebles, 1988). In the last few decades, cosmological observations have attained an unprecedented level of precision. The \(\Lambda\)CDM model (Carroll, 2001; Ratra & Peebles, 1988; Bull, 2016b) provides a good description of most properties of a wide range of astrophysical and cosmological data, including distance measurements at high redshifts (Riess et al., 1998; Perlmutter et al., 2003; Padmanabhan & Choudhury, 2003), the cosmic microwave background (CMB) anisotropies power spectrum (Spergel et al., 2007), the statistical properties of large scale structures of the Universe (Bull, 2016a) and the observed abundances of different types of light nuclei (Schramm & Turner, 1998; Steigman, 2007; Cyburt et al., 2016). All these observations point towards an accelerated expansion history of the Universe.
Despite the overwhelming success of the \(\Lambda\)CDM model as a standard model explaining these diverse observations, it still leaves significant uncertainties and is plagued with difficulties (Weinberg, 1989; Burgess, 2015; Zlatev et al., 1999; Copeland et al., 2006; Di Valentino et al., 2021; et.al, 2016; Abdalla et al., 2022; Anchordoqui, 2021; et.al., 2022). This is motivated by a wide range of observational results which have been in tension with the model. Besides theoretical issues like the fine-tuning problem (Weinberg, 1989), some of the difficulties faced by the \(\Lambda\)CDM cosmological model are posed by observational anomalies. Some of these anomalies, at a \(>2-3\sigma\) level, are the Hubble tension (Di Valentino, 2021; Riess, 2020; Saridakis et al., 2021), the growth tension (Abbott et al., 2018; Basilakos and Nesseris, 2017; Joudaki et al., 2018), CMBR anomalies (Akrami et al., 2020; Schwarz et al., 2016), the BAO discrepancy (Addison et al., 2018; Cuceu et al., 2019; Evslin, 2017) and many others (Perivolaropoulos and Skara, 2022).
The overwhelmingly large observational evidence for a positive \(\Lambda\) is usually interpreted as a scalar field at the positive minimum of its potential. A Quintessence (Carroll, 1998; Brax and Martin, 1999; Caldwell and Linder, 2005; Nomura et al., 2000) scalar field, on the contrary, slowly rolls towards the minimum in the positive part of the potential, giving rise to a dynamical dark energy with a time dependent equation of state \(w(a)=P_{DE}/\rho_{DE}\). Several reports of the Hubble tension (Di Valentino et al., 2016, 2020; Vagnozzi, 2020; Aestas et al., 2020; Anchordoqui et al., 2020; Banerjee et al., 2021; Di Valentino et al., 2021; et.al., 2022) have led to the proposal of a wide range of dark energy models. There are certain proposed quintessence models with an AdS vacuum (Dutta et al., 2020; Calderon et al., 2021; Akarsu et al., 2020; Visinelli et al., 2019; Ye and Piao, 2020; Yin, 2022) which do not rule out the possibility of a negative \(\Lambda\). We have considered Quintessence models, with a non-zero vacuum, which can be effectively seen as a rolling scalar field \(\phi\) on top of a cosmological constant \(\Lambda\neq 0\). The combination \(\rho_{{}_{DE}}=\rho_{\phi}+\Lambda\) satisfying the energy condition \(\rho_{DE}>0\) drives an accelerated expansion (Sen et al., 2023).
We consider the post-reionization HI 21 cm brightness temperature maps as a tracer of the underlying dark matter distribution and thereby a viable probe of structure formation. The intensity mapping (Bull et al., 2015) of the post-reionization H i 21 cm signal (Bharadwaj and Ali, 2004) is a promising observational tool to measure cosmological evolution and structure formation tomographically (Mao et al., 2008; Loeb and Zaldarriaga, 2004; Bharadwaj and Ali, 2004). The 21-cm power spectrum is expected to be a storehouse of cosmological information about the nature of dark energy (Wyithe et al., 2007; Chang et al., 2008; Bharadwaj et al., 2009; Mao et al., 2008; Sarkar and Datta, 2015; Hussain et al., 2016; Dash and Guha Sarkar, 2021, 2022), and several radio interferometers like the SKA1, GMRT2, OWFA3, MEERKAT4, MWA5, CHIME6 aim to measure this weak signal (Chang et al., 2010; Masui et al., 2013; Switzer et al., 2013). At low redshifts \(z<6\) following the complex epoch of reionization (Gallerani et al., 2006), the H i distribution is believed to be primarily housed in self-shielded DLA systems (Wolfe et al., 2005; Prochaska et al., 2005). The post-reionization H i 21-cm signal is modeled as a tracer of the underlying dark matter distribution, quantified by a bias (Bagla et al., 2010; Guha Sarkar et al., 2012; Sarkar et al., 2016; Carucci et al., 2017) and a mean neutral fraction (which does not evolve with redshift) (Lanzetta et al., 1995; Storrie-Lombardi et al., 1996; Peroux et al., 2003). Several works report the possibility of extracting cosmological information from the post-reionization 21-cm signal (Bharadwaj and Sethi, 2001; McQuinn et al., 2006; Wyithe and Loeb, 2009; Mao et al., 2008; Bharadwaj et al., 2001; Wyithe and Loeb, 2007; Loeb and Wyithe, 2008; Wyithe and Loeb, 2008; Visbal et al., 2009; Bharadwaj and Pandey, 2003; Bharadwaj and Srikant, 2004; Subramanian and Padmanabhan, 1993; Kumar et al., 1995; Bagla et al., 1997; Padmanabhan et al., 2015).
Footnote 1: [https://www.skatelescope.org/](https://www.skatelescope.org/)
Footnote 2: [http://gmrt.ncra.tifr.res.in/](http://gmrt.ncra.tifr.res.in/)
Footnote 3: [https://arxiv.org/abs/1703.00621](https://arxiv.org/abs/1703.00621)
Footnote 4: [http://www.ska.ac.za/meerkat/](http://www.ska.ac.za/meerkat/)
Footnote 5: [https://www.markelecosp.org/](https://www.markelecosp.org/)
Footnote 6: [http://chime.phas.ub.ca/](http://chime.phas.ub.ca/)
In this paper, we have made projections of uncertainties on the dark energy parameters in Quintessence models, with a non-zero vacuum, using a proposed future observation of the power spectrum of the post-reionization 21 cm signal. We have used a Fisher / Monte Carlo analysis to indicate how the error projections on the binned power spectrum allow us to constrain dark energy models with a negative \(\Lambda\).
The paper is organized as follows: In Section-2 we discuss the dark energy models and constraints on observable quantities like the Hubble parameter and the growth rate of density perturbations from diverse observations. In Section-3 we discuss the 21-cm signal from the post-reionization epoch and noise projections using the futuristic SKA1-mid observations. We also constrain dark energy model parameters using Markov Chain Monte Carlo (MCMC) simulation. We discuss our results and other pertinent observational issues in the concluding section.
## 2 Quintessence dark energy with non-zero vacuum
We consider a Universe where the Quintessence field (\(\phi\)) and the cosmological constant \(\Lambda\) both contribute to the overall dark energy density, i.e. \(\rho_{DE}=\rho_{\phi}+\Lambda\), with the constraint that \(\rho_{DE}>0\) to ensure the late time cosmic acceleration (Sen et al., 2023). Instead of working with a specific form of the Quintessence potential we choose to use a broad equation of state (EoS) parametrization \(w_{\phi}(z)\). It has been shown that at most a two-parameter model can be optimally constrained from observations (Linder and Huterer, 2005). We use the CPL model proposed by Chevallier and Polarski (2001) and Linder (2003), which gives a phenomenological, model-free parametrization and incorporates several features of dark energy. This model has been extensively
used by the Dark Energy Task Force (Albrecht et al., 2006) as the standard two-parameter description of dark energy dynamics. It has also been shown that a wide class of quintessence scalar field models can be mapped into the CPL parametrization (Pantazis et al., 2016). The equation of state (EoS) is given by
\[w_{\phi}(z)=w_{0}+w_{a}\left(\frac{z}{1+z}\right).\]
This model gives a smooth variation from \(w_{\phi}(z)=w_{0}+w_{a}\) as \(z\rightarrow\infty\) to \(w_{\phi}(z)=w_{0}\) for \(z=0\), and the corresponding density of the quintessence field varies with redshift as \(\rho_{\phi}(a)\propto a^{-3(1+w_{0}+w_{a})}e^{3w_{a}a}\). In a spatially flat Universe, the evolution of the Hubble parameter \(H(a)\) is given by
\[\frac{H(a)}{H_{0}}=\sqrt{\Omega_{m_{0}}a^{-3}+\Omega_{\phi 0}\,\exp\left[-3 \int_{1}^{a}da^{\prime}\frac{1+w_{\phi}(a^{\prime})}{a^{\prime}}\right]+\Omega _{\Lambda 0}} \tag{1}\]
with \(\Omega_{m0}+\Omega_{\phi 0}+\Omega_{\Lambda 0}=1\). We shall henceforth refer to this model, with a scalar field on top of \(\Lambda\), as the CPL-\(\Lambda\)CDM model.
We consider two important cosmological observables. Firstly we consider a dimensionless quantifier of cosmological distances (Eisenstein et al., 2005)
\[r_{{}_{BAO}}(z)=\frac{r_{s}}{D_{{}_{V}}(z)} \tag{2}\]
where \(r_{s}\) denotes the sound horizon at the recombination epoch and \(D_{{}_{V}}(z)\) is the BAO effective distance \(D_{{}_{V}}\)(Amendola & Tsujikawa, 2010) is defined as
\[D_{{}_{V}}(z)=\left[(1+z)^{2}D_{A}^{2}(z)\frac{cz}{H(z)}\right]^{1/3} \tag{3}\]
This dimensionless distance \(r_{{}_{BAO}}\) is a quantifier of the background cosmological model (density parameters) and is thereby sensitive to the dynamical evolution of dark energy.
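For concreteness, the sketch below evaluates \(r_{{}_{BAO}}(z)\) for the CPL-\(\Lambda\)CDM background of Eqs. (1)-(3). The sound horizon \(r_{s}=147\) Mpc and the parameter values are assumptions for illustration only, not the values constrained later in the paper.

```python
# r_BAO(z) for an assumed CPL-Lambda-CDM background, Eqs. (1)-(3).
import numpy as np
from scipy.integrate import quad

c = 299792.458                        # speed of light in km/s
Om0, OL0, w0, wa, H0 = 0.315, -0.3, -1.1, 0.0, 72.0   # assumed values
Oph0 = 1.0 - Om0 - OL0                # spatial flatness

def E(z):                             # Eq. (1); the CPL integral in closed form
    a = 1.0/(1.0 + z)
    rho_phi = a**(-3*(1 + w0 + wa)) * np.exp(-3*wa*(1 - a))
    return np.sqrt(Om0*a**-3 + Oph0*rho_phi + OL0)

def DA(z):                            # angular diameter distance, flat universe
    Dc, _ = quad(lambda zp: c/(H0*E(zp)), 0.0, z)
    return Dc/(1.0 + z)

def r_bao(z, rs=147.0):               # Eqs. (2)-(3); rs in Mpc (assumed)
    DV = ((1 + z)**2 * DA(z)**2 * c*z/(H0*E(z)))**(1.0/3.0)
    return rs/DV

for z in [0.2, 0.35, 0.57]:
    print(f"z = {z}: r_BAO = {r_bao(z):.4f}")
```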
Secondly, we use the growth rate of density fluctuations as a quantifier of cosmological structure formation. Clustering of galaxies in spectroscopic surveys (Zhao et al., 2021) and counts of galaxy clusters (Campanelli et al., 2012; Sakr et al., 2022) aim to measure the growth rate of matter density perturbations and the root mean square normalization of the matter power spectrum \(\sigma_{8}\), given by:
\[f(a)\equiv\frac{d\,\,\log\,D_{+}(a)}{d\,\,\log\,\,a}\,\,\,\,\,\text{and}\,\,\, \,\,\sigma_{8}(a)\equiv\sigma_{8,0}\frac{D_{+}(a)}{D_{+}(a=1)} \tag{4}\]
The combination \(f\sigma_{8}(a)\) of the growth rate \(f(a)\) and \(\sigma_{8}(a)\) is a more robust and reliable quantity measured by redshift surveys.
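Eq. (4) can be evaluated by integrating the linear growth equation. The sketch below assumes the standard growth ODE, \(D''+(2+d\ln E/d\ln a)D'=\tfrac{3}{2}\Omega_{m}(a)D\) in \(x=\ln a\), which is not written explicitly in the text; the parameter values are again illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def E2(a, Om0=0.289, OL0=-0.781, w0=-1.03, wa=-0.10):
    # (H/H0)^2 for the CPL-Lambda-CDM model, as in the previous sketches.
    rho_phi = a ** (-3 * (1 + w0 + wa)) * np.exp(3 * wa * (a - 1))
    return Om0 * a ** -3 + (1 - Om0 - OL0) * rho_phi + OL0

def f_sigma8(z, Om0=0.289, sigma8_0=0.811):
    """f(a) and sigma8(a) of Eq. (4) from the assumed linear growth ODE."""
    def rhs(x, y):
        a, (D, Dp), eps = np.exp(x), y, 1e-4
        # dlnE/dlna by central differences, with E = sqrt(E2).
        dlnE = (np.log(E2(a * (1 + eps), Om0)) -
                np.log(E2(a * (1 - eps), Om0))) / (4 * eps)
        Om_a = Om0 * a ** -3 / E2(a, Om0)
        return [Dp, -(2.0 + dlnE) * Dp + 1.5 * Om_a * D]

    xs = np.log(1.0 / (1.0 + np.asarray(z, dtype=float)))
    grid = np.sort(np.append(xs, 0.0))
    sol = solve_ivp(rhs, (np.log(1e-3), 0.0), [1e-3, 1e-3],  # D ~ a in matter era
                    t_eval=grid, rtol=1e-8)
    D = np.interp(xs, sol.t, sol.y[0])
    f = np.interp(xs, sol.t, sol.y[1]) / D
    return f, sigma8_0 * D / sol.y[0][-1]    # normalize by D_+(a=1)

if __name__ == "__main__":
    z = np.array([0.51, 0.61, 0.70])
    f, s8 = f_sigma8(z)
    print("f*sigma8:", f * s8)   # compare with the quoted measurements
```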
Figure (1) shows the variation of \(r_{BAO}\) in the \((\Omega_{\Lambda},w_{0})\) plane for the CPL-\(\Lambda\)CDM model with \(H_{0}=72\) km/s/Mpc. We have chosen \(w_{a}=0\) for simplicity. We note that \(\Omega_{\Lambda}\) is negative in the second and third quadrants. The red contour line corresponds to the observational data and the blue shaded region depicts the \(1\sigma\) errors. The first figure in the panel corresponds to \(z=0.2\), where the 2dF galaxy redshift survey gives the bound \(r_{{}_{BAO}}(z=0.2)=0.1980\pm 0.0058\) (Percival et al., 2007). The second figure in the panel corresponds to \(z=0.35\) with measured \(r_{{}_{BAO}}(z=0.35)=0.1094\pm 0.0033\) (Percival et al., 2007). The analysis of the BOSS (SDSS-III) CMASS sample along with the luminous red galaxy sample from SDSS-II (Anderson et al., 2012) gives \(r_{{}_{BAO}}(z=0.57)=0.07315\pm 0.002\), as shown in the third figure of the panel. We also show the contour for \(r_{BAO}\) at the corresponding redshift for a pure \(\Lambda\)CDM cosmology with cosmological parameters (Aghanim et al., 2020) \((\Omega_{m0},\ \Omega_{b0},\ H_{0},\ n_{s},\ \sigma_{8},\ \Omega_{K})=(0.315,\ 0.0496,\ 67.4,\ 0.965,\ 0.811,\ 0)\).
Figure 1: shows \(r_{BAO}\) in the \((\Omega_{\Lambda},w_{0})\) plane. The red contour line corresponds to the observational data point and the blue shaded region depicts the \(1\sigma\) errors. The data points in the left two figures come from the 2dF galaxy survey at redshifts \(z=0.2\) and \(z=0.35\) respectively (Percival et al., 2007), and the third figure shows the high-redshift data at \(z=0.57\) from the BOSS SDSS-III survey (Anderson et al., 2012). The red dotted contour corresponds to \(r_{BAO}\) computed for a \(\Lambda\)CDM model.
All these observations are consistent with the possibility of models with negative \(\Lambda\), with varying uncertainties. It is clear from the observations that there are two separate regions consistent with the data: the third quadrant, which corresponds to phantom models with negative \(\Lambda\), and the first quadrant, which corresponds to non-phantom models with positive \(\Lambda\). It is also clear that in spatially flat cosmologies the conditions \(\rho_{m}>0\) and \(\rho_{\phi}>0\) imply \(\Omega_{\Lambda}<1\), which is not supported by data. The addition of a negative cosmological constant to a phantom dark energy model thus seems viable from the data. We find that the CPL-\(\Lambda\)CDM model with a phantom field, negative \(\Lambda\), and \(H_{0}=72\) km/s/Mpc, the observational data, and \(\Lambda\)CDM with \(H_{0}=67.4\) km/s/Mpc are all qualitatively consistent.
Figure (2) shows the variation of \(f\sigma_{8}(z)\) in the \((\Omega_{\Lambda},w_{0})\) plane. The solid red lines correspond to the observational data from SDSS-III BOSS, \(f\sigma_{8}(z=0.51)=0.470\pm 0.041\) (Sánchez et al., 2017) and \(f\sigma_{8}(z=0.61)=0.457\pm 0.052\) (Chuang et al., 2017), and the eBOSS DR16 LRGxELG data, \(f\sigma_{8}(z=0.7)=0.4336\pm 0.05003\) (Zhao et al., 2021). While the mean observational \(f\sigma_{8}\) data falls in the non-phantom sector with negative \(\Lambda\), the error bars are quite large and again the \(\Lambda\)CDM predictions (with \(H_{0}=67.4\) km/s/Mpc), the observed data, and CPL-\(\Lambda\)CDM with a phantom field and negative \(\Lambda\) for \(H_{0}=72\) km/s/Mpc are all consistent within \(1\sigma\) errors. The addition of a negative \(\Lambda\) to a phantom dark energy model also seems to push \(H_{0}\) to a higher value.
## 3 The post-reionization H i 21-cm signal
The reionization epoch is believed to have ended around \(z\sim 6\) (Gallerani et al., 2006). Subsequently, only a small fraction of H i survives the process of ionization and remains housed in the over-dense regions of the IGM. These clumped, dense gas clouds remain neutral and shielded from the background ionizing radiation. They are now believed to be the damped Lyman-\(\alpha\) systems (DLAs) (Wolfe et al., 2005) associated with galaxies. The predominant sources of the 21-cm radiation at epochs \(z<6\) are these DLA systems, which store \(\sim 80\%\) of the H i at \(z<4\) (Prochaska et al., 2005) with H i column density greater than \(2\times 10^{20}\) atoms/cm\({}^{2}\) (Lanzetta et al., 1995; Storrie-Lombardi et al., 1996; Peroux et al., 2003). The study of the clustering of DLAs indicates their association with galaxies. These gas clumps hence have a biased presence in regions where matter over-densities are highly non-linear (Cooke et al., 2006; Zwaan et al., 2005; Nagamine et al., 2007). The possibility that presently functioning and upcoming radio telescopes may detect the cosmological 21-cm signal from low redshifts has led to an extensive literature on the post-reionization H i signal (Subramanian & Padmanabhan, 1993; Visbal et al., 2009; Bharadwaj & Sethi, 2001; Bharadwaj et al., 2001; Bharadwaj & Pandey, 2003; Bharadwaj & Srikant, 2004; Wyithe & Loeb, 2009). Though the flux from individual DLA clouds is too weak (\(<10\mu\)Jy) to be detected in radio observations, even with the next generation radio arrays, it is possible to detect the collective diffuse radiation without having to resolve the individual sources. Such an intensity mapping of the diffuse background at observing frequencies below 1420 MHz is believed to give a wealth of cosmological and astrophysical information. Measuring the statistical properties of the fluctuations of the diffuse 21-cm intensity distribution on the plane of the sky and as a function of redshift gives a way to study cosmological structure formation tomographically. Modeling the post-reionization H i signal is based on several simplifying assumptions, which are supported by extensive numerical simulations and astrophysical observations.
* _Post-reionization 21-cm spin temperature:_ In the post-reionization epoch the spin temperature \(T_{s}\gg T_{\gamma}\), where \(T_{\gamma}\) is the CMB temperature. This is due to the Wouthuysen-Field coupling, which leads to an enhanced population of the triplet state of H i. Consequently, radiative transfer of the CMBR through a gas cloud in this epoch causes the 21-cm radiation to be seen in emission against the background CMBR
Figure 2: shows variation of \(f\sigma_{8}(z)\) in the \((\Omega_{\Lambda},w_{0})\) plane. The solid red line corresponds to the observational data points from SDSS-III BOSS \(f\sigma_{8}(z=0.51)=0.470\pm 0.041\)(Sánchez et al., 2017), \(f\sigma_{8}(z=0.61)=0.457\pm 0.052\)(Chuang et al., 2017) and eBOSS DR16 LRGxELG data \(f\sigma_{8}(z=0.7)=0.4336\pm 0.05003\)(Zhao et al., 2021). The red dotted contour corresponds to \(f\sigma_{8}(z)\) computed for a \(\Lambda\)CDM model.
(Madau et al. 1997; Bharadwaj & Ali 2004; Loeb & Zaldarriaga 2004). Further, the kinetic gas temperature remains strongly coupled to the spin temperature through Lyman-\(\alpha\) scattering or collisional coupling (Madau et al. 1997).
* _Mean neutral fraction:_ Lyman-\(\alpha\) forest studies indicate that the density parameter of the neutral gas is \(\Omega_{gas}\sim 10^{-3}\) for \(1\leq z\leq 3.5\) (Prochaska et al. 2005). Thus the mean neutral fraction is \(\bar{x}_{\rm HI}=\Omega_{gas}/\Omega_{b}\sim 2.45\times 10^{-2}\), and this value remains constant in the post-reionization epoch \(z\leq 6\).
* _Peculiar flow of H i :_ Cosmological perturbation theory shows that on large scales the baryonic matter falls into the regions of dark matter over-densities. Thus the non-Hubble peculiar flow of the H i gas is primarily determined by the dark matter distribution on large scales. The H i peculiar velocity manifests as a redshift-space distortion anisotropy in the 21-cm power spectrum, in a manner similar to the Kaiser effect seen in galaxy surveys (Hamilton 1998).
* _Intensity mapping and noise due to discrete clouds:_ The sources of the 21-cm signal are DLA clouds. Intensity mapping ignores the discrete nature of the sources and aims to map the smoothed diffuse intensity distribution (Furlanetto et al. 2006; Pritchard & Loeb 2012; Bull et al. 2015). The discreteness of the sources introduces a Poisson sampling noise. We neglect this noise in our modeling since the number density \(n\) of the DLA sources is very large (Bharadwaj & Srikant 2004) and the Poisson noise typically goes as \(1/n\).
* _Gaussian fluctuations:_ The over-density field of the dark matter distribution is believed to be generated by a Gaussian process in the very early Universe, leading to a scale-invariant primordial power spectrum. We assume that there are no non-Gaussianities, whereby the statistics of the random over-density field are completely exhausted by the two-point correlation/power spectrum. All \(p\)-point correlation functions with odd \(p\) are assumed to be zero in the first approximation. Primordial non-Gaussianity and non-linear structure formation will make the field non-Gaussian, but this is neglected as a first approximation. The gas is believed to follow the dark matter and is also expected not to show any non-Gaussian effects.
* _Post-reionization H i as a biased tracer:_ The distribution of baryonic matter in the form of neutral hydrogen is an unsolved problem in cosmology. The linear theory predictions indicate that on large scales, baryonic matter follows the underlying dark matter distribution. However, at low redshifts, the growth of density fluctuations is likely to be plagued by non-linearities and it is not _a priori_ meaningful to extrapolate the predictions of linear theory in this epoch where over-densities \(\delta\sim 1\). Galaxy redshift surveys show that galaxies trace the underlying dark matter distribution (Dekel & Lahav 1999; Mo et al. 1996; Yoshikawa et al. 2001) with a bias. If we model the post-reionization H i to be primarily stored in dark matter haloes, it is plausible to expect the gas to trace the underlying dark matter density field with a possible bias as well.
We define a bias function \(b_{T}(k,z)\) as
\[b_{T}(k,z)=\left[\frac{P_{\rm HI}(k,z)}{P_{m}(k,z)}\right]^{1/2}\]
where \(P_{\rm HI}(k,z)\) and \(P_{m}(k,z)\) denote the H i and dark matter power spectra respectively. With this definition of a general function \(b_{T}(k,z)\), we merely relocate the lack of knowledge of H i distribution to a scale and redshift dependent function that quantifies the properties of post-reionization H i clustering.
Theoretical considerations show that the bias is scale dependent on small scales below the Jeans length (Fang et al. 1993). However, on large scales the bias is expected to be scale-independent. The scales above which the linear bias approximation is acceptable are, however, dependent on the redshift. While the neutral fraction in the post-reionization epoch is believed to be constant, studies (Wyithe & Loeb 2009) show that small fluctuations in the ionizing background may also contribute a scale dependence to the bias \(b_{T}(k,z)\). The most compelling studies of the post-reionization H i have been through the use of N-body numerical simulations (Bagla et al. 2010; Guha Sarkar et al. 2012; Sarkar et al. 2016; Carucci et al. 2017). These simulations use diverse rules for populating dark matter halos in a certain mass range with neutral hydrogen and identifying them as DLAs.
Similar to the behaviour of galaxy bias (Fry 1996; Dekel & Lahav 1999; Mo et al. 1996, 1999), these N-body simulations of the post-reionization H i agree on the generic qualitative behaviour. On large scales the bias is found to be linear (scale independent) and a monotonically rising function of redshift for \(1<z<4\) (Marin et al. 2010). On small scales, however, the bias becomes scale-dependent and rises steeply. The rise of the bias on small scales owes its origin to the absence of small-mass halos, as expected from the CDM power spectrum, and the consequent distribution of H i in larger-mass halos. In this work we use the fitting formula for \(b_{T}(k,z)\) obtained from numerical simulations (Sarkar et al. 2016).
### The post-reionization H i 21cm power spectrum
Adopting all the modeling assumptions discussed in the last section, the power spectrum of post-reionization H i 21-cm excess brightness temperature field \(\delta T_{b}\) from redshift \(z\)(Furlanetto et al. 2006; Bull et al. 2015; Bharadwaj & Ali 2004; Bharadwaj
et al., 2009) is given by
\[P_{21}(k,z,\mu)={\cal A}_{T}^{2}(b_{T}+f\mu^{2})^{2}P_{m}(k,z) \tag{5}\]
where
\[{\cal A}_{T}=4.0\,{\rm mK}\,\bar{x}_{\rm HI}(1+z)^{2}\left(\frac{\Omega_{b0}h^{2}}{0.02}\right)\left(\frac{0.7}{h}\right)\left(\frac{H_{0}}{H(z)}\right) \tag{6}\]
The term \(f(z)\mu^{2}\) has its origin in the H i peculiar velocities (Bharadwaj et al., 2001; Bharadwaj & Ali, 2004), which are also assumed to be sourced by the dark matter fluctuations.
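A schematic evaluation of Eqs. (5)-(6), before the Alcock-Paczynski distortion introduced next, is sketched below. The matter power spectrum is replaced by a toy function (in practice it would come from a Boltzmann code such as CAMB or CLASS), and the constant \(b_{T}\) and \(f\) are placeholders for the fitting formula and growth rate discussed above.

```python
import numpy as np

def A_T(z, E_z, h=0.7, Ob0h2=0.0224, xHI=2.45e-2):
    """Brightness-temperature amplitude of Eq. (6), in mK; E_z = H(z)/H0.
    The parameter values here are illustrative."""
    return 4.0 * xHI * (1.0 + z) ** 2 * (Ob0h2 / 0.02) * (0.7 / h) / E_z

def P21(k, mu, z, E_z, Pm, bT=2.0, f=0.8):
    """Redshift-space 21-cm power spectrum of Eq. (5), without AP distortion.
    bT and f are constant placeholders for b_T(k, z) and the growth rate."""
    return A_T(z, E_z) ** 2 * (bT + f * mu ** 2) ** 2 * Pm(k)

if __name__ == "__main__":
    # Toy matter power spectrum; a real analysis would use CAMB/CLASS output.
    Pm = lambda k: 1e4 * (k / 0.05) / (1.0 + (k / 0.05) ** 2) ** 2
    k = np.logspace(-2, 0, 5)
    print(P21(k, mu=0.5, z=1.0, E_z=1.7, Pm=Pm))
```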
Since our cosmological model is significantly different from the fiducial one (i.e., \(\Lambda\)CDM), the difference will introduce additional anisotropies in the correlation function through the Alcock-Paczynski effect (Simpson & Peacock, 2010; Samushia et al., 2012; Montanari & Durrer, 2012). In the presence of the Alcock-Paczynski effect, the redshift-space HI 21-cm power spectrum is given by: (Furlanetto et al., 2006; Bull et al., 2015)
\[P_{21}(k,z,\mu)=\frac{{\cal A}_{T}^{2}}{\alpha_{\parallel}\alpha_{\perp}^{2} }\left[b_{T}+\frac{f(z)\mu^{2}}{F^{2}+\mu^{2}(1-F^{2})}\right]^{2}P_{m}\left( \frac{k}{\alpha_{\perp}}\sqrt{1+\mu^{2}(F^{-2}-1)},z\right) \tag{7}\]
where \(F=\alpha_{\parallel}/\alpha_{\perp}\), with \(\alpha_{\parallel}\) and \(\alpha_{\perp}\) being the ratios of radial and angular distance measures between the fiducial and real cosmologies: \(\alpha_{\parallel}=H^{f}/H^{r}\) and \(\alpha_{\perp}=D_{A}^{r}/D_{A}^{f}\), where the superscripts \(f\) and \(r\) denote the fiducial and real cosmologies respectively.
The overall factor \(\alpha_{\parallel}\alpha_{\perp}^{2}\) is due to the scaling of the survey's physical volume. As the real geometry of the Universe differs from the one predicted by the fiducial cosmology, additional distortion is introduced in redshift space. The AP test is sensitive to the isotropy of the Universe and can help differentiate between different cosmological models. We note that the geometric factors shall also imprint on the BAO feature of the power spectrum. Since \(-1\leq\mu\leq 1\), the redshift-space 21-cm power spectrum can be decomposed in the basis of Legendre polynomials \({\cal P}_{\ell}(\mu)\) as (Hamilton, 1998)
\[P_{21}(k,\mu,z)=\sum_{\ell}P_{\ell}(z,k){\cal P}_{\ell}(\mu) \tag{8}\]
The odd harmonics vanish by pair-exchange symmetry, and the non-zero azimuthal harmonics (\(Y_{\ell m}\) with \(m\neq 0\)) vanish by symmetry about the line of sight. Using the standard normalization
\[\int_{-1}^{+1}{\cal P}_{\ell}(\mu){\cal P}_{\ell^{\prime}}(\mu)d\mu=\frac{2}{2\ell+1}\delta_{\ell,\ell^{\prime}}\]
the first few Legendre polynomials are given by
\[{\cal P}_{0}(\mu)=1,\ \ {\cal P}_{2}(\mu)=\frac{1}{2}\left(3\mu^{2}-1\right),\ \ {\cal P}_{4}(\mu)=\frac{1}{8}(35\mu^{4}-30\mu^{2}+3) \tag{9}\]
The coefficients of the expansion of the 21cm power spectrum, can be found by inverting the equation (8). Thus we have
\[P_{\ell}(z,k)=\frac{(2\ell+1)}{2}\int_{-1}^{+1}d\mu\ {\cal P}_{\ell}(\mu)P_{21}(z,k,\mu) \tag{10}\]
While the full information is contained in an infinite set of functions \(\{P_{\ell}(z,k)\}\), we shall be interested in the first few of these functions, which carry the dominant information.
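The multipoles of Eq. (10) can be extracted numerically, e.g. by Gauss-Legendre quadrature, as in the following sketch; the Kaiser-form toy integrand and its coefficients are illustrative.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

def multipole(P21_of_mu, ell, n_quad=32):
    """P_ell = (2l+1)/2 * int_{-1}^{+1} dmu P_ell(mu) P21(mu), Eq. (10)."""
    nodes, weights = leggauss(n_quad)
    coeff = np.zeros(ell + 1)
    coeff[ell] = 1.0                    # selects the Legendre polynomial P_ell
    return 0.5 * (2 * ell + 1) * np.sum(weights * legval(nodes, coeff)
                                        * P21_of_mu(nodes))

if __name__ == "__main__":
    # Kaiser-form toy integrand (b + f*mu^2)^2 at a fixed k.
    b, f = 2.0, 0.8
    toy = lambda mu: (b + f * mu ** 2) ** 2
    for ell in (0, 2, 4):
        print(f"P_{ell} = {multipole(toy, ell):.4f}")
    # Analytic check for the monopole: b^2 + (2/3)*b*f + (1/5)*f^2 = 5.1947
```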
### The BAO feature in the multipoles of 21-cm power spectrum
The sound horizon at the recombination epoch (\(z\sim 1000\)) provides a standard ruler which can be used to calibrate cosmological distances. Baryons imprint a distinct oscillatory signature on the cosmological power spectrum (White, 2005; Eisenstein & Hu, 1998). The BAO imprint on the 21-cm signal has been studied extensively (Sarkar & Bharadwaj, 2013, 2011). The baryon acoustic oscillation (BAO) is an important probe of cosmology (Eisenstein et al., 2005; Percival et al., 2007; Anderson et al., 2012; Shoji et al., 2009; Sarkar & Bharadwaj, 2013), as it allows us to measure the angular diameter distance \(D_{A}(z)\) and the Hubble parameter \(H(z)\) using the transverse and longitudinal oscillatory features respectively (Lopez-Corredoira, 2014).
The sound horizon at the epoch of recombination is given by
\[s(z_{d})=\int_{0}^{a_{r}}\frac{c_{s}da}{a^{2}H(a)} \tag{11}\]
where \(a_{r}\) is the scale factor at the epoch of recombination (redshift \(z_{d}\)) and \(c_{s}\) is the sound speed given by \(c_{s}(a)=c/\sqrt{3(1+3\rho_{b}/4\rho_{\gamma})}\) where \(\rho_{b}\) and \(\rho_{\gamma}\) denotes the baryonic and photon densities, respectively. The WMAP 5-year data constrains the value of \(z_{d}\) and \(s(z_{d})\) to be \(z_{d}=1020.5\pm 1.6\) and \(s(z_{d})=153.3\pm 2.0\)Mpc (Komatsu et al., 2009). We shall use these as the fiducial values in our subsequent analysis. The standard ruler '\(s\)' defines a transverse angular scale and a redshift interval in the radial direction as
\[\theta_{s}(z)=\frac{s(z_{d})}{(1+z)D_{A}(z)}\ \ \ \ \ \ \ \ \ \delta z_{s}=\frac{s(z_{d})H(z)}{c} \tag{12}\]
Measurement of \(\theta_{s}\) and \(\delta z_{s}\) allows the independent determination of \(D_{A}(z)\) and \(H(z)\). The BAO feature comes from the baryonic part of \(P(k)\). In order to isolate the BAO feature, we subtract the cold dark matter power spectrum from the total \(P(k)\) as \(P_{b}(k)=P(k)-P_{c}(k)\). Owing to significant deviations between the assumed cosmology and the fiducial cosmology, the longitudinal and tangential coordinates are rescaled by \(\alpha_{\parallel}\) and \(\alpha_{\perp}\) respectively, and the true wavenumber is related to the apparent one by \(k^{\prime}=k\sqrt{1+\mu^{2}(F^{-2}-1)}/\alpha_{\perp}\) (Matsubara & Suto, 1996; Ballinger et al., 1996; Simpson & Peacock, 2010). Incorporating the Alcock-Paczynski corrections explicitly, the BAO power spectrum can be written as (Hu & Sugiyama, 1996; Seo & Eisenstein, 2007)
\[P_{b}(k^{\prime})=A\,\frac{\sin x}{x}\,e^{-(k^{\prime}\Sigma_{s})^{1.4}}\,e^{-k^{\prime 2}\Sigma_{nl}^{2}/2} \tag{13}\]
where \(A\) is a normalization, and \(\Sigma_{s}=1/k_{silk}\) and \(\Sigma_{nl}=1/k_{nl}\) denote the inverse scales of 'Silk damping' and 'non-linearity' respectively. In our analysis we have used \(k_{nl}=(3.07\,h^{-1}{\rm Mpc})^{-1}\) and \(k_{silk}=(8.38\,h^{-1}{\rm Mpc})^{-1}\) from Seo & Eisenstein (2007), and \(x=\sqrt{k^{2}(1-\mu^{2})s_{\perp}^{2}+k^{2}\mu^{2}s_{\parallel}^{2}}\). The changes in \(D_{A}(z)\) and \(H(z)\) are reflected as changes in the values of \(s_{\perp}\) and \(s_{\parallel}\) respectively, and the errors in \(s_{\perp}\) and \(s_{\parallel}\) correspond to fractional errors in \(D_{A}\) and \(H(z)\) respectively. We use \(p_{1}=\ln(s_{\perp}^{-1})\) and \(p_{2}=\ln(s_{\parallel})\) as parameters in our analysis. The Fisher matrix is given by
\[F_{ij}=\left(\frac{2\ell+1}{2}\right)\int dk^{\prime}\ \int_{-1}^{+1}d\mu\ \frac{A_{T}^{2}}{\alpha_{ \parallel}\alpha_{\perp}^{2}}\left[b_{T}+\frac{f(z)\mu^{2}}{F^{2}+\mu^{2}(1-F^ {2})}\right]^{2}\frac{\mathscr{P}_{\ell}(\mu)}{\delta P_{21}^{2}(k,z,\mu)} \frac{\partial P_{b}(k^{\prime})}{\partial p_{i}}\frac{\partial P_{b}(k^{ \prime})}{\partial p_{j}} \tag{14}\]
\[=\left(\frac{2\ell+1}{2}\right)\int dk^{\prime}\ \int_{-1}^{+1}d\mu\ \frac{A_{T}^{2}}{\alpha_{ \parallel}\alpha_{\perp}^{2}}\left[b_{T}+\frac{f(z)\mu^{2}}{F^{2}+\mu^{2}(1-F ^{2})}\right]^{2}\frac{\mathscr{P}_{\ell}(\mu)}{\delta P_{21}^{2}(k,z,\mu)} \left(\cos x-\frac{\sin x}{x}\right)^{2}f_{i}f_{j}A^{2}e^{-2(k^{\prime}\sum_{s })^{1.4}}e^{-k^{\prime 2}\sum_{nl}^{2}} \tag{15}\]
where \(f_{1}=\mu^{2}-1\) and \(f_{2}=\mu^{2}\).
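The sketch below assembles the \(2\times 2\) Fisher matrix of Eq. (15) by direct quadrature. The survey-dependent prefactor (the multipole weight, \({\cal A}_{T}^{2}\), bias/RSD factor and \(1/\delta P_{21}^{2}\)) is abstracted into a user-supplied `weight(k, mu)` placeholder; the flat toy weight in the example is not a realistic noise model, and all names and default values are ours.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def fisher_bao(weight, s_perp=104.0, s_par=104.0, A=1.0,
               Sigma_s=8.38, Sigma_nl=3.07,          # h^-1 Mpc, from the text
               kmin=0.02, kmax=0.3, nk=200, nmu=32):
    """2x2 Fisher matrix for p1 = ln(1/s_perp), p2 = ln(s_par), after Eq. (15).
    weight(k, mu) must supply everything in the integrand other than the
    (cos x - sin x / x)^2, damping, and f_i f_j factors."""
    k = np.linspace(kmin, kmax, nk)
    mu, wmu = leggauss(nmu)                       # quadrature on [-1, 1]
    K, MU = np.meshgrid(k, mu, indexing="ij")     # shapes (nk, nmu)

    x = K * np.sqrt((1 - MU ** 2) * s_perp ** 2 + MU ** 2 * s_par ** 2)
    damp2 = A ** 2 * np.exp(-2 * (K * Sigma_s) ** 1.4 - K ** 2 * Sigma_nl ** 2)
    core = (np.cos(x) - np.sinc(x / np.pi)) ** 2 * damp2   # sinc(y)=sin(pi y)/(pi y)
    fvec = [MU ** 2 - 1.0, MU ** 2]                        # f_1, f_2 of Eq. (15)

    F, dk = np.empty((2, 2)), k[1] - k[0]
    for i in range(2):
        for j in range(2):
            integrand = weight(K, MU) * core * fvec[i] * fvec[j]
            F[i, j] = np.sum(wmu * integrand) * dk   # mu by quadrature, k by sum
    return F

if __name__ == "__main__":
    F = fisher_bao(lambda k, mu: 1.0)    # flat toy weight, for illustration only
    errs = np.sqrt(np.diag(np.linalg.inv(F)))
    print("relative errors on (D_A, H):", errs)
```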
We consider SKA's Medium-Deep Band 2 survey, covering a sky area of 5,000 deg\({}^{2}\) in the frequency range \(0.95-1.75\) GHz (\(z=[0-0.5]\)), and a Wide Band 1 survey, covering a sky area of 20,000 deg\({}^{2}\) in the frequency range \(0.35-1.05\) GHz (\(z=[0.35-3]\)) (Bacon et al., 2020). We calculate the expected error projections on \(D_{A}(z)\) and \(H(z)\) in five evenly spaced, non-overlapping redshift bins of width \(\Delta z=0.5\) within the redshift range \(z=0-3\). Each bin is taken to be independent and is centered at one of the redshifts \(z=[0.25,0.75,1.25,1.75,2.25]\).
### Visibility correlation
We use a visibility correlation approach to estimate the noise power spectrum for the 21-cm signal (Bharadwaj & Sethi, 2001; Bharadwaj & Ali, 2005; McQuinn et al., 2006; Geil et al., 2011; Villaescusa-Navarro et al., 2014; Sarkar & Datta, 2015). A radio-interferometric observation measures the complex visibility. The measured visibility, written as a function of baseline \(\mathbf{U}=(u,v)\) and frequency \(\nu\), is a sum of signal and noise
\[\mathcal{V}(\mathbf{U},\nu)=\mathcal{S}(\mathbf{U},\nu)+\mathcal{N}(\mathbf{U},\nu) \tag{16}\]
\[\mathcal{S}(\mathbf{U},\nu)=\frac{2k_{B}}{\lambda^{2}}\int d\vec{\theta}\ A( \vec{\theta})e^{2\pi i\mathbf{U}\cdot\vec{\theta}}\ \delta T_{b}(\vec{\theta},\nu) \tag{17}\]
where \(\delta T_{b}(\vec{\theta},\nu)\) is the fluctuation of the 21-cm brightness temperature and \(A(\vec{\theta})\) is the telescope beam. The factor \(\frac{2k_{B}}{\lambda^{2}}\) converts brightness temperature to specific intensity (Rayleigh-Jeans limit). Defining \(\Delta\nu\) as the difference from the central frequency, a further Fourier transform in \(\Delta\nu\) gives us
\[s(\mathbf{U},\tau)=\frac{2k_{B}}{\lambda^{2}}\int d\vec{\theta}\ d\nu\ A(\vec{ \theta})B(\Delta\nu)\ e^{2\pi i(\mathbf{U}\cdot\vec{\theta}+\tau\Delta\nu)}\ \delta T_{b}(\vec{\theta},\nu) \tag{18}\]
where \(B(\Delta\nu)\) is the frequency response function of the radio telescope.
\[s(\mathbf{U}_{a},\tau_{m})=\frac{2k_{B}}{\lambda^{2}}\int d\vec{\theta}\ d\Delta\nu\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}\ e^{-i(\mathbf{k}_{\perp}r\cdot\vec{\theta}+k_{\parallel}r^{\prime}\Delta\nu)}\ A(\vec{\theta})B(\Delta\nu)\ e^{2\pi i(\mathbf{U}_{a}\cdot\vec{\theta}+\tau_{m}\Delta\nu)}\ \widetilde{\delta T_{b}}(\mathbf{k}_{\perp},k_{\parallel}) \tag{19}\]
where the tilde denotes a Fourier transform and \(r^{\prime}=dr(\nu)/d\nu\).
\[s(\mathbf{U}_{a},\tau_{m})=\frac{2k_{B}}{\lambda^{2}}\int d\vec{\theta}\ d\Delta\nu\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}\ e^{-i(\mathbf{k}_{\perp}r-2\pi\mathbf{U}_{a})\cdot\vec{\theta}}e^{-i(k_{\parallel}r^{\prime}-2\pi\tau_{m})\Delta\nu}\ A(\vec{\theta})B(\Delta\nu)\ \widetilde{\delta T_{b}}(\mathbf{k}_{\perp},k_{\parallel}) \tag{20}\]
Performing the \(\vec{\theta}\) and \(\Delta\nu\) integral we have
\[s(\mathbf{U}_{a},\tau_{m})=\frac{2k_{B}}{\lambda^{2}}\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}\ \widetilde{A}\left(\frac{\mathbf{k}_{\perp}r}{2\pi}-\mathbf{U}_{a}\right)\widetilde{B}\left(\frac{k_{\parallel}r^{\prime}}{2\pi}-\tau_{m}\right)\ \widetilde{\delta T_{b}}(\mathbf{k}_{\perp},k_{\parallel}) \tag{21}\]
Defining new integration variables as \({\bf U}=\frac{{\bf k}_{\perp}r}{2\pi}\) and \(\tau=\frac{k_{\parallel}r^{\prime}}{2\pi}\) we have
\[\langle s({\bf U}_{a},\tau_{m})s^{*}({\bf U}_{b},\tau_{n})\rangle=\left(\frac{2k _{B}}{\lambda^{2}}\right)^{2}\frac{1}{r^{2}r^{\prime}}\int d{\bf U}\ d\tau \widetilde{A}\left({\bf U}-{\bf U}_{a}\right)\widetilde{A}^{*}\left({\bf U}-{ \bf U}_{b}\right)\widetilde{B}\left(\tau-\tau_{m}\right)\widetilde{B}^{*}\left( \tau-\tau_{n}\right)P_{21}\left(\frac{2\pi{\bf U}}{r},\frac{2\pi\tau}{r^{ \prime}}\right) \tag{22}\]
Approximately, we may write
\[\int d\tau\ \widetilde{B}\left(\tau-\tau_{m}\right)\widetilde{B}^{*}\left(\tau-\tau_{n}\right)\approx B\delta_{m,n}\ \ \ \ \mbox{and}\ \ \ \int d{\bf U}\ \widetilde{A}\left({\bf U}-{\bf U}_{a}\right)\widetilde{A}^{*}\left({\bf U}-{\bf U}_{b}\right)\approx\frac{\lambda^{2}}{A_{e}}\delta_{a,b} \tag{23}\]
where \(B\) is the bandwidth of the telescope and \(A_{e}\) is the effective area of each dish. Hence
\[\langle s({\bf U}_{a},\tau_{m})s^{*}({\bf U}_{b},\tau_{n})\rangle\approx \left(\frac{2k_{B}}{\lambda^{2}}\right)^{2}\frac{B\lambda^{2}}{r^{2}r^{\prime}A_{e}}\ \ P_{21}\left(\frac{2\pi{\bf U}_{a}}{r},\frac{2\pi\tau_{m}}{r^{\prime}}\right)\delta_{m,n}\delta_{a,b} \tag{24}\]
The noise in the visibilities measured at different baselines and frequency channels is uncorrelated. We then have
\[\langle{\cal N}({\bf U}_{a},\nu_{m})\ {\cal N}^{*}({\bf U}_{b},\nu_{n}) \rangle=\delta_{a,b}\delta_{m,n}2\sigma^{2} \tag{25}\]
where
\[\sigma=\frac{\sqrt{2}k_{B}T_{sys}}{A_{e}\sqrt{\Delta\nu t}} \tag{26}\]
where \(A_{e}\) is the effective area of the dishes, \(t\) is the correlator integration time and \(\Delta\nu\) is the channel width. If \(B\) is the observing bandwidth, there would be \(B/\Delta\nu\) channels. The system temperature \(T_{sys}\) can be written as
\[T_{sys}=T_{inst}+T_{sky} \tag{27}\]
where
\[T_{sky}=60{\rm K}\left(\frac{\nu}{300\ {\rm MHz}}\right)^{-2.5} \tag{28}\]
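For concreteness, Eqs. (26)-(28) translate into a few lines of code; the instrument temperature, channel width and integration time below are illustrative stand-ins rather than actual SKA1-Mid specifications.

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def sigma_noise(nu_MHz, A_e, T_inst=28.0, dnu_Hz=1e5, t_s=16.0):
    """Per-visibility rms noise of Eq. (26), with T_sys from Eqs. (27)-(28).
    T_inst, channel width dnu and integration time t are illustrative."""
    T_sky = 60.0 * (nu_MHz / 300.0) ** -2.5     # Eq. (28)
    T_sys = T_inst + T_sky                      # Eq. (27)
    return np.sqrt(2.0) * K_B * T_sys / (A_e * np.sqrt(dnu_Hz * t_s))

if __name__ == "__main__":
    A_e = 0.7 * np.pi * (15.0 / 2.0) ** 2   # 15 m dish, 70% efficiency (Table 1)
    nu = 1420.0 / (1.0 + 1.0)               # observing frequency at z = 1
    print(f"sigma ~ {sigma_noise(nu, A_e) / 1e-26:.2f} Jy")
```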
Under a Fourier transform
\[n({\bf U},\tau)=\sum_{i=1}^{B/\Delta\nu}{\cal N}({\bf U},\nu_{i})\Delta\nu\ \ e^{2\pi i\nu_{i}\tau} \tag{29}\]
\[\langle n({\bf U}_{a},\tau)\ n^{*}({\bf U}_{b},\tau)\rangle=2\sigma^{2}\delta _{a,b}\Delta\nu^{2}\frac{B}{\Delta\nu}=2\sigma^{2}\delta_{a,b}\Delta\nu B \tag{30}\]
\[\langle n({\bf U}_{a},\tau)\ n^{*}({\bf U}_{b},\tau)\rangle=\frac{4k_{B}^{2}T _{sys}^{2}B}{A_{e}^{2}t}=\left(\frac{2k_{B}}{\lambda^{2}}\right)^{2}\left( \frac{\lambda^{2}T_{sys}}{A_{e}}\right)^{2}\frac{B}{t} \tag{31}\]
Now considering a total observation time \(T_{o}\) and a bin \(\Delta{\bf U}\), there is a reduction of noise by a factor \(\sqrt{N_{p}}\), where \(N_{p}\) is the number of visibility pairs in the bin
\[N_{p}=N_{vis}(N_{vis}-1)/2\approx N_{vis}^{2}/2 \tag{32}\]
where \(N_{vis}\) is the number of visibilities in the bin. We may write
\[N_{vis}=\frac{N_{ant}(N_{ant}-1)}{2}\frac{T_{o}}{t}\rho({\bf U})\delta^{2}U \tag{33}\]
where \(N_{ant}\) is the total number of antennas and \(\rho({\bf U})\) is the baseline distribution function.
\[\langle n({\bf U}_{a},\tau)\ n^{*}({\bf U}_{b},\tau)\rangle=\left(\frac{2k_{B }}{\lambda^{2}}\right)^{2}\left(\frac{\lambda^{2}T_{sys}B}{A_{e}}\right)^{2} \frac{2\delta_{a,b}}{N_{ant}(N_{ant}-1)B\ T_{o}\ \rho({\bf U})\delta^{2}U} \tag{34}\]
where an additional reduction by \(\sqrt{2}\) is incorporated by considering visibilities in the half plane. The 21-cm power spectrum is not spherically symmetric, due to redshift-space distortion, but is symmetric in the azimuthal angle \(\phi\). Because of this symmetry, we sum all the Fourier cells in an annulus of constant \((k,\ \mu=\cos\theta=k_{\parallel}/k)\) with radial width \(\Delta k\) and angular width \(\Delta\theta\) for a statistical detection. The number of independent cells in such an annulus is
\[N_{c}=2\pi k^{2}\sin(\theta)\Delta k\Delta\theta\frac{Vol}{(2\pi)^{3}}=2\pi k ^{2}\Delta k\Delta\mu\frac{Vol}{(2\pi)^{3}} \tag{35}\]
where
\[Vol=\frac{r^{2}\lambda^{2}r^{\prime}B}{A_{e}} \tag{36}\]
Thus the full covariance matrix for visibility correlation is (Villaescusa-Navarro et al., 2014; Sarkar & Datta, 2015; Geil et al., 2011; McQuinn et al., 2006)
\[C_{a,b}=\frac{1}{\sqrt{N_{c}}}\left(\frac{2k_{B}}{\lambda^{2}}\right)^{2}\left[\frac{B\lambda^{2}}{r^{2}r^{\prime}A_{e}}\ \ P_{21}\left(\frac{2\pi{\bf U}_{a}}{r},\frac{2\pi\tau_{m}}{r^{\prime}}\right)+\left(\frac{\lambda^{2}T_{sys}B}{A_{e}}\right)^{2}\frac{2}{N_{ant}(N_{ant}-1)B\ T_{o}\ \rho({\bf U})\delta^{2}U}\right]\delta_{a,b} \tag{37}\]
We choose \(\delta^{2}U=A_{e}/\lambda^{2}\), \(\Delta k=k/10\), \(\Delta\mu=\mu/10\).
The baseline distribution function \(\rho(\mathbf{U})\) is normalized as
\[\int d\mathbf{U}\rho(\mathbf{U})=1 \tag{38}\]
For uniform baseline distribution
\[\rho(\mathbf{U})=\frac{1}{\pi(U_{max}^{2}-U_{min}^{2})} \tag{39}\]
Generally
\[\rho(\mathbf{U})=c\int d^{2}\mathbf{r}\rho_{ant}(\mathbf{r})\rho_{ant}(\mathbf{ r}-\lambda\mathbf{U}) \tag{40}\]
where \(c\) is fixed by the normalization of \(\rho(\mathbf{U})\) and \(\rho_{ant}\) is the distribution of antennae. The covariance matrix in Eq. (37) is used in our analysis to make noise projections on the 21-cm power spectrum and its multipoles. Observations with total observation time exceeding a limiting value make the instrumental noise insignificant, and the signal-to-noise ratio is then primarily limited by cosmic variance. Further, by observing \(N_{point}\) independent pointings, the covariance is reduced by a factor of \(1/\sqrt{N_{point}}\).
## 4 Results and Discussion
In this section we discuss the results of our investigation. Figure (3) shows the dimensionless 3D 21-cm power spectrum (\(\Delta_{21}^{2}=k^{3}P_{21}(\mathbf{k},z)/(2\pi^{2})\)) in redshift space at the fiducial redshift \(z=1\). In the \((k_{\parallel},k_{\perp})\) plane, the power spectrum exhibits the anisotropy of the redshift-space signal. The contours colored in blue correspond to the fiducial \(\Lambda\)CDM model, while those in red pertain to the CPL-\(\Lambda\)CDM model. We choose the best-fit values of the CPL-\(\Lambda\)CDM model parameters (\(\Omega_{m}=0.289,\Omega_{\Lambda}=-0.781,w_{0}=-1.03,w_{a}=-0.10\)) obtained from the combined CMB+BAO+Pantheon+R21 data (Sen et al., 2023). The Alcock-Paczynski effect makes a notable contribution, intensifying the anisotropy observed in the power spectrum. The significant departure of the CPL-\(\Lambda\)CDM model, \(\sim 5\%\) at \(k\sim 1\,{\rm Mpc}^{-1}\), indicates that a closer investigation of the possibility of discerning such models from the \(\Lambda\)CDM model is justified.
For the measurement of the 21-cm power spectrum we consider a radio-interferometric observation using a futuristic SKA1-Mid like experiment. The typical telescope parameters used are summarized in the table below. We also assume that the antenna distribution falls off as \(1/r^{2}\), whereby the baseline coverage on small scales is suppressed.
We consider 250 dish antennae, each of diameter 15 m and efficiency 0.7. We assume \(T_{sys}=60\) K and an observation bandwidth of 128 MHz. The \(k\)-range between the smallest and largest baselines is binned as \(\Delta k=\alpha k\), where \(\alpha=\)
Figure 3: shows the 3D H i 21-cm power spectrum at \(z=1\) in the \((k_{\perp},k_{\parallel})\) space. The asymmetry in the signal is indicative of redshift-space distortion: the blue dashed line corresponds to \(\Lambda\)CDM. In contrast, the solid red line represents the CPL-\(\Lambda\)CDM model, where the Alcock-Paczynski effect enhances the distortions. The colorbar shows the value of the dimensionless quantity \(\Delta_{21}^{2}=k^{3}P_{21}(\mathbf{k})/(2\pi^{2})\) in mK\({}^{2}\).
\(\frac{1}{N_{bin}}\ln(U_{max}/U_{min})\). The minimum value of \(k\) is taken to be \(0.005\,{\rm Mpc}^{-1}\) and the maximum value of \(k\) is taken to be \(0.5\,{\rm Mpc}^{-1}\), with \(N_{bin}=8\) logarithmically spaced bins. Considering a total observation time of \(500\times 150\) hrs with 150 independent pointings, we obtain the \(1\sigma\) errors on \(P_{\ell}(k,z)\). The fiducial model is chosen to be \(\Lambda\)CDM. Figure (4) shows the multipoles of \(P_{21}(k,z)\) for selected parameter values of the CPL-\(\Lambda\)CDM model. The central dotted line corresponds to \(\Lambda\)CDM. The fiducial redshift is chosen to be \(0.2\) (top) and \(0.57\) (bottom). We find that in the range \(0.01\,{\rm Mpc}^{-1}<k<0.1\,{\rm Mpc}^{-1}\) phantom models are distinguishable from \(\Lambda\)CDM at a sensitivity of \(>3\sigma\). In the higher multipoles they are even more distinguishable from the fiducial \(\Lambda\)CDM. On the contrary, non-phantom models remain statistically indistinguishable from the \(\Lambda\)CDM model when considering the monopole only; they are only distinguishable in the higher multipoles.
The BAO imprint on the monopole \(P_{0}(z,k)\) allows us to constrain \(D_{A}(z)\) and \(H(z)\). We perform a Markov Chain Monte Carlo (MCMC) analysis to constrain the model parameters using the projected error constraints obtained on the binned \(H(z)\) and \(D_{A}(z)\) from \(P_{0}(z,k)\). The analysis uses the Python implementation of the MCMC sampler introduced by Foreman-Mackey et al. (2013). We take flat priors for the CPL-\(\Lambda\)CDM model parameters with ranges \(\Omega_{\Lambda}\in[-7,2]\), \(w_{0}\in[-1.5,1.5]\), \(w_{a}\in[-0.7,0.7]\). Figure (5) shows the marginalized posterior distribution of the set of parameters \((\Omega_{\Lambda},w_{0},w_{a})\) and the corresponding 2D confidence contours. The fiducial values of the model parameters are taken from the best-fit values \(\Omega_{\Lambda}=-0.781,w_{0}=-1.03,w_{a}=-0.10\) obtained from the combined CMB+BAO+Pantheon+R21 data (Sen et al., 2023). Constraints on the model parameters are tabulated in Table (2). Comparing with the projected error limits for the parameters of the CPL-\(\Lambda\)CDM model obtained in Sen et al. (2023), we find that the 21-cm signal alone does not impose stringent constraints on the values of \(\Omega_{\Lambda}\) and \(w_{a}\). However, it does exhibit a reasonably good ability to constrain the parameter \(w_{0}\). To attain more robust constraints on these model parameters, a more comprehensive approach is required, combining the 21-cm power spectrum data with other cosmological observations such as the CMB, BAO, SNIa, and galaxy surveys. Through such a joint analysis, it becomes possible to significantly improve the precision of parameter estimation.
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline Parameters & \(\Omega_{\Lambda}\) & \(w_{0}\) & \(w_{a}\) \\ \hline Constraints & \(-0.883^{+0.978}_{-2.987}\) & \(-1.030^{+0.023}_{-0.082}\) & \(-0.088^{+0.162}_{-0.343}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: The parameter values obtained in the MCMC analysis, tabulated along with the \(1\sigma\) uncertainties.
Figure 4: shows the 21-cm linear power spectrum monopole (left), quadrupole (middle) and hexadecapole (right). The top panels show the power spectrum at redshift \(z=0.2\) and the bottom panel for redshift \(z=0.57\). The dotted line corresponds to \(\Lambda\)CDM.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \hline \(N_{ant}\) & Antenna efficiency & \(D_{dish}\) & \(T_{o}\) & \(T_{sys}\) & \(B\) \\ \hline 250 & 0.7 & 15 m & 500 hrs & 60 K & 200 MHz \\ \hline \hline \end{tabular}
\end{table}
Table 1: Telescope parameters used in our analysis.
## 5 Conclusion
In this work, we study the possibility of constraining a negative \(\Lambda\) using the post-reionization H i 21-cm power spectrum. We specifically investigate quintessence models with the most widely used dark energy EoS parametrization and add a non-zero vacuum term (a \(\pm\Lambda\)). Subsequently, we look into the influence of the Alcock-Paczynski effect on the 3D H i 21-cm power spectrum. Using \(\Lambda\)CDM as the fiducial cosmology, we explore the implications of the first few multipoles of the redshift-space 21-cm power spectrum for the upcoming SKA experiment. From the BAO feature of the monopole, we estimate the projected errors on \(H(z)\) and \(D_{A}(z)\) over the redshift range \(z\sim 0-3\).
We have obtained error projections on the model parameters from the BAO imprint on the post-reionization 21-cm intensity maps. We employ Bayesian analysis techniques to put constraints on the model parameters, thereby enhancing our understanding of the underlying cosmological dynamics and the potential implications of negative \(\Lambda\) values. The projections are idealized, since the detection of the 21-cm signal is hampered by various observational challenges, a primary concern being the substantial interference caused by extensive astrophysical foregrounds (Ghosh et al., 2011). These foregrounds stem from within our own Milky Way galaxy as well as from extragalactic sources, and they pose a considerable obstacle to the unambiguous identification of the 21-cm signal. Effectively mitigating this issue demands a substantial effort in foreground subtraction. We have not considered the role of foregrounds in this work. Nevertheless, we demonstrate that models with quintessence dark energy and a negative cosmological constant may be robustly constrained by futuristic radio-interferometric observations.
## Acknowledgements
AAS acknowledges the funding from SERB, Govt of India under the research grant no: CRG/2020/004347.
## 6 Data Availability
The data are available upon reasonable request from the corresponding author.
|
2301.02508 | End-to-End 3D Dense Captioning with Vote2Cap-DETR | 3D dense captioning aims to generate multiple captions localized with their
associated object regions. Existing methods follow a sophisticated
``detect-then-describe'' pipeline equipped with numerous hand-crafted
components. However, these hand-crafted components would yield suboptimal
performance given cluttered object spatial and class distributions among
different scenes. In this paper, we propose a simple-yet-effective transformer
framework Vote2Cap-DETR based on recent popular \textbf{DE}tection
\textbf{TR}ansformer (DETR). Compared with prior arts, our framework has
several appealing advantages: 1) Without resorting to numerous hand-crafted
components, our method is based on a full transformer encoder-decoder
architecture with a learnable vote query driven object decoder, and a caption
decoder that produces the dense captions in a set-prediction manner. 2) In
contrast to the two-stage scheme, our method can perform detection and
captioning in one-stage. 3) Without bells and whistles, extensive experiments
on two commonly used datasets, ScanRefer and Nr3D, demonstrate that our
Vote2Cap-DETR surpasses current state-of-the-arts by 11.13\% and 7.11\% in
[email protected], respectively. Codes will be released soon. | Sijin Chen, Hongyuan Zhu, Xin Chen, Yinjie Lei, Tao Chen, Gang YU | 2023-01-06T13:46:45Z | http://arxiv.org/abs/2301.02508v1 | # End-to-End 3D Dense Captioning with Vote2Cap-DETR
###### Abstract
3D dense captioning aims to generate multiple captions localized with their associated object regions. Existing methods follow a sophisticated "detect-then-describe" pipeline equipped with numerous hand-crafted components. However, these hand-crafted components would yield sub-optimal performance given cluttered object spatial and class distributions among different scenes. In this paper, we propose a simple-yet-effective transformer framework Vote2Cap-DETR based on recent popular **DE**tection **TR**ansformer (DETR). Compared with prior arts, our framework has several appealing advantages: 1) Without resorting to numerous hand-crafted components, our method is based on a full transformer encoder-decoder architecture with a learnable vote query driven object decoder, and a caption decoder that produces the dense captions in a set-prediction manner. 2) In contrast to the two-stage scheme, our method can perform detection and captioning in one-stage. 3) Without bells and whistles, extensive experiments on two commonly used datasets, ScanRefer and Nr3D, demonstrate that our Vote2Cap-DETR surpasses current state-of-the-arts by 11.13% and 7.11% in [email protected], respectively. Codes will be released soon.
## 1 Introduction
3D dense captioning [11, 7, 38, 36, 18, 4] requires a system to localize all the objects in a 3D scene, and generate descriptive sentences for each object. This problem is challenging given 1) the sparsity of point clouds and 2) the cluttered distribution of objects.
3D dense captioning can be divided into two tasks, object detection and object caption generation. Scan2Cap[11], MORE[18], and SpacCap3D[36] propose well-designed relation reasoning modules to efficiently model relations among object proposals. [42] introduces contextual information from two branches to improve the caption. 3DJCG[4] and D3Net[7] study the correlation between 3D visual grounding and 3D dense captioning, and point out that these two tasks promote each other. Additionally, \(\chi\)-Trans2Cap[38] discusses how to transfer knowledge from additional 2d information to boost 3d dense captioning.
Existing methods all adopt a two-stage "detect-then-describe" pipeline[11, 18, 36, 4, 7, 42] (Figure 1). This pipeline first generates a set of object proposals, then decodes each object with a caption generator via an explicit reasoning procedure. Though these methods have achieved remarkable performance, the "detect-then-describe" pipeline suffers from the following issues: 1) Because of the serial and explicit reasoning, the captioning highly depends on the object detection performance, which limits the mutual promotion of detection and captioning. 2) The heavy reliance on hand-crafted components, e.g., radii, 3D operators, the definition of proposal neighbors, and post-processing (non-maximum suppression[25]), introduces
Figure 1: **Illustration of existing two-stage 3D dense captioning method (upper) and our Vote2Cap-DETR (bottom).** Existing methods adopt a two-stage pipeline that heavily depends on a detector’s output. Therefore, we propose a transformer-based one-stage model, Vote2Cap-DETR, that frames 3D dense captioning as a set prediction problem.
additional hyper-parameters, leading to sub-optimal performance given the sparse object surfaces and cluttered object distributions among different indoor scenes. This inspires us to design a one-stage 3D dense captioning system.
To address the above issues, we propose Vote2Cap-DETR, a full transformer encoder-decoder architecture for one-stage 3D dense captioning. Unlike the traditional "detect-then-describe" pipeline, we directly feed the decoder's output into the localization head and caption head in parallel. By casting 3D dense captioning as a set-to-set problem, each target instance and its language annotation is matched with a query in an one-to-one correspondence manner, helping feature representation for proposals be more discriminative to identify each **distinctive** object in a 3D scene. Additionally, we also propose a novel vote query driven decoder to introduce spatial bias for better localization of objects in a cluttered 3D scene.
With the fully attentional design, we resolve 3D dense captioning with the following innovations: 1) Our method treats the 3D dense captioning task as a set prediction problem. The proposed Vote2Cap-DETR directly decodes the features into object sets with their locations and corresponding captions by applying two parallel prediction heads. 2) We propose a novel vote decoder by reformulating the object queries in 3DETR into the format of the vote query, which is a composition of the embeddings of the seed points and the vote transformation of the box with respect to the seeds. This indicates the connection between the vote query in Vote2Cap-DETR and VoteNet, but with better localization and higher training efficiency. 3) We develop a novel query-driven caption head, which absorbs the relation and attribute modeling into the self- and cross-attention, so that it can look into both the local and global context to better describe the scene. Extensive experiments on two commonly used datasets, ScanRefer and Nr3D, demonstrate that our approach surpasses prior arts with many hand-crafted procedures by a large margin, which demonstrates the superiority of a full transformer architecture with sophisticated vote and caption heads, and can inspire many 3D vision and language tasks.
To summarize, the main contributions of this work include:
* We propose a novel one-stage and fully attention driven architecture for 3D dense captioning as a set-to-set prediction problem, which achieves object localization and caption generation in parallel.
* Extensive experiments show that our proposed Vote2Cap-DETR achieves new state-of-the-art performance on both Nr3D[1] (45.53% [email protected]) and ScanRefer[11] (73.77% [email protected]).
## 2 Related Work
We briefly summarize works on 3D dense captioning, and DETR-based methods for image and 3D object detection. Additionally, we also introduce some methods for image captioning, which are closely related to our work.
**3D Dense Captioning.** 3D dense captioning, a task that requires translating 3D scene information into a set of bounding boxes and natural language descriptions, is challenging and has raised great interest among scholars in recent years. Scan2Cap[11] and MORE[18] build graphs on a detector's[29, 17] box estimations with hand-crafted rules to reason about complex relations among objects in a 3D scene. SpaCap3D[36] builds a spatiality-guided transformer to model spatial relations among the detector's outputs. 3DJCG[4] and D3Net[7] study the joint promotion of 3D dense captioning and 3D visual grounding. \(\chi\)-Trans2Cap[38] introduces additional 2D priors to complement information for 3D dense captioning with knowledge transfer. Recently, [42] shifted attention to contextual information for the perception of non-object information. These approaches have made great attempts to solve the 3D dense captioning problem. However, they all follow a "detect-then-describe" pipeline, which is heavily dependent on a detector's performance. Our proposed Vote2Cap-DETR differs from existing works in that it is a one-stage model that detects and generates captions in parallel, and treats 3D dense captioning as a set prediction problem.
**DETR: from 2D to 3D.** **DE**tection **TR**ansformer (DETR)[5] is a transformer[34]-based architecture that treats object detection as a set prediction problem and does not require non-maximum suppression[25] for post-processing. Though great results have been achieved, DETR suffers from slow convergence. Many follow-up works[43, 39, 14, 23, 9, 16] put effort into speeding up DETR's training by introducing multi-scale features, cross-attention designs, and label assignment techniques. Researchers have also attempted to introduce transformer architectures to 3D object detection. GroupFree3D[21] learns proposal features from the whole point cloud through the transformer rather than by grouping local points. 3DETR[24] analyzes the potential of the standard transformer model, and generates proposals by uniformly sampling seed points from a 3D scene. In our work, we extend the DETR architecture for 3D dense captioning so that caption generation and box localization are fully interrelated through parallel decoding. Additionally, we propose the vote query for better performance and faster convergence.
**Image Captioning.** Image captioning requires a model to generate sentences describing key elements in an image, which has become a hot topic in computer vision. Existing image captioning works adopt an encoder-decoder architecture, where the decoder generates sentences from visual features extracted by the encoder. [2, 12, 15, 27] adopt a
detector to extract region features as visual clues for the decoder, while [20, 41] extract grid features directly from an image. Additionally, [26] generates captions with both region and grid visual features. Though these methods are effective for image captioning, they cannot be directly applied to 3D dense captioning, which requires both accurately localizing and describing each 3D object, rather than simply captioning a whole 2D scene image. In contrast, our proposed caption head sufficiently leverages the rich context information in the 3D point cloud, receives visual clues from both the object query and its local context, and fuses them to achieve effective 3D dense captioning.
## 3 Method
As shown in Fig. 2, given a 3D scene, our goal is to localize objects of interest and generate informative natural language descriptions for each object. The **input** of our model is a point cloud \(PC=[p_{in};f_{in}]\in\mathbb{R}^{N\times(3+F)}\) representing an indoor 3D scene. Here, \(p_{in}\in\mathbb{R}^{N\times 3}\) is the absolute locations for each point, and \(f_{in}\in\mathbb{R}^{N\times F}\) is additional input feature for each point, such as _color, normal, height_, or _multiview feature_ introduced by [11, 6]. The expected **output** is a set of box-caption pairs \((\hat{B},\hat{C})=\{(\hat{b}_{1},\hat{c}_{1}),\cdots,(\hat{b}_{K},\hat{c}_{K})\}\), representing an estimation of \(K\) distinctive objects in this 3D scene.
Specifically, our system adopts the 3DETR[24] encoder as our scene encoder, and a transformer decoder to capture both object-object and object-scene interactions via the attention mechanism. Then, we adopt two task-specific heads for object detection and caption generation.
### 3DETR Encoder
Inspired by DETR[5], 3DETR[24] has made a successful attempt at bringing a full transformer architecture to the 3D object detection task, removing many hard-coded design decisions such as the popular VoteNet and PointNet++ modules used in most two-stage methods.
In the 3DETR encoder, the input \(PC\) is first tokenized with a set-abstraction layer[30]. Then, point tokens are fed into a masked transformer encoder with a set-abstraction layer followed by another two encoder layers. We denote the encoded scene tokens as \([p_{enc};f_{enc}]\in\mathbb{R}^{1024\times(3+256)}\).
### Vote Query
Though 3DETR has achieved initial success in 3D object detection, it suffers from certain limitations. 3DETR estimates boxes around query points (aka proposal centers) sampled from the scene, which can place these boxes far away from real objects given the sparse object surfaces, resulting in slow convergence in capturing discriminative object features and further missed detections.
Prior works on fast-convergence DETR models[23, 10, 40] show that injecting more structured bias into the initialization of object queries, such as anchor points or content-aware queries, accelerates training. Therefore, we propose the vote query, which introduces both 3D spatial bias and content-related information, for faster convergence and improved performance.
More specifically, we reformulate the object queries in 3DETR into the format of vote query, as a composition of the embedding of the reference points and vote transformation around them. This helps to build the connection between the object query in 3DETR and the vote set prediction widely studied in VoteNet.
The detailed structure is shown in Figure 3. Here, vote \(\Delta p_{vote}\) is predicted from encoded scene token feature \(f_{enc}\) with a **F**eed **F**orward **N**etwork (FFN) \(FFN_{vote}\) that learns to shift the encoded points to objects' centers spatially:
\[p_{vote}=p_{enc}+\Delta p_{vote}=p_{enc}+FFN_{vote}\left(f_{enc}\right). \tag{1}\]
Then, we sample 256 points \(p_{seed}\) from \(p_{enc}\) with farthest point sampling, and add each seed point's offset estimation to obtain \(p_{vq}=p_{seed}+\Delta p_{vote}\). Finally, we gather features from \((p_{enc},f_{enc})\) at \(p_{vq}\) with a set-abstraction layer[30] to form the vote query feature \(f_{vq}\in\mathbb{R}^{256\times 256}\). We represent the vote query as \((p_{vq},f_{vq})\).
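A schematic PyTorch sketch of the vote query generation is given below. It is not the released implementation: for brevity, farthest point sampling and the set-abstraction layer are replaced here by random sampling and nearest-neighbor feature gathering, and all module names are ours.

```python
import torch
import torch.nn as nn

class VoteQuery(nn.Module):
    """Sketch of vote query generation (Fig. 3), under simplifying assumptions."""

    def __init__(self, d=256, n_query=256):
        super().__init__()
        self.n_query = n_query
        # FFN_vote predicting a 3D offset per encoded point, Eq. (1).
        self.ffn_vote = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, 3))

    def forward(self, p_enc, f_enc):
        # p_enc: (B, N, 3) encoded point locations; f_enc: (B, N, d) features.
        p_vote = p_enc + self.ffn_vote(f_enc)        # shift points to centers
        idx = torch.randperm(p_enc.shape[1])[: self.n_query]  # stand-in for FPS
        p_vq = p_vote[:, idx]                        # p_vq = p_seed + delta_p_vote
        # Gather a query feature from the nearest encoded point (stand-in for
        # the set-abstraction layer used in the paper).
        nn_idx = torch.cdist(p_vq, p_enc).argmin(dim=-1)       # (B, n_query)
        f_vq = torch.gather(
            f_enc, 1, nn_idx.unsqueeze(-1).expand(-1, -1, f_enc.shape[-1]))
        return p_vq, f_vq

if __name__ == "__main__":
    p, f = torch.rand(2, 1024, 3), torch.rand(2, 1024, 256)
    p_vq, f_vq = VoteQuery()(p, f)
    print(p_vq.shape, f_vq.shape)  # (2, 256, 3) (2, 256, 256)
```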
Following 3DETR[24], our model adopts an eight-layer transformer decoder, and the \(i\)-th layer's input query feature \(f^{i}_{query}\) is calculated through
\[f^{i}_{query}=Layer_{i-1}\left(f^{i-1}_{query}+FFN\left(PE\left(p_{vq} \right)\right)\right), \tag{2}\]
where \(f^{0}_{query}=f_{vq}\), and \(PE(\cdot)\) is the 3D Fourier positional encoding function[32]. Experiments in later sections demonstrate that: 1) Vote query injects additional spatial bias to object detection and boosts the detection performance. 2) Encoding features from the point cloud as initial queries accelerates convergence.
### Parallel Decoding
We adopt two task-specific heads for simultaneous object detection and caption generation. The two task heads are agnostic to each other's output.
**Detection Head.** Detecting objects in a 3D scene requires box corner estimation \(\hat{B}\) and class estimation \(\hat{S}\) (containing a "no object" class) from each object query feature. Following 3DETR[24], box corner estimation is reformulated into offset estimation from a query point to an object's center, plus box size estimation. All subtasks are implemented by FFNs. In practice, the object localization head is shared across the different layers of the decoder, following all existing works on DETR[5, 24, 23, 10].
**Caption Head.** 3D dense captioning requires attribute details on an object and its relation with its close surroundings. However, the vote query itself is agnostic to box predictions for the whole scene, and fails to provide adequate attribute
and spatial relations for generating informative captions. Therefore, the main difficulty is how to leverage sufficient surrounding contextual information without confusing the caption head.
To address the above issues, we propose the **D**ual-**C**lued **C**aptioner (DCC), a lightweight transformer decoder-based caption head, for 3D dense captioning. DCC consists of a stack of 2 identical transformer decoder blocks, sinusoid position embeddings, and a linear classification head. To generate informative captions, DCC receives two streams of visual clues \(\mathcal{V}=(\mathcal{V}^{q},\mathcal{V}^{s})\). Here, \(\mathcal{V}^{q}\) is the last decoder layer's output feature for a vote query, and \(\mathcal{V}^{s}\) is the contextual information surrounding the absolute location of that vote query. When generating a caption for a proposal, we substitute the standard **S**tart **O**f **S**equence ('SOS') prefix with the \(\mathcal{V}^{q}\) of the query identifying the object to be described, following [36]. Since the vote query is agnostic of the actual neighboring object proposals because of the parallel detection branch, we introduce the vote query's \(k_{s}\) nearest local context token features as its local surroundings \(\mathcal{V}^{s}\), used as keys for cross-attention. During evaluation, we generate captions through beam search with a beam size of 5.
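The following sketch illustrates the two visual clues of DCC: \(\mathcal{V}^{q}\) replaces the 'SOS' prefix, and the \(k_{s}\) nearest context tokens serve as cross-attention keys. The vocabulary size, the value of \(k_{s}\), and the omission of the sinusoid position embedding are simplifications of ours; this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class DCCSketch(nn.Module):
    """Minimal sketch of the Dual-Clued Captioner (Fig. 4)."""

    def __init__(self, d=256, vocab=3433, n_layer=2, k_s=16):
        super().__init__()
        self.k_s = k_s
        self.embed = nn.Embedding(vocab, d)   # word embedding (PE omitted)
        layer = nn.TransformerDecoderLayer(d, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layer)
        self.cls = nn.Linear(d, vocab)

    def forward(self, v_q, p_q, f_enc, p_enc, tokens):
        # v_q: (B, d) query feature; p_q: (B, 3) query location;
        # f_enc/p_enc: (B, N, d)/(B, N, 3) scene tokens; tokens: (B, T) word ids.
        ks_idx = torch.cdist(p_q[:, None], p_enc).squeeze(1).topk(
            self.k_s, largest=False).indices          # k_s nearest context tokens
        v_s = torch.gather(
            f_enc, 1, ks_idx.unsqueeze(-1).expand(-1, -1, f_enc.shape[-1]))
        x = torch.cat([v_q[:, None], self.embed(tokens)], dim=1)  # V^q as prefix
        T = x.shape[1]
        causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        h = self.decoder(x, v_s, tgt_mask=causal)     # cross-attend to V^s
        return self.cls(h)                            # next-word logits

if __name__ == "__main__":
    B, N = 2, 1024
    out = DCCSketch()(torch.rand(B, 256), torch.rand(B, 3),
                      torch.rand(B, N, 256), torch.rand(B, N, 3),
                      torch.randint(0, 3433, (B, 12)))
    print(out.shape)  # (2, 13, 3433)
```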
### Set prediction loss for 3D Dense Captioning
Our proposed Vote2Cap-DETR generates a set of paired box-caption proposals \((\hat{B},\hat{C})\) for 3D dense captioning. It
Figure 4: **Dual-Clued Captioner(DCC).** DCC is a lightweight transformer based caption head that uses vote query feature \(\mathcal{V}_{q}\) as caption prefix to identify the described region, and contextual features \(\mathcal{V}_{s}\) surrounding the vote query to complement with more surrounding information for more descriptive caption generation.
Figure 3: **Vote Query Generation.** Vote query \(p_{eq}\) contains spatial bias (\(\Delta p_{vote}\)) to initial object queries (\(p_{seed}\)), which are sampled from the scene with farthest point sampling (FPS) and gathered feature \(f_{vq}\) from the point cloud for each query.
Figure 2: **Approach.** Vote2Cap-DETR is an one-stage transformer model that takes a 3D point cloud as its input, and generates a set of box predictions and sentences localizing and describing each object in the point cloud. The scene encoder first generates encoded scene tokens \((p_{enc},f_{enc})\) from the input point cloud. Then, we generate vote query \((p_{eq},f_{vq})\) from the encoded scene tokens, which introduce both spatial bias \(p_{vq}\) and content-aware feature \(f_{vq}\) to initial object queries. The transformer decoder decodes each vote query with two parallel task heads for captioning and detection. We optimize Vote2Cap-DETR with a set loss.
requires supervision for vote query (\(\mathcal{L}_{vq}\)), detection head (\(\mathcal{L}_{det}\)), and caption head (\(\mathcal{L}_{cap}\)).
**Vote Query Loss.** We borrow the vote loss from VoteNet[29] as \(\mathcal{L}_{vq}\), to help the vote query generation module learn to shift the points \(p_{enc}\) towards object centers:
\[\mathcal{L}_{vq}=\frac{1}{M}\sum_{i=1}^{M}\sum_{j=1}^{N_{gt}}\left\|p_{vote}^{ i}-cnt_{j}\right\|_{1}\cdot\mathbb{I}\left\{p_{enc}^{i}\in I_{j}\right\}. \tag{3}\]
Here, \(\mathbb{I}(\cdot)\) is an indicator function that equals \(1\) when the condition meets and \(0\) otherwise, \(N_{gt}\) is the number of instances in a 3D scene, \(M\) is the size of \(p_{vote}\), and \(cnt_{j}\) is the center of \(j\)th instance \(I_{j}\).
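Eq. (3) amounts to the following computation; instance membership is passed in as boolean masks, and the helper and its toy inputs are ours.

```python
import torch

def vote_query_loss(p_vote, centers, inst_masks):
    """Vote loss of Eq. (3): L1 distance from each vote to the center of the
    instance its seed point lies in; background points contribute nothing.
    p_vote: (M, 3) votes; centers: (N_gt, 3); inst_masks: (N_gt, M) booleans."""
    loss = p_vote.new_zeros(())
    for j in range(centers.shape[0]):
        in_inst = inst_masks[j]
        if in_inst.any():
            loss = loss + (p_vote[in_inst] - centers[j]).abs().sum()
    return loss / p_vote.shape[0]

if __name__ == "__main__":
    M, N_gt = 1024, 4
    p_enc = torch.rand(M, 3)
    masks = torch.zeros(N_gt, M, dtype=torch.bool)
    masks[torch.randint(0, N_gt, (M,)), torch.arange(M)] = True  # toy labels
    print(vote_query_loss(p_enc + 0.01, torch.rand(N_gt, 3), masks))
```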
**Detection Loss.** Following 3DETR[24], we use the same Hungarian algorithm to assign each proposal with a ground truth label. Since 3D dense captioning is closely related to the object localization ability, we apply a larger weight on the gIoU loss component for total set loss[24]:
\[\mathcal{L}_{set}=\alpha_{1}\mathcal{L}_{giou}+\alpha_{2}\mathcal{L}_{cls}+ \alpha_{3}\mathcal{L}_{center-reg}+\alpha_{4}\mathcal{L}_{size-reg}, \tag{4}\]
where \(\alpha_{1}=10\), \(\alpha_{2}=1\), \(\alpha_{3}=5\), \(\alpha_{4}=1\) are set heuristically. The set loss \(\mathcal{L}_{set}\) is applied to all \(n_{dec-layer}\) layers in the decoder for better convergence.
**Caption Loss.** Following the standard practice of image captioning, we train our caption head first with standard cross-entropy loss (MLE training), and then fine-tune it with **S**elf-**C**ritical **S**equence **T**raining (SCST)[31]. During MLE training, the model is trained to predict the \((t+1)\)th word \(c_{i}^{t+1}\), given the first \(t\) words \(c_{i}^{[1:t]}\) and the visual clue \(\mathcal{V}\). The loss function for a \(T\)-length sentence is defined as:
\[\mathcal{L}_{c_{i}}=\sum_{t=1}^{T}\mathcal{L}_{c_{i}}(t)=-\sum_{t=1}^{T}\log \hat{P}\left(c_{i}^{t+1}|\mathcal{V},c_{i}^{[1:t]}\right). \tag{5}\]
After the caption head is trained under word-level supervision, we fine-tune it with SCST. During SCST, the model generates multiple captions \(\hat{c}_{1,\cdots,k}\) with a beam size of \(k\), and another \(\hat{g}\) through greedy search as a baseline. The loss function for SCST is defined as:
\[\mathcal{L}_{c_{i}}=-\sum_{i=1}^{k}\left(R\left(\hat{c}_{i}\right)-R\left( \hat{g}\right)\right)\cdot\frac{1}{|\hat{c}_{i}|}\log\hat{P}\left(\hat{c}_{i}| \mathcal{V}\right). \tag{6}\]
Here, the reward function \(R\left(\cdot\right)\) is the CIDEr metric for caption evaluation, and the log probability of caption \(\hat{c}_{i}\) is normalized by the caption length \(|\hat{c}_{i}|\) to encourage the model to treat captions of different lengths as equally important.
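Assuming per-beam log-probabilities and CIDEr rewards have already been computed, Eq. (6) reduces to a few tensor operations; the argument names below are ours.

```python
import torch

def scst_loss(logprobs, rewards, baseline_reward, lengths):
    """Sketch of Eq. (6).
    logprobs: (k,) summed log-probability of each beam caption c_i
    rewards:  (k,) CIDEr reward R(c_i) of each beam caption
    baseline_reward: scalar CIDEr reward R(g) of the greedy caption
    lengths:  (k,) token count |c_i| of each beam caption"""
    advantage = rewards - baseline_reward
    # length normalization treats captions of different lengths equally
    return -(advantage * logprobs / lengths).sum()
```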
**Set to Set Training for 3D Dense Captioning.** We propose an easy-to-implement set-to-set training strategy for 3D dense captioning. Given a 3D scene, we randomly sample one sentence from the corpus for each annotated instance. Then, we assign language annotations to the corresponding number of proposals in the corresponding scene with the same Hungarian algorithm. During training, we average losses for captions \(\mathcal{L}_{c_{i}}\) on all annotated instances in a batch, to compute the caption loss \(\mathcal{L}_{cap}\). To balance losses for different tasks, our loss function for the whole system is defined as:
\[\mathcal{L}=\beta_{1}\mathcal{L}_{vq}+\beta_{2}\sum_{i=1}^{n_{dec-layer}} \mathcal{L}_{set}+\beta_{3}\mathcal{L}_{cap}, \tag{7}\]
where \(\beta_{1}=10\), \(\beta_{2}=1\), \(\beta_{3}=5\) are set heuristically.
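In a training loop, the combination in Eq. (7) amounts to a weighted sum; the function below is an illustrative sketch with names of our choosing.

```python
def total_loss(L_vq, L_set_per_layer, L_cap, betas=(10.0, 1.0, 5.0)):
    """Sketch of Eq. (7); L_set_per_layer holds one set loss per decoder layer."""
    b1, b2, b3 = betas
    return b1 * L_vq + b2 * sum(L_set_per_layer) + b3 * L_cap
```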
## 4 Experiments
We first present the datasets, metrics, and implementation details for 3D dense captioning (section 4.1). Then, we provide comparisons with all state-of-the-art methods (section 4.2). We also provide studies on the effectiveness of different parts of our model (section 4.3). Finally, we visualize several qualitative results to demonstrate the effectiveness of our method (section 4.4).
### Datasets, Metrics, and Implementation Details
**Datasets**. We report results on two commonly used datasets, ScanRefer [6] and Nr3D[1], both of which are built on 3D scenes from ScanNet[13]. ScanNet[13] contains 1,201 indoor 3D scenes for training and 312 for validation. ScanRefer/Nr3D contains 36,665/32,919 free-form language annotations describing 7,875/4,664 objects from 562/511 3D scenes for training, and evaluates on 9,508/8,584 sentences for 2,068/1,214 objects from 141/130 3D scenes.
**Evaluation Metrics**. Following [11, 4, 18, 36], we first apply NMS on object proposals to drop duplicate object predictions. Each object proposal is a box-sentence pair \((\hat{b}_{i},\hat{c}_{i})\), containing box corner prediction \(\hat{b}_{i}\) and generated sentence \(\hat{c}_{i}\). Then, each instance is assigned an object proposal with the largest IoU among the remaining proposals. Here, we use \((b_{i},C_{i})\) to represent an instance's label, where \(b_{i}\) is a box corner's label and \(C_{i}\) is the corpus containing all caption annotations for this instance. To jointly evaluate the model's localization and caption generation capability, we adopt the \(m@kIoU\) metric[11]:
\[m@kIoU=\frac{1}{N}\sum_{i=1}^{N}m\left(\hat{c}_{i},C_{i}\right)\cdot\mathbb{I} \left\{IoU\left(\hat{b}_{i},b_{i}\right)\geq k\right\}. \tag{8}\]
Here, \(N\) is the number of total annotated instances in the evaluation dataset, and \(m\) could be any metric for natural language generation, such as CIDEr[35], METEOR[3], BLEU-4[28], and ROUGE-L[19].
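Given the greedy instance-to-proposal assignment described above, Eq. (8) can be evaluated as in the following sketch; the data layout is our assumption, and the score stands for any of the listed captioning metrics.

```python
def m_at_kiou(matches, k=0.5):
    """Sketch of Eq. (8). `matches` has one entry (iou, score) per annotated
    instance: the IoU between the instance box and its assigned proposal, and
    score = m(c_hat, C) of the generated caption (e.g., CIDEr). Instances left
    without any proposal should be included with iou = 0."""
    return sum(score for iou, score in matches if iou >= k) / len(matches)
```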
**Implementation Details**. We offer implementation details of different baselines. "w/o additional 2D" means the input \(\mathcal{PC}\in\mathbb{R}^{40,000\times 10}\) contains absolute location as well as
_color_, _normal_ and _height_ for \(40,000\) points representing a 3D scene. "additional 2D" means we replace color information with \(128\)-dimensional _multiview_ feature extracted by ENet[8] from 2D images following [11].
We first pre-train the whole network without the caption head, on ScanNet[13] detection dataset with ScanRefer[6] categories for \(1,080\) epochs (about 163k iterations, 34 hours), using the AdamW optimizer[22] with a learning rate decaying from \(5\times 10^{-4}\) to \(10^{-6}\) by a cosine annealing scheduler, a weight decay of \(0.1\), a gradient clipping of \(0.1\), and a batch size of \(8\) following [24]. Then, we load the pre-trained detector, and train our caption head with MLE loss for another 720 epochs (51k/46k iterations for ScanRefer/Nr3D, 11/10 hours). To prevent overfitting, we fix the learning rate of the detector as \(10^{-6}\), and set that of the caption head decaying from \(10^{-4}\) to \(10^{-6}\) using another cosine annealing scheduler. Due to the high memory cost of SCST, we tune the caption head with a batch size of 2 and freeze the detector for 180 epochs (50k/46k iterations for ScanRefer/Nr3D, 14/11 hours) with a fixed learning rate of \(10^{-6}\). We evaluate the model every \(2,000\) iterations during training for consistency with existing works[11, 36], and all experiments mentioned above are conducted on a single RTX3090 GPU.
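For reference, the pre-training optimization described above might be set up as in the following sketch; the exact PyTorch classes used by the authors are an assumption, and norm-based clipping is assumed for the stated gradient clipping value of 0.1.

```python
import torch

def build_pretraining_optimizer(model, total_iters=163_000):
    """Sketch of the detector pre-training setup: AdamW, lr 5e-4 -> 1e-6 by
    cosine annealing, weight decay 0.1."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4, weight_decay=0.1)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
        optimizer, T_max=total_iters, eta_min=1e-6)
    return optimizer, scheduler

# In the training loop, gradients are clipped before each optimizer step:
#   torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.1)
```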
### Comparison with Existing Methods
In this section, we compare performance with existing works on metrics **C**, **M**, **B-4**, **R** as abbreviations for CIDEr[35], METEOR[3], BLEU-4[28], Rouge-L[19] under IoU thresholds of 0.25, 0.5 for ScanRefer (Table 1) and 0.5 for Nr3D (Table 2). "-" indicates that neither the original paper nor any follow-up works provide such results. Since different supervision on the caption head has a huge influence on the captioning performance, we make separate comparisons for MLE training and SCST. Among all the listed methods, experiments other than D3Net[7] and 3DJCG[4] utilize the standard VoteNet[29] detector. Meanwhile, D3Net[7] adopts PointGroup[17], a 3D instance segmentation model, for better object detection. 3DJCG[4] improves VoteNet's localization performance with an FCOS[33] head, which predicts distance from a voting point to each side of a bounding box. Additionally, 3DJCG and D3Net focus on the joint promotion of 3D dense captioning and 3D visual grounding, therefore their reported models are trained with data from both tasks. Among methods listed under SCST, \(\chi\)-Trans2Cap[38] combines MLE training with standard SCST in an additive manner, Scan2Cap and D3Net[7] adopt the same reward combining CIDEr score and listener losses with a weighted sum. It's worth mentioning that our model adopts the standard SCST, whose reward function is CIDEr score.
Table 1 reports comparisons on ScanRefer[6] validation dataset. Our Vote2Cap-DETR surpasses current state-of-the-art methods. For example, under MLE training with additional 2D inputs, our Vote2Cap-DETR achieves 59.32% [email protected] while 3DJCG[4] achieves 49.48% (9.84% [email protected]\(\uparrow\)) with additional training data. Additionally, under SCST, our Vote2Cap-DETR achieves 70.63% [email protected], while 62.64% (7.99% [email protected]\(\uparrow\)) for current state-of-the-art D3Net[7] with more training labels and semi-supervised training on more training data.
In Table 2, we list results on the Nr3D[1] dataset with additional 2D input following [36]. Since Scan2Cap[11] has not reported results on Nr3D, we adopt the best-reported result from [4]. Our proposed Vote2Cap-DETR also surpasses current state-of-the-art methods.
### Ablation Study
Since 3D dense captioning concerns both localization and caption generation, we perform ablation studies to understand the effectiveness of different components.
**Does the vote query improve 3DETR?** We performed ablation experiments in Table 3 and Figure 5 to see whether the vote query improves 3DETR's localization and convergence. Introducing the position feature \(p_{vq}\) alone improves detection performance (0.97% mAP50\(\uparrow\)). However, it (green line in Figure 5) converges more slowly early in training than the 3DETR baseline (blue line in Figure 5), suggesting that the vote query generation module has not yet learned to predict accurate spatial offsets at early training epochs. Introducing the additional content feature \(f_{vq}\) into the vote query features yields a further boost in both detection performance (2.98% mAP50\(\uparrow\)) and training speed (red line in Figure 5). The overall localization performance of Vote2Cap-DETR is about 7.2% mAP higher than the popular VoteNet.
**Does 3D context feature help captioning?**
Figure 5: **Vote query and convergence.** We carry out a convergence study on different combinations of the content feature \(f_{vq}\) and position \(p_{vq}\) in the vote query. The baseline model \((p_{query},f^{0}_{query})=(p_{seed},\mathbf{0})\) reduces to 3DETR. Introducing \(p_{vq}\) boosts performance but decelerates early training since \(FFN_{vote}\) requires time to converge, while \(f_{vq}\) accelerates training.
Since the performance of 3D dense captioning is affected by both localization and captioning capability, we freeze all parameters other than the caption head and train with 3D-only input and the standard cross-entropy loss (MLE training) for a fair evaluation. We use the object-centric decoder [36] as our baseline, a decoder that generates captions with the object feature as the caption's prefix. In Table 4, "-" refers to the object-centric decoder baseline, "global" means naively including all context tokens extracted from the scene encoder in the decoder, and "local" is our proposed caption head that includes a vote query's \(k_{s}\) (\(k_{s}=128\) empirically) nearest context tokens extracted from the scene encoder.
With the object feature as a caption's prefix, caption generation benefits from introducing additional contextual information. Moreover, compared with naively introducing contextual information from the whole scene, introducing local information is more beneficial. This supports our motivation that close surroundings matter when describing an object.
**Does Set-to-Set Training benefit dense captioning?** To analyze the effectiveness of set-to-set training, we follow the training procedure that uses a smaller learning rate for all parameters other than the caption head and freezes these parameters during SCST. We name the baseline training strategy "Sentence Training"; it traverses all sentence annotations in the dataset and is widely adopted in various works[11, 36]. As shown in Figure 7, our proposed "Set-to-Set" training achieves results comparable to traditional "Sentence Training" during MLE training, and converges faster because of a bigger batch size on the caption head, which also benefits SCST.
**Is Vote2Cap-DETR robust to NMS?** Similar to other DETR works, the set loss will encourage the model to produce compact predictions. We compare performance on
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c c c c c} \hline \hline \multirow{3}{*}{Method} & \multirow{3}{*}{\(\mathcal{L}_{des}\)} & \multicolumn{8}{c}{w/o additional 2D input} & \multicolumn{8}{c}{additional 2D input} \\ \cline{3-18} & & \multicolumn{4}{c}{IoU = 0.25} & \multicolumn{4}{c}{IoU = 0.50} & \multicolumn{4}{c}{IoU = 0.25} & \multicolumn{4}{c}{IoU = 0.50} \\ \cline{3-18} & & C\(\uparrow\) & B-4\(\uparrow\) & M\(\uparrow\) & R\(\uparrow\) & C\(\uparrow\) & B-4\(\uparrow\) & M\(\uparrow\) & R\(\uparrow\) & C\(\uparrow\) & B-4\(\uparrow\) & M\(\uparrow\) & R\(\uparrow\) & C\(\uparrow\) & B-4\(\uparrow\) & M\(\uparrow\) & R\(\uparrow\) \\ \hline Scan2Cap[11] & & 53.73 & 34.25 & 26.14 & 54.95 & 35.20 & 22.36 & 21.44 & 43.57 & 56.82 & 34.18 & 26.29 & 55.27 & 39.08 & 23.32 & 21.97 & 44.78 \\ MORE[18] & & 58.89 & 35.41 & 26.36 & 55.41 & 38.98 & 23.01 & 21.65 & 44.33 & 62.91 & 36.25 & 26.75 & 56.33 & 40.94 & 29.93 & 21.66 & 44.42 \\ SpaCap3D[36] & & 58.06 & 35.30 & 26.16 & 55.03 & 47.26 & 25.38 & 22.84 & 45.66 & 63.30 & 36.46 & 26.71 & 57.51 & 44.02 & 25.26 & 22.33 & 45.36 \\
3DJCG[4] & MLE & 60.86 & **39.67** & 27.45 & 59.02 & 47.63 & 31.53 & 24.28 & 51.80 & 64.70 & **40.17** & 27.66 & **59.23** & 49.48 & 31.03 & 24.22 & 50.80 \\ D3Net[7] & & - & - & - & - & - & - & - & - & - & - & - & - & 46.07 & 30.29 & 24.35 & 51.67 \\ Ours & & **71.45** & 39.34 & **28.25** & **59.33** & **61.81** & **34.46** & **26.22** & **54.40** & **72.79** & 39.17 & **28.06** & **59.23** & **59.32** & **32.42** & **25.28** & **52.53** \\ \hline \(\chi\)-Trans2Cap[38] & & 58.81 & 34.17 & 25.81 & 54.10 & 41.52 & 23.83 & 21.90 & 44.97 & 61.83 & 35.65 & 26.61 & 54.70 & 43.87 & 25.05 & 22.46 & 45.28 \\ Scan2Cap[11] & & - & - & - & - & - & - & - & - & - & - & - & - & 48.38 & 26.09 & 21.25 & 44.74 \\ D3Net[7] & SCST & - & - & - & - & - & - & - & - & - & - & - & - & 62.64 & 35.68 & **25.72** & **53.90** \\ Ours & & **84.15** & **42.51** & **28.47** & **59.26** & **73.77** & **38.21** & **26.64** & **54.71** & **86.28** & **42.64** & **28.27** & **59.07** & **70.63** & **35.69** & 25.51 & 52.28 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Evaluating Vote2Cap-DETR on ScanRefer[6].** We compare Vote2Cap-DETR with all published state-of-the-art 3D dense caption methods on the ScanRefer dataset. Though our method does not depend on hand-crafted NMS[25] to drop overlapped boxes, we follow the standard evaluation protocol from [11] for fair comparison and provide evaluation without NMS in Table 6. Our proposed Vote2Cap-DETR achieves new state-of-the-art under both MLE training and SCST.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{key} & \multicolumn{4}{c}{IoU=0.25} & \multicolumn{4}{c}{IoU=0.50} \\ \cline{2-9} & C\(\uparrow\) & B-4\(\uparrow\) & M\(\uparrow\) & R\(\uparrow\) & C\(\uparrow\) & B-4\(\uparrow\) & M\(\uparrow\) & R\(\uparrow\) \\ \hline - & 68.62 & 38.61 & 27.67 & 58.47 & 60.15 & 34.02 & 25.80 & 53.82 \\ global & 70.05 & 39.23 & 27.84 & 58.44 & 61.20 & 34.66 & 25.93 & 53.79 \\ local & **70.42** & **39.98** & **27.99** & **58.89** & **61.39** & **35.24** & **26.02** & **54.12** \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Different keys for caption generation.** We provide a comparison of different keys used in caption generation. Introducing contextual information leads to more informative captions. Since 3D dense captioning is object-centric, introducing the vote queries’ local contextual features is a better choice.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{\(p_{query}\)} & \multirow{2}{*}{\(f_{query}^{0}\)} & \multicolumn{2}{c}{IoU=0.25} & \multicolumn{2}{c}{IoU=0.50} & \multicolumn{2}{c}{1st layer, IoU=0.50} \\ \cline{3-8} & & mAP\(\uparrow\) & AR\(\uparrow\) & mAP\(\uparrow\) & AR\(\uparrow\) & mAP\(\uparrow\) & AR\(\uparrow\) \\ \hline \multicolumn{2}{c}{VoteNet Baseline} & 63.42 & 82.18 & 44.96 & 60.65 & - & - \\ \hline \(p_{seed}\) & \(\mathbf{0}\) & 67.25 & 84.91 & 48.18 & 64.98 & 34.80 & 55.06 \\ \(p_{vq}\) & \(\mathbf{0}\) & 67.33 & 85.60 & 49.15 & 66.38 & 30.23 & 58.44 \\ \(p_{vq}\) & \(f_{vq}\) & **69.61** & **87.20** & **52.13** & **69.12** & 46.53 & 66.51 \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Vote query and performance.** We provide quantitative results for Figure 5. Introducing \(p_{vq}\) as query positions improves detection, and gathering \(f_{vq}\) from content further boosts performance.
\begin{table}
\end{table}
Table 5: **Set-to-Set vs. Sentence training.** Quantitative comparison of the two caption training strategies (C, B-4, M, R at IoU = 0.25 and 0.50).
both 3D dense captioning ([email protected]) and detection (mAP50, AR50) in Table 6. Since the \(m@kIoU\) metric (Eq. 8) does not contain any penalty on redundant predictions, removing NMS[25] leads to growth in [email protected]. The absence of NMS degrades the detection precision (mAP50) of SpaCap3D (14.47% mAP50 \(\downarrow\)) and 3DJCG (17.55% mAP50 \(\downarrow\)), whereas that of Vote2Cap-DETR remains stable.
### Qualitative Results
We compare qualitative results with two state-of-the-art models, SpaCap3D[36] and 3DJCG[4], in Figure 6. One can see that our method produces tight bounding boxes close to the ground truth. Moreover, our method produces accurate descriptions of object attributes, classes, and spatial relationships.
## 5 Conclusion
In this work, we present Vote2Cap-DETR, a transformer-based one-stage approach for 3D dense captioning. The proposed Vote2Cap-DETR adopts a full transformer encoder-decoder architecture that decodes a set of vote queries to box predictions and captions in parallel. We show that, by introducing a spatial bias and content-aware features, the vote query boosts both convergence and detection performance. Additionally, we develop a novel lightweight query-driven caption head for informative caption generation. Experiments on two widely used datasets for 3D dense
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Models} & \multicolumn{3}{c}{w/ NMS} & \multicolumn{3}{c}{w/o NMS} \\ \cline{2-7} & [email protected]\(\uparrow\) & mAP50\(\uparrow\) & AR50\(\uparrow\) & [email protected]\(\uparrow\) & mAP50\(\uparrow\) & AR50\(\uparrow\) \\ \hline SpaCap3D & 43.93 & 37.77 & 53.96 & 51.35 & 23.30 & 64.14 \\
3DJCG & 50.22 & 47.58 & 62.12 & 54.94 & 30.03 & 68.69 \\ Vote2Cap-DETR & 70.63 & 52.79 & 66.09 & 71.57 & 52.82 & 67.80 \\ \hline \hline \end{tabular}
\end{table}
Table 6: **Effect of NMS.** We analyze whether the absence of NMS affects the 3D dense captioning performance ([email protected]) as well as detection performance (mAP50, AR50).
Figure 6: **Qualitative Comparisons.** We compare qualitative results with two state-of-the-art “detect-then-describe” methods, 3DJCG[4] and SpaCap3D[36]. We underline phrases describing spatial locations, and mark correct attribute words in green and wrong descriptions in red. Our method produces tight bounding boxes close to the ground truth annotations and accurate descriptions of object attributes, classes, and spatial relationships.
Figure 7: **Set-to-Set training and convergence.** Convergence speed analysis of two different training strategies with MLE training as well as SCST. Set-to-Set training enables a larger batch size for the caption head, which accelerates convergence on 3D dense captioning.
captioning validate that our proposed one-stage Vote2Cap-DETR model surpasses prior works, which depend heavily on hand-crafted components, by a large margin.
|
2305.07187 | Point convolutional neural network algorithm for Ising model ground
state research based on spring vibration | The ground state search of the Ising model can be used to solve many
combinatorial optimization problems. Under the current computer architecture,
an Ising ground state search algorithm suitable for hardware computing is
necessary for solving practical problems. Inspired by the potential energy
conversion of springs, we propose a point convolutional neural network
algorithm for ground state search based on spring vibration model, called
Spring-Ising Algorithm. Spring-Ising Algorithm regards the spin as a moving
mass point connected to a spring and establishes the equation of motion for all
spins. Spring-Ising Algorithm can be mapped on the GPU or AI chips through the
basic structure of the neural network for fast and efficient parallel
computing. The algorithm has very productive results for solving the Ising
model and has been tested on the recognized benchmark K2000. The algorithm
introduces the concept of dynamic equilibrium to achieve a more detailed local
search by dynamically adjusting the weight of the Ising model in the spring
oscillation model. Finally, a simple hardware speed evaluation is presented.
Spring-Ising Algorithm can provide the possibility to calculate the Ising model
on a chip which focuses on accelerating neural network calculations. | Zhelong Jiang, Gang Chen, Ruixiu Qiao, Pengcheng Feng, Yihao Chen, Junjia Su, Zhiyuan Zhao, Min Jin, Xu Chen, Zhigang Li, Huaxiang Lu | 2023-05-12T01:01:01Z | http://arxiv.org/abs/2305.07187v1 | **Point convolutional neural network algorithm for Ising model ground state research based on spring vibration**
## Abstract
The ground state search of the Ising model can be used to solve many combinatorial optimization problems. Under current computer architectures, an Ising ground state search algorithm suitable for hardware computing is necessary for solving practical problems. Inspired by the potential energy conversion of springs, we propose a point convolutional neural network algorithm for ground state search based on a spring vibration model, called Spring-Ising Algorithm. Spring-Ising Algorithm regards each spin as a moving mass point connected to a spring and establishes the equation of motion for all spins. Spring-Ising Algorithm can be mapped onto GPUs or AI chips through the basic structure of a neural network for fast and efficient parallel computing. The algorithm produces very good results for solving the Ising model and has been tested on the recognized benchmark K\({}_{2000}\). The algorithm introduces the concept of dynamic equilibrium to achieve a more detailed local search by dynamically adjusting the weight of the Ising model in the spring oscillation model. Finally, a simple hardware speed evaluation is presented. Spring-Ising Algorithm opens the possibility of calculating the Ising model on chips that focus on accelerating neural network calculations.
## Introduction
Combinatorial optimization problems, a subfield of optimization with discrete variables, are ubiquitous in many fields of research. In many cases, an NPC (Non-deterministic Polynomial Complete) problem can be mapped to the decision form of the Ising model in a polynomial number of steps [1, 2, 3, 4]. Therefore, many optimization problems can be formulated as Ising models whose ground state, or lowest-energy
configuration, is sought. Thus, solving the Ising model becomes a general method for solving many NP problems, such as partitioning problems [2], linear programming [1, 3, 5], inequality problems [6], coloring problems [2, 7], and so on. However, the Ising model itself is known to be NP-hard (Non-deterministic Polynomial Hard) [8]. Hence, it is difficult but important to find the ground state of the Ising model quickly and accurately.
The Ising model is mainly used in statistical physics and scientific computing. In statistical physics, it is widely used to study phase transition phenomena [9, 10, 11]. In scientific computing, a practical combinatorial optimization problem is mapped to an Ising model whose ground state is sought in the state space of N spins [12, 13, 14]. With N spins, there are \(2^{N}\) spin states to search for the global minimum of the energy, which is a great challenge for conventional computing [15]. Special-purpose hardware devices for the ground state search, known as Ising machines, have recently attracted attention because of their potential to substantially speed up the solution of optimization problems [16]. Various schemes have been proposed and demonstrated for the Ising model, including quantum annealers [17, 18, 19, 20, 21], coherent Ising machines [22, 23, 24, 25, 26, 27, 28, 29, 30, 31], and so on. Limited by current technology, the above methods face difficulties such as large-scale expansion and complicated parameter configuration. Quantum computers are expected to solve exponential combinatorial optimization problems, but related work is still in its infancy [32].
CMOS implementations [16, 33, 34, 35, 36, 37] are easy to integrate and expand, which makes them a more suitable strategy for mapping and solving large-scale practical Ising model problems. In applications, CMOS Ising machines have advantages such as small size, flexible expansion, high integration, low system power consumption, and so on [36]. Most CMOS chips are based on non-fully-connected structures, like lattice graphs [15, 33, 35, 36], king graphs [34, 38, 39, 40], and hexagonal graphs [41]. All-to-all connected Ising models are of more practical value than sparse ones, but communication and synchronization between the spins degrade the speed performance in CMOS [16]. As a result, the spin scale of a CMOS chip based on an all-to-all connected topology is very limited. The non-uniform design limits the popularization of CMOS chips and increases the design cost of ASICs for the Ising model.
AI (Artificial Intelligence) chips have numerous computing resources, which are used for the training and inference of various AI algorithms, and are important available resources for computing large-scale problems. At present, AI chips have solved many problems such as classification, detection, and tracking by virtue of their powerful computing capabilities [42, 43]. Commercial AI chips are characterized by high energy efficiency, high parallelism, and high scalability. These chips, which are optimized for communication and synchronization, have been used in a large number of large models. The computing architecture of an AI chip provides a computing engine for multiply-accumulate operations and realizes parallel computing through efficient scheduling, thereby reducing computing time and off-chip storage access [44]. Using these computing hardware resources to solve Ising models with numerous parameters is an extremely effective approach while quantum computers remain unavailable.
The paper is organized as follows. We propose a new algorithm, Spring-Ising Algorithm, which can solve the all-to-all connected Ising model directly on an AI chip.
First, we introduce how Spring-Ising Algorithm, inspired by spring vibrations, can be used to find the ground state of the Ising model. Then, we design the algorithm as a network structure based on point convolution and residual modules, which realizes the iterative solution of the Ising model. Through our method, the optimization problem is transformed into the general computation pattern of AI chips by constructing the Ising model paradigm, and AI chips accelerate Spring-Ising Algorithm for ground state finding. Finally, the network structure is demonstrated on the AI chip architecture from Ref. [45] to solve the Max-cut problem, and both numerical and analytical investigations are conducted.
## Result
In this chapter, we present the physical prototype of Spring-Ising Algorithm and show how to apply Lagrange's equations to iterate spin states by the symplectic method. Spring-Ising Algorithm is inspired by a physical phenomenon, spring vibration. The details of the physical prototype are introduced as follows.
### 1. Spring vibration model
The Ising model is defined as follows:
\[H=-\sum_{1\leq i<j\leq N}J_{ij}\sigma_{i}\sigma_{j}-\sum_{1\leq i\leq N}h_{i}\sigma_{i} \tag{1}\]
The discrete variable \(\sigma_{i}\) is the \(i\)th Ising spin state such that \(\sigma_{i}\in\{-1,+1\}\). In terms of Pauli matrices, the variable \(\sigma_{i}\) assigns the values \(\{-1,+1\}\) to the spin states \(\{\downarrow,\uparrow\}\)[17]. \(J_{ij}\) denotes the coupling coefficient between the \(i\)th and \(j\)th spins, and \(h_{i}\) is the external magnetic coefficient for the \(i\)th spin. \(H\) is the total energy of the Ising model, and the system tends toward the lowest energy.
Inspired by the steady-state analysis of multiple mass-spring systems in analytical mechanics, the ground state search method of the Ising model in this paper is designed. In the Ising model, the state of the \(i\)th spin \(\uparrow\) (\(\downarrow\)) is encoded as a discrete variable with value \(+1\) (\(-1\)). We regard the discrete variable as the continuously changing macroscopic position of a mass point, defined as the generalized coordinate \(q_{i}\in[-1,1]\). On this basis, the spring model considers a mass point attached to an ideal spring with zero natural length, whose restoring force on the mass point always points to one point, called the origin. As shown in Fig. 1(a), one end of the spring is fixed at the origin, and the other end is the mass point representing the spin state. Since the natural length of the spring is zero, when the mass point moves away from the origin, it is pulled back by the spring. In this model, the mass point being above (below) the origin represents spin \(\uparrow\) (\(\downarrow\)), and the distance represents a degree of confidence. According to the coupling coefficients and spin states, the Ising model produces a set of forces along the \(q_{i}\) axis. Therefore, the resultant force also lies on the \(q_{i}\) axis, as shown in Fig. 1(b).
In the model, the spin considered as the mass point is called the target spin, while the other spins are called source spins, which provide external forces to the target spin. The magnitude and direction of \(F_{i}\) depend on the combined effect of multiple source spins and are independent of the state of the target spin. Fig. 2(a) gives a specific example: when the state of a source spin is \(+1\) and the coupling coefficient is positive, an upward force is generated; the greater the coupling coefficient, the greater the force. Similarly, if the coupling coefficient is negative, a downward force is generated. When the coupling coefficient is zero, the source spin provides no force. The superposition of all forces provided by the source spins is the force exerted by the Ising model couplings on mass point \(i\). When the state of the source spin is \(-1\), the direction of the force is reversed, as shown in Fig. 2(b).
Figure 1: Spring vibration model based on the Ising model. A red sphere represents a spin, and the arrow inside indicates the spin state. The four bright red spheres on the upper left represent the four spins mapped by the Ising model. The green lines between the red spheres represent coupling relationships. The fuzzy sphere in the gray dashed box represents the spin state opposite to that in the blue dashed box. The two dashed boxes represent the same spin in its two spin states, expressing the two mass point positions of the spring model. Correspondingly, the gray part of the spring model is the other spin state. (a) A spin of the Ising model is mapped to the position of the mass point in the spring vibration model. Taking the part in the blue dashed box as an example, the spin state is up and the mass point is above the origin; the gray dashed box is vice versa. (b) The distance between the mass point and the origin is affected by the coupling relationships and the spring.
Figure 2: A specific example showing how the coupling between spins determines the external force on the mass point. \(\sigma_{i}\) is the \(i\)th spin, regarded as the target spin, and \(\sigma_{j}\) is the \(j\)th spin, regarded as the source spin. A blue line between spins means their coupling coefficient is positive; a green line means it is negative. The force on the mass point is the resultant of all coupling contributions. (a) When the source spin \(\sigma_{j}\) is \(+1\), the couplings produce multiple forces on mass point \(i\). (b) When the source spin \(\sigma_{j}\) is \(-1\), the directions of the forces are reversed.
The generalized coordinate introduced by the model is a continuous variable, which means that the magnitude of the force is also affected by the absolute value of the generalized coordinate of the source spin. So, the source spin is replaced by its generalized coordinate: \(\sigma_{i}\in\{-1,1\}\to q_{i}\in[-1,1]\). The greater the absolute value of a generalized coordinate, the greater the spring potential energy stored in the spring vibration model. For the Ising model, a source spin with larger magnitude has a greater overall influence on the target spin, and vice versa. Therefore, the discrete Ising model energy in Eq. (1) is replaced by a continuous Ising model energy in the spring vibration model.
### 2. Ground state search method
After establishing the spring vibration model, we now describe how to use it to find the ground state of the Ising model. The method treats the Ising model energy as an ordinary potential energy and converts it into the potential energy of the springs and the kinetic energy of the system. The Ising model energy gradually decreases as it transforms into spring potential energy. Then, due to the constraints on the generalized coordinates and generalized velocities, the energy converted into spring potential energy, continuous Ising model energy, and kinetic energy is absorbed by the inelastic walls. The energy of the whole system gradually decreases, and the system thus converges to spring potential energy and Ising model energy near the ground state.
According to the various constraints of this system, the Lagrangian equation is constructed as follows:
\[L(q_{i},\dot{q}_{i},t)=\sum_{i}\frac{1}{2}m\dot{q}_{i}^{2}-\sum_{i}\frac{1}{2}k(q_{i}-q_{0})^{2}-\zeta H_{Ising}(\mathbf{q}) \tag{2}\]
where \(m\) is the mass coefficient, \(k\) is the elastic coefficient, and \(\zeta\) is the scaling coefficient of the Ising model energy. The three terms in Eq. (2) are the kinetic energy term, the spring potential energy term, and the continuous Ising model energy term. In the spring vibration model, the generalized coordinates are independent of \(t\). The formula shows that the movement of the mass points is affected by the potential energy of the springs and the energy of the Ising model; it manifests as a continuous vibration on the ideal springs. From another perspective, one can consider that while each spring performs simple harmonic motion, a set of external forces is applied from outside. Affected by the coupling coefficients of the Ising model, the oscillations of the mass points are biased toward lower Ising model energy.
### 3. Symplectic method
Since the size of the Ising model depends on the number of spins, the solution scale is quite large. Therefore, it is very difficult to solve the Lagrangian equation directly and accurately. In this paper, referring to the Hamiltonian formalism and the symplectic method [46], a numerical iterative calculation of the spring vibration model is carried out. The Hamiltonian describes the total energy of the system and can be used to describe its dynamic behavior. The symplectic method is a numerical method for solving Hamilton's equations that preserves the energy conservation of the system.
According to the definition, the generalized momentum is \(p_{i}=\partial L/\partial\dot{q}_{i}=m\dot{q}_{i}\). The Hamiltonian of the system is obtained by performing the Legendre transformation on the Lagrangian:
\[H(q,p,t)=\sum_{i}\dot{q}_{i}p_{i}-L(q_{i},\dot{q}_{i},t)=\sum_{i}\frac{p_{i}^{2}}{2m}+\sum_{i}\frac{1}{2}k(q_{i}-q_{0})^{2}+\zeta H_{Ising}(\mathbf{q}) \tag{3}\]
This yields Hamilton's equations:
\[\dot{q}_{i}=\frac{\partial H}{\partial p_{i}}=\frac{p_{i}}{m},\qquad\dot{p}_{i}=-\frac{\partial H}{\partial q_{i}}=-k(q_{i}-q_{0})+\zeta\sum_{j}J_{ij}q_{j} \tag{4}\]
The symplectic algorithm is used to solve the Hamiltonian system, with the system origin \(q_{0}\) set to zero:
\[q_{i}(t_{n+1})=q_{i}(t_{n})+\Delta\dot{q}_{i}(t_{n})=q_{i}(t_{n})+\frac{ \Delta}{m}p_{i}(t_{n}) \tag{5}\] \[p_{i}(t_{n+1})=p_{i}(t_{n})+\Delta\dot{p}_{i}(t_{n})=p_{i}(t_{n} )-\Delta kq_{i}(t_{n})+\zeta\Delta\sum_{j}J_{ij}q_{j}(t_{n})\]
where \(t_{n}\) denotes the \(n\)th iteration. As the formula shows, \(q_{i}(t_{n+1})\) and \(p_{i}(t_{n+1})\) depend on the values of the previous state. As the iteration proceeds, energy is continuously converted; as the Ising model energy decreases, the solution gradually approaches the ground state of the Ising model. Dimensional issues are not considered in the numerical calculation, so parameters can be combined. Eq. (5) is called the iterative formula of Spring-Ising Algorithm.
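In NumPy, one iteration of Eq. (5) is a pair of vectorized updates; the sketch below sets \(q_{0}=0\) and uses argument names of our choosing.

```python
import numpy as np

def spring_ising_step(q, p, J, k, zeta, dt, m=1.0):
    """One update of the iterative formula, Eq. (5).
    q, p: (N,) generalized coordinates and momenta; J: (N, N) coupling matrix."""
    q_next = q + (dt / m) * p
    p_next = p - dt * k * q + zeta * dt * (J @ q)
    return q_next, p_next
```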
### 4. Point convolutional neural network
In the iterative calculation of the algorithm, the dominant cost is the multiplication of \(J_{ij}\) and \(q_{j}(t_{n})\). We propose an iterative calculation that replaces the matrix-vector product with a point convolution, so that the algorithm can run on high-bandwidth computing chips such as GPUs and AI chips. Moreover, the point convolution is optimized in the hardware design to reuse the convolution kernels and reduce data access, accelerating the computation in parallel. Fig. 3 shows how the iterative equation is turned into a neural network architecture computation. The \(q_{i}(t_{n})\) of a single test is assigned to a fixed coordinate of the feature map, so the number of generalized coordinates equals the number of feature maps. The size of the point convolution kernel also depends on the coupling matrix of the Ising model, so there are \(n\) convolution kernels, each with \(n\) channels. The rest of the architecture is the addition module, which can be realized through the residual structure of the neural network and is supported by mainstream AI chips.
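A minimal PyTorch sketch of Fig. 3 is given below, with a random stand-in coupling matrix; the bounds on \(q\) and \(p\) follow the constraints introduced later (Eq. (6) and Methods), and a fixed \(\zeta\) is used for brevity, whereas the paper increases it over the run.

```python
import torch
import torch.nn as nn

n, H, W = 2000, 7, 7                # n spins; H*W parallel tests (7x7 feature map)
J = torch.randn(n, n)               # stand-in coupling matrix for illustration

coupling = nn.Conv2d(n, n, kernel_size=1, bias=False)
with torch.no_grad():
    coupling.weight.copy_(J.view(n, n, 1, 1))  # row i of J -> i-th 1x1 kernel

q = 0.01 * torch.randn(1, n, H, W)  # each pixel is an independent run
p = torch.zeros(1, n, H, W)
k, zeta, dt = 0.5, 0.05, 0.2

with torch.no_grad():
    for _ in range(10_000):
        q_new = (q + dt * p).clamp(-2**0.5, 2**0.5)
        p = (p - dt * k * q + zeta * dt * coupling(q)).clamp(-2.0, 2.0)
        q = q_new                   # residual-style additions on feature maps
spins = q.sign()                    # (1, n, H, W): one spin state per run
```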
## Discussion
In this chapter, we show the experimental results of the spring vibration model. We then introduce how to implement the above algorithm through point convolution and a residual network, and deploy it on the CASSANN-v2 architecture.
To demonstrate the performance of the algorithm on the Ising model of Eq. (1), it is tested on the K\({}_{2000}\) benchmark instance, a random undirected graph with 2000 vertices and 1,999,000 edges [23]. K\({}_{2000}\) has been used many times in maximum-cut (MAX-CUT) problems to compare the performance of Ising solvers [23, 47, 48].
### 1. Qualitative result
When the kinetic energy is large enough, the system can cross local minima through local oscillation; but if the kinetic energy is too large, it cannot settle in any minimum. Therefore, the following constraint is applied each time \(q_{i}\) is updated:
\[q_{i}\gets f(q_{i})=\begin{cases}-\sqrt{2},&q_{i}<-\sqrt{2}\\ q_{i},&-\sqrt{2}\leq q_{i}\leq\sqrt{2}\\ \sqrt{2},&q_{i}>\sqrt{2}\end{cases} \tag{6}\]
where \(f(\cdot)\) describes the boundaries of \(q_{i}\). To let the spring vibrate, the boundary is set slightly wider than the original range \([-1,1]\), namely \(q_{i}\in[-\sqrt{2},\sqrt{2}]\). Combined with this boundary condition, the equation describes the motion law of the spins.
The mass point trajectories over 10,000 iterations of the spring vibration model algorithm are shown in Fig. 4. The benchmark has 2000 spins, of which the first twenty are selected in the figure for visualization. In the early stage of the algorithm, because the mass points are initialized at the origin and given only a small disturbance, the energy of the Ising model undergoes a long decline. It can be clearly seen in the figure that the polylines are very dense, which means the mass points are oscillating violently; during this time, the Ising model energy also rapidly oscillates and declines. In the middle stage, more and more
Figure 3: Parallel calculation of the spring vibration model algorithm in the form of a point convolution. The size of the feature map determines the number of parallel tests of the algorithm; a 2×2 feature map corresponds to four independent iterative calculations. The values of the feature map are the generalized coordinates, and the point convolution kernels hold the weight data of the Ising model. \(q^{\prime}_{n}\) is a temporary variable. On the right is the entire point convolution network architecture.
mass points gradually become attached to the boundary, having settled at lower-energy positions. Finally, only some mass points still oscillate to search for the optimum; the energy of the Ising model has approached the ground state, and the detail of the energy change is shown in the inset of Fig. 4(a). It can also be seen that a few spin flips bring fluctuations of the Ising energy.
### 2. Quantitative result
It can easily be predicted that the potential energy of the spring is lost under the boundary constraints as time progresses. Therefore, in the later stages of evolution, it is necessary to compensate for the lost energy. To search the ground state of the Ising model more accurately, Spring-Ising Algorithm introduces the concept of dynamic energy balance, increasing the energy proportion of the Ising model and improving the
Figure 4: The spring vibration model algorithm on K\({}_{2000}\) over 10,000 iterations. The parameter configuration is as follows: \(k=0.5\), \(\zeta=0.8\zeta_{0}\to 10\zeta_{0}\), \(\Delta=0.2\), \(m=1\). (a) The energy curve of the Ising model. The mass point positions in Spring-Ising Algorithm are initialized near the origin, so the energy starts from 0 and decreases rapidly. Before \(N_{step}=2000\), the energy descends with violent oscillation; after that, it vibrates slightly to search for the energy minimum. (b) The vibration of the (first twenty) mass points. The very dense parts of the graph are the effect of multiple mass point oscillations. When the system completes the basic search, it tends to stabilize: most of the mass points become stable and only some perform a local search (e.g., after \(N_{step}=5000\)).
search efficiency. To compensate for the energy loss, Spring-Ising Algorithm sets \(\zeta\) as a linearly varying quantity \(\zeta(t)\). To reduce the complexity of the algorithm, this variable is regarded as a constant in the derivation of the Lagrangian equation, i.e., the time-varying effect in the Lagrange equation is not considered. Through further analysis and solution of this equation, the ground state search of the Ising model is performed.
This test uses the same small initial disturbance with different strategies for \(\zeta\). As shown in Fig. 5, whenever the value of \(\zeta\) is fixed, the ground state search of the Ising model easily falls into a local optimum. Although a larger \(\zeta\) quickly leads to better local results (the blue line), it is difficult to search further for better results. By gradually changing the value of \(\zeta\), further searches can be performed after the spring model has reached local stability. On the red and orange lines one can clearly see each time a steady state is established and the search proceeds further. This behavior is very similar to sufficiently slow cooling in simulated annealing: when the step length is short enough, better search results are obtained.
The cumulative distribution function (CDF) is an important way to judge the performance of algorithms for solving Ising models. Fig. 6 shows the cumulative distributions of the cut value on K\({}_{2000}\). The algorithm is compared with the HbSB and HdSB algorithms [47], and there are partial similarities under the different modeling methods. The figure shows that the spring vibration model algorithm can search for better cut values within the specified number of steps. The inset shows that the algorithm finds the optimal value more effectively: the number of optimal solutions accounts for 2.9% of all solutions, compared with only about 1.2% for HbSB and HdSB.
Figure 5: The effect of different \(\zeta\) on the average results of K\({}_{2000}\). \(\zeta_{0}\) is the base value, \(\zeta_{0}=0.05\). The first and second sets of data (green and blue curves) have \(\zeta\) fixed at \(0.8\zeta_{0}\) and \(10\zeta_{0}\), respectively. The third and fourth sets (orange and red curves) have \(\zeta\) increase from \(0.8\zeta_{0}\) to \(10\zeta_{0}\) with different step lengths (\(N_{step}=200\) or \(1000\)).
### 3. Hardware implementation
The test platforms for this algorithm are a personal computer (Intel 8700K and NVIDIA RTX 2080 Ti) and the AI architecture (CNN accelerator) developed by the Institute of Semiconductors, CAS [45]. Testing on the RTX 2080 Ti in the PyTorch framework, with an Ising model of size 2000 and 1000 parallel tests, the calculation time is 9.95 s per 10,000 steps, i.e., a per-sample time of 9.95 ms for a 10,000-step test. With 100 tests, the time is 2.30 s per 10,000 steps. The GPU thus has a shorter average single-sample time when more tests run in parallel. The average power consumption of the 2080 Ti is 60.6 W. On the AI architecture, with an Ising model of size 2000 and 49 tests (a 7×7 feature map), the calculation time is 381.15 ms per 10,000 steps, i.e., a per-sample time of 7.78 ms for a 10,000-step test. The average power consumption of CASSANN-v2 is lower than 10 W.
## Methods
### 1. Numerical iteration
First, regard the spins of the Ising model as \(q\) and the coupling coefficient matrix as \(J\). The ground state search of the Ising model is carried out through the oscillation of the mass points. Based on the spring vibration model, the vibration equation combined with the Ising model is constructed and transformed into the following Hamiltonian.
\[H(q,p,t)=\sum_{i}\frac{p_{i}^{2}}{2m}+\sum_{i}\frac{1}{2}k(q_{i}-q_{0})^{2}+\zeta H_{Ising}(\mathbf{q}) \tag{7}\]
According to the equations of motion, all mass points are initialized near the origin and then start moving under the joint action of the springs and the Ising model. According to the symplectic method, the positions of the mass points at each
Figure 6: Cumulative distribution of the cut values C of the spring vibration model algorithm on K\({}_{2000}\), compared to HbSB and HdSB. The red curve is the result of Spring-Ising Algorithm. The inset is a magnification around the best-known cut value. The red curve illustrates that Spring-Ising Algorithm achieves a better suboptimal distribution and more optimal values than HbSB and HdSB over the overall search results.
time step are found. The final spin state of the Ising model is obtained by tracking the mass points (numerical iteration).
\[\begin{split}& q_{i}(\mathbf{t}_{n+1})=q_{i}(\mathbf{t}_{n})+\Delta p_{i}( \mathbf{t}_{n})\\ & p_{i}(\mathbf{t}_{n+1})=p_{i}(\mathbf{t}_{n})-\Delta kq_{i}(\mathbf{t}_{n})+ \zeta(\mathbf{t}_{step})\Delta\sum_{j}J_{ij}\mathbf{q}_{j}(\mathbf{t}_{n})\end{split} \tag{8}\]
\(\Delta\), \(k\) and \(\zeta\) are independent adjustable variables. Eq. (8) is subject to the following constraints, which act like perfectly inelastic walls [47]: \(q_{i}\in[-\sqrt{2},\sqrt{2}]\) and \(p_{i}\in[-2,2]\). \(\zeta(t_{step})\) is a function linearly related to the number of iterations; for simplicity of calculation, \(\zeta(t_{step})\) is set as a piecewise constant function. Eq. (8) is iterated the specified number of times to obtain the Ising model ground state result.
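Putting the pieces together, the following NumPy sketch iterates Eq. (8) under the inelastic-wall constraints with a piecewise-constant \(\zeta(t_{step})\) rising from \(0.8\zeta_{0}\) to \(10\zeta_{0}\); the number of constant pieces is an assumption.

```python
import numpy as np

def spring_ising(J, n_steps=10_000, k=0.5, dt=0.2, zeta0=0.05, n_pieces=10,
                 rng=None):
    """Sketch of the full iteration of Eq. (8)."""
    rng = rng or np.random.default_rng(0)
    n = J.shape[0]
    q = 0.01 * rng.standard_normal(n)   # small disturbance near the origin
    p = np.zeros(n)
    for step in range(n_steps):
        frac = (step * n_pieces // n_steps) / (n_pieces - 1)
        zeta = zeta0 * (0.8 + frac * (10.0 - 0.8))   # piecewise-constant schedule
        q_new = np.clip(q + dt * p, -np.sqrt(2), np.sqrt(2))
        p = np.clip(p - dt * k * q + zeta * dt * (J @ q), -2.0, 2.0)
        q = q_new
    return np.sign(q)                   # final spin configuration
```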
### 2. Hardware implementation
For an Ising model with \(n\) spins, the generalized coordinates \(q\) are mapped to feature maps. The number of feature map pixels is the number of simultaneous iterations. The coupling coefficient matrix of the Ising model is mapped to point convolution kernels: \(J\) is divided by rows into \(n\) 1×1 convolution kernels with \(n\) channels each. The addition operations required by the algorithm are completed through the residual structure. By repeatedly calling this network structure (Fig. 3), the numerical values of \(q\) and \(p\) in Eq. (8) are updated. After an artificially set number of time steps or calculation time, \(q\) is sampled, giving the current low-energy state of the Ising model.
## Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
## Code availability
The code used in this work is available from the corresponding author upon reasonable request.
|
2303.04700 | Efficient Visuo-Haptic Object Shape Completion for Robot Manipulation | For robot manipulation, a complete and accurate object shape is desirable.
Here, we present a method that combines visual and haptic reconstruction in a
closed-loop pipeline. From an initial viewpoint, the object shape is
reconstructed using an implicit surface deep neural network. The location with
highest uncertainty is selected for haptic exploration, the object is touched,
the new information from touch and a new point cloud from the camera are added,
object position is re-estimated and the cycle is repeated. We extend Rustler et
al. (2022) by using a new theoretically grounded method to determine the points
with highest uncertainty, and we increase the yield of every haptic exploration
by adding not only the contact points to the point cloud but also incorporating
the empty space established through the robot movement to the object.
Additionally, the solution is compact in that the jaws of a closed two-finger
gripper are directly used for exploration. The object position is re-estimated
after every robot action and multiple objects can be present simultaneously on
the table. We achieve a steady improvement with every touch using three
different metrics and demonstrate the utility of the better shape
reconstruction in grasping experiments on the real robot. On average, grasp
success rate increases from 63.3% to 70.4% after a single exploratory touch and
to 82.7% after five touches. The collected data and code are publicly available
(https://osf.io/j6rkd/, https://github.com/ctu-vras/vishac) | Lukas Rustler, Jiri Matas, Matej Hoffmann | 2023-03-08T16:41:24Z | http://arxiv.org/abs/2303.04700v2 | # Efficient Visuo-Haptic Object Shape Completion for Robot Manipulation
###### Abstract
For robot manipulation, a complete and accurate object shape is desirable. Here, we present a method that combines visual and haptic reconstruction in a closed-loop pipeline. From an initial viewpoint, the object shape is reconstructed using an implicit surface deep neural network. The location with highest uncertainty is selected for haptic exploration, the object is touched, the new information from touch and a new point cloud from the camera are added, object position is re-estimated and the cycle is repeated. We extend Rustler _et al._ (2022) by using a new theoretically grounded method to determine the points with highest uncertainty, and we increase the yield of every haptic exploration by adding not only the contact points to the point cloud but also incorporating the empty space established through the robot movement to the object. Additionally, the solution is compact in that the jaws of a closed two-finger gripper are directly used for exploration. The object position is re-estimated after every robot action and multiple objects can be present simultaneously on the table. We achieve a steady improvement with every touch using three different metrics and demonstrate the utility of the better shape reconstruction in grasping experiments on the real robot. On average, grasp success rate increases from 63.3% to 70.4% after a single exploratory touch and to 82.7% after five touches. The collected data and code are publicly available ([https://osf.io/j6rkd/](https://osf.io/j6rkd/), [https://github.com/ctu-vras/vishac](https://github.com/ctu-vras/vishac)).
## I Introduction
We consider the following robotic setup. A static RGB-D camera connected to a robotic arm controller is observing one or more unknown 3D objects. To be able to grasp and manipulate an object, the robotic system needs a model of the object in terms of a complete shape, _e.g._, an accurate mesh. There are intrinsic limitations to the performance of computer vision techniques for 3D reconstruction of objects from images or point clouds if only a limited number of viewpoints is available. Solutions relying on RGB or RGB-D images, LIDAR point clouds, or voxels cannot easily overcome self-occlusion and may have specific difficulties with transparent or specular objects. The robot arm cannot reliably grasp and manipulate the object given only partial information. However, the manipulator can be controlled to touch or poke the object in order to extend the surface for which the model is accurate.
We address the following problem. Given an initial RGB-D map, obtain an accurate representation of the complete shape of the object with the help of exploratory contact actions. The objective is either to maximize the accuracy given an upper bound on the number of touches or to minimize the number of touches needed to reach a predefined accuracy over the complete surface. This problem in turn requires solutions to sub-problems such as predicting the least reliable part of the surface, estimating the free space around the objects, and detecting touch-induced object movement.
The scenario models a setup where new objects are presented to a system that minimizes the risk of an unstable grasp, and therefore "explores" the shape of the object by touching and poking before attempting a grasp and subsequent manipulation. For example, imagine a conveyor belt for sorting objects of different sizes into respective bins, e.g., in a scrapyard, where the robot must be able to pick up any object.
**Contributions.** We present a pipeline for visuo-haptic shape completion called VISHAC, importantly extending the work of Rustler _et al._ [1], which serves as a baseline here. The first group of improvements concerns the process of shape completion performed by Implicit Geometric Regularization for Learning Shapes (IGR), which we modified
Fig. 1: Schematic operation of VISHAC. An initial RGB-D image of the scene is captured (1), a transformation \(\mathbf{R}_{0}\) from the robot base to the object is obtained, and the object is segmented and converted into a point cloud \(\mathcal{X}\) (2). Iterative reconstruction: In each step, \(n=0:(N-1)\), the point cloud is inserted into a neural network (3) and a completed shape \(\mathbf{O}_{n}\) is created (4). The most uncertain point \(\mathbf{p}_{n}\) is selected for touch (5). After contact, the object may have been displaced, giving rise to a new transformation \(\mathbf{R}_{n}\). Haptic data \(\mathbf{h}_{n}\) (6) from contact and visual data by taking a new image from the RGB-D camera \(\mathbf{v}\) (7) are collected. The transformation \(\mathbf{R}_{n}\) is computed from pose estimation (8) and the new data, transformed into the original frame \(\mathbf{R}_{0}\), are added to \(\mathcal{X}\). See Sec. III-H for details.
as follows. We use a new, theoretically grounded, method to determine the points with highest uncertainty. In addition, we changed the sampling of points inside the network to respect the input point cloud more closely. Finally, the yield of every haptic exploration is increased by adding not only the contact points to the point cloud but also incorporating the empty space established through the robot movement to the object. The two last mentioned improvements together make the pipeline more robust. The second group of enhancements pertains to the practical aspects of this scenario. We do not use a dedicated "finger" for haptic exploration but directly the jaws of a closed two-finger gripper, which is a more compact solution. The object position is newly re-estimated after every robot action, so objects are allowed to move after being poked. Multiple objects can be present simultaneously on the table. The speed of the pipeline is improved through parallelization (2 times in simulation and 2.4 times in the real world). We contribute a more detailed evaluation using a set of different metrics. Finally, real-word grasping experiments demonstrate the effectiveness of our approach.
## II Related work
For the problem addressed, two main sources of information have been used: visual input (RGB or RGB-D sensors) and haptic exploration (tactile or force sensing). In recent work, the two were often combined.
### _Visual-only Shape Completion_
Devices able to sense depth and thus create point clouds are widely available and can provide rich information about the scene. Early methods were based on geometric properties or templates. The geometric ones either assume that most objects humans use are symmetric, so completion can be done by mirroring the partial observation about the object's axis of symmetry [2], or detect primitive shapes [3]. Template-based methods benefit from prior knowledge in the form of a database of object shapes [4].
More recently, solutions based on machine learning gained popularity. An example is the Gaussian Process Implicit Surface (GPIS) [5], which, however, requires points on the whole surface and scales poorly to dense point clouds. Thus, the input must be downsampled, resulting in a loss of detail. Other methods were originally created to make surfaces from complete point clouds, but provide shape completion abilities by interpolating between shapes in a latent space [6, 7, 8]. In this work, we start from IGR [8] and propose improvements.
Deep Learning (DL) techniques such as Convolutional Neural Networks (CNNs) for shape completion typically represent objects as voxel grids. This makes it possible to introduce probabilistic uncertainty into the voxel grid, but these methods usually suffer from computational requirements that grow cubically with the number of voxels, limiting the resolution of the output shape. Finally, the newest methods utilize graph attention networks [9] or transformers to complete a shape [10, 11].
### _Haptic-Only Shape Completion_
With advances in haptic exploration, some purely haptic approaches have been proposed. They utilize some of the techniques mentioned above, such as implicit shape potentials [12] or Gaussian Processes (GPs) [13, 14, 15]. Gaussian-based methods have the advantage of expressing uncertainty directly, by their nature, through the variance of each point. However, haptic-only completion needs a high number of touches, which is time-consuming.
### _Visuo-Haptic Shape Completion_
A combination of visual and haptic data has the potential to combine the best of both worlds. GP-based methods were proposed [16, 17, 18]; however, they need points covering most of the surface, which requires a lot of exploration. A way to overcome this issue may be to exploit symmetry as in [19].
CNN-based methods [20, 21] usually require fewer touches but suffer from lower resolution due to computational requirements. Smith _et al._ proposed approaches [22, 23] based on Graph Neural Networks (GNNs). Reconstructions by these methods have a higher resolution but are non-smooth and, for now, evaluated only in simulation.
An important part of haptic exploration is the decision where to touch. The object can be touched randomly, as done by Smith _et al._[22], or always at a position opposite the camera (from "behind"), as in Watkins-Valls _et al._[20]. However, these strategies are not as effective as an uncertainty-driven approach. Uncertainty can come from a Gaussian distribution [16, 17, 18, 19], from Monte Carlo dropout [24], or from the Signed Distance Function (SDF) [1, 25]. Alternatively, where to touch can be learned, as in Smith _et al._[23].
Our work belongs to the uncertainty-driven approaches and is based on the implicit surface deep neural network IGR [8], exploiting the definition of the SDF to estimate uncertainty in order to efficiently explore the most promising parts of the objects. We extend and directly compare ourselves with [1]. Indirectly, this also encompasses a comparison with other methods, namely [26, 27, 28, 29], which were outperformed in [1].
## III Method
We propose an iterative method depicted in Fig. 1. The objective is to iteratively improve the shape reconstruction of objects on the table by combining images with selected locations for tactile exploration. The algorithm is described in detail in Section III-H. The following sections detail the individual modules required by the pipeline.
### _Implicit Surfaces_
An implicit surface is a set of points whose signed distance to a surface is equal to zero. The function to compute this distance is called Signed Distance Function (SDF) and is defined as
\[f(\mathbf{x})=s, \tag{1}\]
where, in our case, \(\mathbf{x}\) is a point defined in 3D space and \(s\) is the signed distance. Traditionally, \(f\) would be described
analytically, but it can also be learned with a neural network. Then, the implicit surface generated with a neural network can be described as
\[\mathcal{M}=\{\mathbf{x}\in\mathbb{R}^{3}\mid f(\mathbf{x};\mathbf{\theta})=0\}, \tag{2}\]
where \(f(\mathbf{x};\mathbf{\theta}):\mathbb{R}^{3}\rightarrow\mathbb{R}\) is a Multi-Layer Perceptron (MLP) learned to approximate SDF, with \(\mathbf{\theta}\) being the parameters of the network.
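For illustration, a minimal sketch of such a learned SDF could look as follows; the architecture is chosen purely for demonstration and does not match the actual IGR network:

```
import torch
import torch.nn as nn

class SDFNet(nn.Module):
    """Minimal MLP f(x; theta): R^3 -> R approximating a signed distance."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

# The implicit surface M is the zero level set {x | f(x) = 0}; numerically,
# points with |f(x)| below a small tolerance are treated as surface points.
f = SDFNet()
x = torch.rand(1024, 3)
on_surface = x[f(x).abs() < 1e-3]
```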
### _Implicit Geometric Regularization for Learning Shapes_
As in [1], we selected IGR [8] to represent the function \(f\). The method takes as input a point cloud \(\mathcal{X}=\{\mathbf{x}_{1:C}\}\), where \(C\) is the number of points in the point cloud, and optionally a set of normals for each point \(N=\{\mathbf{n}_{1:C}\}\). To train on multiple objects, the network utilizes an auto-decoder architecture from Park _et al._[6] with a different latent code \(\mathbf{z}_{i}\) for every shape \(i\in I\) in the input set. In the prediction phase, \(|I|=1\); therefore, we introduce the loss for only one shape \(i\in I\), defined as:
\[\ell(\mathbf{\theta},\mathbf{z}_{i})=\ell_{\mathcal{X}}(\mathbf{\theta},\mathbf{z}_{i})+\lambda\mathbb{E}_{\mathbf{x}}\left[(\|\nabla_{\mathbf{x}}f(\mathbf{x};\mathbf{\theta},\mathbf{z}_{i})\|-1)^{2}\right]+\alpha\left\|\mathbf{z}_{i}\right\| \tag{3}\]
where
\[\ell_{\mathcal{X}}(\mathbf{\theta},\mathbf{z}_{i})=\frac{1}{C}\sum_{c=1}^{C}\left(|f(\mathbf{x}_{c};\mathbf{\theta},\mathbf{z}_{i})|+\|\nabla_{\mathbf{x}}f(\mathbf{x}_{c};\mathbf{\theta},\mathbf{z}_{i})-\mathbf{n}_{c}\|\right) \tag{4}\]
The first term in Eq. 3 encourages \(f\) to vanish on \(\mathcal{X}\) and \(\nabla_{\mathbf{x}}f\) to be close to the supplied normals. The second term is called the Eikonal term and regularizes the network by pushing \(\nabla_{\mathbf{x}}f\) to be of unit Euclidean norm. The term is also used for uncertainty estimation--described later in Section III-E.
The result is iteratively optimized at both train and inference time (over multiple shapes in batches from the whole train set \(I\) while training and over one shape for prediction). In this work, we use a trained network from [1] and our modifications pertain to inference only. The parameters \(\mathbf{\theta}\) are fixed during inference and only \(\mathbf{z}_{i}\) is changed.
### _IGR Modifications - Sampling and Free Space_
By default, the IGR network uses random sampling of points in each of its iterations. This is effective when a complete point cloud is provided. However, when a partial point cloud is used, the network tends to inflate the objects, _i.e._, the output shape extends beyond the boundaries of the input. We propose to use Farthest Point Sampling (FPS) as in [30]. The points sampled by this algorithm are spatially far from each other and help the network better capture the whole object in each iteration, making the final shape fit the input more tightly.
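For reference, a minimal NumPy sketch of greedy FPS (an illustrative implementation, not the exact code used in the pipeline):

```
import numpy as np

def farthest_point_sampling(points, n_samples, seed=0):
    """Greedy FPS: iteratively pick the point farthest from the chosen set."""
    rng = np.random.default_rng(seed)
    idx = np.zeros(n_samples, dtype=int)
    idx[0] = rng.integers(len(points))
    dist = np.linalg.norm(points - points[idx[0]], axis=1)
    for i in range(1, n_samples):
        idx[i] = np.argmax(dist)              # farthest from the chosen set
        new = np.linalg.norm(points - points[idx[i]], axis=1)
        dist = np.minimum(dist, new)          # distance to nearest chosen point
    return points[idx]
```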
Another improvement of the method is the use of information about the explored space. As the robot moves through space, we know that the traversed space is free and no shape can be there. We keep in memory only the points that are less than \(20\,\mathrm{cm}\) from the center of a given object. During the inference phase of the network, a signed distance is calculated for every point in the explored free space. We want all free-space points to lie outside the surface, _i.e._, to have a positive signed distance. Therefore, we add to the loss the sum of the absolute values of all distances that are lower than \(1\,\mathrm{mm}\).
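This free-space term can be sketched as follows, assuming \(f\) returns predicted signed distances and coordinates are in meters, so the \(1\,\mathrm{mm}\) threshold becomes 1e-3:

```
import torch

def free_space_penalty(f, free_points, eps=1e-3):
    """Penalize explored free-space points predicted inside the surface.

    free_points: points traversed by the robot, known to lie outside the
    object. Points whose signed distance falls below eps (1 mm) contribute
    the absolute value of their distance to the loss.
    """
    s = f(free_points)          # predicted signed distances
    violating = s < eps
    return s[violating].abs().sum()
```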
### _Object Representation from Visual and Haptic Data_
There are several possible representations of an object \(O\). We choose to represent an object as a point cloud (which is, in fact, an implicit surface from Eq. 2), concatenated with uncertainty for each point. Mathematically expressed as
\[O=\{\mathbf{x}\in\mathbb{R}^{3},u\in\mathbb{R}\mid f(\mathbf{x};\mathbf{\theta}, \mathbf{z})=0,u\geq 0\}, \tag{5}\]
where \(u\) stands for the uncertainty of the given point.
Our method is iterative, therefore, we compute a new shape in each iteration \(n\). For \(n=0\) the shape \(O_{0}\) is computed only from RGB-D information \(\mathbf{v}_{init}\). In all other iterations we get the shape \(O_{n}\) with visual information \(\mathbf{v}_{init}\), \(\mathbf{v}_{0:(n-1)}\), and haptic information \(\mathbf{h}_{0:(n-1)}\). Eq. 5 is therefore changed to
\[O_{n}=\{\mathbf{x}\in\mathbb{R}^{3},u\in\mathbb{R}\mid f(\mathbf{x};\mathbf{ \theta},\mathbf{z}_{n})=0,u\geq 0\}, \tag{6}\]
where \(\mathbf{z}_{n}\) is the current latent vector optimized on point cloud \(\mathcal{X}_{n}=\{\mathbf{h}_{0},\ldots,\mathbf{h}_{(n-1)},\mathbf{v}_{0}, \ldots,\mathbf{v}_{(n-1)},\mathbf{v}_{init}\}\).
To obtain haptic information \(\mathbf{h}_{n}\), we must first select the position for haptic exploration \(\mathbf{p}_{n}\) on the object that minimizes the global uncertainty of the object, _i.e._, select the point with the highest current uncertainty as
\[\begin{split}& m=\operatorname*{argmax}_{u}O_{n},\\ &\mathbf{p}_{n}=\mathbf{x}_{m}\in O_{n}.\end{split} \tag{7}\]
The desired \(\mathbf{h}_{n}\) is then obtained from the position of the actual contact between the robot and the object.
Note that we presented the equations for only one object at a time. However, our method is capable of handling multiple objects in a complex scene, so in the algorithm below we refer to objects as \(O_{k,n}\), where \(k=1:K\) is the object's id in the scene and \(n\) is the current iteration.
### _Object Shape Uncertainty_
A crucial part of this work is the estimation of uncertainty. We selected part of the loss from Eq. 3, particularly the Eikonal loss
\[\ell_{Eikonal}=(\|\nabla_{\mathbf{x}}f(\mathbf{x};\mathbf{\theta},\mathbf{z})\|-1)^ {2}. \tag{8}\]
It was proven by Takashi [31] that a function \(f(\mathbf{x})\) that satisfies the Eikonal equation \(\|\nabla_{\mathbf{x}}f(\mathbf{x})\|=1\) on a Riemannian manifold is an SDF to a hypersurface \(M\). Furthermore, Crandall and Lions [32] introduced viscosity solutions, which show that the same holds even if \(\nabla_{\mathbf{x}}f(\mathbf{x})\) does not exist at every \(\mathbf{x}\). Given these results, we can compute the Eikonal loss from Eq. 8 for all points of our current shape \(O\)--where the higher the loss, the higher the uncertainty.
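In an autograd framework, this uncertainty and the touch-point selection from Eq. 7 can be sketched as follows (illustrative code, assuming \(f\) is differentiable at the query points):

```
import torch

def eikonal_uncertainty(f, points):
    """Per-point uncertainty from Eq. 8: (||grad_x f(x)|| - 1)^2."""
    x = points.clone().requires_grad_(True)
    s = f(x)
    grad = torch.autograd.grad(s.sum(), x)[0]
    return (grad.norm(dim=-1) - 1.0) ** 2

# Touch point selection (Eq. 7): the surface point with maximal uncertainty.
# u = eikonal_uncertainty(f, surface_points)
# p_n = surface_points[u.argmax()]
```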
### _Segmentation of Multiple Objects_
First, bounding boxes of all objects in the input RGB image are found--in the real world using Yolov7 [33] fine-tuned on our objects, and using color-based segmentation in the simulation. We then run the Flood Fill algorithm [34] on the depth image (aligned to RGB). The algorithm starts at a given pixel (we use the center of the bounding box) and expands over neighbors that fulfill the given criterion
(difference in depth in our case) until no neighbor is left. We also restrict the region of interest by the bounding box from the RGB image. Segmented depths, together with camera information, are then used to create point clouds of objects. A more detailed description can be found on GitHub [https://github.com/ctu-vras/vishac](https://github.com/ctu-vras/vishac).
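A sketch of the flood-fill step with OpenCV follows; the 4-connectivity, the depth threshold, and the seeding at the bounding-box center are illustrative choices rather than the exact parameters of our implementation:

```
import cv2
import numpy as np

def segment_depth(depth, bbox, max_diff=0.01):
    """Flood-fill segmentation on a depth image (meters), seeded at the
    bounding-box center and restricted to the bounding box."""
    x, y, w, h = bbox
    roi = depth[y:y + h, x:x + w].astype(np.float32)
    mask = np.zeros((h + 2, w + 2), np.uint8)
    seed = (w // 2, h // 2)
    # 4-connectivity, write 255 into the mask, do not modify the image.
    flags = 4 | cv2.FLOODFILL_MASK_ONLY | (255 << 8)
    cv2.floodFill(roi, mask, seed, 0,
                  loDiff=max_diff, upDiff=max_diff, flags=flags)
    return mask[1:-1, 1:-1] > 0   # binary object mask inside the bbox
```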
### _Pose Estimation_
In haptic exploration methods, a common but unrealistic assumption is that objects are fixed to the surface (also in [1]). Objects naturally move when they are touched and their pose needs to be re-estimated. Many existing pose estimation methods require prior knowledge of the objects at the instance level [35, 36] or category level [37, 38]. We seek methods that work with unknown arbitrary objects. Having segmented point clouds of each object at hand, we chose a simple and computationally cheap (no GPU) solution using Iterative Closest Point (ICP) [39]. Alternative solutions for unknown objects are [40, 41].
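With Open3D, the re-estimation step can be sketched as point-to-point ICP between the previously and currently segmented point clouds of an object (the correspondence threshold is an assumed value):

```
import numpy as np
import open3d as o3d

def estimate_pose(prev_pcd, curr_pcd, threshold=0.02):
    """Estimate the rigid transform that moves the previous point cloud
    of an object onto the current one."""
    result = o3d.pipelines.registration.registration_icp(
        prev_pcd, curr_pcd, threshold, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation   # 4x4 homogeneous transform
```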
### _VISHAC Algorithm_
We present the algorithm of our method in Alg. 2; the same flow is depicted in Fig. 1. The algorithm is high-level pseudocode, with the shape completion module described in more detail in Alg. 1.
```
1:while Pipeline is running do
2:\(k=\textit{SelectFromQueue() or WaitForRequest()}\);
3:\(\mathbf{z}=\textit{LoadLatent(k)}\);
4:\(\mathcal{X}=\textit{LoadData(k)}\);
5:\(\mathbf{z}_{optimized}=\textit{Optimize}(\mathbf{z},\mathcal{X})\); \(\triangleright\) Loss from Eq. 3
6:\(O=\textit{PredictShapeAndUncertainty}(\mathbf{z}_{optimized})\);
7:endwhile
```
**Algorithm 1** Create Shape
We will first describe the shape creation module itself. In [1], the IGR network was used as a standalone library. To perform more efficiently and to handle more objects at once, we modified it to integrate with the whole ecosystem (under the Robot Operating System (ROS)). The module holds the input point clouds, latent vectors, and other parameters for each object in the scene, allowing simple switching between objects without excessive overhead. The next object to be completed is selected through messages sent from the main script. If a new request is received while a reconstruction is running, the new objects are placed in a queue. The module runs in the background, which allows a considerable speed-up of the whole process, as reconstructions are now processed while the robot is moving. The basic operation is shown in Alg. 1. First, a new shape is selected from the queue (if it is not empty; otherwise the module waits for a new request). Then the latent vector \(\mathbf{z}\) and the input point clouds \(\mathcal{X}\) for the given shape are loaded. If the shape is new, the first latent code is drawn randomly from a normal distribution. Otherwise, the last known vector for the given object is used. The current \(\mathbf{z}\) is optimized with the loss from Eq. 3. Finally, the shape \(O\) is created, together with the uncertainty computed with Eq. 8.
The main Alg. 2 starts with capturing the initial visual information (box (1) in Fig. 1, line 3 in Alg. 2). An initial transformation \(\mathbf{R}_{0}\) of the object in the base frame of the robot is obtained. The information is then segmented and a point cloud is created for each object in the scene (box (2), line 4). The segmentation itself is described in III-F.
Every iteration starts with computation of the current pose for all objects in the scene (Section III-G). The pose for all objects is computed here--the explored object may change pose after the touch is released and surrounding objects may have been moved unintentionally, so it is more robust to compute pose for all objects. This pose is used mainly to correctly select the point to explore.
```
1:Input: maximal number of haptic explorations N, maximal run time \(T\) Output: Final shape \(O_{1:K,n}\)\(\triangleright\) For \(K\) shapes
2:\(t_{init}=\textit{CurrentTime}()\);
3:Start \(\textit{CompleteShape}()\) service in background;
4:\(\mathbf{v}_{init}=\textit{CaptureVisualInformation}()\);
5:\(\mathcal{X}_{1:K}=\textit{Segment}(\mathbf{v}_{init})\);
6:\(k=1:K\); \(\triangleright\) Start with all objects
7:for\(n=0,\ \ldots,\ (N-1)\)do
8:ifCurrentTime\(()-t_{init}\geq T\)then
9:break;
10:endif
11:\(\mathbf{R}_{1:K,n}=\textit{ComputePose}(1,\ldots,K)\);
12:\(O_{k,n}=\textit{CompleteShapeRequest}(k)\);
13:\(\mathbf{p}_{n};k=\textit{SelectTouchPoint}(O_{1:K,n},\mathbf{R}_{1:K,n})\);
14:\(\textit{MoveRobot}(\mathbf{p}_{n})\);
15:\(\mathbf{R}_{k,n}\cdot\mathbf{h}_{n}=\textit{GetContactInformation}()\);
16:\(\mathbf{R}_{k,n}\cdot\mathbf{v}_{n}=\textit{CaptureVisualInformation}()\);
17:\(\mathbf{R}_{k,n}=\textit{ComputePose}(k)\);
18:\(\mathbf{v}_{n};\mathbf{h}_{n}=\textit{Transform}(\mathbf{R}_{k,n}\cdot\mathbf{v}_{n}, \mathbf{R}_{k,n}\cdot\mathbf{h}_{n},\mathbf{R}_{k,n})\);
19:\(\mathcal{X}_{k}\)\(+=\)Segment\((\mathbf{v}_{n})\);
20:\(\mathcal{X}_{k}\)\(+=\)\(\mathbf{h}_{n}\);
21:endfor
22:Return:\(O_{1:K,n}\)
```
**Algorithm 2** Multi-Object Shape Completion
Having the segmented point clouds and poses, a request for shape completion is sent (box (3), line 11). In the first iteration, we request shapes for all objects to create collision shapes for the motion planning algorithm. In subsequent iterations, we request the shape only for the last touched object. The impact position \(\mathbf{p}_{n}\) is selected (box (4), line 12) based on Eq. 7. When more than one object is available, the impact position is selected as the point with maximal uncertainty among all the objects. To prevent exploration of only one object, we allow an object to be touched at most three times in a row; it is then removed from the touch-selection algorithm for the given iteration.
After \(\mathbf{p}_{n}\) is selected, the robot is moved to the position
and contact information is extracted (box (5-6), lines 13-14). The movement consists of two phases. First, the robot is moved to a position \(10\,\mathrm{cm}\) from the object along the normal of \(\mathbf{p}_{n}\). Next, the robot moves linearly along the normal until contact occurs. In our case, the contact is detected from the change in joint torques. Haptic information \(\mathbf{h}_{n}\) is created as a circle perpendicular to the impact normal, centered at the position of the end effector.
After collision, new visual information is saved, segmented and added to the point cloud for the touched object, together with the haptic information (box (7), lines 15-19). To make sure that we segment the correct object, the RGB-D information is cropped with the bounding box found for the given object in the last iteration (the box is slightly enlarged to allow movement of the object). Finally, the pose \(\mathbf{R}_{n}\) of the object right after touch (before the contact is released) is computed (box (8), line 16). This pose is used to transform the current data into the frame of \(\mathbf{v}_{init}\)--the first frame must be used so that the latent vectors for the given objects can be reused. Note that now only the pose of the explored object is computed, unlike at the beginning of each iteration (line 10). Also, in Fig. 1, only one pose estimation is shown for simplicity.
The whole pipeline runs until the selected number of touches (over all objects) is done or until the time limit is reached.
## IV Experiments and Results
The primary experiments we conducted demonstrate how the completeness and accuracy of reconstructions change with each additional touch. In addition, we evaluated the quality of the reconstructions in a real-world grasping experiment. Examples are shown in the accompanying video.
We evaluated the reconstruction obtained by VISHAC on the eight objects shown in Fig. 2 and on one more object used for grasping--a transparent spray bottle, for which ground truth is not available and the metrics thus cannot be computed. Nevertheless, it is a typical object for which haptic feedback dramatically improves the initial reconstruction from the RGB-D visual sensor. None of the objects was used in training the shape completion network.
In the real-world setup, we filled the objects with water so that they weigh about \(0.5\,\mathrm{kg}\), which simplifies collision detection by the robot--the sensitivity of the torque sensors is not sufficient for manipulation of very light objects. Note that in the experiments evaluating the Act-VH method [1], the objects were glued to the table and a dedicated finger was used. VISHAC instead introduces a component that tracks object movement caused by touches (box (8) in Fig. 1). The current setup with objects partially filled with water is more challenging than gluing the objects. To avoid disadvantaging the reference method Act-VH, we report its results for objects glued to the table.
The arrangement of the experimental bench is shown in Fig. 2. The robot is a Kinova Gen3 with a Robotiq 2F-85 gripper and an Intel RealSense D435 camera.
### _Evaluation Metrics_
We used three metrics to evaluate accuracy: (i) Jaccard similarity (JS), _i.e._, the intersection over union of voxelized shapes; (ii) Chamfer distance (CD), _i.e._, the average distance of each point in one set to the closest point in the second set and vice versa; and (iii) the deviation of the reconstructed mesh surface area from the ground truth.
We use three metrics, since the information they provide is complementary. JS does not take into account the shape of the intersection and of the union, _e.g._, it attains the same value for a sharp hallucinated peak or a thin layer of added volume. CD is highly informative in most cases, but it is oversensitive to small scale changes, even if the reconstructed shape is close to the ground truth. The deviation of the mesh area evaluates the accuracy of the estimated scale and makes it possible to check for biases.
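For reference, the first two metrics can be sketched as follows; whether the two directions of CD are averaged or summed is a convention fixed here only for illustration:

```
import numpy as np
from scipy.spatial import cKDTree

def jaccard_similarity(vox_a, vox_b):
    """Intersection over union of two boolean voxel grids of the same shape."""
    inter = np.logical_and(vox_a, vox_b).sum()
    union = np.logical_or(vox_a, vox_b).sum()
    return inter / union

def chamfer_distance(pts_a, pts_b):
    """Symmetric average nearest-neighbor distance between two point sets."""
    d_ab, _ = cKDTree(pts_b).query(pts_a)   # a -> b
    d_ba, _ = cKDTree(pts_a).query(pts_b)   # b -> a
    return 0.5 * (d_ab.mean() + d_ba.mean())
```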
Unless otherwise stated, experiments for each scene were repeated three times. We show the performance with real time on the x-axis, with individual touches numbered.
### _Simulation Experiments_
The simulation environment consists of a robot modeled in the MuJoCo [42] simulator controlled through ROS. Objects are able to move on the table. Collision with the object is computed from the manipulator joint torques--as in the real setup. Throughout this section, we compare the performance of VISHAC with two variants of Act-VH [1]: 'Act-VH' and 'Act-VH - new data'. Act-VH constitutes the original experimental results from [1], in the setup with objects fixed to the table and poking with a dedicated probe, and IGR without any major changes. Only the results for the objects used here were selected. This comparison also demonstrates improvements in the runtime of the methods. 'Act-VH - new data' is a result of running the method from [1] on the data from the new setup--contacts with closed gripper and objects moving as a result of haptic exploration. This serves to isolate the benefits of the modifications of IGR inference in VISHAC. Act-VH was run until 5 touches were completed; VISHAC and 'Act-VH - new data' until 15 touches were done.
Fig. 2: The real-world robot setup with Kinova Gen3 robot, Robotiq 2F-85 gripper, external RGB-D camera and all objects used. Closed gripper was used for haptic exploration, open for grasping.
#### Vi-B1 Reconstruction - one object in the scene
A comparison of performance is shown in Fig. 3. Both 'Act-VH' and 'Act-VH - new data' are more "greedy", with higher reconstruction accuracy gains per touch. However, this comes at the expense of accuracy as more data come in. VISHAC is more "conservative", but maintains a steady performance gain. The relative increase in performance at the time when 'Act-VH' completed 5 touches (approximately touch 10 in our method) is 7.7% in JS and 21.3% in CD. After the last touch, VISHAC is better than 'Act-VH - new data' with a relative difference of 15.5% in JS and 31.7% in CD.
The same trend can be seen from the shaded areas, showing the confidence interval of \(\pm 1\) standard deviation. The width of the areas shows the variability of the results for each object and each repetition of the experiment. For VISHAC, the variability shrinks with time, showing that the method is more precise and robust. This is not the case for both versions of Act-VH.
In Fig. 4, the deviations in the mesh area are shown. We can see that even though, for example, JS performance for Act-VH in simulation was similar to VISHAC (see Fig. 3) there is a significant difference in area (blue for Act-VH, red for VISHAC). The baseline method (evaluated on both new and original data) tends to inflate the shapes, resulting in good results for JS or CD, but a high deviation in area. On the other hand, VISHAC converges to the ground truth value.
#### Vi-B2 Reconstruction - multiple objects in the scene
In Fig. 5, results for reconstructions of multiple objects in more complex scenes are shown. We randomly selected five configurations of objects, with two or three of them present in each scene. The accuracy is almost the same as that for one-object-at-a-time experiments. However, the runtimes for 15 touches are about \(80\,\mathrm{seconds}\) higher. This is mostly caused by more complicated touch point computation and motion planning. The deviations in the mesh area, shown in Fig. 4, further prove that the pipeline behaves similarly with one or more objects. Overall, the results show that the pipeline is able to handle multiple objects at once.
Furthermore, we show how the uncertainty (purple line) changes over time. The uncertainty is computed as the mean of the uncertainties for each point of each object. One can see that the uncertainty decreases with increasing accuracy. Thus, it could be used to evaluate the quality of reconstruction during runtime and stop the pipeline when a predefined criterion is met.
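Such a criterion could be as simple as the following sketch, with the threshold left as a hypothetical tuning parameter:

```
def should_stop(per_object_uncertainties, threshold):
    """Stop exploration once the mean per-point uncertainty over all
    objects drops below a predefined threshold (illustrative criterion)."""
    mean_u = sum(u.mean() for u in per_object_uncertainties)
    return mean_u / len(per_object_uncertainties) < threshold
```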
### _Real-world Experiments_
We tested the pipeline on the same set of objects as in simulation, in single- or multi-object configuration. The superior performance of VISHAC over Act-VH has already been demonstrated in simulation. Here we show the added value of VISHAC in grasping experiments.
#### Vi-C1 Reconstruction
In Fig. 6, the results for the precision of the reconstruction are shown. The single-object experiments are shown in black. We can see that the trends of both JS and CD are the same as for the simulation, even though we can notice noise in some touches. In yellow, the results for multi-object experiments are shown. Again, the results for both types of experiments are similar, showing that the method is transferable to the real world. The overall accuracy in the real world is lower than in simulation. The
Fig. 4: Simulation and real experiments. Mean area of meshes. Numbers in each datapoint – number of touches. _Single_ – scenes with only single objects; _multi_ – scenes with more objects. _Act-VH_ is a baseline from [1] and _Act-VH_ - _new data_ is the same method evaluated on data collected in this work.
Fig. 5: Simulation – reconstruction – multiple objects in scene. Average reconstruction accuracy (5 scenes, 3 repetitions each). Numbers in each datapoint – number of touches. Shaded areas – standard deviation. Jaccard similarity (JS) higher values better. Chamfer distance (CD) lower values better.
Fig. 3: Simulation – reconstruction – 1 object in scene. Average reconstruction accuracy (8 objects, 3 repetitions each). Numbers in each datapoint – number of touches. Shaded areas – standard deviation. Jaccard similarity (JS) higher values better. Chamfer distance (CD) lower values better.
main reason is noise in the RGB-D sensor and inaccurate collision detection (noise in the joint encoders).
The mean mesh area is shown in Fig. 4. The area for single (black) and multiple (yellow) objects scenes approximately converges to the ground truth value.
#### Iv-D2 Grasp Success Rate
The last experiment evaluates the grasp success rate, _i.e._, the percentage of successful grasps. To sample grasp proposals, GraspIt! [43] was used. To check the quality of each grasp, the objects were picked and moved \(10\,\mathrm{cm}\) upwards. If the object did not fall from the gripper, the grasp was marked as successful. We decided to inspect grasp success using reconstructions after 0 and 1 touches to show how much a single touch improves the result. In addition, reconstructions after touches 5, 10, and 15 are used. We attempted to grasp 3 times for every repetition of the pipeline on each object, resulting in 9 grasps per object per touch--that makes 81 grasps per touch and 405 grasps in total. The results are shown in Fig. 7.
There is already a difference between 0 and 1 touches. The success rate increased from 63.3% to 70.4%. Maximum success was achieved using reconstructions after 10 touches. However, the difference between 5 and 15 touches is only 2.5% (82.7% vs. 85.2%).
In general, we can say that 5 touches are enough for a sufficient grasp success rate. For comparison, the maximum success rate achieved by the baseline [1] was 77.8%. It is also worth mentioning that this result was achieved after a time comparable to touch number 12 in our results.
## V Conclusion, Discussion, and Future Work
We proposed a new method for shape completion using a combination of visual and haptic feedback. VISHAC outperformed the baseline Act-VH [1] in terms of speed, reconstruction quality and robustness. We experimentally validated VISHAC in both simulated and real-world environments, using 8 objects and an additional one for grasping. VISHAC was evaluated in scenes with one, two, or three objects. We always touched the objects 15 times and repeated each experiment three times, resulting in almost one hundred experiments in total. In addition, a new uncertainty computation strategy was evaluated, showing that it can be used for on-the-fly quality measurements. The reconstructions were furthermore validated with more than 400 grasps, demonstrating the usability of shape completion in a core robotic task.
There are several directions for future work. The results in the real setup are negatively affected by the noise induced by the contact events--the collision is detected with a certain delay, the object moves, and the new pose is not re-estimated perfectly. This could be mitigated in two ways. First, and most effective, would be faster contact detection. In the current setup, collisions are detected from the joint torque sensors in the manipulator and its dynamic model, with their remapping onto the end effector. In our setup, this leads to delayed and noisy estimation of the collision and significant movement of the object. A remedy would be a force/torque sensor at the robot wrist or tactile sensors at the end effector. Second, the object pose re-estimation after every haptic exploration could be further improved by using alternative pose estimators or adding tracking.
Furthermore, on a robot hand with sensorized fingertips, data for reconstruction could be collected more effectively by sliding the fingers over the object surface (tactile servoing). Finally, poking and touching reveal surface stiffness and other physical properties that play a role in grasping and could be exploited.
|
2308.06265 | Long-term Effects of Temperature Variations on Economic Growth: A
Machine Learning Approach | This study investigates the long-term effects of temperature variations on
economic growth using a data-driven approach. Leveraging machine learning
techniques, we analyze global land surface temperature data from Berkeley Earth
and economic indicators, including GDP and population data, from the World
Bank. Our analysis reveals a significant relationship between average
temperature and GDP growth, suggesting that climate variations can
substantially impact economic performance. This research underscores the
importance of incorporating climate factors into economic planning and
policymaking, and it demonstrates the utility of machine learning in uncovering
complex relationships in climate-economy studies. | Eugene Kharitonov, Oksana Zakharchuk, Lin Mei | 2023-06-17T16:50:08Z | http://arxiv.org/abs/2308.06265v1 | # Long-term Effects of Temperature Variations
###### Abstract
This study investigates the long-term effects of temperature variations on economic growth using a data-driven approach. Leveraging machine learning techniques, we analyze global land surface temperature data from Berkeley Earth and economic indicators, including GDP and population data, from the World Bank. Our analysis reveals a significant relationship between average temperature and GDP growth, suggesting that climate variations can substantially impact economic performance. This research underscores the importance of incorporating climate factors into economic planning and policymaking, and it demonstrates the utility of machine learning in uncovering complex relationships in climate-economy studies.
## 1 Introduction
Climate change, a defining issue of our time, has far-reaching implications that extend beyond the environmental sphere.
Among these, the economic consequences of climate change are of paramount importance, yet they remain insufficiently understood. This research aims to shed light on this critical issue by investigating the long-term effects of temperature variations on economic growth.
The global economy is a complex system influenced by a multitude of factors, among which climatic conditions play a significant role. According to the Intergovernmental Panel on Climate Change (IPCC), the global temperature has increased by approximately 1.0\({}^{\circ}\)C since the pre-industrial period due to human activities, primarily the burning of fossil fuels and deforestation [1]. This rise in temperature has led to more frequent and severe weather events, such as droughts, floods, and storms, which have direct and indirect impacts on economic activities.
Moreover, the World Bank estimates that climate change could push more than 100 million people into poverty by 2030 due to its impacts on agriculture and food prices [2]. On a macroeconomic level, a study published
in Nature found that unmitigated climate change could lead to a 23% reduction in global GDP per capita by 2100 [3].
Despite the urgency and magnitude of this issue, there is a gap in the literature regarding the use of machine learning techniques to investigate the long-term effects of temperature variations on economic growth. This study aims to fill this gap by applying machine learning models to global land surface temperature data from Berkeley Earth and economic indicators from the World Bank.
Our specific research question is: How do long-term temperature variations affect economic growth? The findings of this study will provide valuable insights for policy-making and future climate-economy research.
## 2 Literature Review
The relationship between climate change and economic growth has been a subject of extensive research over the past few decades. Various studies have explored this relationship from different perspectives, providing valuable insights but also leaving some questions unanswered.
One of the earliest studies in this field by Nordhaus (1991) introduced the concept of the "environmental Kuznets curve", suggesting that economic development initially leads to environmental degradation, but after a certain point, further development reduces environmental impacts [4].
This theory has been challenged by subsequent research indicating that the relationship between economic growth and environmental degradation is not universally applicable and depends on various factors, including the type of environmental pressure and the country's institutional quality [5].
In the context of climate change, Dell, Jones, and Olken (2012) found that higher temperatures significantly reduce economic growth in poor countries but have little effect in rich countries [6]. This finding underscores the importance of considering the heterogeneity of countries in climate-economy studies.
More recently, Burke, Hsiang, and Miguel (2015) used historical fluctuations in temperature to estimate its effect on economic productivity. They found that unmitigated climate change could lead to a 23% reduction in global GDP per capita by 2100, as mentioned above [3].
While these studies have significantly advanced our understanding of the climate-economy relationship, there is a gap in the literature regarding the use of machine learning techniques to investigate this relationship. Machine learning models can
Figure 1: Environmental Kuznets curve
capture complex, non-linear relationships and interactions between variables, making them particularly suitable for climate-economy studies. This study aims to fill this gap by applying machine learning models to global temperature and economic data.
## 3 Methods
This study employs a combination of data preprocessing, exploratory data analysis, feature engineering, and machine learning modeling to investigate the long-term effects of temperature variations on economic growth. The following sections detail the specific steps taken in each of these processes.
### Data Sources
The datasets utilized in this research were procured from two primary sources: Berkeley Earth and the World Bank. The Berkeley Earth dataset, comprising 577,463 entries, provides daily land surface temperature data by country from 1743 to 2013 [7]. The World Bank datasets, each containing 271 entries, offer annual Gross Domestic Product (GDP) and total population data by country from 1960 to 2022 [8, 9].
### Data Exploration, Preprocessing, Transformation
The initial phase involved thoroughly exploring the datasets, focusing on the data types, the format of tables, and handling missing values. Subsequently, the temperature, GDP, and population datasets were merged based on country and year, resulting in a consolidated dataset of 4,187 entries. This dataset, spanning from 1960 to 2013, includes the average annual temperature, GDP, and population for each country-year pair, covering 79 countries (Figure 2).
### Feature Engineering
During the feature engineering phase, we created new features named 'TemperatureRatio', 'gdp_growth', and 'population_growth'. These features represent the ratio of the corresponding metric in a given year to that in the previous year, capturing the annual changes in temperature, GDP, and population. These engineered features are expected to enhance the predictive capability of the subsequent machine learning models.
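One plausible pandas implementation of these features is sketched below; the column names and the exact ratio formula are a reading of the description above, not necessarily the code used to produce the reported statistics:

```
import pandas as pd

# df: merged dataset with columns country, year, AverageTemperature, gdp, population
df = df.sort_values(["country", "year"])
grp = df.groupby("country")
df["TemperatureRatio"] = df["AverageTemperature"] / grp["AverageTemperature"].shift(1)
df["gdp_growth"] = df["gdp"] / grp["gdp"].shift(1)
df["population_growth"] = df["population"] / grp["population"].shift(1)
```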
### Exploratory Data Analysis (EDA)
In the exploratory data analysis phase, we employed a multifaceted approach to comprehend the inherent characteristics and relationships within our dataset. This involved the generation of visual representations of the data and the computation of descriptive statistics, which provided a comprehensive overview of the dataset's distribution and central tendencies.
We also examined the correlation matrix to understand the interdependencies between
Figure 2: Datasets Intersections by Countries
the variables in our dataset. This step was crucial in identifying potential predictors for our models and avoiding multicollinearity, which could distort the results of our analyses.
To forecast future values, we utilized the Auto-Regressive Integrated Moving Average (ARIMA) model. This model was chosen due to its ability to capture a suite of different standard temporal structures in time series data. It allowed us to model and forecast future points in the series, which was essential for our study's predictive component.
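In statsmodels, such a model can be fitted as sketched below; the order (5, 1, 0) follows the SARIMAX summary reported later, while the series name is illustrative:

```
from statsmodels.tsa.arima.model import ARIMA

# gdp_growth_series: a time-indexed series of GDP growth values
model = ARIMA(gdp_growth_series, order=(5, 1, 0))
fit = model.fit()
print(fit.summary())              # Log Likelihood, AIC, BIC, Ljung-Box, Jarque-Bera
forecast = fit.forecast(steps=5)  # forecast the next five periods
```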
Additionally, we used Ordinary Least Squares (OLS) Regression, a statistical method that minimizes the sum of the squared residuals to estimate the unknown parameters in a linear regression model. This method was instrumental in determining the relationship between our dependent and independent variables and quantifying the strength and direction of these relationships.
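A corresponding statsmodels sketch of the OLS fit (column names assumed):

```
import statsmodels.api as sm

X = sm.add_constant(df["AverageTemperature"])   # predictor plus intercept
ols_fit = sm.OLS(df["gdp_growth"], X, missing="drop").fit()
print(ols_fit.summary())   # coefficients, R-squared, Omnibus, Jarque-Bera
```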
### Machine Learning (ML) Modeling
The machine learning modeling stage involved training and evaluating various models with the objective of predicting GDP growth based on engineered features. We also aimed to predict the average temperature based on GDP growth. The models we evaluated included Linear Regression, Decision Tree Regressor, and Random Forest Regressor.
The dataset was split into training and test sets, with 80% of the data used for training and 20% used for testing. The models were trained on the training set and evaluated on the test set using the R-squared score as the evaluation metric, which quantifies the proportion of the variance in the dependent variable that is predictable from the independent variables.
The machine learning code, implemented in Python using the scikit-learn library, facilitated the training, testing, and evaluation of our models. We also leveraged the 'feature_importances_' attribute of the RandomForestRegressor to identify the relative importance of each feature in predicting the target variable. This provided valuable insights into the key drivers of GDP growth and average temperature.
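The modeling stage can be sketched as follows; the hyperparameters and the random seed are illustrative assumptions:

```
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

features = ["gdp_growth", "population", "population_growth", "gdp"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["AverageTemperature"], test_size=0.2, random_state=42)

rf = RandomForestRegressor(random_state=42).fit(X_train, y_train)
print(r2_score(y_test, rf.predict(X_test)))          # ~0.976 reported
print(dict(zip(features, rf.feature_importances_)))  # 'gdp_growth' ranked first
```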
### Methodological Rationale
The methods employed in this research were chosen to provide a comprehensive, robust, and insightful analysis of the data. The EDA phase was crucial for understanding the data's inherent characteristics and relationships. EDA summarizes the main characteristics of datasets, often using statistical graphics and other data visualization methods, and is primarily used to see what the data can tell us beyond formal modeling and hypothesis testing.
The use of visualizations and descriptive statistics provided a clear overview of the data's distribution and central tendencies. The correlation matrix was instrumental in identifying potential predictors for our models and avoiding multicollinearity, which could distort the results of our analyses.
The ARIMA model and OLS Regression were chosen for their specific strengths in time series forecasting and linear regression analysis, respectively. ARIMA is particularly effective in capturing different standard temporal structures in time series data, making it a valuable tool for forecasting future points in the series. OLS Regression,
on the other hand, is a reliable method for estimating the unknown parameters in a linear regression model, making it ideal for determining the relationship between our dependent and independent variables.
The machine learning modeling stage was designed to predict GDP growth or average temperature based on engineered features. The models evaluated, including Linear Regression, Decision Tree Regressor, and Random Forest Regressor, were chosen for their proven effectiveness in similar predictive tasks. The use of the R-squared score as the evaluation metric ensured a quantifiable measure of the models' performance, allowing for an objective selection of the final model.
## 4 Results
The results of this study are presented in two parts: the statistical analysis of the dataset and the performance of the machine learning models.
Below are the visualizations depicting the trends in average temperature, GDP per capita, and population. Notably, the graphs showcase a substantial growth in GDP per capita worldwide over the given time period.
Average Temperature: The average temperature ranged from -6.8\({}^{\circ}\)C to 29.4\({}^{\circ}\)C, covering climates from very cold to very hot. The temperature ratio, which represents year-to-year changes in temperature, also shows a wide range, indicating significant variations in temperature changes across different countries and years.
Population Growth: Population growth shows a negative mean value of -204.74, which is quite unusual. This could be due to the way population growth is calculated in this dataset or could be influenced by outliers. The maximum value of population growth is 3598.6, indicating that some countries experienced a significant increase in their population during certain years.
These insights provide a high-level understanding of the economic, climatic, and demographic variations across different countries and years included in the dataset. They could be useful in further analysis and modeling stages of the study.
_The following illustration (Figure 3) demonstrates the application of the Z-score method for outlier removal. In this visualization, we have calculated GDP per capita to depict its relationship with the average temperature:_
Negative Relationship: The equation of the line of best fit shows a negative coefficient for the 'AverageTemperature' variable, indicating a negative relationship between average temperature and GDP per capita. This suggests that as the average temperature increases, the GDP per capita decreases.
Outliers: The z-score-based outlier removal process has effectively filtered out extreme values that could potentially skew the analysis. This results in a more accurate representation of the general trend in the data.

Residuals: The residuals (the differences between the observed GDP per capita and the GDP per capita predicted by the line of best fit) vary across the data. This suggests that while there is a general trend, other factors not considered in this simple model also influence GDP per capita.
In summary, the analysis suggests a negative relationship between average temperature and GDP per capita, but also highlights the complexity of this relationship and the potential influence of other factors.
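The z-score filtering can be sketched as follows; the threshold of 3 and the column names are assumptions:

```
import numpy as np
from scipy import stats

cols = ["AverageTemperature", "gdp_per_capita"]
z = np.abs(stats.zscore(df[cols], nan_policy="omit"))
df_clean = df[(z < 3).all(axis=1)]   # keep rows within 3 standard deviations
```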
### Correlation Matrix
Table 2 shows the correlation coefficients between several variables. Here's a brief interpretation of the correlation matrix.
Year and GDP: The correlation coefficient of 0.18826 suggests a weak positive relationship. As the years increase, the GDP tends to increase slightly.
Figure 4: Scatter Plot with Fitted Line Showing the Relationship Between Average Temperature and GDP per Capita.
GDP growth and Average Temperature: The correlation coefficient of -0.433038 suggests a moderate negative relationship. As GDP growth increases, the Average Temperature tends to decrease, and vice versa.
GDP and Population: The correlation coefficient of 0.306889 suggests a weak positive relationship. As GDP increases, the population also tends to increase slightly.
Population and Population Growth: The correlation coefficient of -0.56618 suggests a moderate negative relationship. As the population increases, the population growth tends to decrease, and vice versa.
The correlation matrix is important in the context of ARIMA, OLS, and ML models. For example, in machine learning models, understanding the correlation between variables is useful for feature selection: features that are highly correlated with the target variable can be important predictors, while features that are highly correlated with each other often provide redundant information. Finally, feature selection techniques may be used to reduce dimensionality.
### ARIMA Model Results (SARIMAX)
The provided output in Table 3 is the summary of an ARIMA (AutoRegressive Integrated Moving Average) model, which is a type of time series model.
Here are a few insights based on the output:
Significant AR Terms: The AR (AutoRegressive) terms (ar.L1 to ar.L5) are all statistically significant as their p-values are less than 0.05. This suggests that the GDP growth has a significant relationship with its own past values and cyclical patterns. The negative coefficients indicate that an increase in GDP growth in the previous years is associated with a decrease in GDP growth in the current year, and vice versa.
Model Fit: The Log Likelihood, AIC (Akaike Information Criterion), and BIC (Bayesian Information Criterion) are measures of the goodness of fit of the model. Lower values of these metrics indicate a better fit. The Log Likelihood value is -26996.766, and the AIC and BIC values are 54005.533 and 54043.570, respectively. These values can be used for comparing different models.
Figure 5: Correlation Matrix of the Variables in the Dataset
\begin{table}
\begin{tabular}{l r|l r}
\multicolumn{4}{c}{SARIMAX Results} \\
**Dep. Variable:** & gdp\_growth & **No. Observations:** & 4187 \\
**Model:** & ARIMA(5, 1, 0) & **Log Likelihood:** & -26996.766 \\
**Date:** & Sun, 4 Jun 2023 & **AIC:** & 54005.533 \\
**Time:** & 23:04:13 & **BIC:** & 54043.570 \\
**Sample:** & 0 -- 4187 & **HQIC:** & 54018.985 \\
\end{tabular}
\end{table}
Table 3: Summary of the ARIMA (SARIMAX) model fitted to GDP growth.
Residual Diagnostics: The Ljung-Box test (Q statistic) checks for autocorrelation in the residuals (the differences between the observed and predicted GDP growth values). A significant p-value (less than 0.05) suggests that there is autocorrelation in the residuals, which is not desirable as it violates one of the assumptions of time series modeling. The Jarque-Bera test checks for normality of the residuals; a significant p-value (less than 0.05) suggests that the residuals are not normally distributed, _i.e._, that the residuals of the model are not behaving as ideally expected. This could imply that there might be additional information or patterns in the GDP growth data that the ARIMA model has not captured. In this case, both tests have significant p-values, indicating potential issues with the model fit.
Heteroskedasticity: The Heteroskedasticity (H) test checks for constant variance in the residuals (homoskedasticity). A significant p-value (less than 0.05) suggests that the residuals have non-constant variance (heteroskedasticity), which is not desirable. In this case, the H value is 1.88 with a significant p-value, indicating heteroskedasticity in the residuals. The significant Heteroskedasticity test result suggests that the variability of the GDP growth in our dataset changes over time. This could be due to various factors such as changes in economic policies, market conditions, or other external events that affect the economy.
### Ordinary Least Squares (OLS) Regression Results
The OLS Regression results provide insights into the relationship between the average temperature and GDP growth in your dataset.
\begin{table}
\begin{tabular}{l r|l r}
\multicolumn{4}{c}{OLS Regression Results} \\
**Dep. Variable:** & gdp\_growth & **R-squared:** & 0.188 \\
**Model:** & OLS & **Adj. R-squared:** & 0.187 \\
**Method:** & Least Squares & **F-statistic:** & 965.9 \\
**Date:** & Fri, 09 Jun 2023 & **Prob (F-statistic):** & 5.43E-191 \\
**Time:** & 03:21:18 & **Log-Likelihood:** & -26142 \\
**No. Observations:** & 4187 & **AIC:** & 5.23E+04 \\
**Df Residuals:** & 4185 & **BIC:** & 5.23E+04 \\
**Df Model:** & 1 & & \\
\end{tabular}
\end{table}
Table 4: Summary of the OLS regression of GDP growth on average temperature.
Negative Relationship: The negative and statistically significant coefficient for the 'AverageTemperature' variable suggests that higher temperatures reduce GDP growth, plausibly through impacts on agricultural productivity, labor productivity, and health outcomes.
Model Fit: The R-squared value of 0.188 indicates that the model explains about 18.8% of the variance in the GDP growth. This suggests that while average temperature is a significant predictor of GDP growth, there are other factors not included in the model that also influence GDP growth.
Residual Diagnostics: The significant Omnibus and Jarque-Bera test results suggest that the residuals of the model (the differences between the observed and predicted GDP growth values) are not normally distributed. This could imply that there might be non-linear relationships or interactions between variables in the GDP growth data that the OLS model has not captured.
### Machine Learning Modeling Results
The Linear Regression, Decision Tree Regressor, and Random Forest Regressor models were trained and evaluated on the dataset. The performance of each model was evaluated using the R-squared score, which measures the proportion of the variance in the dependent variable that is predictable from the independent variables.
The Linear Regression model achieved an R-squared score of 0.268, indicating that it could explain about 26.8% of the variance in the GDP growth or average temperature. The Random Forest Regressor model achieved an R-squared score of 0.976, indicating that it could explain about 97.6% of the variance in the GDP growth or average temperature.
The Random Forest Regressor model was selected as the final model due to its superior performance.
An assessment of each feature's impact within the model was conducted, revealing that 'gdp_growth' stood out as the most potent predictor of average temperature, thereby demonstrating a robust relationship between these two factors. The features 'population', 'population_growth', and 'gdp' followed in terms of their influence.
\begin{table}
\begin{tabular}{c|c|c} \# & **Feature** & **Importance** \\ \hline
0 & gdp\_growth & 0.400804 \\
1 & population & 0.26089 \\
2 & population\_growth & 0.223452 \\
3 & gdp & 0.114853 \\ \end{tabular}
\end{table}
Table 6: Feature Importances of the Random Forest Regressor Model
\begin{table}
\begin{tabular}{c|c|c} \# & **Model** & **R2 Score** \\ \hline
0 & Random Forest & 0.975624 \\
1 & Decision Tree & 0.938603 \\
2 & Linear Regression & 0.268078 \\ \end{tabular}
\end{table}
Table 5: R-squared Scores of the Machine Learning Models
These figures and tables provide a visual and numerical summary of the data and the results of the analyses, aiding in the interpretation and understanding of the findings.
## 5 Discussion
The purpose of this research was to explore the long-term implications of temperature variations on economic growth across a diverse range of nations. The study utilized a comprehensive merged dataset spanning from 1960 to 2013, which encompassed average temperature, GDP, and population data. The analysis unveiled a substantial negative correlation between temperature fluctuations and economic growth, thereby answering the research question regarding the relationship between these two variables.
The descriptive statistics of the dataset provided a high-level understanding of the economic, climatic, and demographic variations across different countries and years. The range of GDP growth was wide, indicating significant variation across different countries and years. The average temperature ranged from -6.8\({}^{\circ}\)C to 29.4\({}^{\circ}\)C, indicating a wide range of climatic conditions. The population growth showed a negative mean value, which could be due to the calculation method or to outliers. See Figures 8 - 11.
The visualization plot of average temperature versus GDP per capita further reinforced the negative relationship between these two variables. The line of best fit showed a negative slope, indicating that as the average temperature increases, the GDP per capita decreases.
The correlation matrix revealed several interesting relationships. There was a moderate negative relationship between GDP growth and average temperature, a weak positive relationship between GDP and population, and a moderate negative relationship between population and population growth. These correlations provide valuable insights into the complex relationships between these variables and their potential impact on economic growth.
The ARIMA model results indicated that GDP growth has a significant relationship with its own past values and cyclical patterns. However, the significant p-values for the Ljung-Box and Jarque-Bera tests suggested potential issues with the model fit.
The OLS regression results provided further insights into the relationship between the average temperature and GDP growth. The negative coefficient for the 'AverageTemperature' variable indicates a significant negative relationship between average temperature and GDP growth. It should be noted that the model only explained about 18.8% of the variance in GDP growth, suggesting that other factors not included in the model also influence GDP growth.
The study also revealed a significant negative relationship between the average temperature and GDP. This suggests that for every one-degree increase in the average temperature, the GDP decreases by approximately 7.36 units. This result aligns with previous research that has identified a negative impact of temperature fluctuations on economic outcomes.
Machine learning models, particularly the Random Forest Regressor, were instrumental in accurately predicting average temperature based on gdp_growth and other variables.
The Random Forest Regressor model achieved an impressive R-squared score of 0.976, signifying its capacity to account for approximately 97.6% of the variance in the average temperature. This result is
Figure 11: Line Chart of Population Over Time
Figure 12: Actual vs. Predicted Average Temperature
noteworthy as it highlights the potential of machine learning models in forecasting economic outcomes based on environmental factors.
The feature importance derived from the Random Forest Regressor model showed that the 'gdp_growth' feature was the most influential predictor of average temperature, emphasizing the significant impact of temperature fluctuations on economic growth. It is important to mention that other features, such as 'population' and 'population_growth', also contributed to the prediction, indicating the complex nature of economic growth.
The insights and patterns found in the results of this study include the significant negative relationship between average temperature and GDP growth, the high predictive accuracy of the machine learning models, particularly the Random Forest Regressor, and the significant relationship between GDP growth and its own past values and cyclical patterns as revealed by the ARIMA model. These findings provide a comprehensive understanding of the complex interplay between temperature variations and economic growth, highlighting the potential economic costs of climate change and the importance of developing effective mitigation strategies.
It is important to acknowledge the limitations of this study. The dataset used does not include data on other potentially relevant factors such as technological advancements, political stability, and policy changes, which could also influence economic growth. Moreover, the study assumes a linear relationship between temperature fluctuations and economic growth, which may not always be the case.
In comparison to previous research, this study adds a new dimension by providing evidence of the long-term effects of temperature fluctuations on economic growth. While previous studies have mainly focused on the impacts of extreme weather events on economic outcomes, this study provides a more comprehensive view by considering long-term temperature variations.
## 6 Conclusion
The research journey embarked upon in this study aimed to unravel the intricate relationship between long-term temperature variations and economic growth. Leveraging a robust dataset, advanced machine learning models, and time series forecasting, the study has shed light on this complex interplay. The implications of the findings are far-reaching, with potential to influence policy-making, economic planning, and climate change mitigation strategies.
### Key Takeaways
Significant Negative Correlation: The primary takeaway from this research is the significant negative correlation between temperature variations and economic growth. The Ordinary Least Squares (OLS) regression model revealed that for every unit increase in average temperature, there is an approximate decrease of 7.36 units in GDP. This finding is a stark reminder of the economic implications of climate change, underscoring the urgency for effective climate change mitigation strategies.

Figure 13: R-squared Scores of the Machine Learning Models
High Predictive Accuracy of Machine Learning Models: The machine learning models employed in this study, particularly the Random Forest Regressor, demonstrated a high degree of accuracy in predicting economic outcomes based on environmental factors. The Random Forest Regressor model achieved an impressive R-squared score of 0.976, explaining about 97.6% of the variance. This suggests that machine learning models can be potent tools for predicting economic outcomes, providing valuable insights for policy-making and economic planning.
Impact of Temperature Variations on Economic Growth: The study revealed a significant negative relationship between the average temperature and GDP. This suggests that increases in temperature could have a detrimental effect on economic growth, which could be due to various reasons such as the impact of climate change on agricultural productivity, labor productivity, and health outcomes.
### Implications of Findings
The findings of this study have profound implications. The demonstrated negative relationship between temperature variations and economic growth highlights the economic costs of climate change. This understanding is crucial for policy-makers, economists, and environmentalists as they strategize to mitigate the impacts of climate change.
The high predictive accuracy of the machine learning models used in this study suggests that these models can be effectively used to predict economic outcomes based on environmental factors. This could be a game-changer for economic planning and policy-making, enabling more accurate predictions and more informed decision-making.
It is important to acknowledge the limitations of this study. The dataset used does not account for other potentially relevant factors such as technological advancements, political stability, and policy changes, which could also influence economic growth. Furthermore, the study assumes a linear relationship between temperature variations and economic growth, which may not always hold true. Future research could delve into these aspects in more detail.
In conclusion, this research has significantly contributed to our understanding of the economic implications of temperature variations. It provides a foundation for future research in this area and offers valuable insights that can inform policymaking, economic planning, and climate change mitigation strategies. As we continue to grapple with the impacts of climate change, understanding its economic implications and developing effective strategies to mitigate these impacts is of paramount importance.
## 7 Future Work
While this research has provided valuable insights into the relationship between long-term temperature variations and economic
growth, it has also opened up new avenues for further exploration. Several questions remain unanswered, and the findings of this study suggest several directions for future research.
### Unanswered Questions
One of the key unanswered questions is how other environmental factors, beyond temperature variations, might impact economic growth. For instance, how do changes in precipitation patterns, extreme weather events, or sea-level rise affect economic outcomes? Understanding these relationships could provide a more holistic view of the environmental impacts on economic growth.
Another unanswered question pertains to the potential non-linear relationship between temperature variations and economic growth. While this study assumed a linear relationship, it is plausible that the relationship could be non-linear, with different temperature thresholds triggering different economic responses.
Finally, the impact of technological advancements, political stability, and policy changes on the relationship between temperature variations and economic growth remains largely unexplored in this study. These factors could potentially moderate or exacerbate the impacts of temperature variations on economic growth.
### Suggested Future Research
Based on the findings of this study, several directions for future research are suggested.
Firstly, future studies could incorporate other environmental factors into the analysis. This would provide a more comprehensive understanding of the environmental impacts on economic growth.
Secondly, future research could explore the potential non-linear relationship between temperature variations and economic growth. This could involve using more sophisticated statistical and machine learning models that can capture non-linear relationships.
Thirdly, future studies could incorporate other potentially relevant factors such as technological advancements, political stability, and policy changes into the analysis. This would provide a more nuanced understanding of the impacts of temperature variations on economic growth.
Lastly, future research could also explore the impacts of temperature variations on different sectors of the economy. This could provide insights into which sectors are most vulnerable to temperature variations and could inform sector-specific mitigation strategies.
In conclusion, while this research has made significant strides in understanding the relationship between long-term temperature variations and economic growth, there is still much to be explored. The unanswered questions and suggested future research directions provide a roadmap for future studies in this important area. As we continue to grapple with the impacts of climate change, further research in this area is not only valuable but also necessary.
|
2301.05267 | Polarimetric Reverberation Mapping in Medium-Band Filters | Earlier, we suggested the "reload" concept of the polarimetric reverberation
mapping of active galactic nuclei (AGN), proposed for the first time more than
10 years ago. We have successfully tested this approach of reverberation
mapping of the broad emission line on the galaxy Mrk 6. It was shown that such
an idea allows one to look at the AGN central parsec structure literally in a
new light. However, the method originally assumed the use of
spectropolarimetric observations, expensive in terms of telescope time, and
implemented on rare large telescopes. Currently, we propose an adaptation of
the polarimetric reverberation mapping of broad lines in medium-band filters
following the idea of the photometric reverberation mapping, when filters are
selected so that their bandwidth is oriented to the broad line and the
surrounding continuum near. In this paper, we present the progress status of
such monitoring conducted jointly at the Special astrophysical observatory and
Asiago Cima Ekar observatory (OAPd/INAF) with support from Rozhen National
Astronomical Observatory (NAO), some first results for the most frequently
observed AGNs Mrk 335, Mrk 509, and Mrk 817, and the discussion of the future
perspectives of the campaign. | Elena Shablovinskaya, Luka Ä. PopoviÄ, Roman Uklein, Eugene Malygin, Dragana IliÄ, Stefano Ciroi, Dmitry Oparin, Luca Crepaldi, Lyuba Slavcheva-Mihova, Boyko Mihov, Yanko Nikolov | 2023-01-12T19:32:27Z | http://arxiv.org/abs/2301.05267v1 | # Polarimetric Reverberation Mapping in Medium-Band Filters
###### Abstract
Earlier, we suggested the "reload" concept of the polarimetric reverberation mapping of active galactic nuclei (AGN), proposed for the first time more than 10 years ago. We have successfully tested this approach of reverberation mapping of the broad emission line on the galaxy Mrk 6. It was shown that such an idea allows one to look at the AGN central parsec structure literally in a new light. However, the method originally assumed the use of spectropolarimetric observations, expensive in terms of telescope time, and implemented on rare large telescopes. Currently, we propose an adaptation of the polarimetric reverberation mapping of broad lines in medium-band filters following the idea of the photometric reverberation mapping, when filters are selected so that their bandwidth is oriented to the broad line and the surrounding continuum near. In this paper, we present the progress status of such monitoring conducted jointly at the Special astrophysical observatory and Asiago Cima Ekar observatory (OAPd/INAF) with support from Rozhen National Astronomical Observatory (NAO), some first results for the most frequently observed AGNs Mrk 335, Mrk 509, and Mrk 817, and the discussion of the future perspectives of the campaign.
polarization; active galactic nuclei; reverberation mapping +
Footnote †: journal: _universe_
09 December 2022
09 January 2023
## 1 Introduction
According to the unified model of active galactic nuclei (AGN) [1; 2], the central parts of the central engine are surrounded by a gas-dust region, the so-called dusty torus. The presence of a dusty region is key to explaining the observed dichotomy of type 1 and type 2 AGN (see [3]). The characteristics of the dust surrounding an AGN, e.g., its location and chemical composition, determine the accretion properties of the AGN. Thanks to high-angular-resolution observations of local active galaxies in the infrared (IR) (e.g., [4; 5; 6; 7; 8]) and in molecular lines (e.g., [9; 10]), it has now become possible to obtain direct images of the dusty region, while in the optical range, this structure is still unresolvable. The development of observational capabilities made it possible to determine the geometry of the dusty region, which turned out to be different from the
toroidal (see [11] for a review), and also to move from the simple models of a clumpy [12] and smooth-distributed [13] dusty "torus" to more complex ones ([7; 14; 15] etc.).
The equatorial scattering observed in the optical range in many central regions of type 1 AGN [16; 17; 18] is also associated with the presence of a dusty region. This process is responsible for the specific polarization signatures along the emission line profiles: an \(S\)-shaped swing in the polarization angle and a dip in the polarization degree along the emission line profile [19], which cannot be explained by any other polarization mechanisms in AGN. Scattering by particles of the medium (mainly electrons) occurs in the plane of rotation of the AGN at a distance of \(R_{\rm sc}\), where the optical depth becomes greater than 1 [16; 18; 20]. Based on physical assumptions, \(R_{\rm sc}\) is consistent with the dust sublimation radius. In particular, [21; 22; 23] use IR measurements as \(R_{\rm sc}\) for estimations of supermassive black hole (SMBH) masses by spectropolarimetric data of angle swings in broad lines. However, there are no direct observations of the equatorial scattering region, and the regions observed in IR may be located farther from the AGN center at a greater optical depth than the optical radiation scattering region.
Since the scattering and emitting regions are spatially separated, polarimetric reverberation mapping can be used to determine their sizes. The simulation done by Goosmann et al. [24] showed that the equatorially scattered polarized emission of the AGN must lag behind the continuum emission. However, the first observational test for NGC 4151 showed \(R_{\rm sc}<R_{\rm BLR}\)[25]. Due to the use of broad-band filters covering mostly continuum, the polarization observed in NGC 4151 received contributions not only from equatorial scattering but also from sources of polarized continuum, such as the accretion disk or the base of the jet. Shablovinskaya et al. [26] revised the approach and proposed the idea of AGN reverberation mapping in polarized broad lines. When using a polarized flux in an emission line with a continuum flux subtracted from it, the influence of other polarization mechanisms is minimized, which allows us to measure the time delay that occurs precisely due to scattering by the equatorial region. The new approach was applied to the analysis of data from spectropolarimetric monitoring of the Seyfert galaxy Mrk 6. The detected delay between the polarized emission in the broad H\(\alpha\) line and the continuum at a wavelength of 5100 A was about 100 days, which is close to the theoretical value (\(\sim\)115 days, [27]), but about two times less than the expected delay according to the spatial estimate of the size of the dust region obtained by IR-interferometry (\(\sim\)214 days, [5]). This discrepancy requires a detailed analysis, but without statistical reinforcement based on the monitoring of other galaxies, it cannot clarify the physics of AGNs.
Spectropolarimetric monitoring, which initially underlay the broad line polarimetric reverberation method, requires not only a large amount of time on a large telescope but also the use of a device equipped with this mode, which is implemented only in a few observatories. Small telescopes are best suited for monitoring a large sample of objects. To adapt the technique for small instruments, it was necessary to switch from spectroscopy to direct images, as was done in the case of photometric reverberation mapping [28]. Similarly, AGN reverberation mapping in polarized broad lines at small telescopes can be implemented using image-polarimetry in mid-band filters oriented to the emission line and continuum nearby.
In this paper, we consider the adaptation of the method of polarimetric reverberation mapping of broad lines to observations with small telescopes and some preliminary results. The paper structure is as follows. Section 2 describes the observational technique used on 1- and 2-m class telescopes and the sample of the AGNs chosen for the first stage of the monitoring project. In Section 3, the first results for the three most frequently observed AGNs--Mrk 335, Mrk 509, and Mrk 817--are given, which are then discussed and compared with other estimations in Section 4. The perspectives of the observational approach are described in Section 5. The summary of the current project state is in Section 6.
## 2 Observational Technique and Sample
Since the beginning of 2020, we have been conducting polarimetric monitoring of a sample of type 1 AGN with equatorial scattering at the 1-m telescope Zeiss-1000 [30] of the Special Astrophysical Observatory of the Russian Academy of Sciences (SAO RAS), at the Copernico 1.82-m telescope of the Asiago-Cima Ekar Observatory, and at the 2-m telescope of the Rozhen National Astronomical Observatory (NAO).
On the Zeiss-1000 telescope of the SAO RAS, at different times we used two instruments, "StoP" and "MAGIC", with medium-band 250 A-wide filters from the SED-set\({}^{2}\) and 100 A-wide filters called Sy671 and Sy685. In 2020, observations were made with the photometer-polarimeter "StoP" [31]. Using a double Wollaston prism [32] as a polarization analyzer, the device made it possible to simultaneously register four images in the detector plane, corresponding to electric vector oscillations in the directions 0\({}^{\circ}\), 45\({}^{\circ}\), 90\({}^{\circ}\) and 135\({}^{\circ}\) and, consequently, three Stokes parameters \(I\), \(Q\), and \(U\) within one exposure. This method of observation makes it possible to minimize the effect of atmospheric depolarization and increase the accuracy of polarimetric observations (for more details, see [33]). In the polarimetry mode of the "StoP" device with the (2k \(\times\) 2k px) Andor iKon-L 936 CCD system [a detailed study of the detector is described in 34], in the 2 \(\times\) 2 binning mode the scale is 0\({}^{\prime\prime}\).42/pix, with the field of view (FoV) for each direction of polarization being 0\({}^{\prime}\).9 \(\times\) 6\({}^{\prime}\).1.
Since the end of 2020, we have switched to a new device--the "MAGIC" multimode focal reducer [34; 35]. Retaining all the advantages of the predecessor instrument in the technique of polarimetric observations (the linear polarization measurement accuracy is up to 0.1% for stellar sources up to 16 mag.), the new device has a larger FoV: a Wollaston quadrupole prism [36], used as the polarization analyzer, projects onto the CCD detector 4 images of the input mask, corresponding to the directions of oscillation of the electric vector 0\({}^{\circ}\), 90\({}^{\circ}\), 45\({}^{\circ}\) and 135\({}^{\circ}\), each with a size of 6\({}^{\prime}\).5 \(\times\) 6\({}^{\prime}\).5. This allows using several local standard stars in the field of an object in differential polarimetry. When using the focal reducer with the same Andor iKon-L 936 CCD system in the 1 \(\times\) 1 binning mode, the scale was 0\({}^{\prime\prime}\).45/pix.
At Asiago Cima Ekar observatory (OAPd/INAF), data were obtained using H\(\alpha\), 671 and 680 filters (70 A-, 100 A-, and 100 A-widths, respectively, and centred at 656, 671, and 680 nm, respectively) and a double Wollaston prism as a polarization analyzer, placed inside the Asiago Faint Object Spectrograph Camera\({}^{3}\) (AFOSC) of the 1.82-m Copernico telescope. The same technique allows simultaneously obtaining four images of the input mask in different directions of polarization in the FoV 0\({}^{\prime}\).8 \(\times\) 9\({}^{\prime}\).4 with Andor iKon-L 936 CCD system with a scale of 0\({}^{\prime\prime}\).51/pix in 2 \(\times\) 2 binning mode.
The data at Rozhen National Astronomical Observatory (NAO) were obtained at the 2-m Ritchey-Chretien-Coude (RCC) telescope using the two-channel Focal Reducer Rozhen (FoReRo-2) [37], equipped with a double Wollaston prism and a 2k \(\times\) 2k px Andor iKon-L CCD camera. The images were taken through IF642 and IF672 narrow-band filters (with 26 A and 33 A FWHMs, centered at 6416 A and 6719 A, respectively). The four images have a FoV of 50\({}^{\prime\prime}\).7 \(\times\) 50\({}^{\prime\prime}\).7 each and a scale of 0\({}^{\prime\prime}\).994/px in the 2 \(\times\) 2 binning mode used. However, due to the infrequent observations and the small FoV, which does not allow one to apply the reduction technique shown below, we have not used these data further for the time series analysis.
During each observational night, we obtained calibration images (flat frames for each filter, bias) to correct the data for additive and multiplicative errors. For each object, a series of images (at least 7 frames in each filter) was taken; the exposure times depended on the object brightness and weather conditions and usually ranged from 2 to 5 min. Each frame is processed independently, and the statistical evaluation is made by averaging the measured values with robust methods, giving an unbiased estimate. In this case, the polarimetric errors are the standard deviations of the robust distribution.
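As a minimal sketch of such robust per-frame averaging (the clipping routine and the sample values below are illustrative assumptions, not the instrument pipeline):

```python
import numpy as np

def robust_mean(values, n_sigma=3.0):
    """Average per-frame measurements robustly: clip outliers using the
    median and a MAD-based sigma estimate, then return the mean of the
    surviving values and its standard deviation."""
    values = np.asarray(values, dtype=float)
    med = np.median(values)
    # 1.4826 * MAD approximates sigma for a normal distribution
    sigma = 1.4826 * np.median(np.abs(values - med))
    keep = np.abs(values - med) <= n_sigma * sigma if sigma > 0 else np.ones(values.size, bool)
    clipped = values[keep]
    return clipped.mean(), clipped.std(ddof=1)

# e.g., Q (%) measured independently on each of 7 frames of one series
q_frames = [0.41, 0.38, 0.44, 0.40, 0.95, 0.39, 0.42]  # one obvious outlier
q_mean, q_err = robust_mean(q_frames)
print(f"Q = {q_mean:.2f} +/- {q_err:.2f} %")
```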
AGN observations were accompanied by observations of polarized standard stars and stars with zero polarization. Introducing the instrumental parameters \(K_{Q}\) and \(K_{U}\), which characterize the transmission of polarization channels, determined from observations of unpolarized standard stars, as well as \(I_{0}\), \(I_{45}\), \(I_{90}\), and \(I_{135}\) as the intensity at four polarization directions, we can measure three Stokes parameters:
\[I=I_{0}+I_{90}K_{Q}+I_{45}+I_{135}K_{U} \tag{1}\]
\[Q=\frac{I_{0}-I_{90}K_{Q}}{I_{0}+I_{90}K_{Q}} \tag{2}\]
\[U=\frac{I_{45}-I_{135}K_{U}}{I_{45}+I_{135}K_{U}} \tag{3}\]
Here and below, we use \(Q\) and \(U\) to denote the normalized Stokes parameters.
Then, the degree of linear polarization \(P\) and the polarization angle \(\varphi\) as:
\[P=\sqrt{Q^{2}+U^{2}} \tag{4}\]
\[\varphi=\frac{1}{2}\arctan\left(\frac{U}{Q}\right) \tag{5}\]
The observation technique and data reduction are described in more detail in [33]. Note here that the interstellar medium (ISM) polarization is corrected using only one local standard star in the field of the AGN, which may introduce a slight bias in the measured polarization parameters. Yet, this bias is expected to be small and stable within the monitoring campaign.
When the signal-to-noise ratio of the measured polarization in AGNs was small (\(\sigma_{P}/P\gtrsim 0.7\), where \(\sigma_{P}\) is the error of the polarization degree \(P\) measurement), the polarization degree was corrected for the polarization bias [38]:
\[P_{\text{unbiased}}=P\cdot\sqrt{1-(1.41\cdot\sigma_{P}/P)^{2}}. \tag{6}\]
However, \(>\)95% of the obtained data are of high signal-to-noise ratio (\(\sigma_{P}/P<0.7\)).
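A minimal sketch of Equations (1)-(6) (the channel intensities and transmission factors are illustrative placeholders; the quadrant-safe arctan2 and the modulo 180\({}^{\circ}\) wrap are small implementation choices beyond the plain arctan of Equation (5)):

```python
import numpy as np

def stokes_from_channels(i0, i45, i90, i135, k_q=1.0, k_u=1.0):
    """Total intensity and normalized Stokes Q, U from the four
    simultaneously registered polarization channels, applying the channel
    transmission factors K_Q and K_U (Equations 1-3)."""
    intensity = i0 + i90 * k_q + i45 + i135 * k_u
    q = (i0 - i90 * k_q) / (i0 + i90 * k_q)
    u = (i45 - i135 * k_u) / (i45 + i135 * k_u)
    return intensity, q, u

def polarization(q, u, sigma_p=None):
    """Polarization degree and angle (Equations 4-5); for noisy
    measurements (sigma_P/P >~ 0.7), debias P following Equation (6),
    clipping at zero where the square root argument turns negative."""
    p = np.hypot(q, u)
    phi = (0.5 * np.degrees(np.arctan2(u, q))) % 180.0
    if sigma_p is not None and sigma_p / p >= 0.7:
        arg = 1.0 - (1.41 * sigma_p / p) ** 2
        p = p * np.sqrt(arg) if arg > 0 else 0.0
    return p, phi

# illustrative channel intensities (ADU) and transmission factors
_, q, u = stokes_from_channels(1020.0, 998.0, 1005.0, 1012.0, k_q=1.01, k_u=0.99)
p, phi = polarization(q, u, sigma_p=0.001)
print(f"P = {100 * p:.2f} %, phi = {phi:.1f} deg")
```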
Over the past two years, we have concentrated on observations of the 6 brightest (12-15 mag) objects in the sample (see Table 1) with equatorial scattering, confirmed by spectropolarimetric observations at the 6-m BTA SAO RAS. All type 1 AGNs are observed sequentially in the polarimetry mode in several mid-band filters, the passbands of which are spectrally oriented toward the emission of the broad H\(\alpha\) line and the continuum near the line. Note that in all cases, it is the broad H\(\alpha\) line that we observe since the equatorial scattering effect is most detectable there. The selection of filters for three objects from the sample Mrk 335, Mrk 509, and Mrk 817 is shown in Figures 1 and 2. Depending on the available filter sets one filter or the combination of two filters was used for obtaining the emission line flux. Observations were carried out approximately once a month, depending on the weather conditions and according to the allocated telescope time for the implementation of programs.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Object** & **Filters** & _Q, \%_ & _U, \%_ & _P, \%_ & \(\boldsymbol{\varphi}\), \({}^{\circ}\) \\ \hline
Mrk 335 & SED675 & \(0.28\) & \(-0.16\) & \(0.32\) & \(165.1\) \\
 & SED650 & \(0.55\) & \(-0.51\) & \(0.75\) & \(158.6\) \\
 & 680 & \(0.41\) & \(-0.14\) & \(0.43\) & \(170.6\) \\
 & 671 & \(0.12\) & \(-0.13\) & \(0.18\) & \(156.4\) \\
 & H\(\alpha\) & \(0.40\) & \(-0.34\) & \(0.52\) & \(159.8\) \\ \hline
Mrk 817 & SED625 & \(-0.69\) & \(-0.43\) & \(0.81\) & \(106.0\) \\
 & Sy685 & \(-0.01\) & \(-0.62\) & \(0.62\) & \(134.5\) \\
 & Sy671 & \(-0.82\) & \(0.36\) & \(0.89\) & \(78.1\) \\ \hline
Mrk 6 & SED675 & \(0.16\) & \(-0.66\) & \(0.68\) & \(141.8\) \\
 & SED650 & \(0.44\) & \(-0.62\) & \(0.76\) & \(152.7\) \\
 & SED625 & \(0.33\) & \(-0.67\) & \(0.75\) & \(148.1\) \\ \hline
Mrk 79 & SED675 & \(-0.42\) & \(0.02\) & \(0.42\) & \(88.6\) \\
 & SED650 & \(-0.40\) & \(0.04\) & \(0.40\) & \(87.1\) \\ \hline
NGC 4151 & SED650 & \(-0.13\) & \(0.18\) & \(0.22\) & \(62.9\) \\
 & SED600 & \(-0.18\) & \(0.24\) & \(0.30\) & \(63.4\) \\ \hline
Mrk 509 & SED675 & \(0.74\) & \(-0.63\) & \(0.97\) & \(159.8\) \\
 & SED650 & \(0.63\) & \(-0.60\) & \(0.87\) & \(158.2\) \\ \hline \hline
\end{tabular}
\end{table}
Table 1: AGN sample and calculated expected values of the polarization parameters (_Q_, _U_, _P_ and \(\varphi\)) in the filters used in the observations, based on the data from [22]. It is important to note that the earlier published data for the objects Mrk 335 and Mrk 79 have been corrected after more thorough processing.
Figure 1: The spectropolarimetric data for Mrk 335 from [22] taken on 09 November 2013 with the overplotted selected filters. **Left**: SED675 (dark green) and SED650 (light green) transmission curves are given. In panels 2–5, the data of image-polarimetry obtained with Zeiss-1000/MAGIC on 01 November 2022 are overplotted with black dots. **Right**: H\(\alpha\) (yellow), 671 (red), and 680 (purple) transmission curves are given. In panels 2–5, the data of image-polarimetry obtained with AFOSC on 31 October 2022 are overplotted with black dots. In both figures: flux in ADU (1), the Stokes parameters \(Q\) (2) and \(U\) (3) in %, the polarization degree \(P\) in % (4), the polarization angle \(\varphi\) in degrees (5).
Figure 2: The spectropolarimetric data for Mrk 509 (left) and Mrk 817 (right) from [22], taken on 21 October 2014 and 29 May 2014, respectively, with the overplotted selected filters. **Left:** SED675 (dark green) and SED650 (light green) transmission curves are given for Mrk 509. In panels 2-5, the data of image-polarimetry obtained with Zeiss-1000/MAGIC on 29 August 2021 are overplotted with black dots. **Right:** Sy685 (dark green), Sy671 (medium green) and SED625 (light green) transmission curves are given for Mrk 817. In panels 2-5, the data of image-polarimetry obtained with Zeiss-1000/MAGIC on 28 August 2021 are overplotted with black dots. In both figures: flux in ADU (1), the Stokes parameters \(Q\) (2) and \(U\) (3) in %, the polarization degree \(P\) in % (4), the polarization angle \(\varphi\) in degrees (5).

We estimated the expected values of the polarization effect due to equatorial scattering in the observations of all studied AGNs in medium-band filters for a broad line and continuum, based on the previously obtained spectropolarimetric data [22]. Since the transmittance of medium-band filters is measured in the laboratory, we denote it as a filter's response function \(filter(\nu)\) and multiply it by the spectral distribution of the polarization parameters \(\xi_{\nu}\) [here we used \(Q(\nu)\) and \(U(\nu)\) in per cent] over the frequencies of the investigated AGNs, to determine its expected values \(X\) (in terms of \(Q\) and \(U\)) in specific filters:
\[X=\frac{\int\xi_{\nu}\cdot filter(\nu)\cdot d\nu}{\int filter(\nu)\cdot d\nu} \tag{7}\]
The estimated values of \(Q\) and \(U\) are given in Table 1. The values of \(P\) and \(\varphi\) are calculated using Equations (4) and (5). It is interesting to note that when the observations are carried out in two mid-band filters, one oriented to the continuum and the second positioned so that the flux from the broad emission H\(\alpha\) line falls into it, the difference in the normalized Stokes parameters between the continuum and the line is small and does not exceed \(\sim\)0.3%, which is comparable to the linear polarization measurement error for AGN (0.1\(-\)0.2% in good weather conditions). The difference in the degree of polarization in the two filters is \(\sim\)0.1\(-\)0.4%, and the difference in the polarization angle is no more than 10 degrees. Thus, the swing seen in the spectropolarimetric observations cannot be resolved by photometric polarimetry in filters, but the band-averaged values can still indicate a difference between the emission line and continuum polarization parameters. Here, the configuration for the Mrk 817 object deserves special attention, where a broad emission line is observed in two filters oriented to the "blue" and "red" wings of its profile. For Mrk 817 (the spectrum of the object with overplotted transmission curves of the filters used is shown in Figure 2 on the right), the difference between the normalized Stokes parameters for the continuum and the line wings reaches \(\sim\)0.7%, and the deviation of the polarization angle from its value in the continuum is \(\pm\) 28\({}^{\circ}\). It should be noted here that the Sy685 filter is also oriented to the atmospheric absorption B-band \(\lambda\) = 6860-6917 A (and Table 1 shows calculations without correction for this band). Nevertheless, the Mrk 817 case most clearly shows that, using medium-band filters oriented to different wings of the H\(\alpha\) broad line profile, we can trace the characteristic changes in the polarization angle profile, whose wavelength dependence acquires a characteristic \(S\)-shaped form under equatorial scattering on the gas-dust torus.
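For illustration, Equation (7) amounts to a filter-weighted average over frequency; a minimal numerical sketch (the wavelength grid, mock \(Q(\lambda)\) spectrum, and boxcar filter below are placeholders, not the laboratory curves or the data of [22]):

```python
import numpy as np

def band_average(nu, xi_nu, filter_nu):
    """Equation (7): average a spectral polarization parameter (Q or U, %)
    over frequency, weighted by the filter response."""
    num = np.trapz(xi_nu * filter_nu, nu)
    den = np.trapz(filter_nu, nu)
    return num / den

# illustrative grids: wavelength in Angstroms converted to frequency
wave = np.linspace(6200.0, 7000.0, 801)                          # Angstrom
nu = 3e18 / wave                                                 # Hz
q_spec = 0.3 + 0.2 * np.exp(-0.5 * ((wave - 6720) / 40) ** 2)    # mock Q(lambda), %
filt = np.where(np.abs(wave - 6750) < 125, 1.0, 0.0)             # mock 250 A-wide filter

# sort by frequency so the integration axis is monotonic
order = np.argsort(nu)
q_band = band_average(nu[order], q_spec[order], filt[order])
print(f"expected band-averaged Q = {q_band:.2f} %")
```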
## 3 First Results
We performed polarimetric observations of the AGN sample on the 1-m and 1.82-m telescopes in 2020-2022. The weather and the time allocated within the schedules did not allow us to observe the objects with a high cadence, and the total amount of data on the light curves does not exceed 25 epochs, even in the case of the most regularly observed objects. Such a meagre amount of data does not allow us to get a reliable result yet. In this section, we consider the current status of the monitoring of three objects--Mrk 335, Mrk 509, and Mrk 817. The monitoring period, the number of epochs, the mean values of the nonpolarized continuum and broad line fluxes, and the mean polarization degree of the broad line are given in Table 2. There, we also provide the measure of variability calculated using Equation (3) from [39]. The full observational data are given in Appendix A.
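For illustration, assuming the variability measure is the commonly used fractional variability amplitude, \(F_{\rm var}=\sqrt{S^{2}-\langle\sigma^{2}\rangle}/\langle F\rangle\) (a sketch in that spirit, not a verbatim reproduction of Equation (3) of [39]):

```python
import numpy as np

def f_var(flux, flux_err):
    """Fractional variability: excess of the sample variance over the mean
    squared measurement error, normalized by the mean flux."""
    flux = np.asarray(flux, float)
    err = np.asarray(flux_err, float)
    excess = np.var(flux, ddof=1) - np.mean(err ** 2)
    return np.sqrt(max(excess, 0.0)) / np.mean(flux)

# illustrative continuum light curve, mJy
flux = [88.1, 95.3, 81.0, 99.2, 90.4, 87.6]
err = [1.5, 1.4, 1.6, 1.5, 1.5, 1.4]
print(f"F_var = {f_var(flux, err):.3f}")
```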
### Mrk 335
Mrk 335 (\(z=0.025\), RA 00 06 19.5 Dec +20 12 10.6 J2000) is a well-known narrow-line Sy 1 galaxy. The signs of the equatorial scattering in broad lines were observed in Mrk 335 spectropolarimetric data for the first time in [40]. The violent polarization angle swing along the H\(\alpha\) line profile was confirmed in [22]; here, we present the same data obtained at the 6-m BTA telescope of SAO RAS in Figure 1. As can be seen, the polarization angle variations are about \(\pm\) 50\({}^{\circ}\), yet the polarization degree changes relative to the continuum are minor. In Figure 1, left, the data of image-polarimetry obtained with Zeiss-1000/MAGIC on 01 November 2022 are overplotted; in Figure 1, right, the data of image-polarimetry obtained with AFOSC on 31 October 2022 at the Asiago observatory are given. Note here that in all the cases, only slight differences of the polarization parameters between the continuum and broad line bands are detected.
In total, 10 epochs of Mrk 335 polarimetric data were obtained with MAGIC and 13 epochs with AFOSC. Unfortunately, due to the different brightness of the field stars in the filters used in MAGIC and AFOSC, we were unable to use the same local standard. For the MAGIC data reduction, we used the reference star [GKG2008] 5 nearby, at a distance of \(\sim\)1\({}^{\prime}\).3 from the source, and the star TYC 1184-771-1, at a distance of \(\sim\)2\({}^{\prime}\).5, for the AFOSC data. The light curves of the polarized and integral broad line fluxes and of the continuum flux are shown in Figure 3. For the AFOSC data, the broad line flux is the sum of fluxes measured in two filters (671 and 680). In all cases, the fluxes are given in mJy. One can see that, unlike the broad line light curves, the continuum flux light curves seem to behave in a different way in the data sets obtained with MAGIC and AFOSC. For this reason, we did not merge the light curves, so as not to introduce systematic errors into the correlation analysis.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline
 & **Period** & \(N\) & \(I_{\text{cont}}\), mJy & \(F_{\text{var}}^{\text{cont}}\) & \(I_{\text{line}}\), mJy & \(F_{\text{var}}^{\text{line}}\) & \(P_{\text{line}}\), \% \\
**(1)** & **(2)** & **(3)** & **(4)** & **(5)** & **(6)** & **(7)** & **(8)** \\ \hline
Mrk 335 & 9 September 2020–1 November 2022 & 23 & 90.2 \(\pm\) 7.9 & 0.101 & 140.1 \(\pm\) 9.9 & 0.085 & 0.9 \(\pm\) 0.3 \\
Mrk 509 & 26 May 2020–29 August 2021 & 11 & 12.2 \(\pm\) 0.5 & 0.018 & 21.0 \(\pm\) 2.2 & 0.092 & 1.9 \(\pm\) 0.8 \\
Mrk 817 & 14 December 2020–28 August 2021 & 8 & 23.7 \(\pm\) 1.1 & 0.036 & 36.9 \(\pm\) 1.3 & 0.044 & 1.9 \(\pm\) 0.4 \\
 & & & & & 23.3 \(\pm\) 1.2 & 0.072 & 2.1 \(\pm\) 0.8 \\ \hline
\end{tabular}
\end{table}
Table 2: Mean values of the nonpolarized continuum and broad line fluxes and the mean polarization degree of the broad line obtained for Mrk 335, Mrk 509, and Mrk 817 during the observations: (1) object name, (2) monitoring period, (3) the number of epochs, (4) mean continuum flux in mJy, (5) variability measure of the continuum flux, (6) mean broad H\(\alpha\) line flux with the continuum subtracted, in mJy, (7) variability measure of the broad line flux, (8) mean polarization degree of the broad line in %. For Mrk 817, the upper values of \(I_{\text{line}}\), \(F_{\text{var}}^{\text{line}}\) and \(P_{\text{line}}\) are for the “blue” line profile wing and the bottom values are for the “red” wing.
Additionally, we have estimated the polarization of Mrk 335 using observations at the Rozhen observatory taken on 15 August 2021. For IF642, oriented to the continuum, \(P=1.25\pm 0.26\%\) and \(\varphi=78^{\circ}.5\pm 6^{\circ}.0\); for IF672, oriented to the H\(\alpha\) emission line, \(P=1.13\pm 0.13\%\) and \(\varphi=89^{\circ}.0\pm 3^{\circ}.3\). Here, one can detect a slight difference in the absolute polarization level, particularly seen in the polarization angle. However, due to the lack of local standard stars in the FoV of the source, we are not able to correctly take into account the atmospheric depolarization, ISM effects, etc. Moreover, the flux calibration is also absent, which makes it complicated to exclude the polarized continuum flux from the polarized line flux in a proper way. Thus, these data illustrate the possibilities of medium-band polarimetry at a 2-m telescope, yet they are not used for the further analysis.
To determine the time delay between the light curves, we performed a cross-correlation analysis using two approaches. As the main analysis tool, we used the JAVELIN code [41; 42; 43], widely used in AGN reverberation mapping campaigns. In Figure 4 the results of the JAVELIN analysis of the time delay of the polarized broad line emission \(I_{\rm line}^{p}\) relative to the variable continuum flux \(I_{\rm cont}\) for AFOSC (red histogram) and MAGIC (green histogram) data are presented. Additionally, we conducted a joint analysis of both light curves, combining them so that one of the light curves is shifted in time relative to the other by more than the duration of the entire monitoring period. The results of the analysis of this synthetic curve are shown in Figure 4 in black and provide only additional information. Similarly, in Figure 5 the time delay of nonpolarized broad line emission \(I_{\rm line}\) relative to the continuum \(I_{\rm cont}\) is analyzed. Also, to estimate the delay between the light curves, we used the code ZDCF [44; 45]. Separately for the MAGIC and AFOSC light curves, cross-correlation analysis using ZDCF did not show results due to large uncertainties caused by a small number of points. The results of estimating the delay between the combined synthetic curves are given in Figures 4 and 5 in grey.
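For illustration only, the sketch below implements a simple interpolated cross-correlation lag search; it is not the JAVELIN or ZDCF code used for the analysis, and the light curves are synthetic placeholders (the "line" curve stands in for the continuum-subtracted polarized line flux):

```python
import numpy as np

def iccf_lag(t_cont, f_cont, t_line, f_line, lags):
    """Shift the line light curve by each trial lag, interpolate the
    continuum onto the shifted epochs, and return the lag maximizing the
    Pearson correlation coefficient."""
    r = []
    for lag in lags:
        t_shift = t_line - lag
        ok = (t_shift >= t_cont.min()) & (t_shift <= t_cont.max())
        if ok.sum() < 3:
            r.append(-np.inf)
            continue
        c = np.interp(t_shift[ok], t_cont, f_cont)
        r.append(np.corrcoef(c, f_line[ok])[0, 1])
    r = np.asarray(r)
    return lags[np.argmax(r)], r

# illustrative epochs (days) and fluxes (mJy): an echo delayed by 150 days
t_cont = np.arange(0.0, 700.0, 30.0)
f_cont = 90 + 8 * np.sin(t_cont / 120.0)
t_line = t_cont.copy()
f_line = 90 + 8 * np.sin((t_line - 150.0) / 120.0)

lags = np.arange(0.0, 400.0, 5.0)
best, r = iccf_lag(t_cont, f_cont, t_line, f_line, lags)
print(f"best-fit lag ~ {best:.0f} days")
```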
Figure 3: Mrk 335 light curves. From top to bottom: polarized broad line flux \(I_{\rm line}^{p}\), integral broad line flux \(I_{\rm line}\) with subtracted continuum and integral continuum flux \(I_{\rm cont}\). Fluxes are given in mJy. Red circles are used to denote the AFOSC data, and green squares are for the MAGIC data. For AFOSC, the broad line flux is the sum of fluxes measured in two filters (671 and 680).
Despite the number of epochs comparable to what we previously obtained for Mrk 6 in spectropolarimetric mode [26], the analysis of the delay between \(I_{\rm line}^{p}\) and \(I_{\rm cont}\) does not show an unambiguous peak for Mrk 335. In Figure 4, it can be seen that the histograms of the time delay estimates for the AFOSC and MAGIC data are close, at about 180 and 150 days, respectively, but the peak of the time-lag distribution has an error of 25-40%. Synthetic data show two peaks at 224 \(\pm\) 24 days and 157 \(\pm\) 18 days, where the given errors are formally calculated as the standard deviations of the given Gaussian-like peaks. Here, the larger peak is definitely an artifact, since it is repeated in the analysis of the photometric data (Figure 5). The second peak is \(\sim\)4 times weaker than the first one, but its position roughly coincides with other estimates. Thus, we see the tendency of the \(I_{\rm line}^{p}\) light curves to show a delay of about 150-180 days. However, such a time lag is close to half a year, which characterizes the typical length of the observational periods of the source, and is shorter than the gaps between these periods. This might indicate that the measured value is an analysis artefact. Additionally, we performed data analysis of the time delay of \(I_{\rm line}\) relative to \(I_{\rm cont}\) to estimate \(R_{\rm BLR}\) if possible. The JAVELIN histograms for the AFOSC and MAGIC data, as well as the analysis of the synthetic light curves by the ZDCF method, indicate an estimated delay between 73 \(\pm\) 18 and 87 \(\pm\) 17 days. The AFOSC data separately demonstrate a peak at the value of 27 \(\pm\) 17 days, which is close to the cadence of observations (about 1 time per month) and can be an artifact of the analysis.
Figure 4: Time delay analysis of the polarized broad line emission \(I_{\rm line}^{p}\) relative to the variable continuum flux \(I_{\rm cont}\) for Mrk 335. Histograms show the results of the JAVELIN analysis based on AFOSC data (red histogram), MAGIC data (green histogram), and combined time-shifted data (black histogram). On the left \(y\)-axis, the frequency of occurrence of parameter value sets during MCMC sampling is shown. 5000 sets of parameter values were used in the simulation. The grey curve shows the results of the ZDCF analysis of the combined time-shifted data (values are given on the \(y\)-axis on the right).
### Mrk 509
As in the case of Mrk 335, Mrk 509 (\(z=0.035\), RA 20 44 09.8 Dec \(-10\) 43 24.7 J2000) was observed in spectropolarimetric mode first in [40] and later in [22]. The latter data were used in Figure 2 (left). As can be seen in the figure, we selected for observations two medium-band (FWHM = 250 A) filters oriented to the broad line and the continuum nearby. The overplotted image-polarimetry data were obtained with Zeiss-1000/MAGIC on 29 August 2021. As in the case of Mrk 335, Mrk 509 shows only a slight difference in the polarization parameters between the continuum and broad line bands, more detectable in the polarization angle variations.
Since the object can be observed for only four months a year, in 2020-2021 we obtained only 11 epochs using Zeiss-1000/MAGIC. To reduce the data, we used the reference star TYC 5760-1396-1 nearby, at a distance of \(\sim\)1\({}^{\prime}\).5 from the source. The light curves are shown in Figure 6. Variability is observed in all measured fluxes \(I_{\rm line}^{p}\), \(I_{\rm line}\), and \(I_{\rm cont}\), and the pattern of the \(I_{\rm line}^{p}\) variations differs from the other light curves. The curves show a large gap between the observational epochs, associated with the inability to observe the object evenly throughout the year.
To estimate the time delay in the broad polarized line, we applied the JAVELIN code to the received data. It turned out that, despite the small number of epochs, the analysis revealed an unambiguous peak at \(114^{+12.7}_{-8.8}\) days (Figure 7). We also applied the JAVELIN analysis to the data taken only in 2020, excluding the epochs from 2021, and obtained the same time delay. This estimate corresponds to a dusty region size of \(\sim\)0.1 pc. Note, however, that the ZDCF analysis did not show a significant correlation. Additionally, we performed an analysis of the 2020 data for the time delay of \(I_{\rm line}\) relative to \(I_{\rm cont}\) to estimate \(R_{\rm BLR}\), following the Mrk 335 case. The JAVELIN histogram shown in Figure 8 demonstrates two clear peaks, at \(39\pm 5\) days and at \(85\pm 11\) days (approximately twice \(39\pm 5\) days). Taking into account that the median cadence of observations in 2020 is \(\sim\)16 days, we cannot unambiguously draw a conclusion about the origin of the double-peaked correlation histogram.
Figure 5: Time delay analysis of the broad line emission \(I_{\rm line}\) relative to the variable continuum flux \(I_{\rm cont}\) for Mrk 335. The coloured histograms and labels are the same as in Figure 4.
Figure 6: Mrk 509 light curves obtained with MAGIC. From top to bottom: polarized broad line flux \(I_{\rm line}^{p}\), integral broad line flux \(I_{\rm line}\) with subtracted continuum and integral continuum flux \(I_{\rm cont}\). Fluxes are given in mJy.
Figure 7: Time delay analysis of the polarized broad line emission \(I_{\rm line}^{p}\) relative to the variable continuum flux \(I_{\rm cont}\) for Mrk 509. The histogram shows the results of the JAVELIN analysis based on the MAGIC data. On the \(y\)-axis, the frequency of occurrence of parameter value sets during MCMC sampling is shown. 10,000 sets of parameter values were used in the simulation. The time-delay estimation equal to \(114^{+12.7}_{-8.8}\) days is shown with the yellow vertical line.
### Mrk 817
Mrk 817 (\(z=0.031\), RA 14 36 22.1 +58 47 39.4 J2000) is a Sy 1.2 AGN, where the equatorial scattering was discovered in [22]. The data published in that work are presented in Figure 2, right, where the transmission curves of the two 100 A-wide filters oriented to the "red" and "blue" broad line wings and selected for the monitoring are also shown. A broader (FWHM = 250 A) filter was chosen for the continuum polarimetry. The spectropolarimetric data demonstrate small changes in the polarization degree \(P\) and a violent switch of the polarization angle \(\varphi\) along the line profile. In Figure 2, right, the data of Mrk 817 image-polarimetry obtained with MAGIC on 28 August 2021 are also plotted. Comparing the values obtained in the two filters oriented to different wings of the line, one can see a significant difference in the polarization parameters. This indicates that the use of a similar filter configuration in image-polarimetry mode may be an alternative approach for identifying signs of equatorial scattering using small-class telescopes, or large instruments for observations of faint AGNs where the spectropolarimetric data show too low a signal-to-noise ratio. Note here that in Figure 2, for the Mrk 817 data, there are visible differences in the polarization parameters between the observations of 2014 and 2021, especially in the Sy685 band. In this case, it is important to obtain newer spectropolarimetric data in order to confirm whether such a difference is the result of the influence of external factors (e.g., the variability of the atmospheric B-band 6860-6917 A) or internal changes in the spectrum of Mrk 817 in polarized light.
Figure 8: Time delay analysis of the broad line emission \(I_{\rm line}\) relative to the variable continuum flux \(I_{\rm cont}\) for Mrk 509. The histogram shows the results of the JAVELIN analysis based on the MAGIC data. On the \(y\)-axis, the frequency of occurrence of parameter value sets during MCMC sampling is shown. 10,000 sets of parameter values were used in the simulation.

During the Mrk 817 monitoring, 8 epochs of observations were obtained using the MAGIC device in the period from December 2020 to August 2021. To reduce the data, we used a reference star of comparable brightness in the field of the object (RA 14 36 06.7 +58 50 38.4 J2000) at a distance of \(\sim\)3\({}^{\prime}\).6 from the source. The data were obtained relatively evenly, once or twice every two months. Unfortunately, during the monitoring period, Mrk 817 did not show significant variability either in the continuum or in the broad line. The light curves of Mrk 817 are given in Figure 9. It can be seen that \(I_{\rm line}\) does not show differences between the "red" and "blue" wings in integral light (see the middle panel in Figure 9). However, in polarized light, for several epochs of observations, both the difference in \(I_{\rm line}^{p}\) between the filters (e.g., in the epoch of 18 December 2020) and a violent change of \(I_{\rm line}^{p}\) between epochs (e.g., 02 July 2021 and 07 July 2021, see the upper panel in Figure 9) are visible. The meagre amount of data with no significant variability did not allow us to obtain any result in the cross-correlation analysis.
## 4 Discussion
Since 2020, due to the lack of stable weather meeting our requirements at the two observatories involved in the project (SAO RAS and Asiago), we have not reached the desired cadence when observing the sample of objects, and the total number of epochs obtained has reached 23 for only one object. Despite this, we managed to obtain some first results for three sample objects, Mrk 335, Mrk 509, and Mrk 817, presented in this paper. As these AGNs are studied in great detail in various multi-wavelength campaigns, here we discuss our results in comparison with the measurements given in the literature, to investigate whether the provided estimations are reliable.
Figure 9: Mrk 817 light curves obtained with MAGIC. From top to bottom: polarized broad line flux \(I_{\rm line}^{p}\) and integral broad line flux \(I_{\rm line}\) with subtracted continuum and integral continuum flux \(I_{\rm cont}\). Red and blue dots in the \(I_{\rm line}^{p}\) and \(I_{\rm line}\) light curves denote the fluxes of the "red" and "blue" broad line profile wings, respectively. Fluxes are given in mJy.

Mrk 335 was intensively studied in numerous reverberation mapping campaigns. The BLR size \(R_{\rm BLR}\) was measured as \(16.4^{+5.2}_{-3.2}\) [46; 47], \(17.3^{+4.9}_{-4.3}\) [48], \(15.7^{+3.4}_{-4.0}\) [49], \(14.3\pm 0.7\) [50; 51], \(10.6^{+1.7}_{-2.9}\) [52], and \(17.0^{+2.5}_{-3.2}\) light days [53] in the H\(\beta\) broad line and \(20.5^{+2.0}_{-2.8}\) light days [28] in the H\(\alpha\) broad line. The given estimations of the BLR size are \(\sim\)10 times larger than the accretion disk size of \(\sim\)1 light day [54]. According to the scale relation from [22], \(R_{\rm sc}\simeq 5.1R_{\rm BLR}\), so \(R_{\rm sc}\) for Mrk 335 could be estimated as \(\sim\)70 light days. IR reverberation mapping in the \(K\) band provided \(R_{\rm IR}\approx 166\) light days [55], which is two times greater than the value obtained using the scale relation. Lyu et al. [56] found the size of the dusty region in the _WISE W1_ band \(R_{W1}\approx 1300\) light days. According to the relation of the dusty region sizes in different bands, \(R_{K}\):\(R_{W1}=0.6\):1, given in the same paper, \(R_{K}\approx 770\) light days, which is much larger than the other estimations and seems not to be reliable. Thus, the value of the polarized emission line time lag is predicted to be in the \(\sim\)70-170 day range. Throughout our polarimetric reverberation mapping monitoring, the polarized H\(\alpha\) emission showed a delay of about 150-180 days, which is in good agreement with the predictions of the size of the dusty region for Mrk 335. However, we prefer to refrain from claiming such a result, primarily because such an estimation may be caused by a correlation artefact, since it is close to half a year. Moreover, as was mentioned above, this estimate is longer than the length of the observation periods of the source, but shorter than the gaps between them. In addition, the maxima of the cross-correlation functions have large uncertainties of 25-40%, which makes them unreliable. Moreover, the results of the analysis of the corresponding photometric data do not coincide with the known estimates of \(R_{\rm BLR}\) for Mrk 335. These facts give reasons to doubt the sustainability of the results we have obtained so far.
The Sy1 galaxy Mrk 509 was also studied in multiple campaigns covering the entire electromagnetic spectrum (see, e.g., [57] for a review). The BLR size \(R_{\rm BLR}\) was measured as \(76.7^{+6.3}_{-6.0}\) [46; 47] and \(79.6^{+6.1}_{-5.4}\) light days [49] in the H\(\beta\) broad line. From our estimations, relying on the maximum value from the double-peaked histogram in Figure 8, \(R_{\rm BLR}=85\pm 11\) light days, which coincides with the measurements given in the literature. However, we still cannot explain the existence of the second estimation of the time delay being two times shorter. The estimations of \(R_{\rm BLR}\) predict a very extended BLR, which is much larger (\(\sim\)40 times) than the accretion disk size of \(\sim\)2 light days [58; 59], on the one hand, and only \(\sim\)2 times less than the IR reverberation mapping estimation in the \(K\) band, \(R_{\rm IRRM}\approx 131\) light days [55], on the other. GRAVITY Collaboration et al. [60] resolved the hot gas structure in Mrk 509 with VLTI/GRAVITY near-infrared interferometry and measured the size of the dusty region \(R_{\rm IRIF}\approx 296\pm 30\) light days. Using the given \(R_{\rm BLR}\) measurements and the scale relation from [22], \(R_{\rm sc}\approx 408\) light days. While \(R_{\rm IRIF}>R_{\rm IRRM}\) is usually predicted (see [26] for references), \(R_{\rm sc}\) should be less than the dusty structures in AGN. Apparently, such controversial measurements raise issues about the dusty region size. During our monitoring, we estimated \(R_{\rm sc}\approx 114^{+12.7}_{-8.8}\) light days, or \(\sim\)0.1 pc. A comparison with the near-IR torus interferometric data [60] shows that the equatorial scattering region is 2 times smaller than the radius of the dusty structure in the IR band, which is similar to what we previously obtained for Mrk 6 [26]. However, our estimate of \(R_{\rm sc}\), although consistent with the estimates of the size of the dusty region obtained by two independent methods, is only \(\sim\)1.3-1.6\(R_{\rm BLR}\). In general, all the other estimates of the sizes of the structures inside the central Mrk 509 parsec, obtained independently, indicate that \(R_{\rm BLR}\) is most likely overestimated, for example, due to the presence of outflows driven by the AGN observed for Mrk 509 (e.g., [61]). This reveals the necessity of more intensive homogeneous monitoring in polarized and integral light.
Mrk 817 is in the focus of several vast monitoring campaigns, e.g., AGN STORM 2 [62]. The BLR size \(R_{\rm BLR}\) was measured as \(15.0^{+4.2}_{-3.4}\) [46; 47], \(21.8^{+2.4}_{-3.0}\) [49], and \(14.0^{+3.4}_{-3.5}\) light days [63] in the H\(\beta\) broad line, and \(28.3^{+2.1}_{-1.8}\), \(26.8^{+2.8}_{-2.5}\), and \(51.7^{+14.9}_{-1.3}\) light days simultaneously in the H\(\beta\), H\(\gamma\) and FeII lines, respectively [64]. The given estimations of the BLR size are \(\sim\)3-6 times larger than the accretion disk size of \(\sim\)4.5 light days [54]. Mrk 817 was observed within IR RM monitoring, and the dusty region size was estimated as \(R_{\rm IRRM}=89\pm 9\) light days [55]. This coincides with the expected estimate of \(R_{\rm sc}\) using the scale relation, \(R_{\rm sc}\approx 95\pm 15\) light days, which is predicted for our measurements. Due to the insignificant variability of the polarized and non-polarized fluxes, no result was obtained for Mrk 817, despite the monitoring period being two times longer than the expected time lag. However, the variability of the blue and red wings of the polarized broad H\(\alpha\) line is intriguing enough to go on with the observations more intensively.
## 5 Future Perspectives
The new approach of polarimetric reverberation mapping in broad lines looks promising, since it can provide additional information about the size of structures in AGN and, therefore, help to better understand the nature of the processes associated with accretion onto SMBHs. As we have shown in the given paper, the technique in medium-band filters, together with one-shot differential polarimetry, is suitable for small telescopes, yet needs a careful adaptation. It is important to note that the polarimetry of faint polarized sources, in contrast to, e.g., differential photometry, puts tight restrictions on the permissible weather conditions. Even weak cirruses or a haze can significantly depolarize the radiation of observed objects, and the variability of atmospheric transparency between exposures significantly degrades the quality of the data. Here, we consider several issues related to the adaptation of the observations in the framework of the monitoring.
1. Filter selection. At the beginning of the monitoring observations, we selected filters from our existing sets (see Section 2), relying on our experience of observations in the framework of photometric reverberation mapping of AGN [65; 66]. Because of this, we mainly aimed to use pairs of 250 A-width filters oriented to a broad line and the continuum nearby. However, as shown in Table 1 via the convolution of the spectropolarimetric data with the filter transmission, and in Figures 1 and 2 using the example of the image-polarimetric data for Mrk 335 and Mrk 509, this strategy does not always seem optimal. This is due to the fact that the variations of the polarization parameters along the wavelength during equatorial scattering are small and \(\varphi\) has an \(S\)-shaped profile; thus, even the 250 A-width filter may be too broad, and the average value of the line polarization in the filter, summed over wavelengths, will not differ markedly from the continuum. However, it is important to note that the differences are small when we consider the polarization normalized by the intensity. When the polarized flux in the line is considered with the subtracted polarized flux of the continuum, the behaviour of the variability in the broad line begins to differ significantly in polarized and nonpolarized light. It corresponds to the fact that we see this radiation coming from different regions of the AGN. This is what we observe in the case of the objects Mrk 335 and Mrk 509. Thus, it can be argued that even if a medium-band filter covering the whole line profile is selected, the variable flux \(I_{\mathrm{line}}^{p}\) is detectable.
In cases of bright AGNs, a more optimal strategy may be to choose narrower filters oriented to different sides of the centre of a broad emission line. In our case, we were able to implement this by using 100 A-width filters for the Mrk 817 monitoring. The light curves we obtained are not yet sufficient to reveal the efficiency of this approach within the monitoring framework. However, as was shown above, such a strategy is more suitable for checking AGNs for signs of equatorial scattering in polarized light while using telescope time efficiently.
Another problem for us was the combination of observational data obtained using different sets of filters. One can see this in the example of the Mrk 335 observations, which were carried out at the 1-m telescope of the SAO RAS and the 1.82-m one in Asiago. While having the same trend, the variability of the radiation, especially in the continuum, differs significantly between the data obtained in the 250 A-width filter SED650 and in the 70 A-width H\(\alpha\) filter. It is unlikely that the reason for this difference was the AGN emission lines being on the transmission edge of the SED650 filter, e.g., the [FeX] 6374 A line. The more likely reason may be that different reference stars were used when processing the data sets obtained from MAGIC and AFOSC. In any case, the data obtained require additional analysis.
2. Aperture selection and host-galaxy subtraction. Two objects presented in this article, Mrk 335 and Mrk 509, are almost star-shaped sources. Their host-galaxies, whose fitting can be found in [49], have a small contribution in the optical band. In our observations, taking into account the image quality, the profiles of the objects were indistinguishable from the profiles of the stars in the field. Thus, we were able to choose the size of the aperture for photometry so that the signal-to-noise ratio was at a maximum. However, when the host galaxy is extended, the choice of the aperture is complicated, as the larger the aperture size, the more galactic flux is recorded, lowering the contrast of the polarized radiation of the nucleus. For the Mrk 817 polarimetry, a fixed aperture size of \(\sim\)4" was chosen so that, when processing data with different image quality (data with a seeing better than 3" were used), the same contribution of the host-galaxy would be inside the aperture. However, greater accuracy will be achieved if, when processing AGN images with extended galaxies, a galaxy model is subtracted from the frames. If this is not so critical in the case of Mrk 817, then for, e.g., NGC 4151, where the host-galaxy has a size \(>\)3\({}^{\prime}\) and the contrast of the nucleus is relatively small, subtraction of the galaxy fitting may be necessary to construct the light curves. This should be the subject of a separate detailed check.
3. Cadence of observations. Currently, attempts are being made, based on simulated AGN light curves, to determine the optimal cadence for reverberation mapping observations. A denser cadence leads to fewer artifacts in the cross-correlation analysis but requires a large amount of telescope time. The upper limit for the time resolution is the expected time delay, since when observations are made with a lower frequency, the observed variability will no longer be correlated. For example, Kovacevic et al. (2016) propose a \(\sim\)5-day cadence for estimating accretion disk sizes of typically 1-10 light days with LSST. Woo et al. (2018) suggested a time resolution a factor of 5 or better than a given time lag. Owing to the time allocation on the telescopes, the typical cadence of our observations was planned to be about 1 month. Such a cadence is close to optimal for the expected \(R_{\rm sc}\) of Mrk 509; for Mrk 335 and Mrk 817, a cadence of \(\sim\)20 days would be more effective. However, due to weather conditions, it turned out to be impossible to conduct observations every month, so the actual time resolution is worse. Moreover, our estimates show that it is important to measure \(R_{\rm sc}\) and \(R_{\rm BLR}\) simultaneously to improve the scaling relation (which may, generally speaking, have a different form for different objects). In this case, observations should be carried out at least once (preferably twice) a week. Such a cadence can be achieved with a telescope fully dedicated to such a task. Given the adaptation of the method of reverberation mapping of polarized lines to observations on a 1-m-class telescope, such a project has prospects for development.
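For reference, a minimal interpolated cross-correlation estimate of the kind used to measure such delays can be sketched as follows (the synthetic sinusoidal light curves, the noise level, and the 120-day toy lag are assumptions for demonstration only):

```python
import numpy as np

def iccf_lag(t1, f1, t2, f2, lags):
    """Correlate f1(t) with f2(t + lag); the lag maximizing r estimates the delay of f2."""
    r = np.array([np.corrcoef(f1, np.interp(t1 + lag, t2, f2))[0, 1] for lag in lags])
    return lags[np.argmax(r)], r

rng = np.random.default_rng(0)
t_cont = np.sort(rng.uniform(0, 900, 30))            # ~30-day cadence over ~2.5 yr
true_lag = 120.0                                      # toy delay, in days
signal = lambda t: np.sin(2 * np.pi * t / 400.0)      # toy variability pattern
f_cont = signal(t_cont) + 0.05 * rng.normal(size=t_cont.size)
t_pol = np.sort(rng.uniform(0, 900, 30))
f_pol = signal(t_pol - true_lag) + 0.05 * rng.normal(size=t_pol.size)

lags = np.arange(0.0, 300.0, 5.0)
best_lag, r = iccf_lag(t_cont, f_cont, t_pol, f_pol, lags)
print(f"recovered lag ~ {best_lag:.0f} days (true: {true_lag:.0f})")
```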
## 6 Conclusions
We presented the first results of reverberation mapping in polarized broad lines conducted at the 1-m telescope of SAO RAS and at the 1.82-m telescope of the Asiago Astronomical Observatory. Since 2020, we have obtained the first results for the three most frequently observed objects from our sample of type 1 AGNs with equatorial scattering, namely Mrk 335, Mrk 509, and Mrk 817.
* For Mrk 335, the measured size of the dusty scattering region is \(R_{\rm sc}\sim\) 150-180 light days. This result is consistent with the values predicted from several estimates of the dusty structure in the IR band and from measurements of \(R_{\rm BLR}\) in optical reverberation mapping campaigns. However, because of the irregular sampling, monitoring is ongoing to check whether our result is a cross-correlation artefact.
* For Mrk 509, we obtained \(R_{\rm sc}\approx 114^{+12.7}_{-8.8}\) light days, or \(\sim\)0.1 pc. This is two times smaller than the radius of the dusty structure measured in the IR band.
* For Mrk 817, no result has been obtained due to the low variability of the object during the monitoring period. However, observations of the polarized flux in the two wings of the line profile demonstrate sharp variability between epochs, as well as a significant difference between the polarized fluxes in the two wings within a single epoch. This demonstrates the potential for measuring the delay of the polarized signal of a broad line in different parts of its profile.
Conceptualization, Elena Shablovinskaya and Luka C. Popovic; Methodology, Elena Shablovinskaya and Luka C. Popovic; Software, Elena Shablovinskaya and Roman Uklein; Validation, Dragana Ilic; Formal analysis, Luka C. Popovic, Roman Uklein and Eugene Malygin; Investigation, Elena Shablovinskaya and Eugene Malygin; Data curation, Elena Shablovinskaya, Dragana Ilic, Stefano Ciroi, Dmitry Oparin, Luca Crepaldi, Lyuba Slavcheva-Mihova, Boyko Mihov and Yanko Nikolov; Writing--original draft, Elena Shablovinskaya; Writing--review & editing, Luka C. Popovic, Eugene Malygin and Dragana Ilic; Visualization, Elena Shablovinskaya and Eugene Malygin; Supervision, Luka C. Popovic; Project administration, Elena Shablovinskaya.
**Funding:** E.S., E.M. and R.U. were supported by RFBR grant, project number 20-02-00048 while conducting observations on 1-m telescope of SAO RAS, reducing and analyzing the polarimetric data. L.C.P., and D.I. acknowledge funding provided by Astronomical Observatory (the contract 451-03-68/2022-14/200002) and by University of Belgrade-Faculty of Mathematics (the contract 451-03-68/2022-14/200104), through the grants by the Ministry of Education, Science, and Technological Development of the Republic of Serbia. L.S.M. and B.M. acknowledge the project "Reverberation mapping of quasars in polarized light" within the agreement between Bulgarian Academy of Sciences and Serbian Academy of Sciences and Arts, 2020-2022. D.I. acknowledges the support of the Alexander von Humboldt Foundation.
**Institutional Review Board Statement:** Not applicable.
**Informed Consent Statement:** Not applicable.
**Data Availability Statement:** The observational data underlying this article is available on request 1 yr after the publication of this paper.
**Acknowledgments:** Observations with the SAO RAS telescopes are supported by the Ministry of Science and Higher Education of the Russian Federation. The renovation of telescope equipment is currently provided within the national project "Science and Universities".
**Conflicts of Interest:** The authors declare no conflict of interest.
## Appendix A
2305.17394 | One-Step Knowledge Distillation and Fine-Tuning in Using Large
Pre-Trained Self-Supervised Learning Models for Speaker Verification | The application of speech self-supervised learning (SSL) models has achieved
remarkable performance in speaker verification (SV). However, there is a
computational cost hurdle in employing them, which makes development and
deployment difficult. Several studies have simply compressed SSL models through
knowledge distillation (KD) without considering the target task. Consequently,
these methods could not extract SV-tailored features. This paper suggests
One-Step Knowledge Distillation and Fine-Tuning (OS-KDFT), which incorporates
KD and fine-tuning (FT). We optimize a student model for SV during KD training
to avert the distillation of inappropriate information for the SV. OS-KDFT
could downsize Wav2Vec 2.0 based ECAPA-TDNN size by approximately 76.2%, and
reduce the SSL model's inference time by 79% while presenting an EER of 0.98%.
The proposed OS-KDFT is validated across VoxCeleb1 and VoxCeleb2 datasets and
W2V2 and HuBERT SSL models. Experiments are available on our GitHub. | Jungwoo Heo, Chan-yeong Lim, Ju-ho Kim, Hyun-seo Shin, Ha-Jin Yu | 2023-05-27T07:20:54Z | http://arxiv.org/abs/2305.17394v2 | One-Step Knowledge Distillation and Fine-Tuning in Using Large Pre-Trained Self-Supervised Learning Models for Speaker Verification
###### Abstract
The application of speech self-supervised learning (SSL) models has achieved remarkable performance in speaker verification (SV). However, there is a computational cost hurdle in employing them, which makes development and deployment difficult. Several studies have simply compressed SSL models through knowledge distillation (KD) without considering the target task. Consequently, these methods could not extract SV-tailored features. This paper suggests One-Step Knowledge Distillation and Fine-Tuning (OS-KDFT), which incorporates KD and fine-tuning (FT). We optimize a student model for SV during KD training to avert the distillation of inappropriate information for the SV task. OS-KDFT could downsize the Wav2Vec 2.0 based ECAPA-TDNN size by approximately 76.2% and reduce the SSL model's inference time by 79% while presenting an EER of 0.98%. The proposed OS-KDFT is validated across the VoxCeleb1 and VoxCeleb2 datasets and the W2V2 and HuBERT SSL models. Experiments are available on our GitHub 1.
Footnote 1: [https://github.com/jungwoo4021/OS-KDFT](https://github.com/jungwoo4021/OS-KDFT)
Jungwoo Heo\({}^{*}\), Chan-yeong Lim\({}^{*}\), Ju-ho Kim, Hyun-seo Shin, and Ha-Jin Yu\({}^{\dagger}\) School of Computer Science, University of Seoul
[email protected], [email protected], [email protected], [email protected], [email protected]
**Index Terms**: Speaker verification, Self-supervised learning model, Knowledge-distillation
## 1 Introduction
Speaker verification (SV) verifies whether an input utterance is vocalized by the target speaker. Most SV studies have employed hand-crafted acoustic features such as the spectrogram, the Mel spectrogram, and Mel-Frequency Cepstral Coefficients (MFCC) as inputs [1, 2]. Recently, in speech signal processing fields, there has been increasing interest in speech self-supervised learning (SSL) models such as Wav2Vec 2.0 (W2V2) [3], HuBERT [4], and WavLM [5], because they have the potential to extract richer representations than hand-crafted methods [6, 7]. Following this trend, a recent SV work achieved remarkable performance using speech SSL models [8].
Despite the impressive performance of speech SSL models, there is a computational cost hurdle in employing them. The largest versions of W2V2 and HuBERT contain approximately 317M and 964M parameters, respectively. Because of their size, developing and deploying these models presents a significant challenge. Consequently, research communities have focused on building lightweight SSL models, and some have attempted to train small-size models [9]. However, due to their limited capacity, training a small model with a significant amount of data is difficult [10, 11]. As an alternative approach, several studies have explored knowledge distillation (KD), a well-known model compression strategy that transfers knowledge from a large model to a smaller one [12]. Sanh _et al._[13] and Jiao _et al._[14] devised DistilBERT and TinyBERT, which distill BERT and demonstrate the potential of KD in natural language processing (NLP). Following these approaches, in the acoustic signal processing field, Lee _et al._[15] and Chang _et al._[16] designed FitHuBERT and DistilHuBERT, which deliver outstanding performance on the speech processing universal performance benchmark [17]. Moreover, Peng _et al._ successfully downsized the W2V2 framework through KD [18]. Studies using KD have thus become mainstream in current research.
Nevertheless, simply compressing SSL models through KD has fundamental limitations. Large SSL models are often optimized for the target task to extract task-tailored features, which are better than the original ones [6, 7, 8]. As illustrated in Figure 1 (a), previous SSL compression studies first constructed a lightweight SSL model through KD and then utilized it for the target task in a fixed state. Because they cannot consider the target task, they could not extract task-customized features. Fine-tuning (FT) of the distilled SSL models can mitigate this concern, but it demands additional training. Furthermore, determining the appropriate transition point from KD to FT is difficult because the optimal amount of KD before FT is unknown. Thus, iterative empirical experiments may be required to find the best transition point.
In this paper, we aim to compress the SSL model for SV tasks efficiently to facilitate development and deployment. We propose a novel training strategy, One-Step Knowledge Distillation and Fine-Tuning (OS-KDFT), which jointly trains KD and FT. We believe that by performing KD and FT concurrently, teacher networks can effectively transfer the information crucial for SV. Therefore, in the proposed OS-KDFT, we incorporate the KD and FT training processes, as depicted in Figure 1 (b). Through this, the proposed method can perform KD while considering the target task, averting the distillation of information inefficient for that task. Moreover, this method avoids having to pick a transition point between KD and FT. OS-KDFT is explained in detail in Section 3.

Figure 1: _Comparison of the previous SSL model KD and the proposed OS-KDFT. (a) describes the training steps of the previous method, which trains KD and FT independently. (b) illustrates the process of OS-KDFT, which performs KD and FT simultaneously. Models 'T' and 'S' denote the teacher and student networks._
Through this paper, we make the following contributions.
* To the best of our knowledge, OS-KDFT is the first approach to compress a speech SSL model while concurrently fine-tuning it. Previous studies have concentrated only on condensing the SSL model through KD, whereas we also consider FT to extract task-tailored features.
* The proposed OS-KDFT effectively reduces the W2V2 + ECAPA-TDNN size by 76% and the SSL model's inference time by 79% while showing an EER of 0.98%.
* The proposed OS-KDFT is validated across the VoxCeleb1 and VoxCeleb2 datasets and the W2V2 and HuBERT speech SSL models.
## 2 Related work
Speech SSL models have demonstrated satisfactory performance in many acoustic signal processing studies. Nevertheless, due to their high computational costs, research on model compression has drawn attention [13, 14]. This section introduces _i)_ previous efforts, _ii)_ why we studied KD, and _iii)_ the research in NLP that inspired our proposed method.
### SSL model compression
Model compression methods include quantization, pruning, and knowledge distillation. To reduce the size of each parameter, quantization represents weights with lower-bitwidth representations; Wang _et al._[19] demonstrated its potential by successfully reducing the W2V2 framework. Nevertheless, quantization has the limitation that it cannot reduce the number of parameters. Pruning removes weights or channels that have minimal impact on the output [20]; Lai _et al._[21] showed the effectiveness of this technique by applying it to a speech SSL model. However, pruning makes it difficult to establish appropriate criteria for selecting the parameters to be pruned. Knowledge distillation refines knowledge from a large model into a small one. KD can reduce the number of parameters and avoids the effort of finding parameters to remove. Furthermore, this method has demonstrated its effectiveness in various lightweight speech SSL frameworks, such as FitHuBERT [15], DistilHuBERT [16], and LightHuBERT [22]. Therefore, following this trend, we utilized KD as the technique for compressing the speech SSL model.
### Distribution mismatch between teacher and student
Researchers in NLP have argued that the ideal distribution for students might differ from the teacher's output [23, 24]. Their studies determined that this distribution disparity can occur even though teachers and students perform the identical task. In our study, the student model learns SV, a task the teacher model has never been trained on. Therefore, the distributional gap between the ideal student and the teacher might be even more significant than in the NLP research. From this perspective, we modified the architecture of the student network to bridge the distribution mismatch, as described in Section 3.
## 3 OS-KDFT
Model compression has become an increasingly important topic in the research community because it can facilitate the development and deployment of deep neural networks. Many SSL model application studies have explored various KD techniques, but they focused only on distilling teachers' knowledge regardless of the target task. This study instead aims to transfer the teacher's knowledge in a form suitable for SV. To this end, we suggest the novel training strategy OS-KDFT, which incorporates teacher knowledge transfer and speaker verification tuning. OS-KDFT employs a uniquely structured student network with two branches, one to imitate the teacher's output and one to perform the target task. This section explains the overall architecture of the student and the weight initialization and learning rate settings, which are modified from the original.
### Architecture
We distilled the W2V2 and HuBERT frameworks, which consist of a convolutional neural network (CNN) and transformer encoders. We constructed a student network with a reduced number of parameters by limiting the number of transformer encoder layers. Figure 2 (a) describes the architecture of the student network, which has two routes: a non-adapter path for KD and an adapter path for SV. Both paths share the CNN and encoder weights, but only the adapter path uses independent parameters via adapters. We divided the student's branches to mitigate negative transfer, i.e., the performance degradation caused by conflicts between tasks in a multi-task setting [25]. Because we jointly optimize the student network with two different tasks (KD and FT), the student is exposed to negative transfer. One well-known solution for mitigating negative transfer is to use independent parameters for each task [26, 27]. Thus, we divided the student's branches and inserted additional parameters (adapters, inspired by LoRA [28]) into the branch performing SV.

Figure 2: The architecture of the student network (a) and the encoder module (b). There are two routes, a non-adapter path and an adapter path, depending on whether the features pass through the adapters. The features passing through the non-adapter path are used for KD training, whereas the adapter path indicates the route of the mini-batch used for SV. (\(\oplus\): element-wise summation.)
The detailed process of the encoder block is depicted in Figure 2 (b) and Equations (1)-(3). In the non-adapter path, the _Layer norm_ and _Multi-head attention_ layers compute the hidden features \(X=\{x_{1},x_{2},...,x_{n}\}\). Then, \(x_{i}\) is converted to \(f(x_{i})\) through the feed-forward layer (Equation (1)). The output \(Y=\{y_{1},y_{2},...,y_{n}\}\) is the element-wise summation of \(x_{i}\) and \(f(x_{i})\), as in Equation (2). In the adapter path, the hidden features \(X^{\prime}=\{x^{\prime}_{1},x^{\prime}_{2},...,x^{\prime}_{n}\}\) are calculated via the _Layer norm_ and _Multi-head attention_ layers, and \(x^{\prime}_{i}\) is transformed to \(f(x^{\prime}_{i})\) by the same mechanism as in the non-adapter path. In addition, \(x^{\prime}_{i}\) is fed to the adapter, which consists of downsampling (\(W_{down}\in\mathbb{R}^{1024\times 64}\)), a ReLU activation function, and upsampling (\(W_{up}\in\mathbb{R}^{64\times 1024}\)). As expressed by Equation (3), the output features \(y^{\prime}_{i}\) are the element-wise summation of \(x^{\prime}_{i}\), \(f(x^{\prime}_{i})\), and the output of the adapter.

\[f(x_{i})=Feed\,Forward(x_{i})\tag{1}\]
\[y_{i}=x_{i}+f(x_{i})\tag{2}\]
\[y^{\prime}_{i}=x^{\prime}_{i}+f(x^{\prime}_{i})+ReLU(x^{\prime}_{i}W_{down})W_{up}\tag{3}\]
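A minimal PyTorch sketch of this dual-path encoder block is given below; the module sizes follow the 1024-dimensional encoder and the 1024 -> 64 -> 1024 adapter bottleneck described above, while the head count, feed-forward width, and activation are illustrative assumptions rather than the authors' exact implementation:

```python
import torch
import torch.nn as nn

class AdapterEncoderBlock(nn.Module):
    """Encoder block with an optional LoRA-style bottleneck adapter.

    The non-adapter path implements y = x + f(x) (Eqs. 1-2); the adapter path
    additionally adds ReLU(x W_down) W_up (Eq. 3). Details beyond the equations
    are illustrative.
    """
    def __init__(self, d_model=1024, n_heads=16, d_adapter=64):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                 nn.Linear(4 * d_model, d_model))
        self.w_down = nn.Linear(d_model, d_adapter, bias=False)
        self.w_up = nn.Linear(d_adapter, d_model, bias=False)

    def forward(self, h, use_adapter: bool):
        x, _ = self.attn(*([self.norm(h)] * 3))   # layer norm + multi-head attention
        y = x + self.ffn(x)                       # Eq. (2): x_i + f(x_i)
        if use_adapter:                           # Eq. (3): + ReLU(x W_down) W_up
            y = y + self.w_up(torch.relu(self.w_down(x)))
        return y

block = AdapterEncoderBlock()
h = torch.randn(2, 50, 1024)
kd_features = block(h, use_adapter=False)   # non-adapter path, used for KD
sv_features = block(h, use_adapter=True)    # adapter path, used for SV
```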
### Weight initialization & learning rate
We randomly initialize the parameters of the classifiers and adapters because the original SSL model (the teacher) does not contain these modules. On the other hand, the CNN and encoder weights are initialized using the teacher's weights. When initializing the student encoders, we use the weights of the teacher encoders in order of their closeness to the CNN. This strategy is based on Chen _et al._'s finding that encoders closer to the CNN extract richer features for SV [8].
Chen _et al._ froze the W2V2 during the first 10 epochs to alleviate the disparity in the amount of learning between W2V2 and ECAPA-TDNN. Following their strategy, we applied different learning rates to the CNN, encoders, adapters, and classifiers. Equations (4)-(6) describe the learning rate of each module at epoch \(\tau\). The learning rate of the randomly initialized classifier (\(lr^{\tau}_{c}\)) is reduced following a cosine annealing rule (Equation (4)). On the other hand, the learning rate of the pre-trained CNN and encoders (\(lr^{\tau}_{s}\)) gradually increases during the initial 10 epochs and then decreases (Equation (5)). The learning rate of the adapters (\(lr^{\tau}_{a}\)) decreases from the maximum to the minimum, since they are also randomly initialized, but the value is scaled by a factor \(\theta\) (Equation (6)). We set \(\beta\) to 0.93 and \(\theta\) to 10 because these values delivered the best results in our experiments.
\[lr^{\tau}_{c}=\eta_{min}+\frac{1}{2}(\eta_{max}-\eta_{min})(1+cos(\frac{\tau }{\tau_{tot}}\pi)) \tag{4}\]
\[lr^{\tau}_{s}=\begin{cases}lr^{\tau}_{c}\times\frac{\tau}{10},&\tau\leq 10 \\ lr^{\tau-1}_{s}\times\beta,&\tau>10\end{cases} \tag{5}\]
\[lr^{\tau}_{a}=lr^{\tau}_{c}\times\theta \tag{6}\]
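In code, the three schedules could be sketched as follows (a minimal Python version of Equations (4)-(6); the maximal and minimal learning rates and the total epoch count are assumed values, while \(\beta\) and \(\theta\) follow the text):

```python
import math

LR_MAX, LR_MIN, TOTAL = 1e-3, 1e-6, 100   # eta_max, eta_min, total epochs (assumed)
BETA, THETA = 0.93, 10                     # values reported in the text

def lr_classifier(tau):
    """Eq. (4): cosine annealing for the randomly initialized classifier."""
    return LR_MIN + 0.5 * (LR_MAX - LR_MIN) * (1 + math.cos(math.pi * tau / TOTAL))

def lr_ssl(tau):
    """Eq. (5): linear warm-up over the first 10 epochs, then decay by beta."""
    if tau <= 10:
        return lr_classifier(tau) * tau / 10
    return lr_classifier(10) * BETA ** (tau - 10)   # closed form of the recursion

def lr_adapter(tau):
    """Eq. (6): classifier schedule scaled by theta for the adapters."""
    return lr_classifier(tau) * THETA

for tau in (1, 10, 30, 100):
    print(tau, lr_classifier(tau), lr_ssl(tau), lr_adapter(tau))
```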
## 4 Experiment setting
### Dataset
We used the VoxCeleb1 [29] and VoxCeleb2 [30] datasets to evaluate our proposed method. The VoxCeleb1 training set comprises 148,642 utterances from 1,211 speakers, and the test set consists of 4,874 utterances from 40 speakers. The VoxCeleb2 training set corresponds to 1,092,009 samples collected from 5,994 speakers; we used only the VoxCeleb2 training set, without a test partition. For data augmentation (DA), we employed the MUSAN [31] and RIR reverberation [32] datasets. We evaluated the models on all three official VoxCeleb1 trial lists: Vox1-O, Vox1-E (Extended), and Vox1-H (Hard). The primary metric is the equal error rate (EER) based on cosine similarity.
### Baseline
Based on Chen _et al._, we defined a framework combining a speech SSL model and ECAPA-TDNN as the baseline [8]. We implemented the baseline using the HuggingFace [33] transformers library and exploited the pre-trained XLSR version of W2V2 2 and the large version of HuBERT 3.

Footnote 2: facebook/wav2vec2-large-xlsr-53

Footnote 3: facebook/hubert-large-ll60k
### Experiment details
We constructed mini-batches of 128 samples, and each sample was randomly cropped to a 2-second length. We employed the Adam optimizer without weight decay, used the mean-squared error (MSE) for KD learning, and multiplied it by 100 to adjust the ratio between the losses. As the speaker classification loss function, we used the AAM-softmax [34] criterion with a margin of 0.15 and a scale of 20. In the experiments with data augmentation, we applied SpecAugment to the output of the SSL model. In the evaluation, the entire utterance and five 3-second segments of it were utilized. Further details can be found on our GitHub.
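The combined objective described above can be sketched in PyTorch as follows (the AAM-softmax implementation is a common simplified form, and the tensor shapes are illustrative; the margin of 0.15, scale of 20, and the x100 KD-loss weighting follow the text):

```python
import torch
import torch.nn.functional as F

def aam_softmax_loss(embeddings, weight, labels, margin=0.15, scale=20.0):
    """Additive angular margin softmax on L2-normalized embeddings and class weights."""
    cos = F.normalize(embeddings) @ F.normalize(weight).t()        # cosine logits
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    target = F.one_hot(labels, weight.size(0)).bool()
    logits = torch.where(target, torch.cos(theta + margin), cos) * scale
    return F.cross_entropy(logits, labels)

def os_kdft_loss(student_kd_out, teacher_out, student_emb, spk_weight, labels):
    """KD loss (scaled by 100, as in the text) + speaker classification loss."""
    kd = 100.0 * F.mse_loss(student_kd_out, teacher_out)
    sv = aam_softmax_loss(student_emb, spk_weight, labels)
    return kd + sv

# Toy shapes: batch of 8, 1024-d encoder outputs, 192-d embeddings, 5994 speakers
kd_s, kd_t = torch.randn(8, 50, 1024), torch.randn(8, 50, 1024)
emb, w = torch.randn(8, 192), torch.randn(5994, 192)
labels = torch.randint(0, 5994, (8,))
print(os_kdft_loss(kd_s, kd_t, emb, w, labels))
```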
## 5 Results
**Comparison with the baseline.** Table 1 compares the conventional frameworks and the proposed OS-KDFT. Experiment #1 is the baseline framework presented in [8], and Experiment #2 is our implementation; they achieve EERs of 0.73% and 0.82%, respectively, without score calibration. Experiments #3 and #4 are the results of compressing the framework of Experiment #2 through knowledge distillation and fine-tuning (KDFT) and through OS-KDFT, respectively. KDFT is a training strategy that combines KD and SV learning without any modification, whereas OS-KDFT is our proposed method, which extends KDFT with adapters. Both KDFT and OS-KDFT significantly compress the baseline, by approximately 76%, and reduce the SSL model's inference time by 79%. KDFT achieves an EER of 1.26%, a serious degradation from 0.82% (#3). However, OS-KDFT successfully carries out KD and FT and delivers an EER of 0.98% (#4). These results confirm that the proposed OS-KDFT has the potential to distill an SSL model suitable for SV.
| **Experiment** | **Size** | **Inf. time** | **EER (%)** |
| --- | --- | --- | --- |
| #1 W2V2 + ECAPA-TDNN (small) [8] | 321.4M | N/A | 0.73 |
| #2 W2V2 + ECAPA-TDNN (small)* | 321.4M | 48.8ms | 0.82 |
| #3 KDFT | 76.1M | 10.5ms | 1.26 |
| #4 OS-KDFT | 76.6M | 10.5ms | 0.98 |

Table 1: _Comparison of the equal error rate (%) and the SSL model's inference time between W2V2 + ECAPA-TDNN and the compressed models generated through KDFT and the proposed OS-KDFT, on the VoxCeleb2 dataset (*: our implementation). The inference time was measured on an NVIDIA RTX A5000 GPU with a batch size of one; we repeated the measurement 100 times and recorded the average value._

**Comparison with conventional methods.** To investigate
further, we compared OS-KDFT with other training strategies on the VoxCeleb1 dataset; Figure 3 illustrates the results. In these experiments, we did not apply DA, to exclude confounding variables. The blue (left) bars in Figure 3 show the results of compressing the SSL model via KD and then using it for SV. This method achieved EERs of 6.83% and 8.20% when the epoch ratio of KD to FT was 50:50 and 75:25, respectively. The yellow (right) bars depict the results of further tuning for SV: this decreased the EERs to 5.91% and 7.30%, respectively, for the two epoch ratios. These results confirm that simply compressing the model with KD does not produce optimal students for the target task. In addition, performance deviations may occur depending on the proportion of KD and SV learning. The green solid line represents the EER of the student that acquired knowledge from an SV-tuned teacher. Since this teacher can identify speakers, we could utilize the Kullback-Leibler divergence loss for student training in this experiment; training the student to predict the teacher's softmax output distribution resulted in an EER of 7.17%. The red dotted line represents the OS-KDFT results, which deliver the lowest EER of 5.91%. These results show that a joint KD and FT training strategy is promising compared to conventional compression methods.
**Ablation study.** Table 2 displays the performance variation as each strategy of OS-KDFT is applied, on the VoxCeleb1 and VoxCeleb2 datasets. Compressing the baseline through KDFT on the VoxCeleb1 dataset yields an EER of 8.17% (#5). In Experiment #6, we added only the adapter's parameters to #5 without separating the branches, which gave a marginally improved EER of 7.94%. When the routes are divided (#7), the EER improves to 7.28%, a significant enhancement over #5. Finally, we reach the best performance of 5.64% by additionally adjusting the learning rates as described in Section 3.2 (#8). In the experiments using VoxCeleb2, we used not only the _Original (O)_ but also the _Extended (E)_ and _Hard (H)_ trials for a more thorough evaluation. When the baseline is compressed through KDFT, it delivers EERs of 3.35%, 3.41%, and 5.97% on the O, E, and H trials, respectively (#9). In Experiment #10, we simply add the adapter's parameters without splitting the branches, which degrades the performance to 3.46%, 3.41%, and 6.01%. On the other hand, in Experiment #11 we also divide the path, improving the EERs to 2.74%, 3.01%, and 5.44%. The best performances of 2.50%, 2.56%, and 5.18% are delivered when the learning rate is also diversified for each module (#12). Therefore, the effect of OS-KDFT cannot be attributed simply to increasing the number of parameters; each strategy of OS-KDFT is necessary.
**Application to other models.** To further verify the effectiveness of OS-KDFT, we applied it to another SSL model and another classifier. Table 3 presents the results. In Experiments #13 and #14, we changed the classifier from ECAPA-TDNN to a linear layer. The model trained identically to #3 delivered an EER of 8.27%, while the framework trained with OS-KDFT presented an EER of 5.92%. In Experiments #15 and #16, we used HuBERT instead of W2V2. Distilling HuBERT through KDFT yielded an inferior EER of 7.15%; in contrast, compressing HuBERT via OS-KDFT achieved an EER of 5.97%. These results confirm that the OS-KDFT method works effectively in other frameworks.
## 6 Conclusion
In this paper, we designed the One-Step Knowledge Distillation and Fine-Tuning (OS-KDFT) method to condense an SSL model for SV. OS-KDFT is the first approach to compress a speech SSL model while concurrently fine-tuning it, and it mitigates negative transfer by utilizing adapters. Through OS-KDFT, we compress the 321.4M-parameter model to 76.6M parameters and reduce the SSL model's inference time by 79% while presenting an EER of 0.98%. We have verified the effectiveness of OS-KDFT through comparisons with other training strategies and applications to another SSL model. Nevertheless, our research has limitations. To generalize the effectiveness of OS-KDFT, we should evaluate our proposed method with different KD methods (e.g., transferring the teacher's knowledge from hidden features rather than from the output). We will therefore incorporate OS-KDFT with different KD methods in future work.
## 7 Acknowledgements
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2023R1A2C1005744).
**VoxCeleb1 train (without DA)**

| **#Exp** | **Train strategy** | **Size** | **EER (O)** | **EER (E)** | **EER (H)** |
| --- | --- | --- | --- | --- | --- |
| #5 | KDFT | 76.1M | 8.17 | N/A | N/A |
| #6 | KDFT (AS param) | 76.6M | 7.94 | N/A | N/A |
| #7 | OS-KDFT (AS) | 76.6M | 7.28 | N/A | N/A |
| #8 | OS-KDFT (AS, LR) | 76.6M | **5.64** | N/A | N/A |

**VoxCeleb2 train**

| **#Exp** | **Train strategy** | **Size** | **EER (O)** | **EER (E)** | **EER (H)** |
| --- | --- | --- | --- | --- | --- |
| #9 | KDFT | 76.1M | 3.35 | 3.41 | 5.97 |
| #10 | KDFT (AS param) | 76.6M | 3.46 | 3.41 | 6.01 |
| #11 | OS-KDFT (AS) | 76.6M | 2.74 | 3.01 | 5.44 |
| #12 | OS-KDFT (AS, LR) | 76.6M | **2.50** | **2.56** | **5.18** |

Table 2: Equal error rate (%) for the different training strategies, trained on VoxCeleb1 (#5-8) and VoxCeleb2 (#9-12). O, E, and H are the official trial lists Vox1-O, Vox1-E, and Vox1-H, respectively. AS indicates that adapters are inserted into the student encoder layers, and LR means we set different learning rates as described in Section 3.2.
| **#Exp** | **Train strategy** | **SSL model** | **Classifier** | **EER (%)** |
| --- | --- | --- | --- | --- |
| #13 | KDFT | W2V2 | Linear | 8.27 |
| #14 | OS-KDFT | W2V2 | Linear | 5.92 |
| #15 | KDFT | HuBERT | Linear | 7.15 |
| #16 | OS-KDFT | HuBERT | Linear | 5.97 |

Table 3: Experimental results with different PLMs and classifiers. Experiments were performed on the VoxCeleb1 dataset, and data augmentation was not applied.
Figure 3: Comparison of SSL model KD, tuned-teacher SSL KD, and OS-KDFT on VoxCeleb1. To exclude confounding variables, we did not apply DA. The blue (left) and yellow (right) bars display the EERs of the compressed SSL model in its frozen and fine-tuned versions, respectively. The green solid line and the red dotted line show the results of tuned-teacher KD and OS-KDFT. The \(x\)-axis represents the ratio of epochs for KD and SV.
2308.09156 | Characterizing Information Seeking Events in Health-Related Social
Discourse | Social media sites have become a popular platform for individuals to seek and
share health information. Despite the progress in natural language processing
for social media mining, a gap remains in analyzing health-related texts on
social discourse in the context of events. Event-driven analysis can offer
insights into different facets of healthcare at an individual and collective
level, including treatment options, misconceptions, knowledge gaps, etc. This
paper presents a paradigm to characterize health-related information-seeking in
social discourse through the lens of events. Events here are broad categories
defined with domain experts that capture the trajectory of the
treatment/medication. To illustrate the value of this approach, we analyze
Reddit posts regarding medications for Opioid Use Disorder (OUD), a critical
global health concern. To the best of our knowledge, this is the first attempt
to define event categories for characterizing information-seeking in OUD social
discourse. Guided by domain experts, we develop TREAT-ISE, a novel multilabel
treatment information-seeking event dataset to analyze online discourse on an
event-based framework. This dataset contains Reddit posts on
information-seeking events related to recovery from OUD, where each post is
annotated based on the type of events. We also establish a strong performance
benchmark (77.4% F1 score) for the task by employing several machine learning
and deep learning classifiers. Finally, we thoroughly investigate the
performance and errors of ChatGPT on this task, providing valuable insights
into the LLM's capabilities and ongoing characterization efforts. | Omar Sharif, Madhusudan Basak, Tanzia Parvin, Ava Scharfstein, Alphonso Bradham, Jacob T. Borodovsky, Sarah E. Lord, Sarah M. Preum | 2023-08-17T19:08:42Z | http://arxiv.org/abs/2308.09156v2 | # Characterizing Information Seeking Events in Health-Related Social Discourse
###### Abstract
Social media sites have become a popular platform for individuals to seek and share health information. Despite the progress in natural language processing for social media mining, a gap remains in analyzing health-related texts on social discourse in the context of events. Event-driven analysis can offer insights into different facets of healthcare at an individual and collective level, including treatment options, misconceptions, knowledge gaps, etc. This paper presents a paradigm to characterize health-related information-seeking in social discourse through the lens of events. Events here are broad categories defined with domain experts that capture the trajectory of the treatment/medication. To illustrate the value of this approach, we analyze Reddit posts regarding medications for Opioid Use Disorder (OUD), a critical global health concern. To the best of our knowledge, this is the first attempt to define event categories for characterizing information-seeking in OUD social discourse. Guided by domain experts, we develop _TREAT-ISE_, a novel multilabel treatment information-seeking event dataset to analyze online discourse through an event-based framework. This dataset contains Reddit posts on information-seeking events related to recovery from OUD, where each post is annotated based on the type of events. We also establish a strong performance benchmark (77.4% F1 score) for the task by employing several machine learning and deep learning classifiers. Finally, we thoroughly investigate the performance and errors of ChatGPT on this task, providing valuable insights into the LLM's capabilities and ongoing characterization efforts.
\({}^{1}\)Department of Computer Science, Dartmouth College, USA
\({}^{2}\)Department of Computer Science and Engineering, CUET, Bangladesh
\({}^{3}\)Department of Biomedical Data Science, Geisel School of Medicine, Dartmouth College, USA
{omar.sharif.gr, sarah.masud.preum}@dartmouth.edu
This post mentions multiple key events, e.g., taking medication (_Gabapentin_) and experiencing psychophysical effects (_pain_ or _anxiety_). Such event analysis, based on a large number of samples, can reveal insights into different aspects of treatment at both the individual and collective levels (e.g., _how many people report a new or rare side effect during pain management_), the perceived value of treatment (e.g., _ineffective pain medication_), self-treatment strategies (e.g., _self-dosing Kratom_), knowledge gaps and concerns (e.g., _rare or new side effects of treatment_), and misconceptions (e.g., _self-dosing Kratom is safe for pain relief_). However, a significant gap still exists in performing event-driven analysis on health discourse in online communities, including social media.
To showcase the significance of event-driven analysis, we explore online discussions regarding recovery from opioid use disorder (OUD), a critical concern with substantial societal impact. OUD remains a leading cause of mortality in the US, incurring a massive economic toll estimated at 1.02 trillion dollars annually [16]. Existing challenges, including the stigma around addiction, limited healthcare access, and distrust of traditional systems, drive individuals towards seeking recovery support within social groups [17]. Medications for opioid use disorder (MOUD) offer a vital treatment avenue capable of saving lives [14]. Pseudo-anonymous platforms like Reddit, known for a wide US user base and an emphasis on anonymity, provide a unique lens into MOUD-based recovery insights. Reddit's reach and focus on sensitive topics, such as mental health and substance use disorder, position it as a significant source of rich content [14, 15, 16].
Event analysis on social media text confronts distinct hurdles. Defining and standardizing event types poses a challenge, especially considering the influence of domain-specific factors that determine event relevance; _clinical_ events differ from _career_ events, for instance. Moreover, equivalent events might be expressed in vastly dissimilar colloquial terms. Remarkably, there exists no dataset for delving into event analysis within online health discourse.
This paper introduces an event-based framework for online discourse analysis. We study Reddit posts on MOUD, a critical topic amid the opioid crisis. Collaborating with experts, we define information-seeking events, create annotation guidelines, and curate a unique labeled dataset. Using the event schema and information-seeking posts, we explore information quality systematically. Comprehensive insight into treatment needs relies on classifying the data into specific events; we frame the identification of core events in posts as a multi-label, multi-class classification challenge. We assess the dataset with advanced text classifiers, including large language models. Our major contributions are as follows.
* **Resource:** Based on guidance from domain experts, we propose a treatment information-seeking event (ISE) schema that can help in understanding the OUD treatment trajectory. Leveraging this schema, we develop _TREAT-ISE_, a multilabel dataset comprising human-annotated samples. The novel dataset, annotation guidelines, and associated code will accelerate further research in online health discourse analysis1. Footnote 1: All of these resources will be made publicly available upon the paper's acceptance.
* **Social:** We focus on a highly vulnerable population, i.e., individuals considering or undergoing OUD recovery, which has received little attention in previous work, and characterize their self-reported MOUD treatment information needs. The dataset and other outcomes can complement traditional electronic health records and survey data and capture the real-world complexity of recovery.
* **Benchmarking:** We investigate the performance of ten off-the-shelf machine learning and deep learning models for this task. Furthermore, we thoroughly assess the effectiveness of ChatGPT, thereby uncovering the potential scope of ChatGPT and state-of-the-art text classifiers for such complex, knowledge-intensive discourse analysis.
## Related Work
### Social Media and Substance Use Disorder
Social media platforms offer individuals opportunities to share different events of their lived experiences, such as addiction, logistical barriers, treatment strategies, the experience of psychophysical effects, and more [23]. Prior studies have utilized these data to understand the usage of different substances, including cannabis, alcohol, opioids, and others [13, 14]. Chancellor et al. [15] tried to uncover alternative treatment options for OUD by analyzing the opioid discourse on Reddit. Romano et al. [16] presented a framework for extracting keywords and applied it to extract insights about OUD recovery from Reddit. Another related study, by Balsamo et al. [17], investigated how much online social communities can support individuals undergoing opioid usage. Our work differs from existing studies in both task formulation and scope: we focus on events as the primary analytical lens, encompassing diverse categories to study OUD discourse.
### Event Analysis for Health-related Discourse
Event analysis includes two main subtasks: _event detection:_ identifying trigger terms and event type, and _argument extraction:_ extracting event arguments from texts and assigning roles to arguments based on event type [12]. Recent approaches have adopted a text-generation paradigm, leveraging large language models to prompt the extraction of event types, triggers, and arguments [13, 14, 15]. Few works attempted to detect events without extracting triggers [14]. However, these models often exhibit suboptimal performance when dealing with event analysis tasks heavily reliant on domain-specific knowledge [13]. The limited availability of training data and the complexity of domain-specific terms contribute to this issue.
Research on event analysis from social media discourse regarding health is limited. Naik, Bogart, and Rose [10] attempted to develop a disease progression timeline by analyzing patient-authored texts in social support groups. They explored the connections between medical events and users' engagement within these groups. In a similar work, Wen and Rose [10] investigated the behavioral trajectory of participants by analyzing cancer-related events in online medical discourse. In subsequent work, Wen et al. [17] created a temporal tagger to extract cancer-related event dates to explore treatment trajectories.
This work focuses on event detection rather than trigger or argument extraction since it is one of the first attempts to understand OUD discourse in the context of events. We curate a dataset of information-seeking events by actively involving domain experts in the process. Moreover, we analyze ChatGPT's performance and errors, presenting valuable insights into the model's capabilities and contributing to ongoing efforts to understand its characteristics.
## Defining Information-Seeking Events
Our overarching goal is to characterize information-seeking events (ISE) from online discourse. These events are self-reported by individuals considering or undergoing medications for OUD (MOUD) treatment. MOUD includes Buprenorphine (e.g., Suboxone, Subutex, Sublocade), Methadone, and Naltrexone [18]. We collaborate with five domain experts to define
the events that best characterize the treatment information-seeking events from different stages of the treatment journey. Our collaborators are well-versed and internationally acclaimed scholars in substance use disorder, spanning various areas such as epidemiology, public health policy, mental health, addiction psychiatry, addiction medicine, and biomedical data science. They review the collected samples and offer valuable insights into various facets of opioid recovery. Based on the guidance from the domain experts, we identify five coarse categories of events for treatment information needs. These ISE categories are: _Accessing MOUD, Taking MOUD, Experiencing Psychophysical Effects_, _Relapse or co-occurring substance usage_, and _Tapering MOUD_. All of these events are prevalent in recovery using MOUD.
* **Accessing MOUD (AM):** Information-seeking events related to accessing MOUD, such as concerns about insurance, pharmacy, providers, etc. Analyzing samples from this event can help to determine the common barriers people encounter during recovery using MOUD that affect treatment induction, adherence and retention.
* **Taking MOUD (TM):** Information-seeking events related to MOUD regimen details, e.g., questions about timing, dosage, frequency of taking a MOUD, concerns about splitting and missing a dose. This class can surface potential misconceptions and concerns about MOUD administration that negatively impact treatment adherence.
* **Experiencing Psychophysical Effects during Recovery (EP):** Information-seeking events related to concern about potential physical and/or psychological effects during recovery. This event class covers both experienced and anticipated psychophysical effects. It can surface rare and new adverse effects of MOUD as well as prevalent psychophysical effects of MOUD, their severity and potential impact of treatment adherence.
* **Relapse (RL) or co-occurring substance usage:** This class includes events that talk about relapsing or using other substances during recovery. Such substance use can be attributed to recreational purposes or for self-medication (e.g., marijuana for sleep). We follow NIDA's2 list of commonly used drugs to identify what counts as a substance. Samples of this event class can help unearth specific information individuals seek concerning recreational and medical usage of substances. Footnote 2: [https://tinyurl.com/4ckwz453](https://tinyurl.com/4ckwz453)
* **Tapering MOUD (TP):** Information-seeking events related to reducing the dose or frequency of MOUD and eventually quitting MOUD. Although the current standard of care recommends consulting healthcare providers for tapering MOUD, individuals often resort to self-managed tapering strategies. Analyzing events from this class can inform addiction researchers and clinicians about the context of self-managed tapering strategies (e.g., why and when people self-taper) and their effectiveness (what works for whom).
* **Others (Oth):** Information-seeking events related to other issues.
In this work, we focus on information-seeking events from posts on social media. It should be noted that we can also analyze relevant information-providing or sharing events (i.e., comments or replies to the original post) through this lens of events. This will help us measure the availability and quality of shared information in online discourse more systematically, e.g., self-management strategies for tapering, high-dose of MOUD suggested by peers or common misinformation regarding relapse during recovery using MOUD. Such systematic analysis can potentially uncover actionable insights to improve treatment adherence and outcomes.
## Dataset Collection and Annotation Strategies
### Data collection
We chose Reddit as the data source due to its anonymity policy and rich content on MOUD [10]. We selected r/Suboxone as our primary data source because (i) it has both the highest number of members and the highest number of peer interactions (e.g., posts, comments) among the subreddits specific to different MOUD options; and (ii) it is strictly moderated, with any irrelevant posts (e.g., drug-soliciting posts) removed by moderators. This subreddit therefore offers a unique chance to understand users' information needs related to a MOUD authentically. We scraped all the posts between January 2018 (as minimal interaction was observed in this subreddit prior to 2018) and August 2022 (study start time), collecting a total of 25,044 posts using the PRAW and PushShift APIs [1]. The collected data include titles, posts, comments, likes, upvotes, and unique post IDs, while strictly adhering to ethical considerations by not collecting or storing information that violates ethical concerns. After removing irrelevant posts (e.g., polls, link-only posts), we ended up with a corpus of 15,253 relevant posts, of which we annotated 5,083 randomly selected posts.
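For illustration, post metadata of this kind can be collected with PRAW as sketched below (credentials are placeholders; PRAW alone only exposes recent listings, which is why the historical range additionally required PushShift):

```python
import praw

# Credentials are placeholders; PRAW requires a registered Reddit application.
reddit = praw.Reddit(client_id="YOUR_ID", client_secret="YOUR_SECRET",
                     user_agent="moud-ise-research")

posts = []
for submission in reddit.subreddit("Suboxone").new(limit=1000):
    if submission.is_self and submission.selftext:   # keep self-text posts only
        posts.append({"id": submission.id,
                      "title": submission.title,
                      "text": submission.selftext,
                      "score": submission.score})
print(len(posts))
```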
### Data Annotation
**Feasibility of Crowd-sourced Annotation:** Annotating the type of treatment information-seeking event is a challenging task that demands significant effort. Initially, we employed the widely used approach of annotating through crowd-workers on Amazon Mechanical Turk [12]. We selected a pool of Master-qualified workers (mTurkers with high approval ratings), provided them with explicit annotation guidelines, and conducted a trial run on 300 samples. However, we encountered poor annotation quality and low inter-annotator agreement (only 40.5%). This is because the annotation task requires a good understanding of domain knowledge, and annotators need interactive, progressive training sessions to grasp the nuances of the different event types. Our trial run indicates that crowd-workers are unsuitable for such a challenging inference task. Therefore, we decided to perform in-house annotation with students and experts.
**Annotation process:** To complete the annotation, we form a diverse group of 9 annotators: 3 undergraduates and 6 graduate students. Initially, we provided them with background
knowledge on MOUD and Suboxone through multiple sessions led by experts. To achieve quality annotation, our primary focus was to confirm that the annotators understood what the ISEs for OUD recovery are and how a user can seek information about multiple event types in a single post (details are added in the appendix3). Each annotator was trained for four weeks through trial annotation tasks before starting the actual annotation, to ensure the annotators were well-versed in the ISE classes and to eliminate uncertainties about the annotation guidelines. Each sample was reviewed by at least two different annotators. Table 1 demonstrates sample posts with the associated labels.
Footnote 3: [https://tinyurl.com/yym7ywn](https://tinyurl.com/yym7ywn)
**Inter-annotator Agreement:** We compute the inter-annotator agreement in terms of Cohen's \(\kappa\)-score [12]. Table 2 shows the \(\kappa\)-score for each class: the AM class achieves the highest agreement score of 0.86, and the _Others_ class the lowest (0.68). The mean \(\kappa\)-score of 0.76 indicates substantial agreement between the annotators. The presence of domain-specific drug names and lengthy text samples with shorthand, slang, and misspellings posed challenges during annotation. Table 2 exhibits the initial agreement scores between two annotators. It is important to mention that a domain expert reviewed all the samples after labeling by two annotators to ensure data quality; the expert subsequently resolved any confusion or annotation disagreement and rectified the labels. This domain expert is a study team member disjoint from the set of recruited annotators, which complies with the recommended best practice for qualitative health data annotation/coding [23]. Thus we developed **TREAT-ISE**, a MOUD treatment information-seeking event dataset comprising 5,083 multilabel samples.
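For reference, the per-class agreement reported here corresponds to treating each event type as a binary decision and applying Cohen's \(\kappa\) to the two annotators' labels; a minimal sketch with made-up decisions:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical binary decisions of two annotators for one event class (e.g., AM)
annotator_1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
annotator_2 = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

print(f"Cohen's kappa: {cohen_kappa_score(annotator_1, annotator_2):.2f}")
```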
Table 3 presents the statistics and a lexical summary of TREAT-ISE. The dataset is imbalanced, with EP having the highest number of samples. Among the classes, EP stands out with the most words (\(\approx\)271k) and unique words (\(\approx\)13k), while the AM class has the lowest counts (\(\approx\)108k, \(\approx\)7.6k). TREAT-ISE stands apart from other domain-specific datasets by presenting a unique multilabel classification challenge with significantly longer average sample lengths (ranging from \(122\) to \(151\) words); the average sample length for similar multilabel classification tasks is less than 50 [22, 23]. In the ablation studies, we present a few insights into how large language models handle these domain-specific long texts.
## Methodology
We present a comprehensive benchmark evaluation of the TREAT-ISE dataset encompassing various methodologies, including off-the-shelf non-transformer models, transformer-based models, and large language models such as ChatGPT. These methods represent standard approaches to multilabel classification and provide a diverse range of technical implementations for thoroughness. The details of each method are described in the subsequent paragraphs.
* **Non-transformer models:** In the baseline evaluation, we explore the performance of two machine learning models: Logistic Regression (LR) [10] and Naive Bayes with Support Vector Machine (NBSVM) [24]. For the deep learning approaches, we investigate two variants: one utilizes pretrained FastText [13] embeddings with a feedforward network, and the other employs a Bidirectional Gated Recurrent Unit (BiGRU). In the BiGRU, embedding features are propagated to a GRU layer with 80 hidden units; the output from the last hidden layer is passed to global average pooling and max pooling layers, whose outputs are concatenated and passed on for classification (see the sketch after this list).
* **Transformer-based models:** In recent years, transformer-based [25] models have achieved state-of-the-art performance on various NLP tasks. We employ six transformer-based pre-trained models to benchmark the multilabel ISE classification task: Bidirectional Encoder Representations from Transformers (BERT) [4], a distilled variant of BERT (DistilBERT) [22], a robust BERT architecture trained with more data (RoBERTa) Liu et al. (2019), ELECTRA Clark et al. (2020), a model with generalized autoregressive pretraining (XLNet) Yang et al. (2020), and MPNet Song et al. (2020). All the models are sourced from the Huggingface library and subsequently fine-tuned on our dataset for 10 epochs with a learning rate of \(2e^{-5}\) and a batch size of 16. The intermediate model demonstrating the best validation set performance is saved for test set prediction.

| **Title** | **Post** | **Events** |
| --- | --- | --- |
| Looking for suboxone guidance? | I take 1-2mg subs per day which is a decrease from the original dose of 8mg. Just looking for a plan of action in which to stick with to eventually get off completely. | Taking MOUD (TM), Tapering (TP) |
| Which Kratom strain helps with Bupe withdrawal | When I run out of my Suboxone prematurely, I like to keep Kratom on hand for my extremely low energy and excessive yawning. | Relapse (RL), Psychophysical effects (EP) |

Table 1: Sample data excerpts with titles, posts, and labels (shortened and paraphrased as per IRB guidelines).

| **Class** | **#Samples** | **#Words** | **#Unique words** | **#Avg. words/sample** |
| --- | --- | --- | --- | --- |
| AM | 873 | 108k | 7665 | 124.16 |
| TM | 1637 | 199k | 10477 | 122.07 |
| TP | 1424 | 215k | 11087 | 151.40 |
| EP | 1837 | 271k | 13395 | 147.75 |
| RL | 1420 | 202k | 10776 | 142.33 |
| Oth | 473 | 48k | 6159 | 102.62 |

Table 3: Summary of the different classes of the TREAT-ISE dataset.
* **ChatGPT:** Several recent studies have demonstrated that large language models like ChatGPT can surpass humans in various classification and annotation tasks Strong et al. (2023); Gilardi, Alizadeh, and Kubli (2023). We therefore explore the scope of ChatGPT Ouyang et al. (2022) for classifying ISEs in our annotated dataset. To comprehensively assess its capabilities, we explore three distinct settings: zero-shot (ZS), few-shot (FS), and chain-of-thought (CoT) Wei et al. (2022) prompting. The chain-of-thought approach gives the model more reasoning capacity for this domain-specific task Min et al. (2022). We thoroughly explored various versions of prompts and refined those that showed encouraging outcomes, selecting the optimal prompt through an iterative process of trial and error guided by empirical observations of the model's output. Our approach involves two prompt templates: _'Short'_ and _'Long'_. The _'Short'_ template offers minimal details concerning the ISE classes, while the _'Long'_ variant provides the model with detailed definitions of the classes. We adopt both templates in the ZS and FS experiments; for the chain-of-thought approach, we use a modified version of the long prompt that includes reasoning for the examples. We set the temperature to 0.0 across all experiments to ensure deterministic model behavior.
The test set **excludes** all the samples used to identify the best prompts and the in-context examples used in the few-shot and chain-of-thought prompts. This ensures an unbiased evaluation and prevents the risk of data leakage. Due to space constraints, we could not share example prompts in the main paper; however, they are readily accessible through the appendix4.
Footnote 4: [https://tinyurl.com/ycyj3v4a](https://tinyurl.com/ycyj3v4a)
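To make the non-transformer baseline concrete, the BiGRU variant from the first bullet above can be sketched in PyTorch as follows; the vocabulary size, embedding dimension, and sequence length are placeholder assumptions, while the 80 hidden units and the concatenated average/max pooling follow the description:

```python
import torch
import torch.nn as nn

class BiGRUClassifier(nn.Module):
    """Embeddings -> BiGRU(80) -> concat(global avg pool, global max pool) -> sigmoid."""
    def __init__(self, vocab_size=30000, embed_dim=300, hidden=80, n_classes=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(4 * hidden, n_classes)   # avg + max over 2*hidden features

    def forward(self, tokens):
        h, _ = self.gru(self.embed(tokens))           # (B, T, 2*hidden)
        pooled = torch.cat([h.mean(dim=1), h.max(dim=1).values], dim=-1)
        return torch.sigmoid(self.out(pooled))        # independent per-event probabilities

model = BiGRUClassifier()
probs = model(torch.randint(0, 30000, (4, 256)))      # 4 posts, 256 tokens each
print(probs.shape)                                    # torch.Size([4, 6])
```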
## TREAT-ISE: Benchmark Evaluation
In this section, we outline the experimental and evaluation settings and present the experimental results. Furthermore, we perform comprehensive ablation studies to understand the performance of large language models (i.e., ChatGPT) on a domain-specific, challenging text classification task.
**Experimental and Evaluation Setup:** All experiments were conducted on a GPU-accelerated Google Colab platform. The machine learning and deep learning models were trained with ktrain Maiya (2020), while all the transformer models were implemented from Huggingface. Finally, we investigate the performance of the ChatGPT model via API calls (version _gpt-3.5-turbo-0613_).
TREAT-ISE is partitioned into three mutually exclusive sets: train (80%), validation (10%), and test (10%). We leverage various statistical measures (precision (P), recall (R), and F1-score) to assess and understand the models' performance across the different classes. The validation set is utilized to tune the model hyperparameters across the various experiments. The weighted F1-score (WF1) on the test set is used to compare the models and determine their relative superiority.
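For reproducibility, a zero-shot request of the kind evaluated here can be issued as sketched below (using the pre-1.0 `openai` Python package interface; the prompt text is a shortened illustration, not the exact 'Short' template from the appendix):

```python
import openai  # pre-1.0 interface; openai.api_key must be set

PROMPT = (
    "Classify the following Reddit post into one or more of these "
    "information-seeking event classes: Accessing MOUD, Taking MOUD, "
    "Psychophysical Effects, Relapse, Tapering MOUD, Others. "
    "Return only the class names.\n\nPost: {post}"
)

def classify_post(post: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=[{"role": "user", "content": PROMPT.format(post=post)}],
        temperature=0.0,  # deterministic behavior, as in our experiments
    )
    return response["choices"][0]["message"]["content"]

print(classify_post("I take 1-2mg subs per day... how do I eventually get off completely?"))
```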
| **Model** | **AM (P/R/F1)** | **TM (P/R/F1)** | **TP (P/R/F1)** | **EP (P/R/F1)** | **RL (P/R/F1)** | **WF1** |
| --- | --- | --- | --- | --- | --- | --- |
| _Non-transformer baselines_ | | | | | | |
| LR | 0.63/0.68/0.65 | 0.61/0.64/0.63 | 0.74/0.64/0.68 | 0.49/0.66/0.56 | 0.71/0.57/0.63 | 0.593 |
| NBSVM | 0.73/0.57/0.64 | 0.58/0.73/0.65 | 0.72/0.78/0.75 | 0.44/0.75/0.56 | 0.61/0.62/0.62 | 0.602 |
| FastText | 0.64/0.73/0.68 | 0.61/0.70/0.65 | 0.69/0.84/0.75 | 0.50/0.78/0.61 | 0.63/0.64/0.63 | 0.624 |
| BiGRU | 0.72/0.69/0.70 | 0.73/0.68/0.70 | 0.80/0.84/0.82 | 0.61/0.61/0.61 | 0.80/0.72/0.76 | 0.702 |
| _Transformer baselines_ | | | | | | |
| BERT | 0.85/0.74/0.79 | 0.83/0.64/0.72 | 0.84/0.84/0.84 | 0.69/0.59/0.64 | 0.88/0.76/0.81 | 0.733 |
| RoBERTa | 0.82/0.82/0.82 | 0.75/0.80/0.77 | 0.80/0.89/0.84 | 0.63/0.74/0.68 | 0.88/0.75/0.81 | 0.757 |
| DistilBERT | 0.80/0.69/0.74 | 0.81/0.62/0.70 | 0.84/0.78/0.81 | 0.68/0.57/0.62 | 0.84/0.76/0.80 | 0.711 |
| ELECTRA | 0.77/0.79/0.78 | 0.80/0.67/0.73 | 0.87/0.84/0.85 | 0.65/0.64/0.65 | 0.80/0.88/0.84 | 0.748 |
| XLNet | 0.84/0.82/0.83 | 0.79/0.72/0.75 | 0.85/0.84/0.85 | 0.59/0.78/0.67 | 0.88/0.85/0.86 | **0.774** |
| MPNet | 0.79/0.81/0.80 | 0.80/0.71/0.75 | 0.81/0.85/0.83 | 0.68/0.66/0.67 | 0.78/0.82/0.80 | 0.751 |
| _ChatGPT baselines_ | | | | | | |
| ChatGPT (ZS-S) | 1.0/0.26/0.41 | 0.74/0.30/0.43 | 0.67/0.42/0.52 | 0.62/0.10/0.18 | 0.62/0.81/0.70 | 0.433 |
| ChatGPT (ZS-L) | 0.78/0.61/0.69 | 0.68/0.63/0.65 | 0.69/0.67/0.68 | 0.70/0.29/0.41 | 0.77/0.53/0.63 | 0.581 |
| ChatGPT (FS-S) | 0.48/0.79/0.60 | 0.47/0.92/0.62 | 0.45/0.96/0.61 | 0.44/0.83/0.57 | 0.65/0.69/0.67 | 0.609 |
| ChatGPT (FS-L) | 0.51/0.78/0.62 | 0.52/0.87/0.65 | 0.50/0.86/0.63 | 0.40/0.92/0.56 | 0.66/0.72/0.69 | 0.620 |
| ChatGPT (CoT) | 0.62/0.78/0.69 | 0.49/0.87/0.62 | 0.55/0.89/0.68 | 0.49/0.76/0.60 | 0.74/0.56/0.64 | 0.631 |

Table 4: Classwise performance for treatment information-seeking event detection, reported as precision/recall/F1 per class. WF1 indicates the weighted F1 score based on all six classes. The shorthand indicates ZS-S, ZS-L: zero-shot (short, long); FS-S, FS-L: few-shot (short, long); and CoT: chain-of-thought prompting. Due to space constraints, the models' performance on the _Others_ class is not included.
## Results
Table 4 presents the classwise performance of all the models on the test set of the TREAT-ISE dataset. BiGRU achieved the highest WF1 of 0.702 among the non-transformer baselines. XLNet outperformed all other models with a maximal WF1 score of 0.774. It excelled particularly well in the AM, TP, and RL classes, with scores of 0.83, 0.85, and 0.86, respectively. RoBERTa attained the highest WF1 of 0.757 among the BERT variants. All models encountered challenges in identifying samples from the _taking medication_ (TM) and _experiencing psychophysical effects_ (EP) events. Surprisingly, all the ChatGPT variants underperformed compared to the other baselines. We conducted a detailed ablation study to gain more insight into this. The CoT prompt acquired the highest WF1 of 0.631 and outperformed all other prompting techniques. In contrast to the other baselines, where there is a balance between classwise precision and recall, all the ChatGPT variants (except ZS-S) showed much higher recall than precision. This indicates that ChatGPT tends to overpredict the classes. Overall, the results indicate that identifying treatment information-seeking events is difficult, requires domain knowledge, and has significant room for improvement.
**Statistical significance:** We conduct statistical significance testing using the McNemar (1947) test to see if the best-performing model (i.e., XLNet) outperforms other models in a statistically meaningful way. Since the results from each classifier are nominal data (i.e., classes of events) and it is difficult to train multiple copies of the models, this test is a suitable approach. We conducted a pair-wise comparison between XLNet and all the other models for each event class. The classwise _P-value_ indicates XLNet is significantly better than all other models in three of the five classes, namely, TM (P\(<\)\(0.001\)), EP (P=\(0.008\)), and TP (P=\(0.003\)). The performance of XLNet is not statistically significant for the remaining two classes. Specifically for AM (P=\(0.657\)) and RL (P=\(0.446\)), XLNet's performance is comparable to RoBERTa and FastText, respectively.
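Below is a minimal sketch of one such pairwise comparison, assuming boolean arrays that mark whether each model's prediction for a given event class is correct on each test sample; the helper name is ours.

```python
# Pairwise McNemar test between two classifiers on a single event class.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

def mcnemar_pvalue(correct_a: np.ndarray, correct_b: np.ndarray) -> float:
    # 2x2 contingency table counting (model A correct?, model B correct?).
    table = [
        [np.sum(correct_a & correct_b), np.sum(correct_a & ~correct_b)],
        [np.sum(~correct_a & correct_b), np.sum(~correct_a & ~correct_b)],
    ]
    return mcnemar(table, exact=True).pvalue
```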
### Ablation Studies with ChatGPT and XLNet
Although recent studies illustrate that ChatGPT can outperform humans in knowledge-intensive tasks [1], the results (Table 4) demonstrate that ChatGPT exhibits suboptimal performance in classifying treatment information-seeking events. This is particularly noticeable for samples that require significant domain knowledge to distinguish between events. This motivates us to conduct a deeper investigation into the scope of ChatGPT for such event analysis. We also aim to uncover whether the errors of the transformer models are echoed in ChatGPT or are distinct. So, in this section, we perform a thorough side-by-side analysis between the best-performing model in our task, XLNet, and the top-performing prompt setting of the ChatGPT model (i.e., chain-of-thought). The findings are as follows.
* **ChatGPT tends to overpredict more:** After qualitative and quantitative analysis, we found that ChatGPT often fails to understand the context holistically and overpredicts. Consider the following example, which is about _relapse_ (RL) and _tapering_ (TP). Although ChatGPT predicted these labels correctly because of the mention of side effects and dosage information, it erroneously added the TM and EP labels: _"...I started by quitting kratom completely and taking 2mg of suboxone, I experienced no **withdrawals** during the switch but also no high. Today I'm down to **1.5mg of suboxone**, and I'm so happy! Planning to go down to 1.25mg pretty soon too."_
Figure 1 illustrates the confusion matrices for the ChatGPT chain-of-thought approach. Table 5 presents the classwise overprediction ratio (#false positive / #predicted positives) for both ChatGPT and XLNet. Surprisingly, the average overprediction ratio for ChatGPT is 45%. That means almost half of the time, it incorrectly predicts that samples contain information-seeking events. ChatGPT exhibits higher error in the TM and EP classes, with 166 (out of 323) and 135 (out of 267) mispredictions, respectively. In contrast, XLNet exhibits a drastically lower overprediction ratio for all categories except in the EP class.
* **ChatGPT struggles more on long samples:** For analysis, we compute the frequency of correct and wrong predictions over different length ranges for both ChatGPT (CoT) and XLNet. Figure 2 illustrates the correlation, indicating that the frequency of accurate prediction is higher among the shorter samples and decreases as the samples' length increases. On average, the samples where ChatGPT made errors had a length of 128.04, whereas, for XLNet, this value is 140.21.
Figure 1: Confusion matrices of each category for ChatGPT with the chain-of-thought (CoT) approach. The _Rest_ class indicates predictions on all other classes.
\begin{table}
\begin{tabular}{c|c c c c c c} & **AM** & **TM** & **TP** & **EP** & **RL** & **Oth** \\ \hline \multirow{2}{*}{CG} & 36/96 & 166/323 & 103/227 & 135/267 & 32/122 & 44/74 \\ & 0.375 & 0.513 & 0.453 & 0.505 & **0.262** & 0.594 \\ \hline \multirow{2}{*}{XL} & 12/75 & 35/165 & 21/139 & 92/227 & 19/155 & 9/33 \\ & 0.160 & 0.212 & 0.151 & 0.405 & **0.122** & 0.27 \\ \hline \end{tabular}
\end{table}
Table 5: Classwise overprediction ratio (#false positive / #predicted positives) of ChatGPT (CG) with CoT prompts and the XLNet (XL) model.
This analysis suggests both models encounter difficulties in understanding information-seeking events with long-range context. However, XLNet shows slightly more robustness than ChatGPT.
* **ChatGPT misclassifies events more:** To obtain the confusion mapping, we calculate the frequency of incorrect predictions for each event in relation to other events as presented in Table 6. Analyzing the results, it becomes evident that ChatGPT faces difficulty in distinguishing advice events associated with TM, EP, and RL classes, often misclassifying them as _other_ class. The model made the highest (92) number of errors on the RL event class and, most of the time considered it as either the TM or EP event class. Interestingly, XLNet often misclassified TM as EP (10) class.
The results suggest that ChatGPT is biased toward predicting TM and EP classes. After qualitative observation, we notice that ChatGPT often mislabels samples as TM when dosage information is provided _(e.g., 2mg kratom, 1.5 mg bupe)_, even though these instances do not seek treatment information. Similarly, the model frequently mislabels posts mentioning psychophysical effects (e.g., withdrawals, sleep) as EP, despite these not being information-seeking events. Surprisingly, on 54 occasions, the model identified posts that were seeking treatment information but failed to predict appropriate event classes and mislabeled them as the _other_ class. This mislabeling can be attributed to the model's poor understanding of the domain-specific nuances.
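Both ablation statistics can be computed directly from multilabel predictions. The sketch below follows our reading of Tables 5 and 6; the array layout and the exact counting convention for the confusion mapping are assumptions.

```python
# Overprediction ratio (#false positives / #predicted positives, cf. Table 5)
# and a simple confusion count between event classes (cf. Table 6).
# y_true, y_pred: binary indicator arrays of shape (n_samples, n_classes).
import numpy as np

def overprediction_ratio(y_true: np.ndarray, y_pred: np.ndarray) -> np.ndarray:
    predicted_pos = y_pred.sum(axis=0)
    false_pos = ((y_pred == 1) & (y_true == 0)).sum(axis=0)
    return false_pos / np.maximum(predicted_pos, 1)   # avoid division by zero

def confusion_count(y_true, y_pred, missed: int, predicted: int) -> int:
    # How often a sample whose gold label `missed` was not predicted ended up
    # carrying the label `predicted` instead.
    miss_mask = (y_true[:, missed] == 1) & (y_pred[:, missed] == 0)
    return int(np.sum(miss_mask & (y_pred[:, predicted] == 1)))
```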
## Ethical Considerations
This research was approved by the Institutional Review Board (IRB) of the author's institution.
**User Privacy:** All the data samples were collected and annotated in a manner consistent with the terms and conditions of the respective data source. We do not collect or share any personal information (e.g., age, location, gender, identity) that violates the user's privacy.
**Biases:** Any biases found in the dataset and model are unintentional. Experts and a set of diverse groups of annotators labeled the data following a comprehensive annotation guideline and all annotations were reviewed to address any potential annotation biases. Our data collection exclusively focused on one subreddit (r/suboxone), possibly leading to a bias towards the r/suboxone community.
**Intended Use:** We intend to make our dataset accessible per Reddit policies to encourage further research on online health discourse as well as research on MOUD.
## Conclusion and Future Work
In this paper, we address a critical social concern by investigating the information needs of individuals who are considering or undergoing recovery from opioid use disorder. Under the guidance of experts, we develop a multilabel, multiclass dataset (_TREAT-ISE_) aiming to characterize OUD treatment information-seeking events. This dataset introduces a new resource to the field, enabling us to study MOUD treatment for recovery through the lens of _events_. The event schema we defined can be valuable for surfacing clinical insights such as knowledge gaps about treatment, tapering strategies, potential misconceptions, and beyond. Moreover, our data collection process, event-centric schema design, and data annotation strategy can be replicated to develop similar resources for other domains. Finally, we benchmark the dataset with a wide range of NLP models and demonstrate the potential challenges of the task with thorough ablation studies.
There are several avenues for potential improvement. Due to costly and time-consuming annotation, we had to limit the dataset size to 5083 samples. We will explore the possibility of minimal supervision to augment the dataset size by leveraging our annotation protocol and additional available data (over 10K samples). Other research can explore how treatment information-seeking events vary in other online communities and subreddits. In addition, investigating how other large models (e.g., LLaMA) perform on this task can provide us with valuable insights.
\begin{table}
\begin{tabular}{c c|c c c c c|c|c} & & **AM** & **TM** & **TP** & **EP** & **RL** & **Oth** & Total \\ \hline \multirow{2}{*}{AM} & CG & - & **7** & 2 & 3 & 2 & 5 & 19 \\ & XL & - & 1 & 1 & 3 & 2 & 3 & 10 \\ \hline \multirow{2}{*}{TM} & CG & 1 & - & 2 & 2 & 2 & **10** & 17 \\ & XL & 0 & - & 6 & 10 & 5 & 3 & 24 \\ \hline \multirow{2}{*}{TP} & CG & 2 & 1 & - & 2 & 1 & **8** & 14 \\ & XL & 0 & 5 & - & 6 & 1 & 0 & 12 \\ \hline \multirow{2}{*}{EP} & CG & 1 & 6 & 5 & - & 3 & **20** & 35 \\ & XL & 1 & 1 & 1 & - & 2 & 0 & 5 \\ \hline \multirow{2}{*}{RL} & CG & 6 & 29 & 15 & **31** & - & 11 & 92 \\ & XL & 0 & 5 & 1 & 6 & - & 1 & 13 \\ \hline \multirow{2}{*}{Oth} & CG & 4 & **10** & 1 & 4 & 2 & - & 21 \\ & XL & 7 & 3 & 1 & 1 & 0 & - & 12 \\ \hline \end{tabular}
\end{table}
Table 6: Confusion mapping of ChatGPT (CG) with the chain-of-thought approach and the XLNet (XL) model. Each cell indicates how many times an event (row) is confused with another event (column).
Figure 2: Correlation between sample length and frequency of correct/wrong predictions: as the length of samples (measured in words) increases, the frequencies of accurate predictions decrease for both models. |
2307.16195 | Implementation of Fast and Power Efficient SEC-DAEC and SEC-DAEC-TAEC
Codecs on FPGA | The reliability of memory devices is affected by radiation induced soft
errors. Multiple cell upsets (MCUs) caused by radiation corrupt data stored in
multiple cells within memories. Error correction codes (ECCs) are typically
used to mitigate the effects of MCUs. Single error correction-double error
detection (SEC-DED) codes are not the right choice against MCUs, but are more
suitable for protecting memory against single cell upset (SCU). Single error
correction-double adjacent error correction (SEC-DAEC) and single error
correction-double adjacent error correction-triple adjacent error correction
(SEC-DAEC-TAEC) codes are more suitable due to the increasing tendency of
adjacent errors. This paper presents the implementation of fast and low power
multi-bit adjacent error correction codes for protecting memories. Related
SEC-DAEC and SEC-DAEC-TAEC codecs with data length of 16-bit, 32-bit and 64-bit
have been implemented. It is found from FPGA based implementation results that
the modified designs have comparable area and have less delay and power
consumption. | Sayan Tripathi, Jhilam Jana, Jaydeb Bhaumik | 2023-07-30T10:22:54Z | http://arxiv.org/abs/2307.16195v1 | # Implementation of Fast and Power Efficient SEC-DAEC and SEC-DAEC-TAEC Codecs on FPGA
###### Abstract
The reliability of memory devices is affected by radiation-induced soft errors. Multiple cell upsets (MCUs) caused by radiation corrupt data stored in multiple cells within memories. Error correction codes (ECCs) are typically used to mitigate the effects of MCUs. Single error correction-double error detection (SEC-DED) codes are not the right choice against MCUs, but are more suitable for protecting memory against single cell upset (SCU). Single error correction-double adjacent error correction (SEC-DAEC) and single error correction-double adjacent error correction-triple adjacent error correction (SEC-DAEC-TAEC) codes are more suitable due to the increasing tendency of adjacent errors. This paper presents the implementation of fast and low power multi-bit adjacent error correction codes for protecting memories. Related SEC-DAEC and SEC-DAEC-TAEC codecs with data length of 16-bit, 32-bit and 64-bit have been implemented. It is found from FPGA based implementation results that the modified designs have comparable area and have less delay and power consumption.
Keywords:Soft errors, Memories, SEC-DAEC, SEC-DAEC-TAEC, FPGA
## 1 Introduction
In modern high-speed computing applications, static random access memory (SRAM) plays a very important role as a storage subsystem. For electronic systems to operate properly, SRAM reliability is a key consideration. The main issue is the soft errors brought on by radiation, which have an impact on the SRAMs' reliability [1]. One memory cell is corrupted by a soft error in a single cell upset (SCU). However, multiple cell upsets (MCUs) have become a prevalent event with the downscaling of technology nodes [2]. ECCs are mostly used in memories as a defence against these soft errors. In the past, SEC codes were mainly useful for protecting SRAMs from SCUs [3]-[4]. To protect memories against MCUs, interleaved SEC-DED codes have been used, but they are more complex. Recently, DAEC [5]-[12], TAEC [13]-[15] and burst error correction (BEC) codes [16]-[18] have been introduced.
Neale et al. have presented a technique for designing SEC-DED-DAEC codes that has the additional capability of scaling adjacent error detection (xAED) [6]-[7]. Further modifications were made to these codes in order to implement the TAEC feature [13]. A method for double adjacent ECCs that has zero miscorrection for memories was proposed by Dutta et al. [8]. Reviriego et al. have presented area and delay optimisation strategies for SEC-DED-DAEC codes [9]. Neale et al. [13] and Adalid et al. [14] developed the triple adjacent error correcting codes. But these codes need complicated decoding circuitry and have higher delay, which makes their design more gate-intensive. Tripathi et al. have implemented an efficient multi-bit adjacent ECC for memory [11]. Also, Moran et al. presented flexible unequal error correction codes [15]. Li et al. proposed 3-bit BEC codes which have lower encoder and decoder delay [16].
In this paper, we have introduced modified SEC-DAEC and SEC-DAEC-TAEC codes with 16, 32 and 64 information bits and a 0% miscorrection rate, in order to increase the reliability of storage systems against soft errors and to make the system delay- and power-efficient. On the FPGA platform, the performance of the modified and existing codes has been analysed in terms of area, delay and power.
The remaining part of our paper is structured as follows. Section 2 provides basics of SEC-DAEC and SEC-DAEC-TAEC codes. Section 3 describes the corrections on existing SEC-DAEC and SEC-DAEC-TAEC codes. Section 4 presents implementation results and finally concluding remarks are made in Section 5.
## 2 Basics of SEC-DAEC and SEC-DAEC-TAEC Codes
In this section, a basic overview of \(H\)-matrix construction for SEC-DAEC and SEC-DAEC-TAEC codes is presented. The \(H\)-matrices for SEC-DAEC and SEC-DAEC-TAEC codes are constructed using the following design constraints: (i) every column must be non-zero and have a unique value, (ii) the XOR sum of any two adjacent columns must be unique and non-zero, and must not be equal to any individual column, (iii) the XOR sum of any three adjacent columns must be non-zero and unequal to any individual column. The first condition confirms the SEC capability. The DAEC property is satisfied by conditions (i) and (ii). The TAEC property is confirmed by constraints (i), (ii) and (iii).
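These constraints can be verified mechanically. The sketch below checks a candidate \(H\)-matrix in the strong form that every correctable pattern (single, double-adjacent, triple-adjacent) maps to a distinct non-zero syndrome, which subsumes conditions (i)-(iii); the matrix passed in is a placeholder for the actual \(H\)-matrix of Fig. 1.

```python
# Check the SEC-DAEC-TAEC design constraints on a parity-check matrix H,
# given as a 0/1 numpy array with one column per codeword bit.
import numpy as np

def check_sec_daec_taec(H: np.ndarray) -> bool:
    n = H.shape[1]
    syndromes = []
    for width in (1, 2, 3):          # single, double-adjacent, triple-adjacent
        for a in range(n - width + 1):
            s = H[:, a:a + width].sum(axis=1) % 2   # XOR of adjacent columns
            syndromes.append(tuple(s))
    zero = (0,) * H.shape[0]
    # All correctable patterns must yield distinct, non-zero syndromes.
    return zero not in syndromes and len(set(syndromes)) == len(syndromes)
```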
## 3 Corrections on Existing SEC-DAEC and SEC-DAEC-TAEC Codes [12]
In this section, some corrections on existing SEC-DAEC and SEC-DAEC-TAEC codes [12] are presented. For the sake of completeness, encoding and decoding processes of (23 16) SEC-DAEC-TAEC code [12] is described here. The (23, 16) \(H\)-matrix [12] is shown in Fig. 1. Parity bits are calculated from information bits during encoding, and the associated codeword is stored in the memory. Parity
equations associated with (23, 16) \(H\)-matrix are provided in equation (1).
\[\begin{split} p_{b1}&=i_{b3}\oplus i_{b10}\oplus i_{b13} \oplus i_{b14}\oplus p_{b3}\\ p_{b2}&=i_{b5}\oplus i_{b8}\oplus i_{b11}\oplus i_{b16}\oplus p_{b6}\\ p_{b3}&=i_{b4}\oplus i_{b6}\oplus i_{b8}\oplus i_{b10}\oplus i_{b11} \oplus i_{b14}\oplus i_{b15}\oplus i_{b16}\\ p_{b4}&=i_{b11}\oplus i_{b2}\oplus i_{b3}\oplus i_{b5}\oplus i_{b12} \oplus i_{b14}\oplus i_{b15}\oplus i_{b16}\\ p_{b5}&=i_{b1}\oplus i_{b4}\oplus i_{b7}\oplus i_{b15}\oplus p_{b4}\\ p_{b6}&=i_{b1}\oplus i_{b2}\oplus i_{b7}\oplus i_{b9}\oplus i_{b10} \oplus i_{b11}\oplus i_{b13}\\ p_{b7}&=i_{b2}\oplus i_{b6}\oplus i_{b9}\oplus i_{b12}\end{split} \tag{1}\]
During the decoding process, the syndrome is generated by multiplying the received codeword with the transpose of the \(H\)-matrix. Syndrome values are calculated to locate the error in the stored codeword. In the encoding process of Section 3 of [12], for the 16-bit information word \(i_{b}\)=1111111111111111, the encoder computes the seven parity bits 0100010. Therefore, the generated codeword will be \(\mathbf{011111101110111010111}\), which is obtained by appending the parity bits to the information bits, and is stored in the memory. To demonstrate the correction capability of the code, errors are injected into three adjacent bits \(i_{b2}\), \(i_{b3}\) and \(i_{b4}\) of the generated codeword. After injection of the errors, the received codeword will be \(\mathbf{0110001101111011010111}\). The errors in the received codeword are located using the syndrome bits (1011101). Finally, the detected error is corrected using the error correction logic.
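The decoding just described amounts to computing the syndrome \(s=Hr^{T}\pmod{2}\) and looking up the corresponding error pattern. A minimal sketch follows; `H` and `received` are placeholders for the (23, 16) code of Fig. 1, and uncorrectable patterns (a failed lookup here) are left unhandled.

```python
# Syndrome decoding for single and adjacent double/triple errors.
import numpy as np

def build_syndrome_table(H: np.ndarray) -> dict:
    n = H.shape[1]
    table = {}
    for width in (1, 2, 3):          # adjacent error spans to be corrected
        for a in range(n - width + 1):
            err = np.zeros(n, dtype=int)
            err[a:a + width] = 1
            table[tuple(H @ err % 2)] = err
    return table

def decode(H: np.ndarray, received: np.ndarray) -> np.ndarray:
    s = tuple(H @ received % 2)
    if not any(s):
        return received              # zero syndrome: codeword is error-free
    return (received + build_syndrome_table(H)[s]) % 2
```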
In the article [12], there are some mistakes in Fig. 4, such as the value of the codeword stored in the memory cells, the injected errors, the syndrome bits, the codeword read from memory, and the codeword after error correction. The rectified figure has been depicted here as Fig. 2. Also, Fig. 5 in [12] has been corrected and presented in Fig. 3.
## 4 FPGA-Based Implementation Results
In this section, the FPGA-based implementation of (23, 16), (40, 32) and (74, 64) SEC-DAEC and SEC-DAEC-TAEC codecs is presented.
Figure 1: \(H\)-matrix of SEC-DAEC and SEC-DAEC-TAEC (23, 16) code
Figure 2: Corrected version of the encoding and decoding example for the (23, 16) SEC-DAEC-TAEC code, showing the data input, the parity equations and parity bits, the codeword stored in the memory cells, the injected errors, the syndrome bits, the codeword read from memory, and the codeword after error correction.
Several SEC-DAEC and SEC-DAEC-TAEC codes have been described in Verilog and implemented on the FPGA platform. The modified and existing codes are implemented using the Zynq UltraScale+ MPSoC (ZCU104) FPGA evaluation kit. The performance of all designs is observed with respect to look-up tables (LUTs), delay and power for the FPGA implementation, as presented in Table 1. The performance of the modified SEC-DAEC and SEC-DAEC-TAEC codecs has been obtained by using common subexpression sharing. The highest improvements in area (LUTs) of 47.03% and 36.54% are achieved for the modified implementations of the SEC-DAEC code and the SEC-DAEC-TAEC code, respectively. Also, the highest delay improvement of 17.95% for the implemented SEC-DAEC and SEC-DAEC-TAEC codes is obtained against Moran et al. [15]. The highest improvements in power of 21.62% and 22.30% are achieved in the FPGA-based implementation of the SEC-DAEC code and the SEC-DAEC-TAEC code against Moran et al. [15] and Neale et al. [13], respectively.
Figure 3: Gate level design of modified (23, 16) SEC-DAEC-TAEC codec
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
Scheme & Data Bits & Codec & Area (LUTs) & Delay (ns) & Power (W) & Impro. in Area (\%) & Impro. in Delay (\%) & Impro. in Power (\%) \\ \hline
DAEC & 16 & Neale et al. & 68 & 3.24 & 2.89 & 20.59 & 3.70 & 19.72 \\
 & & Reviriego et al. [9] & 65 & 3.18 & 2.72 & 16.92 & 1.89 & 14.71 \\
 & & Moran et al. [15] & 72 & 3.49 & 2.96 & 25.00 & **10.60** & **21.62** \\
 & & Dutta et al. [8] & 66 & 3.22 & 2.86 & 18.18 & 3.11 & 18.88 \\
 & & Tripathi et al. [11] & 64 & 3.16 & 2.34 & 15.63 & 1.27 & 0.85 \\
 & & Modified & 54 & 3.12 & 2.32 & - & - & - \\ \hline
DAEC & 32 & Reviriego et al. [9] & 114 & 4.06 & 3.83 & 14.04 & 1.48 & 4.96 \\
 & & Dutta et al. [8] & 139 & 4.31 & 4.07 & 29.50 & 7.19 & 10.57 \\
 & & Tripathi et al. [11] & 120 & 4.03 & 3.67 & 18.33 & 0.74 & 0.82 \\
 & & Modified & 98 & 4.00 & 3.64 & - & - & - \\ \hline
\end{tabular}
\end{table}
Table 1: FPGA-based implementation results of the SEC-DAEC and SEC-DAEC-TAEC codecs in terms of area (LUTs), delay and power, along with the improvements achieved by the modified designs.
Table 2 represents the performance of the modified and existing ECCs in terms of the LUTs-delay product (LDP), power-LUTs product (PLP) and power-delay product (PDP) on the FPGA platform. The highest improvements in terms of LDP and PLP are obtained compared to the Neale et al. scheme, and the highest improvement in terms of PDP is achieved against the Moran et al. scheme.
## 5 Conclusion
In this article, modified FPGA-based implementations of fast and power-efficient SEC-DAEC and SEC-DAEC-TAEC codes with 0% miscorrection are presented. On the FPGA platform, the modified and other related codes have been implemented. The highest improvements of 47.03% in area, 17.95% in delay and 22.30% in power have been achieved in the FPGA implementation. The results show that the modified implementations have lower delay and are more power-efficient than existing designs. Consequently, our implemented SEC-DAEC and SEC-DAEC-TAEC codes can be used in memory subsystems.
|
2305.09814 | Mechanization of scalar field theory in (1+1)-dimensions: BPS mech-kinks
and their scattering | We present an updated version of a general-purpose collective coordinate
model that aims to fully map out the dynamics of a single scalar field in
(1+1)-dimensions. This is achieved by a procedure that we call a
`mechanization': we reduce the infinite number of degrees of freedom down to a
finite and controllable number by chopping the field into flat segments
connected via joints. In this paper, we introduce two new ingredients to our
procedure. The first is a manifestly BPS mechanization in which BPS mech-kinks
saturate the same bound on energy as their field-theoretical progenitors. The
second is allowing the joints to `switch', leading to an extended concept of
the effective Lagrangian, through which we describe direct collisions of
mech-kinks and anti-kinks. | Filip Blaschke, Ondřej Nicolas Karpíšek, Lukáš Rafaj | 2023-05-16T21:38:06Z | http://arxiv.org/abs/2305.09814v1 | # Mechanization of scalar field theory in (1+1)-dimensions: BPS mech-kinks and their scattering
###### Abstract
We present an updated version of a general-purpose collective coordinate model that aims to fully map out the dynamics of a single scalar field in (1+1)-dimensions. This is achieved by a procedure that we call a 'mechanization': we reduce the infinite number of degrees of freedom down to a finite and controllable number by chopping the field into flat segments connected via joints. In this paper, we introduce two new ingredients to our procedure. The first is a manifestly BPS mechanization in which BPS mech-kinks saturate the same bound on energy as their field-theoretical progenitors. The second is allowing the joints to 'switch', leading to an extended concept of the effective Lagrangian, through which we describe direct collisions of mech-kinks and anti-kinks.
BPS, kinks, collective coordinate model, mechanization
## I Introduction
Field theories in (1+1)-dimensions with disconnected vacua support topological solitons - kinks - that are stable, particle-like objects. Kinks (and their higher-dimensional relatives) are relevant in many areas of contemporary physics, including cosmology, condensed matter and particle physics [1; 2; 3].
The collisions of solitons have become a major avenue for theoretical exploration of the inner workings of nonlinear field dynamics. Indeed, during collisions, the nonlinearity is 'switched on' only intermittently and with an intensity that can be tuned, among other parameters, by the initial velocities of the impactors. The holy grail of soliton dynamics would be the ability to predict - given the initial state of solitons and the model at hand - the outcome of any collision.
Although kink-anti-kink (\(K\bar{K}\)) scattering has been studied since the late 1970s [4; 5; 6; 7; 8], the true quantitative understanding of its main characteristics has been achieved only recently [9; 10; 11; 12; 13; 14; 15; 16; 17] (see also references in [18]).
A hallmark feature of \(K\bar{K}\) collisions is the bouncing phenomenon. It has long been understood as a resonant transfer of kinetic energy to and from colliding solitons into localized modes of the field. In the case of the \(\phi^{4}\) kink, they are the shape modes residing on the kinks themselves [10], while for the \(\phi^{6}\) model, a delocalized mode emerges in between the \(\bar{K}K\) pair [13].
In Fig. 1, we showcase the evolution of the central field value \(\phi(x=0,t)\) as a function of time and initial velocity of the \(K\bar{K}\) configuration in the \(\phi^{4}\) model. This picture demonstrates the intricate dependence of the collision's outcome on the initial velocity. More precisely, we see that the bouncing happens only in certain windows that occur below the critical velocity \(v_{\rm crit}\approx 0.26\) and above \(v_{\rm min}\approx 0.18\). In between the bouncing windows there are the so-called 'bion chimneys' where the \(K\bar{K}\) pair form a long-living, quasi-periodic state that slowly decays via emission of radiation.
Both quantitative and qualitative understanding of this phenomenology is commonly pursued through the so-called Collective Coordinate Models (CCMs). This approach aims to reduce the infinitely-dimensional dynamics of the field theory down to a few most relevant degrees of freedom. The strategy is to select a background ansatz - a continuous family of curves \(\phi_{\rm bkg}(x;\{X_{a}(t)\})\) controlled by a given number of parameters \(X_{a}\) that may vary with time.
Figure 1: Evolution of the center field value \(\phi(x=0,t)\) of \(K\bar{K}\) configuration for a range of initial velocities in the \(\phi^{4}\) model.
For a relativistic field theory with a single scalar field, i.e.
\[\mathcal{L}=\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi-V(\phi)\,, \tag{1}\]
the effective Lagrangian has a generic structure
\[L_{\rm eff}=\frac{1}{2}g_{ab}\dot{X}_{a}\dot{X}_{b}-U(X)\,,\quad a,b\in\{1,\ldots N\} \tag{2}\]
where \(N\) is the number of collective coordinates and where the metric and the potential are given by the integrals
\[g_{ab}\equiv\int\limits_{-\infty}^{\infty}{\rm d}x\,\frac{ \partial\phi_{\rm bkg}}{\partial X_{a}}\frac{\partial\phi_{\rm bkg}}{\partial X _{b}}\,, \tag{3}\] \[U(X)\equiv\int\limits_{-\infty}^{\infty}{\rm d}x\left(\frac{1}{ 2}\phi_{\rm bkg}^{\prime\,2}+V(\phi_{\rm bkg})\right). \tag{4}\]
The utility of CCM encoded in \(L_{\rm eff}\) depends very sensitively on \(\phi_{\rm bkg}\). Regarding the strategies for selecting viable ansatzes, we may postulate two complementary philosophies: i) the _engineering_ approach and ii) the _agnostic_ approach.
The engineering approach - as the name suggests - relies on incorporating prior information into \(\phi_{\rm bkg}\). If the goal is the analysis of \(K\bar{K}\) scattering, for instance, the ansatz typically consists of a superposition of kink and anti-kink solutions plus a selected number of normal modes - a route that has been applied for, e.g., the \(\phi^{4}\) model [9; 10] (see [18] for the somewhat intricate history of its deployment). However, CCMs that have been proposed also include Derrick modes [11], quasi-normal modes [19] and/or delocalized modes [13; 20; 21]. In fact, the engineering approach has become a precision tool for predicting major features of \(K\bar{K}\) scattering, such as the critical velocity [17].
Despite its successes, the engineering approach also has disadvantages. In this approach, a given CCM is like a microscope that has been carefully trained on a particular spot of the sample. Regardless of how successfully a CCM models the selected feature of the dynamics, it has no direct applicability to other aspects. It can neither be used for discovering new dynamical features nor for unearthing connections between known ones. In short, an engineering CCM is, by construction, a single-purpose tool.
On the other hand, the agnostic approach aims to be a general-purpose tool. Rather than carefully constraining the field(s) into a premeditated straitjacket, the agnostic CCM attempts to capture rough features of the field dynamics on a coarse-grained canvas. In this regard, it is mainly a tool for exploration. In practice, the best approach is a judicious synthesis of the two: deployment of an agnostic CCM should be followed by an engineering one. Indeed, the findings of the former can be a posteriori verified and developed by the latter. Ideally, such a combination may allow exhaustive exploration of soliton dynamics in situations where there is a vast space of initial configurations involving multiple fields and a higher number of spatial dimensions, which makes numerical solutions of field theory very time-consuming.
To achieve this, we must first develop a toolkit for agnostic CCMs in various field theories, starting with a single scalar field in (1+1)-dimensions. An agnostic CCM must be exhaustive, meaning that the background ansatz \(\phi_{\rm bkg}\) approaches the continuum field in the limit \(N\to\infty\). Furthermore, it must be algebraically tractable - the number of terms in the effective Lagrangian should grow linearly with \(N\). This is to ensure that stepping from \(N\) to \(N+1\) does not generate an exponential increase in complexity.
In our previous paper [22], we proposed an early candidate for such an agnostic CCM that we have dubbed _mechanization_. The idea is to replace a continuum field with a piece-wise linear function - a _mech-field_. We have cataloged basic features of mech-field dynamics for a few lowest values of \(N\) - which is the number of non-flat segments connected by \(N+1\) joints.
The most apparent advantage of the mechanization procedure is that it allows progressive exploration of the dynamics. As \(N\) increases, more modes of behavior become possible.
At \(N=1\), the mech-field is a mechanical analog of the kink - a _mech-kink_ (see Fig. 4). Let us point out two of its salient features: i) a static mech-kink can be boosted, despite the explicit breakdown of the Lorentz invariance that is typical for most CCM's and ii) the mech-kink has an exact periodic solution - the so-called Derrick mode. In fact, the structure of the effective Lagrangian turns out to be virtually identical to a field-theoretical relativistic CCM for a kink [11].
At \(N=2\), the mech-field connecting the same vacua behaves as a quasi-periodic oscillator that can decay - the joints fly to opposite infinities while the mech-field settles on the vacuum exponentially fast. In [22], we investigated how the lifetime of this _mech-oscillon_ depends on its initial dimensions. More importantly, we have shown that higher-\(N\) mech-oscillons can decay via multiple channels, including disintegration into excited pair of mech-kink and anti-mech-kink that - before escaping to infinity - may undergo several bounces.
Although our findings were encouraging, we have also identified several shortcomings of mechanization as proposed in [22]. For example, the moduli space of a generic mech-field turned out to be geodetically incomplete, having multiple singularities corresponding to situations when joints overlap. Further, we have also encountered a technical issue that prevented us from direct investigations of mech-\(K\bar{K}\) scattering. Because the segment between a mech-\(K\bar{K}\) pair lies precisely in a vacuum, there is no force between them, unlike in the field theory, where a short-range attractive force exists due to overlapping tails of kinks. Thus, a direct scattering of mech-kinks seemed to be impossible, while scattering of approximate mech-kinks turned out to be riddled with
numerical instabilities and the presence of long-range forces.
In this paper, we present a solution to the above issues in addition to other conceptual advancements. Hence, we provide a significant step towards the construction of truly general-purpose CCM.
Our main findings are distributed in the paper as follows. In Sec. II, we reintroduce the mechanization procedure and provide explicit formulas for the effective Lagrangian using two different sets of coordinates. More importantly, we define a concept of _BPS mechanization_ that allows the construction of Bogomol'nyi-Prasad-Sommerfield equations for static mech-kinks saturating the same Bogomol'nyi bound [26] as field-theoretical kinks.
In Sec. III, we compare properties of mech-kinks based on non-BPS and BPS mechanization, including the discussion of normal modes.
Sec. IV contains an investigation of direct mech-\(K\bar{K}\) scattering for the simplest mech-fields. We first present a resolution of the decoupling problem: this is accomplished via _LOose Order Mechanization_, or LOOM. In short, we show that short-range interactions of kinks in a field theory are replaced by _contact_ interactions between mech-kinks. By allowing the joints to pass through each other (without encountering any singularities) we continue the free dynamics of a mech-\(K\bar{K}\) pair into a different 'stage', where it becomes a mech-oscillon. This mech-oscillon may either decay or again form a new mech-\(K\bar{K}\) pair, which can fly apart or undergo bouncing. In this way, we show that both key features, namely bouncing and (mech-)bion formation, are represented even in the simplest mech-\(K\bar{K}\) scatterings. We showcase numerical results for both non-BPS and BPS kinks in the \(\phi^{4}\) model.
Lastly, in Sec. V we discuss the presented results and point out the future directions for the mechanization program.
## II Mechanization
In this section, we gather all the technical aspects of the mechanization procedure; we define the mech-field and discuss associated moduli space providing explicit formulas for the metric via two complementary choices of coordinates. Lastly, we provide an explicit form for the effective Lagrangian for both non-BPS and BPS approaches.
### Mech-field
The mechanization procedure replaces a continuous field \(\phi(x,t)\) by a piece-wise linear function that is defined by a set of \(N+1\) control points (or _joints_) in the \(\phi\)-\(x\) plane, i.e. \(\{x_{a},\phi_{a}\}\), \(a=0\ldots N\) (see Fig. 2). We define a _mech-field_\(\phi_{M}(x,t)\) by the formula
\[\phi_{M}(x,t)\equiv\sum_{a=-1}^{N}\Bigl{(}\frac{\Delta\phi_{a}(t)}{\Delta x_{ a}(t)}(x-x_{a}(t))+\phi_{a}(t)\Bigr{)}\chi_{a}\,. \tag{5}\]
Here, \(\Delta f_{a}(t)\equiv f_{a+1}(t)-f_{a}(t)\) and the \(\chi_{a}\)'s are the indicator functions for each segment:
\[\chi_{a}\equiv\theta(x-x_{a})-\theta(x-x_{a+1})=-\Delta\theta(x-x_{a})\,, \tag{6}\]
where \(\theta(x)\) is the Heaviside's step function, i.e. \(\theta(x)=1\) if \(x>0\) and \(\theta(x)=0\) otherwise.
The \(x_{a}(t)\)'s are the positions of the joints on the \(x\)-axis. Note that the \(\phi_{a}\)'s correspond to the values of the field at \(a\)-th joint, i.e. \(\phi_{a}(t)\equiv\phi(x_{a}(t))\), _only_ if they are canonically ordered, namely \(x_{0}(t)<x_{1}(t)<\ldots<x_{N}(t)\). Throughout this section, we assume that this ordering holds.
We impose boundary conditions on a mech-field so that it has finite energy. Namely, we fix the two outermost segments in some vacua, i.e. \(\phi_{-1}=\phi_{0}=v_{\rm L}\) and \(\phi_{N}=\phi_{N+1}=v_{\rm R}\) where \(v_{\rm L,R}\) represent vacuum values on the left or right, respectively. We have formally added two static joints at spatial infinities, namely \(x_{-1}=-\infty\) and \(x_{N+1}=+\infty\). The continuity of the mech-field can be then verified by direct differentiation
\[\partial_{x}\phi_{M}(x,t)=\sum_{a=-1}^{N}\frac{\Delta\phi_{a}(t)}{\Delta x_{a} (t)}\chi_{a}-\sum_{a=-1}^{N}\Delta\bigl{(}\delta(x-x_{a})\phi_{a}\bigr{)}\,. \tag{7}\]
The second term on the r.h.s vanishes due to 'fundamental theorem of discrete calculus', i.e.
\[\sum_{a=-1}^{N}\Delta\bigl{(}\delta(x-x_{a})\phi_{a}\bigr{)}=\phi _{N+1}\delta(x-x_{N+1})-\phi_{-1}\delta(x-x_{-1})\] \[=v_{\rm R}\delta(x-\infty)-v_{\rm L}\delta(x+\infty)=v_{\rm R} \delta(-\infty)-v_{\rm L}\delta(\infty)=0\,.\]
A similar argument can be made to show that \(\partial_{t}\phi_{M}(x,t)\) is free of delta-functions too.
There are \(N-1\) 'heights' of joints \(\phi_{1},\ldots,\phi_{N-1}\) together with \(N+1\) positions \(x_{0},\ldots,x_{N}\) totalling \(2N\) degrees of freedom to describe a mech-field with \(N+1\) joints.
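For canonically ordered joints, evaluating the mech-field (5) on a grid reduces to piece-wise linear interpolation with constant extrapolation into the vacuum segments; the sketch below uses the \(\phi^{4}\) vacua \(\pm 1\) and an illustrative joint placement.

```python
# Evaluate a mech-field from its joint data {x_a, phi_a}; np.interp performs
# the piece-wise linear interpolation and clamps to phi_0 = v_L and
# phi_N = v_R outside the joints, reproducing the vacuum segments.
import numpy as np

def mech_field(x: np.ndarray, xa: np.ndarray, phia: np.ndarray) -> np.ndarray:
    # xa must be canonically ordered: x_0 < x_1 < ... < x_N.
    return np.interp(x, xa, phia)

# Example: an N = 1 mech-kink between the phi^4 vacua -1 and +1 whose width
# equals the static BPS value R_K = 3 quoted later in the text.
x = np.linspace(-10.0, 10.0, 401)
phi = mech_field(x, np.array([-1.5, 1.5]), np.array([-1.0, 1.0]))
```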
For further purposes, let us also introduce an alternative parametrization:
\[\phi_{M}(x,t)\equiv\sum_{a=-1}^{N}\Bigl{(}k_{a}(t)x+\Phi_{a}(t)\Bigr{)}\chi_{a }\,. \tag{8}\]
Here, the \(k_{a}\)'s are the slopes of the segments, i.e.1
Footnote 1: Here, the notation is slightly different from our previous paper [22], where we wrote \(k_{a+1}\) instead.
\[k_{a}\equiv\frac{\phi_{a+1}-\phi_{a}}{x_{a+1}-x_{a}}\,, \tag{9}\]
while the \(\Phi_{a}\)'s are given as
\[\Phi_{a}\equiv\frac{x_{a+1}\phi_{a}-x_{a}\phi_{a+1}}{x_{a+1}-x_{a}}\,. \tag{10}\]
The boundary conditions reads
\[k_{-1}=k_{N}=0\,,\hskip 14.226378pt\Phi_{-1}=v_{\rm L}\,,\hskip 14.226378pt\Phi_{N}=v _{\rm R}\,. \tag{11}\]
The inverse formulas to (9)-(10) are given as
\[x_{a+1}=-\frac{\Phi_{a+1}-\Phi_{a}}{k_{a+1}-k_{a}}\,,\hskip 14.226378pt\phi_{a+1} =\frac{k_{a+1}\Phi_{a}-k_{a}\Phi_{a+1}}{k_{a+1}-k_{a}}\,. \tag{12}\]
The \(\{k_{a},\Phi_{a}\}\) coordinates offer some advantages over \(\{x_{a},\phi_{a}\}\). For example, the metric, discussed in the next subsection, has the simplest form. A more subtle issue is the redundancy (or degeneracy) of \(\{x_{a},\phi_{a}\}\) coordinates. We illustrate this in Fig. 3: if we artificially add a joint on any segment, while keeping the neighboring slopes the same, the mech-field does not change, i.e. the new joint is not dynamical. In particular, a vacuum configuration, i.e. \(\phi_{M}=v\), can be described with a single segment, two segments, or any number of segments, with the positions \(\{x_{a}\}\) undetermined by the dynamics for any \(N\). This can be seen directly from the formula (5) by setting \(\phi_{0}=\ldots=\phi_{N}=v\).
In \(\{k_{a},\Phi_{a}\}\) coordinates, on the other hand, the vacuum is given by \(k_{0}=\ldots=k_{N}=0\) and \(\Phi_{0}=\ldots=\Phi_{N}=v\) and there are no undetermined degrees of freedom. Furthermore, there is truly only a single segment, because whenever two subsequent \(\Phi_{a}\)'s and \(k_{a}\)'s equal each other, the \(x_{a}\) is undefined through Eq. (12). This is most easily seen from the rewriting of (8) as
\[\phi_{M}(x,t)\equiv v_{\rm L}+\sum_{a=0}^{N}\theta(x-x_{a}(t))\Big{(}\Delta k_ {a}(t)x+\Delta\Phi_{a}(t)\Big{)}\,, \tag{13}\]
that shows that whenever \(\Delta k_{a}=\Delta\Phi_{a}=0\), the coordinate \(x_{a}\) disappears.
### Moduli space
Generically, for any set of collective coordinates \(\{X_{a}\}\) the metric is given as
\[g(\{X\})_{ab}\equiv\int\limits_{-\infty}^{\infty}{\rm d}x\,\frac{\partial \phi}{\partial X_{a}}\frac{\partial\phi}{\partial X_{b}}\,. \tag{14}\]
In the \(\{x_{a},\phi_{a}\}\) coordinates, the metric consists of \((N+1)\times(N+1)\), \((N+1)\times(N-1)\) and \((N-1)\times(N-1)\)_tri-diagonal_ blocks, namely:
\[g(\{x,\phi\})=\begin{pmatrix}g^{xx}&g^{x\phi}\\ g^{\phi\phi}&g^{\phi\phi}\end{pmatrix}\,, \tag{15}\]
where \(g^{\phi x}=(g^{x\phi})^{T}\) and
\[g^{xx}=\begin{pmatrix}\frac{(\Delta\phi_{0})^{2}}{3\Delta x_{0}}&\frac{( \Delta\phi_{0})^{2}}{6\Delta x_{0}}&0&\ldots\\ \frac{(\Delta\phi_{0})^{2}}{6\Delta x_{0}}&\frac{(\Delta\phi_{0})^{2}}{3 \Delta x_{0}}+\frac{(\Delta\phi_{1})^{2}}{3\Delta x_{1}}&\frac{(\Delta\phi_{1} )^{2}}{6\Delta x_{1}}&\ldots\\ 0&\frac{(\Delta\phi_{1})^{2}}{6\Delta x_{1}}&\frac{(\Delta\phi_{1})^{2}}{3 \Delta x_{1}}+\frac{(\Delta\phi_{2})^{2}}{3\Delta x_{2}}&\ldots\\ \vdots&\vdots&\vdots&\ddots\end{pmatrix} \tag{16}\]
Figure 3: Illustration of a degeneracy of \(\{x_{a}(t),\phi_{a}(t)\}\) coordinates: insertion of a new joint on any segment does not change the mech-field.
Figure 2: A depiction of a mech-field \(\phi_{M}(x,t)\) as a sequence of \(N\) straight stretchable segments connected via massless joints.
\[g^{x\phi}=\begin{pmatrix}(\phi_{0}-\phi_{1})/6&0&0&\ldots\\ (\phi_{0}-\phi_{2})/3&(\phi_{1}-\phi_{2})/6&0&\ldots\\ (\phi_{1}-\phi_{2})/6&(\phi_{1}-\phi_{3})/3&(\phi_{2}-\phi_{3})/6&\ldots\\ 0&(\phi_{2}-\phi_{3})/6&(\phi_{2}-\phi_{4})/3&\ldots\\ \vdots&\vdots&\vdots&\ddots\end{pmatrix} \tag{17}\]
\[g^{\phi\phi}=\begin{pmatrix}(x_{2}-x_{0})/3&(x_{2}-x_{1})/6&0&\ldots\\ (x_{2}-x_{1})/6&(x_{3}-x_{1})/3&(x_{3}-x_{2})/6&\ldots\\ 0&(x_{3}-x_{2})/6&(x_{4}-x_{2})/3&\ldots\\ \vdots&\vdots&\vdots&\ddots\end{pmatrix} \tag{18}\]
The determinant reads2
Footnote 2: Note that the formula for the determinant given in our previous paper [22] was written incorrectly.
\[\left|g(\{x,\phi\})\right|=\frac{1}{12^{N}}\prod_{a=-1}^{N-1}\bigl{(}k_{a+1} -k_{a}\bigr{)}^{2}\prod_{b=0}^{N-1}\bigl{(}x_{b+1}-x_{b}\bigr{)}^{2}\,. \tag{19}\]
In these coordinates the metric is degenerate, i.e. \(\left|g\right|=0\), not only when positions of neighboring joints coincide, namely \(\Delta x_{a}=0\), but also when subsequent slopes are equal: \(\Delta k_{a}=0\). The latter type of singularity reflects the aforementioned degeneracy.
On the other hand, in \(\{k_{a},\Phi_{a}\}\) coordinates, the metric consists of four \(N\times N\)_diagonal_ blocks:
\[g(\{k,\Phi\})=\begin{pmatrix}g^{kk}&g^{k\Phi}\\ g^{\Phi k}&g^{\Phi\Phi}\end{pmatrix}\,, \tag{20}\]
where \(g^{\Phi k}=g^{k\Phi}\) and
\[g^{kk}=\frac{1}{3}\begin{pmatrix}x_{1}^{3}-x_{0}^{3}&0&0&\ldots\\ 0&x_{2}^{3}-x_{1}^{3}&0&\ldots\\ 0&0&x_{3}^{3}-x_{2}^{3}&\ldots\\ \vdots&\vdots&\vdots&\ddots\end{pmatrix}\,, \tag{21}\]
\[g^{k\Phi}=\frac{1}{2}\begin{pmatrix}x_{1}^{2}-x_{0}^{2}&0&0&\ldots\\ 0&x_{2}^{2}-x_{1}^{2}&0&\ldots\\ 0&0&x_{3}^{2}-x_{2}^{2}&\ldots\\ \vdots&\vdots&\vdots&\ddots\end{pmatrix}\,, \tag{22}\]
\[g^{\Phi\Phi}=\begin{pmatrix}x_{1}-x_{0}&0&0&\ldots\\ 0&x_{2}-x_{1}&0&\ldots\\ 0&0&x_{3}-x_{2}&\ldots\\ \vdots&\vdots&\vdots&\ddots\end{pmatrix}\,. \tag{23}\]
Furthermore, the determinant reads
\[\left|g(\{k,\Phi\})\right|=\frac{1}{12^{N}}\prod_{a=0}^{N-1}\bigl{(}x_{a+1}-x _{a}\bigr{)}^{4}\,, \tag{24}\]
and contains only a singularity of the type \(\Delta x_{a}=0\).
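The block structure (20)-(23) makes the determinant formula (24) straightforward to verify numerically; the sketch below assembles the metric from an arbitrary ordered set of joint positions (the chosen values are illustrative).

```python
# Assemble g({k, Phi}) from the joint positions and verify Eq. (24).
import numpy as np

def metric_k_phi(x: np.ndarray) -> np.ndarray:
    dx1 = np.diff(x)                  # x_{a+1} - x_a
    dx2 = np.diff(x**2) / 2.0         # (x_{a+1}^2 - x_a^2) / 2
    dx3 = np.diff(x**3) / 3.0         # (x_{a+1}^3 - x_a^3) / 3
    return np.block([[np.diag(dx3), np.diag(dx2)],
                     [np.diag(dx2), np.diag(dx1)]])

x = np.array([-2.0, -0.5, 1.0, 3.0])  # N = 3: four ordered joint positions
g = metric_k_phi(x)
N = len(x) - 1
assert np.isclose(np.linalg.det(g), np.prod(np.diff(x)**4) / 12**N)
```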
### The effective Lagrangian and the BPS mechanization
To obtain an effective Lagrangian, we insert the mech-field given by either (5) or (8) into the Lagrangian density \(\mathcal{L}\), taken as a generic scalar-field theory in (1+1) dimensions, i.e.
\[\mathcal{L}=\frac{1}{2}\dot{\phi}^{2}-\frac{1}{2}\phi^{\prime\,2}-V(\phi)\,, \tag{25}\]
and integrate it over \(x\)-axis
\[L_{\rm eff}=\int\limits_{-\infty}^{\infty}\mathrm{d}x\,\mathcal{L}\bigl{(}\phi _{M}\bigr{)}\,. \tag{26}\]
The result derived by assuming canonical ordering of joints, \(x_{0}<x_{1}<\ldots<x_{N}\), reads (see the details in [22])
\[L\bigl{[}\{x,\phi\}\bigr{]}=\sum_{a=0}^{N-1}\Delta x_{a}\biggl{[} \frac{1}{6}\Bigl{(}\Delta\dot{\phi}_{a}-\frac{\Delta\dot{x}_{a}}{\Delta x_{a}} \Delta\phi_{a}\Bigr{)}^{2}-\frac{\Delta\phi_{a}^{2}}{2\Delta x_{a}^{2}}\] \[+\frac{1}{2}\Bigl{(}\dot{\phi}_{a+1}-\frac{\Delta\phi_{a}}{\Delta x _{a}}\dot{x}_{a+1}\Bigr{)}\Bigl{(}\dot{\phi}_{a}-\frac{\Delta\phi_{a}}{\Delta x _{a}}\dot{x}_{a}\Bigr{)}-\frac{\Delta{\cal V}\bigl{(}\phi_{a}\bigr{)}}{ \Delta\phi_{a}}\biggr{]}\,. \tag{27}\]
In the \(\{k,\Phi\}\) coordinates, the same can be expressed slightly more compactly as follows
\[L\bigl{[}\{k,\Phi\}\bigr{]}=\sum_{a=0}^{N-1} \biggl{[}\frac{x_{a+1}^{3}-x_{a}^{3}}{6}\dot{k}_{a}^{2}+\frac{x_{a+1}^ {2}-x_{a}^{2}}{2}\dot{k}_{a}\dot{\Phi}_{a}+\frac{\Delta x_{a}}{2}\dot{\Phi}_{ a}^{2}\] \[-\frac{1}{2}k_{a}^{2}\Delta x_{a}-\Delta x_{a}\frac{\Delta{\cal V }(\phi_{a})}{\Delta\phi_{a}}\biggr{]}\,, \tag{28}\]
where \(x_{a}\)'s and \(\phi_{a}\)'s are understood as functions of \(k_{a}\)'s and \(\Phi_{a}\)'s through the relations (12). In both formulas (27)-(28), \({\cal V}(\phi)\) is the primitive function of the potential \(V(\phi)\), i.e. \({\cal V}^{\prime}(\phi)=V(\phi)\).
Let us stress that (27) and (28) are valid only if \(x_{0}<x_{1}<\ldots<x_{N}\). We will return to this point in Sec. IV, where we present the effective Lagrangian (LOOM) that incorporates all possible orderings.
Let us now point out that mechanization of the potential term, i.e.
\[\int\limits_{-\infty}^{\infty}\mathrm{d}x\,V(\phi_{M})=\sum_{a=0}^{N-1}\Delta x _{a}\frac{\Delta{\cal V}(\phi_{a})}{\Delta\phi_{a}}\,, \tag{29}\]
obtained by a direct integration is not unique and may not be the most optimal for studying topological solutions. In the following, let us label the outcome (29) a _non-BPS mechanization_ for reasons that will become obvious.
Now, let us consider a field theory in the form
\[\mathcal{L}_{J}=\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi+\frac{1}{2}J^ {2}+JW(\phi)\,, \tag{30}\]
where \(W(\phi)\) is the superpotential, i.e. \(V(\phi)\equiv\frac{1}{2}W^{2}(\phi)\), and where \(J\) is an auxiliary field. Of course, \(\mathcal{L}_{J}\) is physically equivalent to (25) as can be seen by eliminating
\(J\) through its equation of motion, \(J=-W(\phi)\), and plugging it back.
However, if we _first_ mechanize the auxiliary field as
\[J_{M}=\sum_{a=0}^{N-1}\bigl{(}\theta(x-x_{a}(t))-\theta(x-x_{a+1}(t))\bigr{)}J_{a }(t)\,, \tag{31}\]
where _the positions of joints \(x_{a}(t)\) are the same as those appearing in \(\phi_{M}\)_, inserting both \(J_{M}\) and \(\phi_{M}\) into \(\mathcal{L}_{J}\) and integrating over \(x\) yields
\[L_{M}^{J} \supset\int\limits_{-\infty}^{\infty}\mathrm{d}x\,\Bigl{(}\frac{1 }{2}J_{M}^{2}+J_{M}W\bigl{(}\phi_{M}\bigr{)}\Bigr{)}\] \[=\sum_{a=0}^{N-1}\biggl{(}\frac{\Delta x_{a}}{2}J_{a}^{2}+\Delta x _{a}J_{a}\frac{\mathcal{W}(\phi_{a+1})-\mathcal{W}(\phi_{a})}{\phi_{a+1}-\phi_ {a}}\biggr{)}\,, \tag{32}\]
where \(\mathcal{W}\) is a primitive function of the superpotential, i.e. \(\mathcal{W}^{\prime}=W\).
Eliminating all \(J_{a}\)'s via their equations of motion, we arrive at what we dub _BPS mechanization_, namely:
\[L_{M}^{J}\xrightarrow[J_{a}=-\Delta\mathcal{W}_{a}/\Delta\phi_{a}]{}L_{M}^{ \mathrm{BPS}} \tag{33}\]
\[L_{M}^{\mathrm{BPS}} \equiv\sum_{a=0}^{N-1}\biggl{[}\frac{x_{a+1}^{3}-x_{a}^{3}}{6}k_{ a}^{2}+\frac{x_{a+1}^{2}-x_{a}^{2}}{2}\dot{k}_{a}\dot{\Phi}_{a}+\frac{\Delta x _{a}}{2}\dot{\Phi}_{a}^{2}\] \[\qquad-\frac{1}{2}k_{a}^{2}\Delta x_{a}-\frac{\Delta x_{a}}{2} \biggl{(}\frac{\Delta\mathcal{W}(\phi_{a})}{\Delta\phi_{a}}\biggr{)}^{2} \biggr{]}\,. \tag{34}\]
In other words, we have found that the following loop does not close (the horizontal arrows \(\xrightarrow[M]{}\) denote mechanization, while the vertical arrows denote elimination of auxiliary variables):
\[\begin{CD}\mathcal{L}_{J}@>{}>{}>\hskip 28.452756ptL_{M}^{J}\\ @V{}V{J=-W}V@V{}V{J_{a}=-\frac{\Delta\mathcal{W}_{a}}{\Delta\phi_{a}}}V\\ \mathcal{L}@>{}>{}>L_{M}\neq L_{M}^{\mathrm{BPS}}\end{CD} \tag{35}\]
Although the difference between \(L_{M}\) and \(L_{M}^{\mathrm{BPS}}\) is isolated only to the potential term and, at first glance, does not seem significant, we will show that it has a profound impact on the nature of static solutions and their dynamics.
## III Static solutions
In this section, we study the properties of static solutions of both \(L_{M}\) and \(L_{M}^{\mathrm{BPS}}\) and highlight their differences.
### Non-BPS mech-kinks
To find static solutions of a generic \(N\) mech-field we minimize the static energy:
\[E_{M}=\sum_{a=0}^{N-1}\biggl{(}\frac{(\Delta\phi_{a})^{2}}{2\Delta x_{a}}+ \Delta x_{a}\frac{\Delta\mathcal{V}(\phi_{a})}{\Delta\phi_{a}}\biggr{)}\,. \tag{36}\]
The coordinates of the joints (up to overall position) can be found as3
Footnote 3: Here, we assume the sequence \(\{\phi_{0},\phi_{1},\ldots\}\) to be monotonically increasing, i.e. the solution is a mech-kink interpolating vacua \(v_{\mathrm{L}}<v_{\mathrm{R}}\). The anti-mech-kinks would be found analogously after the appropriate insertion of absolute values inside the square roots so that \(\Delta x_{a}>0\) for all segments.
\[\Delta x_{a}=\frac{(\Delta\phi_{a})^{3/2}}{\sqrt{2\bigl{(}\mathcal{V}(\phi_{a +1})-\mathcal{V}(\phi_{a})\bigr{)}}}\,. \tag{37}\]
On the other hand, the field values \(\phi_{a}\)'s follows from minimization of (36) after inserting (37), i.e.
\[E_{M}\xrightarrow[(\ref{eq:2.1})]{}\sum_{a=0}^{N-1}\sqrt{2\Delta\phi_{a} \bigl{(}\mathcal{V}(\phi_{a+1})-\mathcal{V}(\phi_{a})\bigr{)}}\,. \tag{38}\]
This leads to a system of non-linear algebraic equations:
\[V(\phi_{a})^{2}=\frac{\mathcal{V}(\phi_{a+1})-\mathcal{V}(\phi_{a})}{\phi_{a+1 }-\phi_{a}}\frac{\mathcal{V}(\phi_{a})-\mathcal{V}(\phi_{a-1})}{\phi_{a}-\phi_{ a-1}}\,. \tag{39}\]
The simplest solution is \(N=1\) mech-kink (see Fig. 4). Its static width \(R_{K}\) and static energy \(m_{K}\) are given by
\[R_{K}=\frac{v_{\mathrm{R}}-v_{\mathrm{L}}}{\sqrt{2\kappa}}\,,\ \ \ \ m_{K}=(v_{ \mathrm{R}}-v_{\mathrm{L}})\sqrt{2\kappa}\,, \tag{40}\]
where
\[\kappa=\frac{1}{v_{\mathrm{R}}-v_{\mathrm{L}}}\int\limits_{v_{\mathrm{L}}}^{v_ {\mathrm{R}}}\mathrm{d}t\,V(t)\,. \tag{41}\]
Figure 4: Simplest mechanical model of a kink = ‘mech-kink’.
The corresponding values for the \(\phi^{4}\) model are \(R_{K}=\sqrt{15/2}\) and \(m_{K}=\sqrt{32/15}\approx 1.46\). The latter value is not that far from the field-theoretical value \(M_{K}=4/3\). However, as we show in Fig. 5, the mass of higher-\(N\) mech-kinks approaches \(M_{K}\) only relatively slowly.
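The reduced energy (38) is also easy to minimize numerically. The sketch below does this for the \(\phi^{4}\) model with \(V(\phi)=\frac{1}{2}(1-\phi^{2})^{2}\), the normalization consistent with \(m_{K}=\sqrt{32/15}\) quoted above; the optimizer choice is arbitrary.

```python
# Static non-BPS mech-kink of the phi^4 model: minimize the reduced
# energy (38) over the interior joint heights phi_1, ..., phi_{N-1}.
import numpy as np
from scipy.optimize import minimize

def calV(phi):
    # Primitive of V(phi) = (1 - phi^2)^2 / 2.
    return 0.5 * (phi - 2.0 * phi**3 / 3.0 + phi**5 / 5.0)

def energy(interior, vL=-1.0, vR=1.0):
    phi = np.concatenate(([vL], interior, [vR]))
    dphi = np.diff(phi)
    if np.any(dphi <= 0):
        return np.inf                 # enforce monotone joint heights
    return np.sum(np.sqrt(2.0 * dphi * np.diff(calV(phi))))

N = 4                                          # number of segments
guess = np.linspace(-1.0, 1.0, N + 1)[1:-1]    # interior heights
res = minimize(energy, guess, method="Nelder-Mead")
print(res.fun)   # mech-kink mass m_K; energy([]) gives sqrt(32/15) for N = 1
```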
The \(N=1\) mech-kink has only one massive normal mode: the Derrick mode. This mode is associated with infinitesimal scaling and has been identified as a crucial element in constructing relativistic CCMs for \(K\bar{K}\) collisions in various field theories [11]. Its (angular) frequency is universally given as \(\omega_{D}^{2}=Q/M\), where \(Q\) is the second moment of the static energy density. The corresponding formula for the \(N=1\) mech-field reads \(q_{M}=(v_{\rm R}-v_{\rm L})^{2}/(12R_{K})\). In the case of the \(\phi^{4}\) theory, we have \(q_{M}=\sqrt{5/6}\approx 0.91\), which is quite far from the field-theoretical value \(Q\approx 0.43\).
In general, a mech-kink with \(N+1\) joints has \(2N\) normal modes. The lowest one is a zero mode corresponding to the overall translation of joints. The remaining \(2N-1\) modes are massive modes. As \(N\to\infty\), we should somehow recover the corresponding spectrum of field-theoretical kinks. This typically consists of a certain number of localized modes and a continuous spectrum of radiation modes, depending on the model at hand.
In Figs. 6-7, we show that this correspondence (if it exists at all) is not very visible at the displayed range \(N\leq 17\). For instance, Fig. 7 hints at some convergence of the \(3^{\rm rd}\) normal modes towards the (blue, dashed) line of the \(\phi^{4}\) kink's only massive mode, but this could be entirely coincidental and further investigation into higher \(N\)'s is needed to draw any conclusions.
These results illustrate that the correspondence between relatively high-\(N\) non-BPS mech-kinks and field-theoretical kinks is not as simple as one might hope, especially regarding the structure of normal modes. Let us now see how the situation differs for static solutions of \(L_{M}^{\rm BPS}\).
### BPS mech-kinks
In the BPS scheme, the static energy reads
\[E_{M}^{\rm BPS}=\sum_{a=0}^{N-1}\biggl{(}\frac{(\Delta\phi_{a})^{2}}{2\Delta x _{a}}+\frac{\Delta x_{a}}{2}\biggl{(}\frac{\Delta{\cal W}(\phi_{a})}{\Delta \phi_{a}}\biggr{)}^{2}\biggr{)}\,. \tag{42}\]
Proceeding as in the previous subsection, we first eliminate the coordinates of the joints via their equations of motion:
\[\Delta x_{a}=\frac{(\Delta\phi_{a})^{2}}{\Delta{\cal W}(\phi_{a})}\,. \tag{43}\]
Figure 5: The relative difference of mech-kink static energy \(m_{K}\) and field-theoretical value \(M_{K}\) as a function of the number of joints \(N\) for \(\phi^{4}\) and SG model.
Figure 6: Distribution of frequencies \(\omega\) of normal modes as a function of \(N\) for \(\phi^{4}\) model. Only a few of the lowest frequencies are displayed and zero modes are omitted.
Figure 7: Distribution of frequencies \(\omega\) of normal modes as a function of \(N\) for SG model. Only a few of the lowest frequencies are displayed and zero modes are omitted.
In contrast with the non-BPS case, if we insert this relation back into the \(E_{M}^{\rm BPS}\) we obtain a pure number
\[E_{M}^{\rm BPS}\xrightarrow{(43)}\sum_{a=0}^{N-1}\Delta\mathcal{W}(\phi_{a})=\mathcal{W}(\phi_{N})-\mathcal{W}(\phi_{0})\] \[\qquad\qquad=\mathcal{W}(v_{\rm R})-\mathcal{W}(v_{\rm L})\equiv M_{K} \tag{44}\]
which is given by the difference of superpotentials evaluated for vacua at \(\pm\infty\) - the field-theoretical BPS mass of the kink \(M_{K}\)!
We can establish this result in a standard way by completing the energy into a (sum of) square(s) a la Bogomol'nyi [26]:
\[E_{M}^{\rm BPS} =\sum_{a=0}^{N-1}\biggl{(}\frac{(\Delta\phi_{a})^{2}}{2\Delta x_{ a}}+\frac{\Delta x_{a}}{2}\Bigl{(}\frac{\Delta\mathcal{W}(\phi_{a})}{\Delta \phi_{a}}\Bigr{)}^{2}\biggr{)}\] \[=\sum_{a=0}^{N-1}\frac{\Delta x_{a}}{2}\biggl{(}\frac{\Delta\phi_ {a}}{\Delta x_{a}}-\frac{\Delta\mathcal{W}(\phi_{a})}{\Delta\phi_{a}}\biggr{)} ^{2}\] \[\quad+\sum_{a=0}^{N-1}\Delta\mathcal{W}(\phi_{a})\geq\sum_{a=0}^{N -1}\Delta\mathcal{W}(\phi_{a})=M_{K}\,. \tag{45}\]
The energy is minimized when the squares vanish, which reproduces the conditions (43).
Interestingly, BPS mech-kinks have unconstrained 'heights' of the joints because there is no equivalent of Eq. (39) that would uniquely determine \(\phi_{a}\)'s. Indeed, \(\phi_{a}\)'s are arbitrary as long as \(\Delta x_{a}>0\). This boils down to conditions
\[\mathcal{W}(\phi_{a+1})>\mathcal{W}(\phi_{a})\,,\quad\forall a\,. \tag{46}\]
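This degeneracy can be checked in a few lines. The sketch below (assuming the \(\phi^{4}\) superpotential \(W(\phi)=1-\phi^{2}\), \(\mathcal{W}(\phi)=\phi-\phi^{3}/3\)) draws random admissible heights, builds the spacings from (43) and evaluates (42); the result is always \(M_{K}=4/3\):

```python
import numpy as np

W = lambda phi: 1.0 - phi**2            # assumed phi^4 superpotential derivative
calW = lambda phi: phi - phi**3 / 3.0   # superpotential

rng = np.random.default_rng(1)
for N in (1, 3, 10):
    # random admissible heights: monotone, hence monotone in calW, cf. (46)
    phi = np.concatenate(([-1.0], np.sort(rng.uniform(-1, 1, N - 1)), [1.0]))
    dphi, dW = np.diff(phi), np.diff(calW(phi))
    dx = dphi**2 / dW                   # joint spacings, Eq. (43)
    E = np.sum(dphi**2 / (2 * dx) + 0.5 * dx * (dW / dphi)**2)   # Eq. (42)
    print(N, E)                         # always 4/3 = calW(1) - calW(-1)
```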
For \(N=1\), the only difference from the non-BPS case is in the parameter \(\kappa_{\rm BPS}\) (compare with Eq. (41)):
\[\kappa_{\rm BPS}=\frac{1}{2(v_{\rm R}-v_{\rm L})^{2}}\biggl{(}\int_{v_{\rm L} }^{v_{\rm R}}\mathrm{d}t\,W(t)\biggr{)}^{2}\,. \tag{47}\]
Its value for \(\phi^{4}\) model is \(\kappa_{\rm BPS}=2/9\) giving us \(R_{K}=3\), \(m_{K}=M_{K}=4/3\) and \(q_{M}=1\).
The structure of normal modes is also very different compared with non-BPS mech-kinks. A BPS mech-kink has \(N\) zero modes (!) corresponding to an overall shift of the joint positions and to the freedom to make infinitesimal shifts of each \(\phi_{a}\). Consequently, it has only \(N\) massive normal modes, in contrast with \(2N-1\) massive modes for non-BPS mech-kinks.
The frequencies of these massive modes, however, do depend on the values of the \(\phi_{a}\)'s. As an illustration, in Fig. 8 we display the frequencies of the two massive modes as functions of \(\phi_{1}\) for the \(\phi^{4}\) model. In fact, it is easy to work out explicit formulas:
\[\omega_{1}=\frac{\sqrt{4+2\phi_{1}^{2}}}{\sqrt{3}}\,,\quad\quad\omega_{2}= \frac{1}{3}\sqrt{54\phi_{1}^{2}+\frac{16}{\phi_{1}^{2}}-28}\,. \tag{48}\]
A salient feature of Fig. 8 is the expected mirror symmetry under \(\phi_{1}\to-\phi_{1}\) and the fact that, at the center \(\phi_{1}=0\), the second massive mode diverges, i.e. \(\omega_{2}\to\infty\). This is a footprint of the coordinate degeneracy: at that point the \(N=2\) BPS mech-kink is indistinguishable from the \(N=1\) mech-kink. Indeed, the value \(\omega_{1}(\phi_{1}=0)=2/\sqrt{3}\) is the same as the \(N=1\) Derrick-mode frequency. What is more surprising is the fact that the lengths of segments
\[\Delta x_{0}=\frac{3}{2-\phi_{1}}\,,\quad\quad\Delta x_{1}=\frac{3}{2+\phi_{1} }\,, \tag{49}\]
are both positive in the range \(\phi_{1}\in(-2,2)\). Thus, the middle joint can also be placed outside of \([-1,1]\) contrary to expectations.
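The segment lengths (49) indeed follow from (43) by elementary algebra; a quick symbolic check:

```python
import sympy as sp

phi1 = sp.symbols('phi_1')
calW = lambda p: p - p**3 * sp.Rational(1, 3)        # phi^4 superpotential

dx0 = (phi1 - (-1))**2 / (calW(phi1) - calW(-1))     # Eq. (43), first segment
dx1 = (1 - phi1)**2 / (calW(1) - calW(phi1))         # Eq. (43), second segment
print(sp.simplify(dx0 - 3 / (2 - phi1)))             # -> 0
print(sp.simplify(dx1 - 3 / (2 + phi1)))             # -> 0
```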
Lastly, let us address the question of recovering the spectrum of normal modes of the \(\phi^{4}\) kink in the limit \(N\to\infty\). Compared with non-BPS mech-kinks, the situation is complicated by the fact that the \(N\) massive normal modes depend on \(N-1\) free parameters, the \(\phi_{a}\)'s.
To make progress, we studied several somewhat random types of value assignments for the heights of the joints, which may be called _linear_, _quadratic_ and _rational_, given by the formulae:
\[\phi_{a}=-1+\frac{2a}{N}\,,\qquad\text{linear}\tag{50}\]
\[\phi_{a}=-1+\frac{1}{2}\Bigl(\frac{2a}{N}\Bigr)^{2}\,,\qquad\text{quadratic}\tag{51}\]
\[\phi_{a}=1-\frac{2(N-a)}{a+N}\,,\qquad\text{rational}\tag{52}\]
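For reference, the three ladders of heights can be generated as follows (a trivial helper; the function name is ours):

```python
import numpy as np

def heights(N, kind):
    # joint heights phi_a, a = 0..N, for the three assignments (50)-(52)
    a = np.arange(N + 1)
    if kind == "linear":
        return -1 + 2 * a / N
    if kind == "quadratic":
        return -1 + 0.5 * (2 * a / N) ** 2
    if kind == "rational":
        return 1 - 2 * (N - a) / (a + N)
    raise ValueError(kind)

for kind in ("linear", "quadratic", "rational"):
    print(kind, heights(4, kind))    # each runs monotonically from -1 to +1
```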
In Fig. 9, we display how the lowest-lying frequencies of normal modes change with increasing \(N\) for all three
Figure 8: Dependence of frequency of two massive modes for \(N=2\) BPS mech-kink on the height of the middle joint \(\phi_{1}\). The horizontal lines indicate relevant frequencies for field-theoretical kink in \(\phi^{4}\) theory.
types of assignments. We see that - especially for the two lowest massive modes - the frequencies tend to converge to the same values for all three assignments. This hints that the same limiting spectrum should be reached for any choice of \(\phi_{a}\)'s. However, as was the case for non-BPS mech-kinks, all that we can claim is that the convergence towards \(\sqrt{3}\) - the frequency of the \(\phi^{4}\) kink's shape mode - is very slow in the displayed range \(N\leq 61\), if it exists at all.
Another crucial property of BPS mech-kinks is the existence of \(N\) zero modes due to the degeneracy of the solution. For example, the \(N=2\) mech-kink has not only a translational zero mode but also a zero mode corresponding to an infinitesimal change of \(\phi_{1}\), the height of the middle joint. This extra zero mode has an interesting and counter-intuitive impact on the dynamics of \(N=2\) mech-kinks. We can easily imagine (and we observe it in numerical simulations) that during dynamical evolution the coordinate \(\phi_{1}\) drifts from its initial value, as its change does not cost any energy. Once the middle joint becomes very close to one of the outer joints, the latter quickly accelerates to infinity. In this way, the \(N=2\) mech-kink sheds one of its outer joints and effectively becomes an \(N=1\) mech-kink. This phenomenon is called 'joint ejection' and we reported it in our previous paper [22].
Although the joint ejections were observed for sufficiently perturbed \(N\geq 2\) non-BPS mech-kinks, in the BPS case they are practically inevitable due to the presence of extra zero modes. This leaves only the \(N=1\) mech-kink as a truly stable solution in BPS mechanization.
## IV Scattering of Mech-kinks
In this section, we investigate the simplest forms of scattering between mech-kinks to showcase the second conceptual advancement of this paper: what we call a LOose Order Mechanization, or a LOOM. As we shall see, the analysis of mech-\(K\bar{K}\) scattering requires including non-canonical orderings of the joints and the construction of an effective Lagrangian that incorporates these different orderings. We also present numerical results for the dynamics of a symmetric \(N=3\) mech-field and we find qualitatively similar behavior to field theory, namely that the mech-\(K\bar{K}\) pair undergoes bounces or forms (mech-)bions. We analyze the scattering of both non-BPS and BPS mech-kinks and comment on the differences.
### Decoupling
The compact nature of mech-fields, i.e. that they have finite extents outside of which there are only exact vacua, allows us to superpose objects, e.g. mech-kinks or mech-oscillons, without introducing any interaction between them. This is most easily visible at the effective Lagrangian level. Taking a generic mech-field and fixing the \(a\)-th segment to a vacuum, say \(\phi_{a}=\phi_{a+1}=v\), while keeping \(x_{a}\) and \(x_{a+1}\) dynamical, it is easy to see that the effective Lagrangian consists of two decoupled pieces:
\[L_{M}[\phi_{M}]\xrightarrow{\phi_{a},\phi_{a+1}\to v}L_{M}[\phi_{M}^{(1)}]+L_{M}[\phi_{M}^{(2)}]\,. \tag{53}\]
In other words, there remain no interaction terms that could inform the constituent mech-fields \(\phi_{M}^{(1,2)}\) about each other's existence.4 Thus, the two parts evolve according to their respective dynamics as if the other piece were not there at all. This is, of course, true only _until they start to overlap_, where the very description of the dynamics via effective Lagrangians (27) or (28) is invalid.
Footnote 4: During dynamical evolution a mech-field can pass through a decoupled configuration. In such a case, however, non-zero derivatives preserve the interaction between the two parts, so the decoupling does not occur.
As a consequence, a mech-kink and an anti-mech-kink separated by a vacuum segment of arbitrary length do not
Figure 10: Illustration of the decoupling property: If the mech-field consists of two parts connected by a flat segment in a vacuum, the coordinates in the left and right parts of the mech-field do not interact.
Figure 9: Frequencies of normal modes for \(\phi^{4}\) BPS mech-kinks as a function of the number of joints \(N+1\). We depict the gradual approach of the values for the three different assignments of the field values \(\phi_{a}\) as described in the text.
impart any force on one another. This should be compared with what is going on in the field theory, where a well-separated \(K\bar{K}\) configuration experiences an exponentially damped attractive force, precisely because field-theoretical solitons are not compact.5 This presents somewhat of a road-block to naive investigations of mech-\(K\bar{K}\) scattering. Indeed, the dynamics derived from the effective Lagrangian (27) is trivial before the collision (no force) and undefined at the moment of contact, as \(L[\{x_{a},\phi_{a}\}]\) applies only to canonically ordered mech-fields.
Footnote 5: In special field theories, however, compact solutions can be present [27; 28; 29]. Their behavior is quite similar to mech-kinks and mech-oscillons presented here (or, more correctly, vice versa).
One way around this obstacle is to investigate approximate mech-kinks. As an example, we can consider a symmetric \(N=3\) mech-field depicted in Fig. 11 with a middle segment not exactly in a vacuum but arbitrarily close to it.
However, the excess energy in the central segment would manifest as a constant attractive force between the (approximate) mech-kinks. Importantly, this force would be long-range and, clearly, an artefact of the selected configuration. In fact, we should not see the mech-field in Fig. 11 as describing well-separated mech-kinks - the presence of a long-range force between them makes them manifestly not well-separated. Rather, we should say that Fig. 11 depicts a symmetric mech-oscillon and has (a priori) nothing to do with mech-\(K\bar{K}\) dynamics.
Even for more elaborate mech-fields, long-range forces would remain present simply because piece-wise linear functions cannot rapidly approximate a constant (without actually being identical to it), unlike the exponentially decaying tails of kinks in the field theory. In this way, the investigation of 'approximate' mech-\(K\bar{K}\) scattering is irreparably contaminated by unphysical long-range forces.
Fortunately, we can investigate scattering of mech-kinks directly and in a natural way, as we shall do in this section.
### Rigid \(K\bar{K}\)-scattering
Let us consider a superposition of a mech-kink and anti-mech-kink with widths \(R_{K}\) separated by a flat middle segment of length \(R\) in a vacuum \(v_{2}\). Specifically,
\[\phi_{M}^{K\bar{K}}= \,v_{1}\theta(-x-R_{K}-R/2)+\Big{(}\theta(x+R_{K}+R/2)-\theta(x+R /2)\Big{)}\Big{(}\frac{v_{2}-v_{1}}{R_{K}}(x+R_{K}+R/2)+v_{1}\Big{)}+v_{2} \Big{(}\theta(x+R/2)-\theta(x-R/2)\Big{)}\] \[+\Big{(}\theta(x-R/2)-\theta(x-R_{K}-R/2)\Big{)}\Big{(}{-}\frac{ v_{2}-v_{1}}{R_{K}}(x-R/2)+v_{2}\Big{)}+v_{1}\theta(x-R/2-R_{K})\,. \tag{54}\]
Note that \(\phi_{M}^{K\bar{K}}=\phi_{M}^{K}(x+R/2)+\phi_{M}^{\bar{K}}(x-R/2)+v_{1}\), where \(\phi_{M}^{K,(\bar{K})}\) is an \(N=1\) (anti-)mech-kink.
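For concreteness, the profile (54) in the stage-one ordering \(R>0\) is just a piecewise-linear interpolation through four joints; a minimal construction (with illustrative parameter values, and vacuum tails produced by constant extrapolation) reads:

```python
import numpy as np

def phi_KKbar(x, R, R_K, v1=-1.0, v2=1.0):
    # piecewise-linear profile of Eq. (54) for R > 0:
    # joints at the kink and anti-kink edges, flat segment of length R in v2
    xs = np.array([-R/2 - R_K, -R/2, R/2, R/2 + R_K])
    ys = np.array([v1, v2, v2, v1])
    return np.interp(x, xs, ys)   # constant extrapolation gives the vacuum v1 outside

x = np.linspace(-10, 10, 5)
print(phi_KKbar(x, R=3.0, R_K=np.sqrt(15 / 2)))
```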
The key to analyzing the dynamics of \(\phi_{M}^{K\bar{K}}\) is to realize that there are four distinct orderings of joints (or _stages_, see Fig. 12) depending on the value of \(R\), each of which is governed by a different effective Lagrangian.
Each stage follows from the previous one by continuation of \(R\) to more negative values. In turn, \(R\) has a different meaning in each stage. When \(R>0\), it is the distance between the mech-kink and anti-kink. When \(-R_{K}<R<0\), it controls both the height and width of a symmetric \(N=3\) mech-oscillon that is placed above the vacuum \(v_{1}\), while, when \(-2R_{K}<R<-R_{K}\), \(R\) serves a similar role for the mech-oscillon placed under \(v_{1}\). Finally, when \(R<-2R_{K}\), the mech-field \(\phi_{M}^{K\bar{K}}\) resembles an arbitrarily separated anti-mech-kink and mech-kink pair that interpolates the values \(v_{1}\) and \(2v_{1}-v_{2}\). If the latter value is not a vacuum of the given model, there is a constant, attractive force and the whole configuration is energetically disfavoured.
For simplicity, in this subsection we investigate _rigid_ mech-\(K\bar{K}\) scattering, where only \(R\) is dynamical while \(R_{K}\) is fixed to an appropriate initial value.
For the first stage, i.e. assuming \(R>0\), we have:
\[L_{\rm I}=\frac{(v_{2}-v_{1})^{2}}{4R_{K}}\dot{R}^{2}-\frac{(v_{2}-v_{1})^{2}}{R_{K}}-2\kappa R_{K}\,, \tag{55}\]
where \(\kappa\) is given either as in Eq. (41) for non-BPS mechanization or as in Eq. (47) in the BPS case. Up to an irrelevant constant, \(L_{\rm I}\) describes a free particle (with position \(R/2\)) of mass \(2(v_{2}-v_{1})^{2}/R_{K}\). 6 In this stage, the
Figure 11: \(N=3\) symmetric mech-field.
mech-kinks indeed behave as free particles.
In the second stage, i.e. for \(-R_{K}<R<0\), we obtain
\[L_{\rm II}=\frac{(v_{2}-v_{1})^{2}}{4R_{K}^{2}}\bigl(R_{K}-R\bigr)\dot{R}^{2}-U_{\rm II}\,, \tag{56}\]
where the potentials for both non-BPS and BPS mechanization read
\[U_{\rm II}= \frac{2R_{K}}{v_{2}-v_{1}}\Bigl(\mathcal{V}\Bigl(v_{2}+\frac{v_{2}-v_{1}}{R_{K}}R\Bigr)-\mathcal{V}(v_{1})\Bigr)\] \[-RV\Bigl(v_{2}+\frac{v_{2}-v_{1}}{R_{K}}R\Bigr)+\frac{(v_{2}-v_{1})^{2}(R+R_{K})}{R_{K}^{2}}\,, \tag{57}\] \[U_{\rm II}^{\rm BPS}= \frac{R_{K}^{2}/(v_{2}-v_{1})^{2}}{R_{K}+R}\Bigl(\mathcal{W}\Bigl(v_{2}+\frac{v_{2}-v_{1}}{R_{K}}R\Bigr)-\mathcal{W}(v_{1})\Bigr)^{2}\] \[-\frac{R}{2}W^{2}\Bigl(v_{2}+\frac{v_{2}-v_{1}}{R_{K}}R\Bigr)+\frac{(v_{2}-v_{1})^{2}(R+R_{K})}{R_{K}^{2}}\,. \tag{58}\]
It is easy to check that \(L_{\rm I}=L_{\rm II}\) at \(R=0\), namely that the transition from the first stage to the second stage is _continuous_. This is a direct consequence of the continuity of the mech-field \(\phi_{M}^{K\bar{K}}\) (54).
The Lagrangian \(L_{\rm II}\) describes a rigid and symmetric \(N=3\) mech-oscillon. Its solutions are periodic. Also notice that the 'metric' \((v_{2}-v_{1})^{2}(R_{K}-R)/(2R_{K}^{2})\) is regular at both transitions, from the first to the second stage (\(R=0\)) and from the second to the third stage at \(R=-R_{K}\). This is also true for the potential, which goes to zero at \(R=-R_{K}\). Thus, there are no singularities that prevent us from continuing the variable \(R\) below \(-R_{K}\).
The third stage also depicts a rigid \(N=3\) mech-oscillon which is placed below the vacuum \(v_{1}\):
\[L_{\rm III}=\frac{(v_{2}-v_{1})^{2}}{4R_{K}^{2}}\bigl(3R_{K}+R\bigr)\dot{R}^{2}-U_{\rm III}\,, \tag{59}\]
where
\[U_{\rm III}= -\frac{2R_{K}}{v_{2}-v_{1}}\Bigl(\mathcal{V}\Bigl(v_{2}+\frac{v_{2}-v_{1}}{R_{K}}R\Bigr)-\mathcal{V}(v_{1})\Bigr)\] \[+(2R_{K}+R)V\Bigl(v_{2}+\frac{v_{2}-v_{1}}{R_{K}}R\Bigr)-\frac{(v_{2}-v_{1})^{2}(R+R_{K})}{R_{K}^{2}}\,, \tag{60}\] \[U_{\rm III}^{\rm BPS}= -\frac{R_{K}^{2}/(v_{2}-v_{1})^{2}}{R_{K}+R}\Bigl(\mathcal{W}\Bigl(v_{2}+\frac{v_{2}-v_{1}}{R_{K}}R\Bigr)-\mathcal{W}(v_{1})\Bigr)^{2}\] \[+\frac{2R_{K}+R}{2}W^{2}\Bigl(v_{2}+\frac{v_{2}-v_{1}}{R_{K}}R\Bigr)-\frac{(v_{2}-v_{1})^{2}(R+R_{K})}{R_{K}^{2}}\,. \tag{61}\]
Again, it is easy to check that \(L_{\rm II}=L_{\rm III}\) at \(R=-R_{K}\). Furthermore, both the kinetic term and the potential term remain well-defined at \(R=-2R_{K}\), thus we may continue to the fourth stage:
\[L_{\rm IV}=\frac{(v_{2}-v_{1})^{2}}{4R_{K}}\dot{R}^{2}-\frac{(v_{2}-v_{1})^{2 }}{R_{K}}-2\kappa_{\rm IV}R_{K}\,, \tag{62}\]
where
\[\kappa_{\rm IV}= \frac{\mathcal{V}(v_{1})-\mathcal{V}(2v_{1}-v_{2})}{v_{2}-v_{1}} -\Big{(}1+\frac{R}{2R_{K}}\Big{)}V(2v_{1}-v_{2})\,, \tag{63}\] \[\kappa_{\rm IV}^{BPS}= \frac{1}{2}\Big{(}\frac{\mathcal{W}(v_{1})-\mathcal{W}(2v_{1}-v_{2 })}{v_{2}-v_{1}}\Big{)}^{2}\] \[-\frac{1}{2}\Big{(}1+\frac{R}{2R_{K}}\Big{)}W^{2}(2v_{1}-v_{2})\,. \tag{64}\]
The \(L_{\rm IV}\) generically describes a particle in a linearly attractive potential, unless \(2v_{1}-v_{2}\) is also a vacuum of the model.
Since all transitions from one stage to the next are continuous, we may collect the fixed-order Lagrangians into a single LOose Order Mechanical Lagrangian (LOOM) that captures the dynamics of rigid mech-\(K\bar{K}\) collisions
Figure 12: Various ‘stages’ of the \(N=3\) mech-field depending on the value of \(R\). If \(R>0\), the configuration is a mech-kink and anti-kink interpolating the vacua \(v_{1}\) and \(v_{2}\), separated by a distance \(R\). The width \(R_{K}\) is assumed to be always positive. In the second and third stages, when \(-2R_{K}<R<0\), the mech-field becomes an \(N=3\) mech-oscillon, first above and then below the vacuum \(v_{1}\). Its amplitude depends on \(R\) in a designated way. In the final stage \(R<-2R_{K}\), the mech-field can be arbitrarily wide, but since the middle segment does not lie in either vacuum (assuming \(2v_{1}-v_{2}\) is not another vacuum), it is energetically unfavored.
for the entire range \(R\in(-\infty,\infty)\):
\[L^{\rm rigid}_{\rm OOM} = L_{\rm I}\theta(R)+L_{\rm II}\theta(-R)\theta(R+R_{K}) \tag{65}\] \[+L_{\rm III}\theta(-R-R_{K})\theta(R+2R_{K})\] \[+L_{\rm IV}\theta(-R-2R_{K})\,,\]
where \(L_{\rm I,II,III,IV}\) are the fixed-order Lagrangians given above. Note that \(L^{\rm rigid}_{\rm OOM}\) is a continuous function of \(R\). Furthermore, \(L^{\rm rigid}_{\rm OOM}\) is nothing but a direct integration of the mech-field \(\phi^{K\bar{K}}_{M}\) assuming \(R_{K}={\rm const}>0\), i.e.
\[L^{\rm rigid}_{\rm OOM}=\int\limits_{-\infty}^{\infty}{\rm d}x\,{\cal L}\big{(} \phi^{K\bar{K}}_{M}\big{)}\,, \tag{66}\]
_allowing for all possible orderings of joints._
As there are no modes through which (rigid) mech-kinks can lose energy, they are either bound to reflect off each other (in the case of, say, the \(\phi^{4}\) model) or go through each other (in the case of, e.g., the SG model), as illustrated in Figs. 13-14.
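Such a rigid collision is easily reproduced numerically. The sketch below (assuming the \(\phi^{4}\) normalization \(V(\phi)=\frac{1}{2}(1-\phi^{2})^{2}\) with \(v_{1,2}=\mp 1\) and the equilibrium width \(R_{K}=\sqrt{15/2}\)) integrates the equation of motion of \(L^{\rm rigid}_{\rm OOM}\) through stages I-III, which suffices for the moderate incoming velocity chosen here; the finite-difference derivatives and solver settings are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

v1, v2 = -1.0, 1.0
R_K = np.sqrt(15 / 2)                                  # Eq. (40) for phi^4
V  = lambda p: 0.5 * (1 - p**2) ** 2                   # assumed normalization
cV = lambda p: 0.5 * (p - 2 * p**3 / 3 + p**5 / 5)     # antiderivative of V

def g(R):
    # 'mass' in L = g(R) Rdot^2 / 2 - U(R), read off Eqs. (55), (56), (59)
    if R >= 0:
        return 2 / R_K
    if R >= -R_K:
        return 2 * (R_K - R) / R_K**2
    return 2 * (3 * R_K + R) / R_K**2

def U(R):
    if R >= 0:                                         # Eq. (55)
        return 4 / R_K + (8 / 15) * R_K                # = 2 m_K at equilibrium
    h = v2 + (v2 - v1) * R / R_K                       # height of the middle segment
    if R >= -R_K:                                      # Eq. (57)
        return R_K * (cV(h) - cV(v1)) - R * V(h) + 4 * (R + R_K) / R_K**2
    # Eq. (60)
    return -R_K * (cV(h) - cV(v1)) + (2 * R_K + R) * V(h) - 4 * (R + R_K) / R_K**2

def rhs(t, y, eps=1e-6):
    R, Rdot = y
    dU = (U(R + eps) - U(R - eps)) / (2 * eps)
    dg = (g(R + eps) - g(R - eps)) / (2 * eps)
    return [Rdot, (-dU - 0.5 * dg * Rdot**2) / g(R)]

sol = solve_ivp(rhs, (0, 80), [6.0, -0.5], max_step=0.01, rtol=1e-9, atol=1e-12)
E = 0.5 * g(sol.y[0][-1]) * sol.y[1][-1]**2 + U(sol.y[0][-1])
print(f"min R = {sol.y[0].min():.2f}, final Rdot = {sol.y[1][-1]:+.3f}, E = {E:.6f}")
```

In this run the separation plunges through the mech-oscillon stages, turns around on the rising stage-III potential, and returns to stage I with the initial speed, i.e. the rigid mech-kinks reflect elastically, consistent with Fig. 13.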
Let us further point out that there is a direct field-theoretical analog of \(L^{\rm rigid}_{\rm OOM}\) based on the ansatz
\[\phi=\phi_{K}\big{(}x+R(t)/2\big{)}+\phi_{\bar{K}}\big{(}x-R(t)/2\big{)}+v_{1 }\,, \tag{67}\]
where \(\phi_{K}\) is the single-soliton solution for the given model. The corresponding effective Lagrangian does the same job as \(L^{\rm rigid}_{\rm OOM}\) of Eq. (65). However, there are conceptual differences. In field theory, one must be careful about placing the kinks sufficiently far apart. This is especially true for the \(\phi^{8}\) model, where the kinks have long polynomial tails and a naive superposition ansatz like (67) is not suitable for numerical investigations [24]. In contrast, the mech-\(K\bar{K}\) superposition _is_ an exact solution of the equations of motion for any \(R>0\). Thus the outcome of the collision is automatically independent of the initial separation. We can summarize this by saying that the short-range interactions of field theory are replaced by _contact_ interactions in mechanization.
### mech-\(K\bar{K}\) scattering
Let us now turn on \(R_{K}(t)\). The mech-kinks now possess a Derrick mode that allows the resonant transfer of kinetic energy, which manifests as _bouncing_ - this is when the mech-\(K\bar{K}\) pair temporarily reemerges from the second stage into the first stage but does not have sufficient kinetic energy to fly apart and, instead, plunges again below the \(R=0\) line. Notice that this gives us an operational definition of the number of bounces - it is the number of zeros of \(R(t)\) divided by 2 minus one.7
Footnote 7: This is irrespective of what the outcome of the scattering is.
With dynamical \(R_{K}(t)\), the \(N=3\) mech-oscillon in the second stage can decay, i.e. its width grows exponentially with time, while the height exponentially decreases (see [22] for details). Hence, non-rigid mech-kinks display more complicated behavior, with two primary outcomes: a well-separated mech-\(K\bar{K}\) pair with excited Derrick modes or a decayed mech-oscillon.
The first stage is given as
\[L^{K\bar{K}}_{\rm I}=\frac{1}{2}g^{\rm I}_{ab}\dot{X}_{a}\dot{X}_{b}-\frac{(v _{2}-v_{1})^{2}}{R_{K}}-2\kappa R_{K}\,, \tag{68}\]
where \(X_{a}=\{R,R_{K}\}\) and where the metric reads
\[g^{\rm I}=\frac{2}{3R_{K}}\begin{pmatrix}3&3\\ 3&4\end{pmatrix}\,. \tag{69}\]
There are no singularities in either metric or potential along the \(R\) coordinate.
Figure 14: An example of a rigid mech-\(K\bar{K}\) collision in SG model.
Figure 13: An example of a rigid mech-\(K\bar{K}\) collision in \(\phi^{4}\) model.
The second stage is described by
\[L_{\rm II}^{K\bar{K}}=\frac{1}{2}g_{ab}^{\rm II}\dot{X}_{a}\dot{X}_{b}-U_{\rm II}\,, \tag{70}\]
where \(U_{\rm II}\) is the same as in previous subsection and where the metric reads
\[g^{\rm II}=\frac{1}{3R_{K}^{4}}\left(\begin{array}{cc}6R_{K}^{2}\left(R_{K}- R\right)&6R_{K}\left(R_{K}^{2}+R^{2}\right)\\ 6R_{K}\left(R_{K}^{2}+R^{2}\right)&-4\left(R^{3}-2R_{K}^{3}\right)\end{array} \right)\,. \tag{71}\]
From the formulae for determinant and Ricci scalar
\[\left|g^{\rm II}\right|=-\frac{4\left(R_{K}+R\right)\left(R^{2}R_ {K}+5RR_{K}^{2}-R_{K}^{3}+R^{3}\right)}{3R_{K}^{4}}\,, \tag{72}\] \[\mathcal{R}=\frac{9R_{K}^{4}\left(R^{2}R_{K}-2RR_{K}^{2}-R_{K}^{ 3}+R^{3}\right)}{2\left(R_{K}+R\right)^{2}\left(R^{2}R_{K}+5RR_{K}^{2}-R_{K}^ {3}+R^{3}\right)^{2}} \tag{73}\]
we see that there is a physical singularity at the transition into the third stage, i.e. at \(R=-R_{K}\). Unlike in the rigid case, we are now unable to continue to more negative values - the mech-field cannot attain values below the vacuum \(v_{1}\). This is an unphysical artefact that can be traced to the so-called null-vector problem - the metric is degenerate at \(R=-R_{K}\).
Hence, the full effective Lagrangian for mech-\(K\bar{K}\) scattering has only two stages:
\[L_{\rm OOM}^{K\bar{K}}=L_{\rm I}\theta(R)+L_{\rm II}\theta(-R)\,. \tag{74}\]
A CCM that has similar characteristics as \(L_{\rm OOM}^{K\bar{K}}\) is given by the ansatz
\[\phi=\phi_{K}\Big{(}b(t)(x+a(t))\Big{)}+\phi_{\bar{K}}\Big{(}b(t)(x-a(t)) \Big{)}+v_{1}\,. \tag{75}\]
The corresponding effective Lagrangian is described in [11] and suffers from the same null-vector (or flatness) problem, namely that the metric has a singularity at \(a=0\). There are also known remedies for this malady: either choosing a different moduli space, with massive modes supplanted into the above ansatz with judiciously chosen amplitude moduli (see [10]), or including the Derrick modes in a perturbative fashion, as done in [11].
The flatness problem present in \(L_{\rm OOM}^{K\bar{K}}\) is a consequence of too simple a mech-field. Indeed, from the formula (54) we see that at \(R=-R_{K}\) the mech-field becomes exact vacuum everywhere, i.e. \(\phi_{M}^{K\bar{K}}=v_{1}\). Moreover, at this point \(\partial_{R}\phi_{M}^{K\bar{K}}=-\partial_{R_{K}}\phi_{M}^{K\bar{K}}\).
The null-vector problem, however, should all but disappear for higher \(N\) mech-fields, where there are more degrees of freedom and it is easy to avoid \(\phi_{M}=v_{1}\) for all values of \(R\). We plan to investigate higher \(N\) mech-\(K\bar{K}\) collisions in the future.
We display typical mech-\(K\bar{K}\) scattering in Figs. 15-17.
In Fig. 18 we display the time dependence of the center value of the mech-field \(\phi_{M}(x=0)\) for a range of initial velocities in the \(\phi^{4}\) model. The dark blue color represents the +1 vacuum, while the white color stands for the \(-1\) vacuum. Bouncing occurs for those velocities where the value of \(\phi_{M}(x=0)\) eventually returns to +1 - these are the dark blue columns, indicating that the mech-\(K\bar{K}\) pair has been re-formed after the initial collision. On the other hand, the white columns correspond to situations where the mech-oscillon decayed to the \(-1\) vacuum. The reader should compare Fig. 18 with Fig. 1.
Let us make several comments.
First, there exists a critical velocity above which the mech-\(K\bar{K}\) pair scatter elastically. The corresponding value for \(K\bar{K}\) collisions in \(\phi^{4}\) theory is \(v_{\rm crit}=0.2598\)[11], while for the non-BPS \(L_{\rm OOM}^{K\bar{K}}\) it is around 0.32 (upper half of Fig. 18) and around 0.44 for the BPS case (lower half of Fig. 18). It is curious that BPS-ness makes the match with the field theory in this regard worse.
Second, there also exists a minimal velocity below which bouncing does not occur. This is especially visible for BPS mechanization, where \(v_{\rm min}\approx 0.28\), below which there is a clear change in the character of the collisions. For the non-BPS version (the upper part of Fig. 18), it is much harder to ascertain the value of \(v_{\rm min}\) without a careful search. Certainly, there does not seem to be a qualitative change of the mech-\(K\bar{K}\) scattering like in the BPS case. In this regard, the BPS case is more similar to the field theory.
The third aspect - and perhaps the most striking one - is the absence of smooth edges between bouncing windows and bion chimneys in both non-BPS and BPS cases. In field theory, bouncing windows begin and end at practically vanishing values of outgoing velocities, so that the transitions to bion chimneys are continuous. In our case, however, bouncing windows start and end abruptly at finite velocities. Currently, we cannot explain why this is so, nor what the significance of this phenomenon is. We suspect it is a clue that could lead to further conceptual improvements of the mechanization, and we plan to investigate this aspect of mech-\(K\bar{K}\) scattering in a future publication.
### LOOM
We can generalize the concept of LOOM as follows. Given a mech-field, \(\phi_{M}\), with joints placed at positions \(\{x_{0},\ldots,x_{N}\}\), the effective Lagrangian describing its dynamics is given by
\[L_{\rm OOM}=\sum_{\sigma\in P_{N+1}}L_{\rm M}\big{(}\{x_{\sigma},\phi_{\sigma} ^{\rm true}\}\big{)}\prod_{a=0}^{N-1}\theta\big{(}x_{\sigma(a+1)}-x_{\sigma(a )}\big{)}\,, \tag{76}\]
where \(P_{N+1}\) is the set of all permutations of \(N+1\) indices and where \(L_{M}(\{x,\phi\})\) is a fixed order effective Lagrangian given by the formula (27). Notice that the (true) heights of the joints \(\phi^{\rm true}\) depend on the given permutation of joints (e.g. in Fig. 12 we see that the heights of joints have different values in each stage). Depending on which \(L_{M}(\{x,\phi\})\) is taken, either non-BPS or BPS, the resulting LOOM would describe either non-BPS or BPS dynamics.
Let us stress that \(L_{\rm OOM}\) is nothing other than the result of a direct integration of the Lagrangian density, i.e.
\[L_{\rm OOM}=\int\limits_{-\infty}^{\infty}{\rm d}x\,{\cal L}\big{(}\phi_{M}\big{)}\,, \tag{77}\]
allowing \(\{x_{0},\ldots,x_{N}\}\) to take all possible values. Consequently, the chain of Heaviside functions in the formula (76) is there to enforce that the appropriate fixed-order Lagrangian is switched on.
Lastly, let us comment that a generic mech-field for which all \(2N\) degrees of freedom are dynamical is not going to be directly extendable beyond canonical orderings of joints. This is due to the presence of singularities in the moduli space that we discussed in Sec. II (see, e.g. (24)). Generically, \(\phi_{a}\)'s need to be constrained in an appropriate way to ensure the continuity of the mech-field at each contact \(\Delta x_{a}=0\).
## V Summary and outlook
In this paper, we have introduced two conceptual advancements of mechanization compared with [22].
First is the BPS mechanization, which replicates the BPS nature of kinks in field theory. Unlike the non-BPS mech-kinks, whose static energies \(m_{K}\) approach the field-theoretical mass \(M_{K}\) only very slowly, as evident from Fig. 5, the BPS mech-kinks saturate the same bound \(m_{K}=M_{K}\) for all \(N\). There is also a major difference in the number of zero modes. Non-BPS mech-kinks have \(2N-1\) massive normal modes and only one zero mode associated with translational symmetry. On the other hand, BPS mech-kinks have \(N\) massive modes and \(N-1\) additional zero modes that stem from the arbitrariness of the heights of the joints \(\phi_{a}\). These extra zero modes make the \(N>1\) BPS mech-kinks dynamically unstable. We observed that they are very prone to joint ejections - a boundary joint flying off to infinity, effectively reducing the number of degrees of freedom. For BPS mech-kinks this occurs almost spontaneously, as there is no energy barrier that needs to be overcome. Joint ejections happen also for non-BPS mech-kinks, but in that case there is an energy barrier. We are thus led to the conclusion that only the \(N=1\) BPS mech-kink is a dynamically stable solution.
We have also illustrated the convergence (or lack thereof) of the distribution of normal modes to the spectrum for kinks in Figs. 6-9. Especially Fig. 9 testifies that normal modes of BPS mech-kinks are unwilling to approximate the shape mode of \(\phi^{4}\) kink even for very high \(N\). This could be an important clue. Coupled with
Figure 15: Collision of mech-\(K\bar{K}\) pair in \(\phi^{4}\) model (non-BPS). Here we see a formation of a short-duration mech-oscillon. There are two bounces (around \(t=35\) and \(t=46\)) after which the mech-oscillon rapidly decays into \(-1\) vacuum.
the observation of dynamical instability of \(N>1\) BPS mech-kinks, we may conclude that there is a conceptual difference between kinks and mech-kinks that persists for any \(N\). One difference that indeed persists is the compactness of the mech-field. Thus, it may not be enough just to increase \(N\) to reach a quantitative match with field theory; some new ingredient might be needed. We plan to pursue this clue in the future.
The compactness of mech-fields is also the reason for introducing the second conceptual advancement. In the pursuit of direct mech-\(K\bar{K}\) scattering, we have shown that short-range interactions in field theory can be replaced by contact interactions by allowing joints to pass through each other. This can be achieved without encountering singularities, that are present in a general mech-field, by working with constrained mech-fields. Fortunately, a mech-\(K\bar{K}\) pair separated by a flat segment of length \(R>0\) is such a mech-field whose moduli space is regular at \(R=0\) and we may continue the dynamics for negative values \(R<0\) via switching to a (constrained) mech-oscillon as described in Sec. IV. This is embodied in the notion of the LOose Order Mechanization, or LOOM.
First, we have presented a LOOM for rigid mech-\(K\bar{K}\) scattering, where only the separation \(R\) is dynamical. There, the LOOM involves four stages (see Fig. 12) that cover the full range of the modulus \(R\in(-\infty,\infty)\). When we turned on the time dependence of the widths, \(R_{K}(t)\), we encountered a singularity at the transition from the second to the third stage, preventing the mech-field from going below the vacuum. The same problem, known as the null-vector problem, appears also in naive CCMs and can be overcome by including more moduli [10]. In our approach too, we expect the null-vector problem to go away when we increase the number of joints. We plan to provide details in a subsequent publication.
Both the BPS property and the necessity of LOOM point to the same conclusion: a generic mech-field has too many degrees of freedom and needs to be constrained. This is perhaps not surprising, given the ad-hoc nature of its construction. In particular, during the course of dynamical evolution, the \(N>1\) BPS mech-kinks easily shed degrees of freedom via joint ejections. This, in restricted capacity, is true also for non-BPS mech-kinks, and it has been observed in high-\(N\) mech-oscillons as well [22]. The construction of LOOM - which consists in sewing together effective Lagrangians for different orderings of the joints - would not work at all if we allowed all moduli of a generic mech-field to be dynamic.
As a closing remark, however, let us also point out a case where the problem could be the exact opposite. Indeed, if we consider a free field theory with \(V=0\), a general solution is described by superposition of arbitrary
Figure 16: Collision of mech-\(K\bar{K}\) in the \(\phi^{4}\) model (BPS). Here, the mech-oscillon is formed for roughly 25 t.u. and disintegrates into an excited mech-\(K\bar{K}\) pair.
shapes moving with the speed of light either to the left or the right: \(\phi_{\rm free}(x,t)=f_{\rm L}(x-t)+f_{\rm R}(x+t)\). As a corollary, the solution to the Cauchy problem with a static initial shape, i.e. \(\phi_{\rm free}(x,0)=f(x)\) and \(\dot{\phi}_{\rm free}(x,0)=0\), is described by an immediate disintegration into two copies of the same shape with half the amplitude: \(\phi_{\rm free}(x,t)=f(x-t)/2+f(x+t)/2\).
In the mechanized version of the free field theory, this does not happen. Starting with a symmetric \(N=2\) mech-field - a triangle of width \(R\) and height \(A\), the governing equations of motion
\[\ddot{R}=\frac{16}{R}-\frac{2\dot{A}\dot{R}}{A}\,,\quad\ddot{A}=A\frac{\dot{R} ^{2}}{R^{2}}-\frac{20A}{R^{2}}\,, \tag{78}\]
can be solved exactly as
\[R=R_{0}+4t\,,\quad A=A_{0}\sqrt{1+4t/R_{0}}\,, \tag{79}\]
which depicts an ever-expanding triangle with a constant static mass \(A^{2}/R=A_{0}^{2}/R_{0}\). Clearly, such a solution is unphysical, not just because the joints fly apart with twice the speed of light. The correct solution should be the same as in field theory - that the triangle disintegrates into two similar triangles with half the initial height, flying apart with the speed of light. However, this would require an instantaneous transition from a symmetric \(N=2\) mech-field with 3 joints to a symmetric \(N=5\) mech-field with 6 joints. In other words, there would be a (triple) bifurcation at \(t=0\) where each joint turns into two. Also note that such a bifurcation would be a relativistic effect.
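Both the solution (79) and the constancy of the static mass \(A^{2}/R\) are quickly verified symbolically:

```python
import sympy as sp

t, R0, A0 = sp.symbols('t R_0 A_0', positive=True)
R = R0 + 4 * t                       # Eq. (79)
A = A0 * sp.sqrt(1 + 4 * t / R0)

# Eq. (78): both equations of motion, plus the static mass A^2/R
eq_R = sp.diff(R, t, 2) - (16 / R - 2 * sp.diff(A, t) * sp.diff(R, t) / A)
eq_A = sp.diff(A, t, 2) - (A * sp.diff(R, t)**2 / R**2 - 20 * A / R**2)
print(sp.simplify(eq_R), sp.simplify(eq_A), sp.simplify(A**2 / R))
# -> 0  0  A_0**2/R_0
```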
We believe that some strange characteristics of mech-kink dynamics can potentially be explained by the current lack of a 'bifurcation' process. We plan to pursue this possibility in a subsequent paper.
###### Acknowledgements.
We would like to thank T. Romanczukiewicz and A. Wereszczynski for many fruitful discussions and help with numerical code. We also acknowledge the institutional support of the Research Centre for Theoretical Physics and Astrophysics, Institute of Physics, Silesian University in Opava.
|
2308.09630 | Physical modelling of near-Earth asteroid (23187) 2000 PN9 with
ground-based optical and radar observations | We present a physical model and spin-state analysis of the potentially
hazardous asteroid (23187) 2000 PN9. As part of a long-term campaign to make
direct detections of the YORP effect, we collected optical lightcurves of the
asteroid between 2006 and 2020. These observations were combined with planetary
radar data to develop a detailed shape model which was used to search for YORP
acceleration. We report that 2000 PN9 is a relatively large top-shaped body
with a sidereal rotation period of 2.53216$\pm$0.00015 h. Although we find no
evidence for rotational acceleration, YORP torques smaller than
$\sim$10$^{-8}$$\,\rm rad/day^{2}$ cannot be ruled out. It is likely that 2000
PN9 is a YORP-evolved object, and may be an example of YORP equilibrium or self
limitation. | L. Dover, S. C. Lowry, A. Rożek, B. Rozitis, S. L. Jackson, T. Zegmott, Yu. N. Krugly, I. N. Belskaya, A. Fitzsimmons, S. F. Green, C. Snodgrass, P. R. Weissman, M. Brozović, L. A. M. Benner, M. W. Busch, V. R. Ayvazian, V. Chiorny, R. Ya. Inasaridze, M. Krugov, S. Mykhailova, I. Reva, J. Hibbert | 2023-08-18T15:43:36Z | http://arxiv.org/abs/2308.09630v1 | Physical modelling of near-Earth asteroid (23187) 2000 PN9 with ground-based optical and radar observations
###### Abstract
We present a physical model and spin-state analysis of the potentially hazardous asteroid (23187) 2000 PN9. As part of a long-term campaign to make direct detections of the YORP effect, we collected optical lightcurves of the asteroid between 2006 and 2020. These observations were combined with planetary radar data to develop a detailed shape model which was used to search for YORP acceleration. We report that 2000 PN9 is a relatively large top-shaped body with a sidereal rotation period of 2.53216\(\pm\)0.00015 h. Although we find no evidence for rotational acceleration, YORP torques smaller than \(\sim\)10\({}^{-8}\) rad/day\({}^{2}\) cannot be ruled out. It is likely that 2000 PN9 is a YORP-evolved object, and may be an example of YORP equilibrium or self limitation.
keywords: minor planets, asteroids: individual: (23187) 2000 PN9 - methods: observational - methods: data analysis - techniques: photometric - techniques: radar astronomy - radiation mechanisms: thermal
## 1 Introduction
The Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect is a thermal torque caused by the reflection, absorption and anisotropic re-emission of Solar radiation (Rubincam, 2000). The YORP effect can change the rotation period and spin axis orientation of small bodies, and is a key mechanism in their evolution. It can trigger the formation of binary asteroids, deliver asteroids to Earth-crossing orbits (Bottke et al., 2006) and cause spin-axis alignment in asteroidal families (Vokrouhlicky et al., 2003). There have been eleven direct detections of the YORP effect to date: (54509) YORP, (1862) Apollo, (1620) Geographos, (3103) Eger, (25143) Itokawa, (161989) Cacus, (101955) Bennu, (68346) 2001 KZ66, (10115) 1992 SK and (1685) Toro (Lowry et al., 2007; Taylor et al., 2007; Kaasalainen et al., 2007; Durech et al., 2008, 2012; Lowry et al., 2014; Durech et al., 2018; Nolan et al., 2019; Zegmott et al., 2021; Durech et al., 2022). Ten of these detections have a YORP acceleration below 10\({}^{-7}\) rad/day\({}^{2}\), with the much smaller asteroid (54509) YORP having a detected YORP spin-up rate of 3.49\(\times\)10\({}^{-6}\) rad/day\({}^{2}\).

The YORP effect is expected to appear in both spin-up and spin-down configurations, yet all detections to date are in the spin-up case. This could be due to a physical process causing an excess of positive torques, such as tangential YORP (Golubov & Krugly, 2012). The lack of spin-down detections may also be a consequence of observational bias, as objects experiencing rotational deceleration will generally have longer periods. For periods greater than \(\sim\)8 h, it is difficult to obtain full rotational coverage with a single lightcurve. Asteroids with shorter rotation periods can more readily be observed in a single night, making them more lucrative targets. Objects with fast rotation are more likely to be in a spin-up configuration, hence there is a bias towards
detections of spin-up YORP. In order to better understand the YORP effect and its important influence on the evolution of the Solar System, further detections (and non-detections) must be made. As the process of making a YORP detection includes the development of a detailed physical model, the pursuit of YORP detections provides wider benefits to the overall understanding of asteroid evolution.

Since April 2010, our group has been monitoring a selection of small asteroids that are strong candidates for direct detection of the YORP effect. The majority of these observations were conducted through a European Southern Observatory Large Programme with the 3.6 m New Technology Telescope (NTT) at La Silla, Chile. Accompanying observations have been made with various small and medium-sized telescopes, with most imaging conducted at optical wavelengths. Asteroids that are closer to the Sun are exposed to more solar insolation, which in turn increases the strength of the YORP effect, thus all of the targets are near-Earth asteroids (NEAs). Asteroids were selected for their long-term observability, the range of achievable viewing geometries and their short rotation periods. By selecting targets with short rotation periods, we could ensure that several full rotations could be observed over the course of a single night, or several nights with lightcurve folding. Aside from observational constraints, the short-period (<8 h) regime is critical to understanding the fate of asteroids in a spin-up configuration. Objects must either reach a state of rotational equilibrium or accelerate beyond the spin-breakup barrier and experience a disruptive event. Probing asteroids that are close to the breakup limit thus makes it possible to link each asteroid's physical properties not only to its YORP state, but to its evolutionary track.

This study focuses on one target from our campaign, (23187) 2000 PN9 (hereafter PN9), which was observed using optical and planetary radar facilities between 2001 and 2020. PN9 is an Apollo-class NEA that has been designated as a potentially hazardous asteroid. It was discovered by the Lincoln Near-Earth Asteroid Research (LINEAR) tracking programme in August 2000 (Moravec et al., 2000) with a semi-major axis of 1.85 AU and an eccentricity of 0.59.
Using optical observations in 2001 and 2006, Belskaya et al. (2009) determined a synodic rotation period of 2.5325\(\pm\)0.0004 h, a lightcurve amplitude of 0.13 mag, a polarimetrically-derived albedo of 0.24\(\pm\)0.06 and an absolute magnitude \(H=16.2\), resulting in a diameter of 1.6\(\pm\)0.3 km. Busch et al. (2006) determined that the asteroid is roughly spherical with an approximate diameter of 2 km from radar observations made in 2001. A synodic rotation period of 2.537\(\pm\)0.002 h was reported from optical observations in 2016 (Warner, 2016). MIT-Hawaii Near-Earth Object Spectroscopic Survey (MITHNEOS) observations have led to PN9 being classified as belonging to either the S/Sq, Sq or Sq/Q taxonomic types (Thomas et al., 2014; Binzel et al., 2019).
In Section (2) of this paper, we describe the optical and radar observations that were used in this analysis. Section (3) describes the process of developing a physical model for PN9, presenting both a lightcurve-only model and a combined optical and radar model. We also describe an analysis of PN9's rotational phase to search for evidence of YORP acceleration. In Section (4) we discuss the physical parameters of PN9, the implications of our YORP non-detection, and the importance of studying more top-shaped asteroids similar to PN9.
## 2 Observations of (23187) 2000 PN9
### Optical lightcurves
Our optical lightcurve dataset for PN9 spans fourteen years, from March 2006 to November 2020. Each lightcurve is summarised in Table 1, with indications of how each lightcurve was used in this work. Some low-quality lightcurves were not used for modelling or spin-state analysis. The subset of lightcurves used in modelling was used for both the lightcurve-only (Sect. 3.1) and combined lightcurve and radar (Sect. 3.2) models.
As shown in Figure 1, the asteroid was observed over a range of viewing geometries during the fourteen years of observation. This is important for shape modelling, as the entire surface of the asteroid cannot be seen with a single viewing geometry. Viewing the asteroid's rotation from different aspect angles, and under different shadowing conditions, can greatly improve constraints on its shape and rotational state. For YORP detections it is also important to regularly view the asteroid at similar viewing geometries, where the lightcurve shape is similar, to constrain rotational phase offset measurements. The distribution of optical observations for PN9 is hence favourable as it includes both repeating and varied viewing geometries.
Rotational lightcurves were extracted using relative photometry, comparing the asteroid's brightness to a selection of stable background stars. In some cases, sidereal tracking was used if a desirable signal-to-noise ratio (SNR) could be achieved on the asteroid without its full width at half maximum (FWHM) profile exceeding atmospheric seeing. This ensured that a circular aperture with a radius of twice the FWHM profile could be used for photometry. Otherwise, the asteroid was differentially tracked and exposure times were set to avoid trailing of the background stars beyond atmospheric seeing. Care was also taken to ensure that sufficient temporal resolution was achieved, i.e. that each exposure was not a significant fraction of the asteroid's rotation period. All images were processed using standard CCD reduction procedures with bias subtraction and flat field correction, along with dark field correction where necessary.
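As a schematic illustration of this step (not the actual pipeline used here), relative photometry with apertures of twice the FWHM might look as follows; the function name, the averaging over comparison stars, and the assumption of a background-subtracted image are ours:

```python
import numpy as np
from photutils.aperture import CircularAperture, aperture_photometry

def relative_mag(image, ast_xy, comp_xy, fwhm):
    """Asteroid magnitude relative to the mean flux of comparison stars.

    Assumes `image` is already bias/flat corrected and background subtracted.
    """
    apertures = CircularAperture([ast_xy] + list(comp_xy), r=2.0 * fwhm)
    flux = aperture_photometry(image, apertures)["aperture_sum"]
    return -2.5 * np.log10(flux[0] / np.mean(flux[1:]))

# toy usage on a synthetic frame
img = np.random.default_rng(0).poisson(100.0, (64, 64)).astype(float)
print(relative_mag(img, (32, 32), [(10, 10), (50, 20)], fwhm=3.0))
```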
Our optical dataset includes lightcurves from ten different observatories. It should be noted that the choice of optical filter has a negligible impact on the shape and amplitude of relative lightcurves, hence we have included observations using a variety of broadband and clear filters. Information on the observations that were conducted at each observatory is given below.
#### 2.1.1 Chuluvi Observatory - 2006, 2011
The 0.7 m telescope at Chuluvi Observatory (Kharkiv, Ukraine) was used to observe PN9 in March 2006 and March 2011. The asteroid was imaged in 2006 with a \(375\times 242\) pixel CCD with a field-of-view (FOV) of \(10.5^{\prime}\times 8.0^{\prime}\) using the Johnson-Cousins BVRI filters. In 2011, observations were conducted in the Johnson-Cousins R filter using a 1056 x 1027 pixel CCD with an FOV of \(16.9^{\prime}\times 16.4^{\prime}\). The lightcurves resulting from the 2006 apparition were previously published in Belskaya et al. (2009).
#### 2.1.2 New Technology Telescope - 2010, 2020
ESO's 3.6 m New Technology Telescope (NTT) at La Silla Observatory (Chile) was used to observe PN9 in 2010 and 2020 with the ESO Faint Spectrograph and Camera v.2 (EFOSC2). EFOSC2 has an FOV of \(4.1^{\prime}\times 4.1^{\prime}\) and a \(2048\times 2048\) pixel CCD. We used EFOSC2 in imaging mode with \(2\times 2\) binning, and images were co-added to increase the SNR on the asteroid. We observed PN9 using the Bessel R filter for two nights in August 2010 and one night in October 2010, and with the Bessel V filter for three nights in November 2020.
#### 2.1.3 Danish 1.54 m Telescope - 2010
We used the 1.54 m Danish Telescope at La Silla Observatory (Chile) to observe PN9 for one night in September 2010. We used the Danish Faint Object Spectrograph and Camera (DFOSC), which has an FOV of \(13.3^{\prime}\times 13.3^{\prime}\) and a usable CCD area of \(2148\times 2102\) pixels. DFOSC was used to image PN9 with \(1\times 1\) binning in the Bessel R filter. Images were co-added before lightcurve extraction.
#### 2.1.4 Table Mountain Observatory - 2010
PN9 was observed over three nights with the Jet Propulsion Lab's 0.6 m telescope at Table Mountain Observatory (California, USA) in September 2010. We imaged PN9 with a \(1024\times 1024\) pixel CCD that has an FOV of \(8.9^{\prime}\times 8.9^{\prime}\) using the R filter and \(1\times 1\) binning, and images were co-added for lightcurve extraction.
#### 2.1.5 Abastumani Observatory - 2011
In March 2011, observations of PN9 were carried out with the 0.7 m Telescope at the Abastumani Astrophysical Observatory (Abastumani, Georgia). We imaged the asteroid without a filter using a \(3072\times 2048\) pixel CCD with an FOV of \(44.4^{\prime}\times 29.6^{\prime}\).
#### 2.1.6 Hale Telescope - 2015
We used the 5.1 m Hale telescope at Palomar Observatory (California, USA) to observe PN9 in June 2015. The telescope was equipped with the Large Format Camera (LFC), which has six \(2048\times 4096\) chips, each of which has an FOV of \(6.1^{\prime}\times 12.3^{\prime}\). We used the central CCD chip with \(2\times 2\) binning and the Bessel R filter to image PN9. Images were co-added for lightcurve extraction.
#### 2.1.7 Tien-Shan Observatory - 2015
In September 2015, PN9 was observed with the 1.0 m telescope at the Tien-Shan Astronomical Observatory (Almaty, Kazakhstan). We used a \(3072\times 3072\) pixel CCD, which has an FOV of \(18.9^{\prime}\times 18.9^{\prime}\), with \(2\times 2\) binning using the Johnson R filter.
#### 2.1.8 Shain Telescope - 2015
The asteroid was observed in September 2015 with the 2.6 m Shain Telescope at the Crimean Astrophysical Observatory (Nauchny, Ukraine). We imaged PN9 with a \(2048\times 2048\) pixel CCD, which has an FOV of \(9.5^{\prime}\times 9.5^{\prime}\), using \(2\times 2\) binning without a filter.
#### 2.1.9 Palmer Divide Station - 2016
This analysis includes six published lightcurves from the Palmer Divide Station (California, USA). PN9 was observed with three 0.35 m Meade LX200GPS telescopes equipped with commercial CCDs using the Johnson V filter. These lightcurves were obtained through the Asteroid Lightcurve Data Exchange Format (ALCDEF) database (Warner et al., 2011) and are discussed in Warner (2016). Note that observations taken with different telescopes during the same night are treated as separate lightcurves.
Figure 1: Observing geometries for the asteroid (23187) 2000 PN9 from 2000 to the start of 2022. The top panels show the position of the asteroid in the ecliptic coordinate system (latitude and longitude) as observed from Earth. The bottom left panel shows the solar phase angle while the bottom right panel shows the geocentric distance to the asteroid. The marked points denote observations of the asteroid. Optical lightcurves are marked as blue circles and radar observations are represented by red crosses.
#### 2.1.10 Isaac Newton Telescope - 2020
We observed PN9 over two nights in August 2020 with the 2.5 m Isaac Newton Telescope (La Palma, Spain). Imaging was conducted in the Harris V filter with 1 \(\times\) 1 binning using the central chip of the Wide Field Camera. The CCD was windowed to give a \(10^{\prime}\times 10^{\prime}\) field with a resolution of \(1820\times 1820\). Images were co-added for lightcurve extraction.
### Planetary radar
This analysis made use of delay-Doppler imaging and continuous wave echo power spectra obtained by planetary radar facilities. For delay-Doppler imaging a circularly polarised phase-modulated signal is transmitted, with the modulation pattern determined by a pseudo-random binary phase code (Ostro, 1993; Magri et al., 2007). The modulation pattern allows for the determination of distance between the observer and the point on the asteroid that reflected the signal. The resolution of delay information is determined by the temporal resolution of the modulation by the pseudo-random code and is known as the band length. A delay-Doppler image is constructed with delay in
\begin{table}
\begin{tabular}{c c c c c c c c c c c c} \hline \hline ID & UT date & \(R_{\odot}\) & \(\Delta_{0}\) & \(\alpha\) & \(\lambda_{O}\) & \(\beta_{O}\) & Total & Obs. & Filter & Included & Included & Included & Reference \\ & [yyyy-mm-dd] & [AU] & [AU] & [\({}^{\circ}\)] & [\({}^{\circ}\)] & [\({}^{\circ}\)] & [hour] & facility & & (model) & (ph. off) & \\ \hline
1 & 2006-03-10 & 1.098 & 0.072 & 75.55 & 101.9 & 58.89 & 0.3 & ChO & B & & & 1 \\
2 & &... &... &... &... &... & 0.5 & ChO & V & & & 1 \\
3 & &... &... &... &... &... & 0.7 & ChO & R & & & 1 \\
4 & &... &... &... &... &... & 0.3 & ChO & I & & & 1 \\
5 & 2006-03-20 & 1.099 & 0.249 & 59.37 & 125.2 & 57.85 & 4.2 & CHO & R & & & 1 \\
6 & 2006-04-03 & 1.228 & 0.495 & 51.57 & 131.9 & 56.30 & 1.3 & ChO & R & & & 1 \\
7 & 2010-08-28 & 1.927 & 0.965 & 12.96 & 334.2 & 25.3 & 2.4 & NTT & R & & & * \\
8 & 2010-08-29 & 1.920 & 0.956 & 12.87 & 333.5 & 25.0 & 3.3 & NTT & R & & * & * & \\
9 & 2010-09-03 & 1.876 & 0.915 & 13.36 & 329.1 & 22.5 & 1.3 & ESOD & R & & & \\
10 & 2010-09-08 & 1.846 & 0.894 & 14.72 & 326.2 & 20.6 & 4.8 & TMO & R & & & \\
11 & 2010-09-09 & 1.839 & 0.890 & 15.18 & 325.5 & 20.0 & 5.8 & TMO & R & & & \\
12 & 2010-09-10 & 1.831 & 0.886 & 15.67 & 324.7 & 19.5 & 5.7 & TMO & R & & & \\
13 & 2010-10-14 & 1.558 & 0.944 & 37.80 & 307.3 & -1.2 & 3.9 & NTT & R & & * & \\
14 & 2011-03-10 & 0.956 & 0.118 & 105.00 & 56.3 & -17.8 & 1.5 & CHO & R & & * \\
15 & 2011-03-11 & 0.964 & 0.117 & 101.12 & 62.0 & -10.4 & 0.7 & ChO & R & & * & \\
16 & 2011-03-13 & 0.981 & 0.122 & 92.43 & 72.5 & 4.2 & 1.8 & ChO & R & & * & \\
17 & 2011-03-15 & 0.998 & 0.139 & 84.44 & 81.5 & 16.4 & 3.8 & ABAO & clear & & * & * \\
18 & 2011-03-23 & 1.069 & 0.254 & 66.78 & 104.5 & 38.9 & 2.5 & CHO & R & & * & * \\
19 & &... &... &... &... &... & 2.6 & CHO & R & & * & * \\
20 & 2011-03-27 & 1.105 & 0.321 & 62.38 & 111.1 & 42.8 & 4.2 & AbAO & R & & * & * \\
21 & 2015-06-18 & 2.398 & 2.070 & 24.92 & 349.6 & 30.4 & 3.5 & PAL & R & & * & \\
22 & 2015-09-10 & 1.886 & 0.958 & 16.68 & 322.6 & 22.2 & 1.4 & TSAO & R & & & \\
23 & 2015-09-11 & 1.879 & 0.855 & 17.15 & 321.9 & 21.7 & 2.5 & CAO & clear & & * & * \\
24 & 2016-03-28 & 1.056 & 0.337 & 70.87 & 98.3 & 24.4 & 3.5 & PDS & V & & * & 2 \\
25 & &... &... &... &... &... & 1.1 & PDS & V & & * & 2 \\
26 & 2016-03-29 & 1.065 & 0.351 & 69.54 & 100.1 & 26.0 & 2.4 & PDS & V & & * & 2 \\
27 & &... &... &... &... &... & 1.0 & PDS & V & & & 2 \\
28 & 2016-03-30 & 1.074 & 0.366 & 68.29 & 100.1 & 26.0 & 2.9 & PDS & V & & * & 2 \\
29 & 2016-03-31 & 1.083 & 0.381 & 67.12 & 103.3 & 28.9 & 3.3 & PDS & V & & * & 2 \\
30 & &... &... &... &... &... &... & 1.4 & PDS & V & & * & 2 \\
31 & 2020-08-10 & 2.133 & 1.243 & 17.1 & 339.2 & 33.0 & 3.2 & INT & V & & * & * \\
32 & 2020-08-11 & 2.127 & 1.231 & 16.9 & 338.7 & 32.9 & 4.1 & INT & V & & * & \\
33 & 2020-11-01 & 1.513 & 1.217 & 40.8 & 305.8 & -3.3 & 2.6 & NTT & V & & * & \\
34 & 2020-11-02 & 1.504 & 1.223 & 41.0 & 305.9 & -3.7 & 3.0 & NTT & V & & * & * \\
35 & 2020-11-03 & 1.496 & 1.229 & 41.2 & 306.0 & -4.1 & 2.9 & NTT & V & * & * \\ \hline \end{tabular} 1
the vertical axis and Doppler shift in the horizontal axis. Continuous wave (CW) spectra do not contain delay information, and instead only describe the Doppler shift of the reflected signal in the two circular polarisation orientations. It is important to carefully select which radar datasets to include in the shape modelling process, as poor quality data can greatly increase computational cost without improving the model. For a summary of radar observations of PN9, see Table 2.
#### 2.2.1 Arecibo Observatory - 2001
The William E. Gordon telescope at Arecibo (Puerto Rico, USA) was a 305 m fixed-dish radio telescope with a 2380 MHz radar transmitter. It was used to obtain radar observations of PN9 on 3, 4 and 5 March 2001. The delay-Doppler imaging on 3 March was for ranging and ephemeris correction, hence it was excluded from the analysis. On 4 March, imaging was mostly conducted with a baud length of 0.1 \(\mu\)s giving a \(\sim\)15 m resolution, with some further imaging at 0.2 \(\mu\)s (\(\sim\)30 m). On 5 March, a baud length of 0.2 \(\mu\)s (\(\sim\)30 m) was used for imaging. The observers submitted astrometric measurements which were used to refine the orbital solution. The CW spectra from 4 and 5 March were included in the analysis. Observations from 3 March were not obtained until late in the modelling process. Due to limited computational resources and the presence of data from subsequent days, these data were not added to the model. All but ten minutes of the delay-Doppler imaging is at extremely low resolution and would not offer a significant contribution to the model; however, we recommend that the continuous wave spectra be included in any future work.
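The quoted resolutions follow from the usual radar relation, range resolution \(=c\tau/2\) for baud length \(\tau\); a quick check:

```python
c = 299_792_458.0                                   # speed of light [m/s]
for baud_us in (0.1, 0.125, 0.2, 1.0):
    # range resolution = c * baud / 2
    print(f"{baud_us} us -> {c * baud_us * 1e-6 / 2:.1f} m")
```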
#### 2.2.2 Goldstone Solar System Radar - 2001, 2006
The Goldstone Solar System Radar (GSSR) facility consists of the fully steerable 70 m DSS-14 "Mars" antenna equipped with an 8560 MHz transmitter. DSS-14 is located in the Mojave Desert (California, USA) and is a part of the Deep Space Network. Delay-Doppler imaging of PN9 was conducted by GSSR on 3 March 2001 and 7 March 2006. The 2001 imaging used a baud length of 1.0 \(\mu\)s (\(\sim\)150 m) while the 2006 imaging used a baud length of 0.125 \(\mu\)s (\(\sim\)19 m). In 2006, GSSR also obtained CW spectra of PN9. These were not included in the analysis, as there are already higher quality radar data for this epoch and viewing geometry. Astrometric measurements from both 2001 and 2006 were used for ephemeris correction.
## 3 Physical Modelling and Spin-State Analysis
### Lightcurve-only modelling
The majority of observational data for PN9 are in the form of optical lightcurves. As modelling with radar data is an iterative and computationally intensive process, and fitting procedures are highly sensitive to input parameters, it was more efficient to first perform a lightcurve-only analysis of PN9. This can allow the use of predetermined constraints on rotation period, pole orientation and shape, which greatly improves efficiency when later modelling the object with a combination of lightcurve and radar data.
The lightcurve-only modelling includes observations marked in Table 1. Lightcurves that were not included were unsuitable due to poor temporal resolution, gaps in rotational coverage or low SNR.
A search for PN9's sidereal rotation period was conducted between 2.500 and 2.570 h, a range based on the previously reported synodic periods (Galeev et al. 2007; Belskaya et al. 2009; Warner 2016),
| Obs. | UT Date [yyyy-mm-dd] | RTT [s] | Baud [\(\mu\)s] | Res. [m] | Start–Stop [hh:mm:ss] | Runs | Radar model | Note |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Arecibo | 2001-03-03 | 62 | CW | | 09:40:32-09:56:40 | 8 | | |
| | | | CW | | 09:59:25-10:00:23 | 1 | | |
| | | | 4 | 600 | 10:02:38-10:16:19 | 3 | | Ranging |
| | | | 4.5 | 675 | 10:17:46-10:23:28 | 3 | | Ranging |
| | | | 0.5 | 75 | 10:27:14-10:36:39 | 5 | | Ranging |
| Goldstone | 2001-03-03 | 62 | 1.0 | 150 | 13:14:07-15:02:52 | 49 | • | |
| Arecibo | 2001-03-04 | 67 | CW | | 09:04:03-09:16:47 | 6 | • | |
| | | | 0.2 | 30 | 09:18:59-09:24:35 | 3 | • | |
| | | | 0.1 | 15 | 09:27:01-09:38:27 | 3 | | |
| | | 68 | 0.1 | 15 | 10:09:44-10:31:28 | 10 | • | |
| Arecibo | 2001-03-05 | 76 | CW | | 09:05:55-09:12:43 | 3 | • | |
| | | 77 | 0.2 | 30 | 09:15:42-10:52:44 | 38 | • | |
| Goldstone | 2006-03-07 | 36 | 0.125 | 19 | 19:24:26-19:31:09 | 6 | • | |
| | | | 0.125 | 19 | 19:31:49-20:30:26 | 48 | • | |
| Goldstone | 2006-03-10 | 86 | CW | | 12:02:53-14:42:14 | 58 | | Low SNR |

Table 2: Delay-Doppler observations of (23187) 2000 PN9. "Obs." is the facility with which the observations were made. "UT Date" is the start date of the observations in universal time. "RTT" is the signal's round trip time to the object and back. "Baud" is the baud length and "Res." is the delay resolution; continuous wave observations are marked "CW" and do not have spatial resolution. "Start–Stop" is the UT timespan in which the observations were made. "Runs" is the number of transmit-receive cycles that were completed. Entries marked "•" were included in the radar shape modelling.
using the convex inversion routines described by Kaasalainen & Torppa (2001) and Kaasalainen et al. (2001). For each iteration over the period scan range, a shape model was generated for six random and unique rotational poles. Each shape model was then optimised to best fit the lightcurve data across the period range. The results of this scan, shown in Figure 2, identify a best-fit rotation period of \(2.532\pm 0.008\) h.
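The bookkeeping of this scan is straightforward; below is a minimal sketch, in which the convex-inversion optimiser itself is represented by a hypothetical callable `fit` standing in for the Kaasalainen routines (not implemented here), assumed to return the \(\chi^{2}\) of the optimised model:

```python
import numpy as np

def period_scan(fit, lightcurves, periods_h, n_poles=6, seed=0):
    """Record the best chi^2 over several trial poles for each period.

    `fit(lightcurves, period, pole)` is a stand-in for the
    convex-inversion optimiser and returns the chi^2 of the fit.
    """
    rng = np.random.default_rng(seed)
    best_chi2 = []
    for period in periods_h:
        # six random, unique rotational poles per trial period
        poles = [(rng.uniform(0, 360), rng.uniform(-90, 90))
                 for _ in range(n_poles)]
        best_chi2.append(min(fit(lightcurves, period, p) for p in poles))
    best_chi2 = np.asarray(best_chi2)
    return periods_h[int(best_chi2.argmin())], best_chi2

# e.g. periods = np.arange(2.500, 2.570, 0.0005)  # hours
```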
A further period scan was conducted over a wider but coarser range, searching for solutions between 1 h and 10 h. Solutions were found close to integer multiples of the 2.532 h period, which were discounted because their corresponding shape models were both physically extreme and inconsistent with the radar imaging data.
Using an input period of 2.532 h from the period scan, we conducted a search for the asteroid's rotational pole. For each pole on a \(5^{\circ}\times 5^{\circ}\) grid covering the celestial sphere, the model's period and convex shape were optimised. This scan assumed principal axis rotation. The scan was then repeated with the addition of YORP acceleration, for a range of YORP factors from \(-10^{-8}\) rad/day\({}^{2}\) to \(10^{-8}\) rad/day\({}^{2}\) in steps of \(2\times 10^{-9}\) rad/day\({}^{2}\). The goodness-of-fit for the global best solution of each YORP step does not converge towards any YORP value. This indicates that no YORP solution is found, hence only solutions with constant period rotation are considered in this section. A search for YORP with a radar-derived model of the asteroid is discussed in section 3.5.
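The same logic extends to the pole search; a sketch of the grid scan, again with a hypothetical `fit` callable standing in for the optimiser (which here optimises period and convex shape at each fixed pole):

```python
import numpy as np

def pole_scan(fit, lightcurves, step_deg=5.0, yorp_grid=(0.0,)):
    """Grid search over rotational poles, optionally over YORP factors.

    `fit(lightcurves, pole, yorp)` is a stand-in for the optimiser and
    returns the chi^2 at a fixed pole and YORP factor [rad/day^2].
    """
    lams = np.arange(0.0, 360.0, step_deg)               # ecliptic longitude
    betas = np.arange(-90.0, 90.0 + step_deg, step_deg)  # ecliptic latitude
    chi2 = {yorp: np.array([[fit(lightcurves, (lam, beta), yorp)
                             for lam in lams] for beta in betas])
            for yorp in yorp_grid}
    return lams, betas, chi2

# YORP factors scanned in the text:
# yorp_grid = np.arange(-1e-8, 1e-8 + 1e-12, 2e-9)
```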
Figure 3 shows the results of the constant-period pole scan, which does not converge to a single region in the celestial sphere. The best model's rotational pole lies at ecliptic longitude \(\lambda=105^{\circ}\) and ecliptic latitude \(\beta=+25^{\circ}\) with a rotation period of 2.532 h. Models within 5% of the best solution have poles corresponding to opposite regions of the celestial sphere, and are consistent in shape and period. This result shows that the orientation of the rotational axis is constrained, although the data are insufficient for distinguishing between prograde and retrograde rotation. This is a common issue when modelling with low-amplitude lightcurves produced by highly symmetrical objects. The convex hull model of the global best solution, shown in Figure 4, indicates that the asteroid is an oblate spheroid with signs of an equatorial ridge. There is a flattened section on the equator which could be interpreted as a crater, but could also be caused by a prominence or large boulder. The polar regions are also flattened, although this could be an artefact caused by uncertainty in the Z-axis. As shown in Figure A1, the model is able to reproduce the shape of most lightcurves, although in some cases there is a small phase offset and a mis-match in amplitude. Since there is no coherent progression in phase offset, the phase offsets are the result of period uncertainty which can be reduced with radar modelling. Convex hull models struggle to reproduce low-amplitude lightcurves due to the heightened dependence on surface features, which are not modelled, while general uncertainties in shape and pole can suppress or amplify brightness variation caused by the asteroid's overall shape.
### Combined radar & lightcurve model
Further modelling of PN9 was conducted using a combination of lightcurve and radar data using the shape software package (Hudson, 1993; Magri et al., 2007). As discussed in Section 3.1, it is efficient to begin with well-constrained input parameters describing the asteroid's shape and spin-state. In this case, those estimates are taken from the convex inversion analysis.
Figure 3: The results of a search for the rotational pole of (23187) 2000 PN9 using convex inversion of lightcurve data. For each pole solution in ecliptic coordinates \(\lambda\) and \(\beta\), the \(\chi^{2}\) fit of the solution is plotted for a colour range where the global minimum \(\chi^{2}\) is black and solutions 50% greater than the minimum are white. The best solution is marked with a yellow “+”. The yellow line and the white dotted and dashed lines enclose regions where \(\chi^{2}\) is within 1%, 5% and 10% of the best solution respectively. The top panel shows the full celestial sphere, with the region enclosed by the black rectangle shown in the bottom panel.
Figure 2: The results of a period search for asteroid (23187) 2000 PN9. For each period in the range shown, lightcurve data were used to optimise a model for six different rotational poles in the celestial sphere. The lowest \(\chi^{2}\) across the six models was recorded for the period being used. The best-fit sidereal rotation period for 2000 PN9 from this scan is \(2.532\pm 0.008\) h.
An input model for shape was constructed as a triaxial ellipsoid with principal-axis rotation. Both the convex hull model and inspection of the delay-Doppler images indicate that PN9 has a spheroidal shape, so a sphere with a 1.9 km diameter was used as the input model. This diameter was chosen as it was close to the 2 km estimate from an earlier unpublished analysis of the radar data (Busch et al., 2006), but also in agreement with a reported \(1.6\pm 0.3\) km diameter based on an estimated optical albedo of \(0.24\pm 0.06\) (Belskaya et al., 2009). The rotation period was set to 2.532 h, as previously determined in Section 3.1.
A \(10^{\circ}\times 10^{\circ}\) grid of poles was set up, covering the celestial sphere. For each fixed pole, the model's global shape and period were optimised to fit the radar data marked in Table 2. The ellipsoid model for each pole was then converted to a vertex model with 1000 vertices and 1996 facets, to allow for the fitting of surface features through the adjustment of individual facets. The continuous wave (CW) spectra were removed at this stage, as the model was sufficiently well-constrained and there was a risk of over-fitting to noise.
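The quoted vertex and facet counts are mutually consistent for a closed triangulated mesh; a one-line check:

```python
# A closed, genus-0 triangulated mesh satisfies Euler's formula
# V - E + F = 2; combined with 3F = 2E (three edges per triangle, each
# shared by two faces) this gives F = 2V - 4, which is why a
# 1000-vertex model carries 1996 facets.
def facets_from_vertices(n_vertices: int) -> int:
    return 2 * n_vertices - 4

assert facets_from_vertices(1000) == 1996
```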
Early iterations of PN9's vertex model remained in good agreement with the convex hull model's shape, pole and period. As the radar model's initial parameters were derived from lightcurve data, a coarse radar period scan was conducted to confirm that the radar data independently favour a 2.5 h solution. This period scan utilised the same six-pole strategy described in Section 3.1. Computational limitations restricted this to a coarse resolution that can only show global minima, and not identify local minima required for a precise period measurement. The resolution was increased around multiples of 2.5 h, as these are the most likely alternate solutions. As shown in Figure 5, a coarse period scan with radar data indicates a clear global minimum close to 2.5 h.
As a visual inspection of radar data shows a shape that is consistent with the convex hull model, it can be said that the lightcurve and radar datasets independently favour the same shape and period for PN9.
Having confirmed this, lightcurve data were then progressively introduced in subsequent fitting runs to produce a combined radar
Figure 4: The best-fit convex hull model of (23187) 2000 PN9 with the rotational pole \(\lambda=105^{\circ}\)\(\beta=+25^{\circ}\). This model assumes principal-axis rotation and a constant period of 2.532 h. The top row shows the model from the positive end of the Z, Y and X axes in the body-centric co-ordinate system. The bottom row shows the model from the negative end of the Z, Y and X axes. The Z-axis is aligned with the rotational pole, which is the shortest axis of inertia. The X-axis is arbitrarily set such that it is viewed from the positive end for the plane-of-sky at \(T_{0}\). Axis lengths are in arbitrary units, as lightcurve inversion does not produce scaled models.
Figure 5: The results of a period search for asteroid (23187) 2000 PN9 with radar data. For each period in the range shown, shape was used to optimise an ellipsoid model for six different rotational poles in the celestial sphere, fitting both continuous-wave and delay-Doppler data. The lowest \(\chi^{2}\) across the six models was recorded for the period being used, with a higher temporal resolution close to 2.5 h, 5 h and 7.5 h. This is a coarse scan intended to demonstrate a global minimum close to 2.5 h independently of lightcurve data.
and lightcurve model. The full subset of lightcurves, marked in Table 1, were not all included until the final iterations of modelling. Each additional lightcurve causes a significant increase in computation time whilst yielding diminishing returns. It is therefore most efficient to gradually introduce the lightcurve dataset as the model improves.
During the modelling process, various penalties were applied with shape to discourage certain features. The first penalty prevents excessive deviation of the centre of mass from the origin of the body-centric coordinate system. The second penalty prevents large divergence between the model's Z-axis and the axis of maximum inertia. A third penalty disallows non-principal axis rotation. A fourth penalty is used to suppress unphysical spikes that can occur when fitting a vertex model. Finally, a fifth penalty was applied to discourage deep concavities.
These penalties increase the \(\chi^{2}\) fit value when penalised features are encountered, meaning that shape is less likely to produce those features as it optimises models to produce a smaller \(\chi^{2}\) value. Each of these penalties is given a strength, with larger penalties more strongly discouraging features. The first three penalties were given a relatively high strength, and the latter two penalties were low in strength to ensure they only discouraged unphysical features without restricting the construction of craters, ridges and boulders.
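Schematically, the penalised objective has the following form; this is an illustrative sketch rather than shape's literal implementation, and the penalty measures and weights are placeholders:

```python
def penalised_chi2(chi2_data, penalties, strengths):
    """Illustrative form of the penalised objective described above.

    `penalties` are non-negative measures of the five discouraged
    features (centre-of-mass offset, inertia-axis misalignment,
    non-principal-axis rotation, surface spikes, deep concavities) and
    `strengths` are their weights; the first three would be set large,
    the last two small. The optimiser minimises the total, so heavily
    weighted features are strongly suppressed while weakly weighted
    ones remain possible.
    """
    return chi2_data + sum(s * p for s, p in zip(strengths, penalties))
```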
The results of the shape pole scan with both radar and lightcurve data are shown in Figure 8. The pole is again constrained to two opposite regions, with the best solution at ecliptic longitude \(\lambda=96^{\circ}\) and ecliptic latitude \(\beta=+30^{\circ}\). Solutions within 1% of the global best fit in both the northern and southern hemispheres have consistent shapes and periods, again indicating an uncertainty as to whether PN9's rotation is prograde or retrograde. While a retrograde solution cannot be completely ruled out, the prograde solution is more clearly favoured in a lightcurve-only analysis (Fig. 3). As the two solutions have identical rotation periods and very similar shapes, any qualitative analysis of the two solutions will yield the same conclusions. As such, the subsequent sections of this paper will only consider the prograde solution.
The best-fitting model was re-modelled with 2000 vertices to allow for closer fitting of surface features to the 15 m resolution radar imaging, although this yielded a negligible improvement in the \(\chi^{2}\) fit. Both the optical and radar observations cover the entire surface of the asteroid, leaving no 'unseen' surface area in either of the two wavelength regimes. The geometric parameters of this model are presented in Table 3 and the shape model is shown in Figure 9. The lightcurve fits, shown in Figure 8, are an improvement upon those produced by the convex hull model. The majority of lightcurves are fitted well in terms of shape, phase and amplitude, with the few poor fits generally corresponding to low-quality lightcurves that were not used in the modelling process.
### Differences and limitations of the models
A comparison of the convex inversion and shape pole scans (Figs. 3 and 8) shows that the latter method produces clearer convergence towards the global pole solution for PN9.
For highly symmetrical asteroids such as PN9, optical lightcurves will be dominated by surface features and observations will be more affected by instrumental performance and atmospheric conditions. Combining lightcurves taken with different filters can amplify these issues, especially when considering scattering effects on the asteroid. In the case of PN9, lightcurves included in the analysis were taken in the V, R and clear filters. Separating these into different subsets to produce independent models is not a viable option, as each of the subsets alone does not provide an adequate number of rotations and viewing geometries to produce a good model.
For asteroids where global shape dominates both optical and radar
Figure 8: The results of a search for the rotational pole of (23187) 2000 PN9 using shape with radar and lightcurve data. For each pole solution in ecliptic coordinates \(\lambda\) and \(\beta\), the \(\chi^{2}\) fit of the solution is plotted for a colour range where the global minimum \(\chi^{2}\) is black and the maximum is white. The best solution is marked with a yellow "+". The yellow lines enclose regions where \(\chi^{2}\) is within 1% of the best solution, and dotted and dashed white lines enclose regions where \(\chi^{2}\) is within 5% and 10% of the best solution respectively. Green stars indicate the ecliptic coordinates of the observer's line-of-sight for each date where delay-Doppler imaging was taken.
Figure 6: A comparison of observational data (red points) and the corresponding synthetic lightcurve (solid black line) for lightcurves 17 and 35 (Table 1). The synthetic lightcurves were generated using the combined radar and lightcurve model for (23187) 2000 PN9, with a combination of the Lambertian and Lommel-Seeliger scattering models. For plots of all 35 lightcurves, see Figure 8.
Figure 7: A comparison of lightcurve fits produced by the prograde (left panel) and retrograde (right panel) models of (23187) 2000 PN9 for lightcurve 15. The prograde model produces a better fit than the retrograde model, which has a smaller amplitude and a minor phase offset compared to the data.
features (e.g. Zegmott et al. (2021)), combining the data will better constrain the model. For asteroids like PN9, where surface properties are dominant, it can be difficult to reconcile the radar and optical data.
Radar can penetrate several wavelengths into the surface of an asteroid, and is thus sensitive to features within the top layer of material. Optical observations, however, only represent the surface of the asteroid. If there are any surface features that do not correlate with sub-surface features, such as buried rocks, radar echoes will be produced from features that are not visible on the surface (Virkki and Muinonen, 2016), hence there can be a disparity between optical and radar observations. The heightened importance of scattering laws and albedo introduces further complexity, resulting in a model that is a compromise between fitting both the optical and radar data.
Searches using only the radar data were also conducted, although the pole was poorly constrained without the wide range of viewing geometries afforded by the lightcurves. The shape model also benefits from the inclusion of lightcurve data, as the wide range of viewing geometries results in shadowing effects that can be used to better constrain the surface.
Observations that only cover a range of low sub-observer latitudes (i.e. equatorial views of the asteroid) can cause inaccuracy in shape models. When modelling with shape, this can cause models to assume a more spherical shape caused by uncertainty in the rotation axis. While the combined radar and lightcurve model for PN9 is highly spherical, the lightcurve and radar data span a sufficient range of viewing geometries to eliminate concerns as to whether PN9 could be more oblate than the model suggests. Goldstone radar imaging data from 2006 (Fig. A5) are particularly useful in this regard,
| Parameter | Value |
| --- | --- |
| \(\lambda\) | \(96\pm 36^{\circ}\) |
| \(\beta\) | \(+30\pm 17^{\circ}\) |
| P | \(2.53216\pm 0.00015\) h |
| Max. extent along (x, y, z) | \(1.82\times 1.82\times 1.77\) km (\(\pm\,0.08\times 0.07\times 0.11\) km) |
| Surface area | \(9.61\pm 0.80\) km\({}^{2}\) |
| Volume | \(2.62\pm 0.34\) km\({}^{3}\) |
| DEEVE dimensions (2a, 2b, 2c) | \(1.73\times 1.73\times 1.68\) km (\(\pm\,0.10\times 0.09\times 0.06\) km) |
| \(D_{\rm eq}\) | \(1.71\pm 0.07\) km |

Table 3: Summary of parameters for the prograde radar and lightcurve model of (23187) 2000 PN9.
Figure 9: The best-fit shape model of (23187) 2000 PN9 constructed with radar and lightcurve data. This model has its rotational pole at ecliptic longitude \(\lambda=96^{\circ}\) and ecliptic latitude \(\beta=+30^{\circ}\), and a sidereal rotation period of \(2.53216\pm 0.00015\) h. The top row shows the model from the positive end of the Z, Y and X axes in the body-centric co-ordinate system. The bottom row shows the model from the negative end of the Z, Y and X axes. The Z-axis is aligned with the rotational pole, which is the shortest axis of inertia. The X-axis is arbitrarily set such that it is viewed from the positive end for the plane-of-sky at \(T_{0}\). Axis lengths are given in kilometres. It should be noted that the model for the antipode solution has a very similar shape, such that any discussion of this model's features will also apply to the antipode model.
as they correspond to a sub-observer latitude of -61\({}^{\circ}\) over 148\({}^{\circ}\) of rotation.
### Disk-integrated properties
Continuous wave (CW) spectra can be used to determine the asteroid's circular polarisation ratio (SC/OC), whereby the echo power is recorded in both the same circular (SC) and opposite circular (OC) polarisations.
Arecibo observations of PN9 from 4 and 5 March 2001 give SC/OC ratios of \(0.234\pm 0.003\) and \(0.235\pm 0.006\) respectively, which is consistent with the mean SC/OC ratio of \(0.270\pm 0.079\) for S and Q class NEAs (Benner et al., 2008). OC radar cross sections were also measured on these dates, returning \(0.20\pm 0.05\) km\({}^{2}\) and \(0.18\pm 0.05\) km\({}^{2}\) on 4 and 5 March respectively. The radar albedo, which is determined by dividing the OC cross-section by the model's projected area, was determined to be an average of \(0.08\pm 0.08\) on both days. This is consistent with the mean radar albedo of \(0.19\pm 0.06\) for S and Q type NEAs reported in Virkki et al. (2022).
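The radar albedo arithmetic can be reproduced directly, approximating the model's projected area by a disc of diameter \(D_{\rm eq}\) (a simplification of the actual projected area):

```python
import math

d_eq_km = 1.71                            # model D_eq [km]
area_km2 = math.pi * (d_eq_km / 2) ** 2   # projected disc, ~2.30 km^2

for sigma_oc in (0.20, 0.18):             # OC cross sections [km^2]
    print(f"sigma_OC = {sigma_oc:.2f} km^2 -> "
          f"albedo {sigma_oc / area_km2:.2f}")
# ~0.09 and ~0.08, i.e. the ~0.08 average quoted above
```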
The SC/OC ratio is often taken as an analogue for structural complexity near the surface. The SC component is determined by surface roughness at scales comparable to the sampled wavelength. For mirror-like backscattering, the SC component would be zero. A surface that is very rough on scales comparable to the transmitted signal's wavelength will have a stronger SC component (Ostro et al., 2002). For the Arecibo CW observations of PN9, this scale is 13 cm. Results from the OSIRIS-REx mission, however, have cast doubt upon this interpretation of SC/OC ratios. Circular polarisation ratios suggest that Bennu is relatively smooth above the cm-scale (Nolan et al., 2013). However, the spacecraft data show that Bennu's surface is much rougher than this, with larger-scale boulders than expected (DellaGiustina et al., 2019). Eros and Itokawa, which are both S-complex asteroids that have been visited by spacecraft, have SC/OC ratios of \(0.22\pm 0.06\) and \(0.26\pm 0.04\) respectively (Magri et al., 2001; Ostro et al., 2004). Due to dissimilar formation processes, these asteroids exhibit differences in surface roughness (Susorney et al., 2019). PN9 is not thought to share a formation process with either of these asteroids, hence taxonomic and polarimetric similarities do not guarantee a similar surface to Eros or Itokawa. Didymos, which is a recently-visited S-complex asteroid with an SC/OC ratio of \(0.20\pm 0.02\) (Benner et al., 2008), is thought to be YORP-evolved (Michel et al., 2022). Despite these similarities, a direct comparison is not advised. Didymos likely experienced spin-breakup to form Dimorphos, while it is not clear if PN9 has previously broken up and reformed. While radar polarimetry can be used to reliably infer the surface roughness of small bodies (e.g. Hickson et al. (2021)),
Figure 10: A comparison between delay-Doppler observations and the combined radar and lightcurve model for (23187) 2000 PN9, showing the first and last frame of each included dataset. Each three-panel image comprises the observational data (first panel), a synthetic echo (second panel) and a plane-of-sky projection of the model (third panel). On the first two panels, delay increases towards the bottom of the vertical axis and Doppler frequency increases along the horizontal axis. The plane-of-sky projections (third panel) are displayed with the celestial north at the top and east to the left, in an equatorial coordinate system. The rotation axis, which is closely aligned with the z-axis in the body-centric coordinate system, is marked with a purple arrow. The axes of minimum and intermediate inertia are indicated by red and green rods respectively. The body-fixed longitude \(\lambda\) and latitude \(\beta\) for the radar line-of-sight, and the rotational phase \(\phi\), are labelled for each image. These values were determined using the radar shape model’s spin-state. The projected centre of mass is marked with a cross. The full sets of radar imaging data are shown in Figures A2 through A5.
caution should be taken in assuming the surface roughness of PN9 from its SC/OC ratio.
To determine the optical albedo of PN9, the HG photometric system (Bowell et al., 1989) was fit to Minor Planet Center (MPC) astrophotometric data reported in the Johnson V-band using a Monte Carlo resampling method, obtaining \(\rm H=15.947\pm 0.036\) and \(\rm G=0.108\pm 0.016\). MPC astrophotometry has previously been shown to provide valuable data to constrain the phase curves of asteroids (e.g. Williams, 2013). The MPC astrophotometry does not report individual photometric uncertainties, and so each data point was re-sampled with a standard deviation equal to the maximum observed lightcurve amplitude of 0.181 to account for rotational variability in the data. Using the absolute magnitude and the radar-derived diameter \(\rm D_{eq}=1.71\pm 0.07\) km, the optical albedo is calculated as \(0.25\pm 0.02\), consistent with the polarimetrically-derived albedo from Belskaya et al. (2009). Caution must be used when inferring physical properties from phase curve-derived parameters of NEAs, however, due to the potential for changing aspect to introduce additional brightness modulations in the phase curve that are unrelated to the scattering behaviour of the surface material (Jackson et al., 2022).
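This albedo follows from the standard diameter-magnitude relation; a quick check using the values above:

```python
h_mag = 15.947     # absolute magnitude from the HG fit
d_km = 1.71        # radar-derived equivalent diameter [km]

# Standard conversion D [km] = 1329 * 10^(-H/5) / sqrt(p_V), inverted:
p_v = (1329 / d_km) ** 2 * 10 ** (-2 * h_mag / 5)
print(f"p_V = {p_v:.2f}")   # -> 0.25, matching the value quoted above
```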
### Rotational phase analysis and the search for YORP
Minute changes to an asteroid's rotation period can be detected through rotational phase analysis. A constant-period model of the asteroid can be used to generate synthetic lightcurves, which can then be compared with optical lightcurves. Any difference in rotational phase between the observed and synthetic lightcurves indicates a change in spin state.
Asteroids undergoing constant rotational acceleration due to the YORP effect will show a quadratic increase in rotational phase offset against time. Step changes in rotation period caused by mass-lofting, impacts, or repeated planetary perturbations, would cause sporadic changes in phase offset.
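Quantitatively, for a constant spin-rate acceleration \(\nu=\mathrm{d}\omega/\mathrm{d}t\), the rotational phase accumulated relative to a constant-period model is
\[
\Delta\phi(t)=\frac{1}{2}\,\nu\,(t-t_{0})^{2},
\]
so a YORP detection appears as a parabolic trend in phase offset against time, while discrete spin-state changes appear as kinks or jumps.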
The best-fit model of PN9 presented in Section 3.2, which was constructed from a combination of radar and lightcurve data, was used to produce synthetic lightcurves corresponding to the 18 optical lightcurves used in this analysis.
To generate each synthetic lightcurve, a ray-tracing algorithm was used to determine the illumination of each of the model's facets through a full rotation at the appropriate viewing geometry. The scattering model that was used is a combination of the Lambertian and Lommel-Seeliger models (Kaasalainen et al., 2001). The sum of facet fluxes was then used to calculate the expected relative brightness of the asteroid, accounting for self-shadowing effects. This was converted to a relative magnitude, then both the synthetic and observed lightcurve magnitudes were offset to oscillate about a common zero point. The synthetic lightcurves were then shifted in phase in steps of \(0.5^{\circ}\) and the \(\chi^{2}\) fit of the shifted synthetic lightcurves to the observed lightcurves was measured. For each lightcurve, the shift that produced the best overall fit was taken to be the rotational phase offset between the constant period model and the actual rotational phase of the asteroid.
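A minimal sketch of this shift-and-compare search, assuming both curves are sampled on a common uniform phase grid (weighting and interpolation details are omitted):

```python
import numpy as np

def best_phase_offset(observed, synthetic, step_deg=0.5):
    """Find the rotational-phase shift that best aligns a synthetic
    lightcurve with an observed one.

    `observed` and `synthetic` are relative magnitudes sampled on a
    common, uniform grid covering 360 degrees of rotational phase. The
    synthetic curve is shifted in step_deg increments and the shift
    minimising the (unweighted) chi^2 is returned.
    """
    n = len(observed)
    shifts = np.arange(0.0, 360.0, step_deg)
    chi2 = [np.sum((observed
                    - np.roll(synthetic, int(round(s / 360.0 * n)))) ** 2)
            for s in shifts]
    return shifts[int(np.argmin(chi2))]
```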
As PN9 is a highly symmetrical asteroid, brightness variations due to rotation are extremely small with lightcurve amplitudes often being as small as \(\sim\)0.05 magnitudes. Without observing clear turning points that can be reliably linked between lightcurves, it is difficult to detect a coherent progression in phase offsets caused by YORP. As PN9's lightcurves are extremely sensitive to surface detail and scattering parameters, it is not always possible to identify turning points that repeat across both observed and synthetic lightcurves. There are, however, a small number of clear and repeated turning points within our dataset that can be used for a phase offset analysis.
Figure 11 shows the measured phase offsets for each epoch, where temporally clustered measurements are averaged. A total of 24 lightcurves were included in the YORP fitting process, which are indicated in Table 1. The excluded lightcurves produced unacceptably large phase uncertainties. The best-fit YORP strength for PN9 is \(0.2\pm 1.6\times 10^{-8}\) rad/day\({}^{2}\), which is comparable in magnitude to the smallest confirmed YORP detections, and in line with expectations for a \(\sim\)2 km asteroid. Although this measurement is poorly constrained, and it is not possible to rule out constant-period rotation, it does place an upper limit on YORP acceleration.
We also considered a case where YORP acceleration between each apparition induces close to 360\({}^{\circ}\) of additional rotation. This would produce an apparent phase offset of 0\({}^{\circ}\) at each apparition by bringing the asteroid's rotation back into phase with a constant-period model. For this to be the case, YORP acceleration would have to be close to an integer multiple of \(4.8\times 10^{-6}\) rad/day\({}^{2}\). This value is greater than the current strongest published YORP detection of \(3.49\times 10^{-6}\) rad/day\({}^{2}\) with (54509) YORP (Lowry et al., 2007; Taylor et al., 2007). Considering that the diameter of PN9 is \(\sim\)15\(\times\) greater than that of (54509) YORP, and that PN9 has high global symmetry, a YORP torque of this magnitude is considered to be unlikely. Our analysis therefore finds no compelling evidence for rotational acceleration of PN9, within the limits of the data. We discuss the potential significance of this below.
### Geophysical properties
The rapid spin-rate of PN9 implies that it could undergo frequent landslide and mass-shedding events, and/or structural failure. To investigate the spin-stability of PN9, we applied several geophysical analyses to the radar-derived shape model following the methods previously applied to asteroids (68346) 2001 KZ66 and (2102) Tantalus in Zegmott et al. (2021) and Rozek et al. (2022), respectively. In particular, gravitational slopes, gravitational potential, and topographic variation were determined by applying a polyhedron gravity field model modified for rotational centrifugal forces
Figure 11: Phase offset measurements for the best-fit combined radar and lightcurve model of (23187) 2000 PN9, where \(T_{0}\)=2453815.29199 (March 2006). Phase offsets were measured against the 'ph. off' subset of lightcurves marked in Table 1. Phase offset measurements were averaged from groups of lightcurves, with groups being arranged such that there are a maximum of 180 days between consecutive lightcurves within a group. The straight dashed line represents a constant period model for reference.
(Werner & Scheeres, 1997; Rozitis et al., 2014; Richardson & Bowling, 2014; Richardson et al., 2019), and body-average cohesive forces were evaluated using the Drucker-Prager failure criterion (Holsapple, 2007). These calculations were performed over a bulk density range of 1500 to 2500 kg m\({}^{-3}\) to cover the expected values for an S-type rubble-pile asteroid (Carry, 2012).
Figure 12 summarises the results of these analyses and indicates that PN9 is qualitatively very similar to asteroid Tantalus (i.e. Figure 12 of Rozek et al. (2022)). For instance, a minimum bulk density of \(\sim\)2070 kg m\({}^{-3}\) is required to prevent rotational mass shedding (Fig. 12e) and a cohesive strength of up to \(\sim\)50 Pa (Fig. 12f) is required to prevent rotational structural failure (versus 2200 kg m\({}^{-3}\) and 45 Pa for Tantalus, respectively). As shown in Figure 12c, for a nominal bulk density of 2000 kg m\({}^{-3}\) the gravitational slopes peak at \(\sim\)40\({}^{\circ}\) and there is prominent latitudinal banding in the gravitational potential (Fig. 12b). This facilitates mass movement from PN9's poles to its equator (Scheeres, 2015) and the Sq/Q-type classification of PN9 may indicate recent re-surfacing caused by YORP spin-up (Graves et al., 2018). If PN9 happens to be spinning-up by YORP, then the conditions for landslides, mass shedding, and structural failure become more easily met, which could eventually lead to the formation of a small moon (e.g. Walsh et al. (2008)).
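As a first-order check of the shedding threshold, the cohesionless-sphere criterion \(\omega^{2}=(4/3)\pi G\rho\) gives a slightly lower minimum density than the shape-based value quoted above, as expected for an idealised sphere versus the full polyhedron gravity model:

```python
import math

G = 6.674e-11                          # m^3 kg^-1 s^-2
omega = 2 * math.pi / (2.532 * 3600)   # spin rate [rad s^-1]

# For a cohesionless, self-gravitating sphere, loose material at the
# equator is shed once omega^2 > (4/3) * pi * G * rho:
rho_min = 3 * omega ** 2 / (4 * math.pi * G)
print(f"rho_min ~ {rho_min:.0f} kg m^-3")   # ~1700 kg m^-3
```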
Our data contain no evidence to suggest that PN9 already has a natural satellite. While the presence of a secondary can be difficult to detect with optical imaging, radar observations are particularly effective at identifying multiple systems (e.g. Brozovic et al. (2011); Taylor et al. (2019)). A preliminary analysis finds there are no consistent peaks in the continuous wave spectra, nor any visible satellites in delay-Doppler radar images. Any sufficiently bright secondary with a diameter above 19 m would be seen in individual images. The maximum photometric contribution from an undetected satellite would thus be on the order of \(10^{-4}\) magnitudes. We note that as a secondary would most likely have formed from a previous breakup of the primary, its composition - and hence brightness - would be comparable to that of the primary. Moons of top-shaped asteroids typically have \(\sim\)1% of the mass of their primary (Hyodo & Sugiura, 2022). Assuming equal densities, in the case of PN9 this would correspond to a \(\sim\)370 m moon. A secondary of this size would be detectable in any of the radar imaging data.
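The quoted satellite limits follow from simple scaling arguments, assuming equal bulk density and albedo for primary and secondary:

```python
d_primary_km = 1.71          # D_eq of PN9

# A moon with ~1% of the primary's mass and equal density has a
# diameter scaling as the cube root of the mass ratio:
d_moon_km = d_primary_km * 0.01 ** (1 / 3)
print(f"expected moon diameter ~ {1000 * d_moon_km:.0f} m")  # ~370 m

# Photometric contribution of a 19 m body of equal albedo (flux scales
# with cross-sectional area), i.e. ~1e-4 magnitudes:
print(f"flux ratio ~ {(0.019 / d_primary_km) ** 2:.1e}")     # ~1.2e-4
```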
## 4 Discussion
Our analyses with optical and radar data show that PN9 has a spinning-top shape, which is characteristic of a rapidly rotating rubble pile (Walsh, 2018).
Top-shaped or 'YORPoid' asteroids are believed to be ubiquitous within the inner Solar System, given the rate at which they are being discovered through radar and spacecraft imaging. The number of well-modelled examples, however, is relatively low.
In addition to this work, we have identified eleven top-shaped asteroids that have published models with full geometric parameters. These are summarised in Table 4. Some objects, such as (2867) Šteins (Keller et al., 2010) and (29075) 1950 DA (Busch et al., 2007; Zegmott, 2021) were excluded as their top-like shapes exhibit global asymmetries that differentiate them from more definitive examples such as Bennu (Lauretta et al., 2019) or Moshup (Ostro et al., 2006). It should be noted that a larger number of candidates were identified, but do not have publicly available models and/or geometric parameters. We have compiled an informal list of these objects which can be accessed online\({}^{1}\).
Footnote 1: [http://astro.kent.ac.uk/~YORP/spintop.html](http://astro.kent.ac.uk/~YORP/spintop.html)
In comparison to other top-shaped asteroids, several features of PN9 stand out. The majority of top-shaped asteroids listed in Table 4 are multiple systems. As discussed in Section 3.6, there is no evidence to suggest that PN9 has any satellites. To date, PN9 is the second largest top-shaped solitary asteroid with a fully developed shape model. In comparison to other top-shaped asteroids, PN9 has higher levels of global symmetry and a less pronounced equatorial ridge or bulge. This could be a result of PN9's greater mass, or the presence of internal cohesive forces.
Top-shaped asteroids are poor candidates for YORP detection. Their highly symmetrical shapes produce low-amplitude lightcurves that do not vary significantly between different viewing geometries. This makes it difficult to constrain the rotational pole and period, and increases the importance of accurate surface fitting and the performance of scattering models. While radar observations can somewhat mitigate this limitation, the only confirmed YORP detection on a top-shaped asteroid to date is derived from both radar and Hubble Space Telescope observations of (101955) Bennu (Nolan et al., 2019). Nevertheless, they may be crucial in distinguishing between components of 'normal YORP' (NYORP), which is dominated by global shape, and 'tangential YORP' (TYORP), which is driven by irregularity across an asteroid's surface (Golubov & Krugly, 2012). Strong YORP detections on globally symmetric asteroids, which should have very small NYORP components, would imply a strong TYORP component. Separating the components of observational YORP detections
Figure 12: Geophysical analysis of asteroid (23187) 2000 PN9. (a) Gravitational slopes and (b) gravitational potential computed assuming a bulk density of 2000 kg m\({}^{-3}\). (c) Areal distribution from (a) for three different values of bulk density. (d) Topographic variation from (b) for three different values of bulk density. (e) Negative effective gravity area as a function of bulk density. (f) Cohesive strength as a function of bulk density and angle of friction. The vertical dashed lines in (e) and (f) show the bulk density range for a typical S-type rubble-pile asteroid.
can only be possible if both extremes are studied, as opposed to the current bias towards YORP analyses of highly asymmetric asteroids which have significant NYORP components.
Our analysis of PN9 includes 14.5 years of lightcurve coverage and shows that it is not currently experiencing significant rotational acceleration. Small YORP torques or sporadic changes to rotation period, however, cannot be ruled out with the current data.
The YORP effect is thought to be a key mechanism in the production of spinning-top rubble piles and binary systems. In the 'spin-up' configuration YORP torque can steadily increase an asteroid's spin rate until it experiences physical deformation to become a top-shaped 'YORPoid'.
Hirabayashi et al. (2020) analysed the spin-driven evolution of (101955) Bennu and (162173) Ryugu, finding that reshaping at longer periods is driven by changes to surface structure, while reshaping at shorter periods is driven by the failure of internal structures. Ryugu and Bennu, which are both C-complex asteroids, have measured bulk densities of 1190 kg m\({}^{-3}\) (Scheeres et al., 2019; Watanabe et al., 2019). As an S-complex asteroid it is likely that PN9 has a higher density than this (Carry, 2012), which would suggest that a higher spin rate is required to induce rotational deformation. PN9's 2.53 h rotation period, which is close to the 2.2 h spin barrier for cohesionless asteroids (Pravec and Harris, 2000), favours the failure of internal structure being primarily responsible for any recent deformation PN9 has experienced.
YORP-driven deformation of near-Earth asteroids is likely to be self-limited by various mechanisms. As an asteroid approaches or crosses the spin-limit barrier, surface regolith may migrate from the poles towards the equator (Hirabayashi and Scheeres, 2019). In order to conserve angular momentum, the asteroid's period must increase, countering the YORP spin-up. Due to the YORP effect's strong dependence on shape, spin-driven reshaping into a more symmetrical top shape will decrease the strength of YORP torques. This self-limitation evidently does not occur in all cases, however, as it has been demonstrated that the YORP effect can form binaries through rotational breakup (Walsh et al., 2008).
Rotational breakup does not always produce a multiple system. Material can re-accrete towards the equator, producing an equatorial ridge (Hyodo and Sugiura, 2022), while the orbital evolution of satellites can lead to them migrating outward until they are lost. As PN9 is near-spherical and does not have a prominent equatorial ridge, there is no indication that it has previously experienced spin-breakup or lost a satellite.
It is also possible that PN9 is an example of an asteroid that is trapped in a state of rotational equilibrium, where normal and tangential YORP components enforce a constant rotation period over long time periods (Golubov and Scheeres, 2019). If a significant fraction of asteroids are found to have near-zero YORP acceleration, it would confirm the existence of 'sinks' that halt the YORP cycle. This would have a significant impact on theories of asteroid evolution. YORP equilibrium states are, however, expected to be seen in systems that are more physically complex than PN9 (Breiter and Murawiecka, 2015; Golubov et al., 2016).
In the next century, PN9 will not come within 100 lunar distances of Earth. This is beyond the range of current and near-future radar facilities, limiting any future observations to optical and infrared telescopes. The best opportunity to observe PN9 until at least 2030 will be from the northern hemisphere in mid-2025, when medium-sized telescopes will be able to image the asteroid over several rotations. Larger northern telescopes should be able to image PN9 in early 2024 and early 2029, while facilities in the southern hemisphere are limited to the aforementioned mid-2025 apparition until after 2030. These observations could be used to better constrain PN9's pole and extend the baseline in the search for YORP, while any improvements to the physical model may improve upon the current phase offset measurements. It is unlikely that further ground-based observations will result in a YORP detection for PN9, however the current constraints could be significantly improved.
| Asteroid | Period [h] | Diameter [km] | Volume [km\({}^{3}\)] | Rotational Pole (\(\lambda\), \(\beta\)) [\({}^{\circ}\)] | Type | SC/OC | Multiplicity | Ref. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| (2102) Tantalus\({}^{\dagger}\) | 2.391 | 1.3 | 1.05 | (180, +24) | Sr | 0.19 | | 1 |
| (3200) Phaethon\({}^{\ddagger}\) | 3.604 | 6.4 | 75 | (316, -50) | B | 0.19 | | 2,3,4 |
| (23187) 2000 PN9 | 2.532 | 1.82 | 2.627 | (096, +30) | S/Sq/Q | 0.23 | | this work, 5,6 |
| (65803) Didymos | 2.260 | 0.84 | 0.249 | (310, -84) | Sq | 0.2 | Binary | 7,8,9 |
| (66391) Moshup\({}^{\lx@sectionsign}\) | 2.765 | 1.53 | 1.195 | (326, -65) | S | 0.45 | Binary | 10,11 |
| (101955) Bennu | 4.296 | 0.57 | 0.062 | (086, -60) | B | 0.18 | | 12,13 |
| (136617) 1994 CC | 2.389 | 0.69 | 0.125 | (336, +22) | Sq | 0.40, 0.50 | Triple | 14 |
| (153591) 2001 SN263 | 3.426 | 2.9 | 8.2 | (309, -80) | B | 0.17 | Triple | 15 |
| (162173) Ryugu | 7.633 | 0.88 | 0.377 | (179, -87) | Cg | N/A | | 16,17 |
| (185851) 2000 DP107 | 2.775 | 0.99 | 0.337 | (294, +78) | C | 0.25 | Binary | 7,18,19 |
| (276049) 2002 CE26 | 3.293 | 3.65 | 21.7 | (317, -20) | C | 0.21 | Binary | 20 |
| (341843) 2008 EV5 | 3.725 | 0.42 | 0.035 | (189, -84) | C/X | 0.38 | | 21,22,23 |

Table 4: Top-shaped asteroids with published models and full geometric parameters. "Period" is the sidereal rotation period of the asteroid. "Diameter" gives the maximum equatorial diameter. "Volume" is derived from the physical model of the asteroid. "Rotational Pole" denotes the spin-axis orientation of the asteroid in the ecliptic coordinate system. "Type" denotes the taxonomic classification(s) each asteroid has been given. "SC/OC", also known as the circular polarisation ratio, is the ratio between same circular and opposite circular polarised radar echo. "Multiplicity" denotes the number of known bodies in the asteroid system. Inclusion in this list is determined by the shape of the primary or 'Alpha' body, and physical parameters refer to the primary. \({}^{\dagger}\) Retrograde model; \({}^{\ddagger}\) Values for the shape and spin-state are preliminary as of December 2022; \({}^{\lx@sectionsign}\) Also known as 1999 KW4.
The non-detection of rotational acceleration of PN9, combined with its highly symmetrical shape and short rotation period, suggests that if it is indeed YORP-evolved then it is an example of self-limitation. In order to better understand the physical evolution of near-Earth asteroids, it is essential to understand the factors that determine whether YORP spin-up of a rubble pile will self-limit or continue past the spin-breakup barrier and form a binary. As YORPoids are unfavourable targets for YORP detection, analyses of objects that are in the late stages of YORP evolution are under-represented. Further study of these asteroids with future ground-based optical and radar facilities, as well as spacecraft observations, is essential to better understanding the influence of YORP on evolutionary pathways for small bodies.
## Acknowledgements
We thank all staff at the various observatories that contributed towards this work. LD, SCL, AR, BR, SLJ, TZ, CS, AF and SFG all acknowledge support from the UK Science and Technology Facilities Council. YNK was partially funded by ALLEA through Funding Line 1 of the European Fund for Displaced Scientists (EFDS). IB thanks the PAUSE program for scientists in danger for its support. Part of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration (80NM0018D0004). This work was based in part on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO programmes 185.C-1033(D, C) and 106.C-0794(A). It was also based in part on service observations made with the Isaac Newton Telescope operated on the island of La Palma by the Isaac Newton Group of Telescopes in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias under programme I/2020B/05. Observations at the Danish 1.54m telescope at the ESO La Silla observatory were performed as part of the MiNDSTEp project, and supported by the Danish Natural Science Research Council (FNU). This research has been funded in part by the Aerospace Committee of the Ministry of Digital Development, Innovations and Aerospace Industry of the Republic of Kazakhstan (Grant No. BR 11265408). This work uses data obtained from the Asteroid Lightcurve Data Exchange Format (ALCDEF) database, which is supported by funding from NASA grant 80NSSC18K0851. This research made use of Astropy (Astropy Collaboration et al., 2013, 2018), Matplotlib (Hunter, 2007), the IRAF Community Distribution\({}^{2}\) and the NASA/JPL HORIZONS ephemeris tool\({}^{3}\). We thank Chris Magri for providing the shape software package and Sean Marshall for providing preliminary results for (3200) Phaethon.
Footnote 2: [https://iraf-community.github.io](https://iraf-community.github.io)
Footnote 3: [https://ssd.jpl.nasa.gov/horizons/](https://ssd.jpl.nasa.gov/horizons/)
## Data Availability
The PDS lightcurves of 2000 PN9 first published in Warner (2016) are available on the Asteroid Lightcurve Data Exchange Format (ALCDEF) database (Warner et al., 2011) at [https://alcdef.org/](https://alcdef.org/). Lightcurves that have not been previously published will be made available at [https://vizier.u-strasbg.fr](https://vizier.u-strasbg.fr). The shape models presented in this work will be submitted to the Database of Asteroid Models from Inversion Techniques (DAMIT) at [https://astro.troja.mff.cuni.cz/projects/damit/](https://astro.troja.mff.cuni.cz/projects/damit/). The radar data are available from the authors upon request.
## References
* Astropy Collaboration et al. (2013) Astropy Collaboration et al., 2013, A&A, 558, A33
* Astropy Collaboration et al. (2018) Astropy Collaboration et al., 2018, AJ, 156, 123
* Becker et al. (2015) Becker T. M., et al., 2015, Icarus, 248, 499
* Belskaya et al. (2009) Belskaya I. N., Fornasier S., Krugly Y. N., 2009, Icarus, 201, 167
* Benner et al. (2008) Benner L. A. M., et al., 2008, Icarus, 198, 294
* Binzel et al. (2004) Binzel R. P., Rivkin A. S., Stuart J. S., Harris A. W., Bus S. J., Burbine T. H., 2004, Icarus, 170, 259
* Binzel et al. (2019) Binzel R. P., et al., 2019, Icarus, 324, 41
* Bottke et al. (2006) Bottke W. F., Vokrouhlicky D., Rubincam D. P., Nesvorny D., 2006, Annual Review of Earth and Planetary Sciences, 34, 157
* Bowell et al. (1989) Bowell E., Hapke B., Domingue D., Lumme K., Peltoniemi J., Harris A. W., 1989, in Binzel R., Gehrels T., Matthews M. S., eds., Asteroids II. The University of Arizona Press, Tucson, pp 524-556
* Breiter & Murawiecka (2015) Breiter S., Murawiecka M., 2015, MNRAS, 449, 2489
* Brozovic et al. (2011) Brozovic M., et al., 2011, Icarus, 216, 241
* Busch et al. (2006) Busch M. W., Ostro S. J., Benner L. A. M., Giorgini J. D., 2006, Society for Astronomical Sciences Annual Symposium, 25, 169
* Busch et al. (2007) Busch M. W., et al., 2007, Icarus, 190, 608
* Busch et al. (2011) Busch M. W., et al., 2011, Icarus, 212, 649
* Carry (2012) Carry B., 2012, Planet. Space Sci., 73, 98
* Cheng et al. (2018) Cheng A. F., et al., 2018, Planet. Space Sci., 157, 104
* Dandy et al. (2003) Dandy C. L., Fitzsimmons A., Collander-Brown S. J., 2003, Icarus, 163, 363
* DellaGiustina et al. (2019) DellaGiustina D. N., et al., 2019, Nature Astronomy, 3, 341
* Durech et al. (2008) Durech J., et al., 2008, A&A, 489, L25
* Durech et al. (2012) Durech J., et al., 2012, A&A, 547, A10
* Durech et al. (2018) Durech J., et al., 2018, A&A, 609, A86
* Durech et al. (2022) Durech J., et al., 2022, A&A, 657, A5
* Galeev et al. (2007) Galeev A., Gumerov R., Bikmaev I., Pinigin G., Khamitov I., Aslan Z., 2007, Odessa Astronomical Publications, 20, 43
* Golubov & Krugly (2012) Golubov O., Krugly Y. N., 2012, ApJ, 752, L11
* Golubov & Scheeres (2019) Golubov O., Scheeres D. J., 2019, AJ, 157, 105
* Golubov et al. (2016) Golubov O., Kravets Y., Krugly Y. N., Scheeres D. J., 2016, MNRAS, 458, 3977
* Graves et al. (2018) Graves K. J., Minton D. A., Hirabayashi M., DeMeo F. E., Carry B., 2018, Icarus, 304, 162
* Green et al. (1985) Green S. F., Meadows A. J., Davies J. K., 1985, MNRAS, 214, 29P
* Hickson et al. (2021) Hickson D. C., Virkki A. K., Perillat P., Nolan M. C., Bhiravarasu S. S., 2021, The Planetary Science Journal, 2, 30
* Hirabayashi & Scheeres (2019) Hirabayashi M., Scheeres D. J., 2019, Icarus, 317, 354
* Hirabayashi et al. (2020) Hirabayashi M., et al., 2020, Icarus, 352, 113946
* Holsapple (2007) Holsapple K. A., 2007, Icarus, 187, 500
* Hudson (1993) Hudson S., 1993, Remote Sens. Rev., 8, 195
* Hunter (2007) Hunter J. D., 2007, Computing in Science & Engineering, 9, 90
* Hyodo & Sugiura (2022) Hyodo R., Sugiura K., 2022, ApJ, 937, 136
* Jackson et al. (2022) Jackson S. L., Rozitis B., Dover L. R., Green S. F., Kolb U. C., Andrews A. E., Lowry S. C., 2022, MNRAS, 513, 3076
* Kaasalainen & Torppa (2001) Kaasalainen M., Torppa J., 2001, Icarus, 153, 24
* Kaasalainen et al. (2001) Kaasalainen M., Torppa J., Muinonen K., 2001, Icarus, 153, 37
* Kaasalainen et al. (2007) Kaasalainen M., Durech J., Warner B. D., Krugly Y. N., Gaftonyuk N. M., 2007, Nature, 446, 420
* Keller et al. (2010) Keller H. U., et al., 2010, Science, 327, 190
* Lauretta et al. (2019) Lauretta D. S., et al., 2019, Nature, 568, 55
* Lowry et al. (2007) Lowry S. C., et al., 2007, Science, 316, 272
* Lowry et al. (2014) Lowry S. C., et al., 2014, A&A, 562, A48
* Magri et al. (2001) Magri C., Consolmagno G. J., Ostro S. J., Benner L. A. M., Beeney B. R., 2001, Meteoritics & Planetary Science, 36, 1697
* Magri et al. (2007) Magri C., Ostro S. J., Scheeres D. J., Nolan M. C., Giorgini J. D., Benner L. A., Margot J.-L., 2007, Icarus, 186, 152
* Michel et al. (2022) Michel P., et al., 2022, The Planetary Science Journal, 3, 160
* Moravec et al. (2000) Moravec Z., et al., 2000, Minor Planet Electronic Circulars, 2000-P48
* Naidu et al. (2015) Naidu S. P., et al., 2015, AJ, 150, 54
* Naidu et al. (2020) Naidu S. P., et al., 2020, Icarus, 348, 113777
* Nolan et al. (2013) Nolan M. C., et al., 2013, Icarus, 226, 629
* Nolan et al. (2019) Nolan M. C., et al., 2019, Geophysical Research Letters, 46, 1956
* Ostro (1993) Ostro S. J., 1993, Reviews of Modern Physics, 65, 1235
* Ostro et al. (2002) Ostro S. J., Hudson R. S., Benner L. A. M., Giorgini J. D., Magri C., Margot J. L., Nolan M. C., 2002, Asteroid Radar Astronomy. University of Arizona Press, pp 151-168
* Ostro et al. (2004) Ostro S. J., et al., 2004, Meteoritics & Planetary Science, 39, 407
* Ostro et al. (2006) Ostro S. J., et al., 2006, Science, 314, 1276
* Pravec and Harris (2000) Pravec P., Harris A. W., 2000, Icarus, 148, 12
* Reddy et al. (2011) Reddy V., et al., 2011, Icarus, 216, 184
* Richardson et al. (2014) Richardson J. E., Bowling T. J., 2014, Icarus, 234, 53
* Richardson et al. (2019) Richardson J. E., Graves K. J., Harris A. W., Bowling T. J., 2019, Icarus, 329, 207
* Rozek et al. (2022) Rozek A., et al., 2022, MNRAS, 515, 4551
* Rozitis et al. (2014) Rozitis B., Maclennan E., Emery J. P., 2014, Nature, 512, 174
* Rubincam (2000) Rubincam D. P., 2000, Icarus, 148, 2
* Scheeres (2015) Scheeres D. J., 2015, Icarus, 247, 1
* Scheeres et al. (2019) Scheeres D. J., et al., 2019, Nature Astronomy, 3, 352
* Shepard et al. (2006) Shepard M. K., et al., 2006, Icarus, 184, 198
* Somers et al. (2008) Somers J. M., Hicks M. D., Lawrence K. J., 2008, in AAS/Division for Planetary Sciences Meeting Abstracts #40. p. 28.21
* Sugita et al. (2019) Sugita S., et al., 2019, Science, 364, 252
* Susorney et al. (2019) Susorney H. C. M., Johnson C. L., Barnouin O. S., Daly M. G., Seabrook J. A., Bierhaus E. B., Lauretta D. S., 2019, Icarus, 325, 141
* Taylor et al. (2007) Taylor P. A., et al., 2007, Science, 316, 274
* Taylor et al. (2019) Taylor P. A., et al., 2019a, in 50th Annual Lunar and Planetary Science Conference. Lunar and Planetary Science Conference. p. 2945
* Taylor et al. (2019) Taylor P. A., et al., 2019b, Planet. Space Sci., 167, 1
* Thomas et al. (2014) Thomas C. A., Emery J. P., Trilling D. E., Delbo M., Hora J. L., Mueller M., 2014, Icarus, 228, 217
* Virkki and Muinonen (2016) Virkki A., Muinonen K., 2016, Icarus, 269, 38
* Virkki et al. (2022) Virkki A. K., et al., 2022, Planetary Science Journal, 3, 222
* Vokrouhlicky et al. (2003) Vokrouhlicky D., Nesvorny D., Bottke W. F., 2003, Nature, 425, 147
* Walsh (2018) Walsh K. J., 2018, ARA&A, 56, 593
* Walsh et al. (2008) Walsh K. J., Richardson D. C., Michel P., 2008, Nature, 454, 188
* Warner (2016) Warner B. D., 2016, Minor Planet Bulletin, 43, 240
* Warner et al. (2011) Warner B. D., Stephens R. D., Harris A. W., 2011, Minor Planet Bulletin, 38, 172
* Watanabe et al. (2019) Watanabe S., et al., 2019, Science, 364, 268
* Werner and Scheeres (1997) Werner R. A., Scheeres D. J., 1997, Celestial Mechanics and Dynamical Astronomy, 65, 313
* Williams (2013) Williams G. V., 2013, PhD thesis, Open University Milton Keynes, UK
* Zegmott (2021) Zegmott T. J., 2021, PhD thesis, Univ. of Kent
* Zegmott et al. (2021) Zegmott T. J., et al., 2021, MNRAS, 507, 4914
Physical modelling of near-Earth asteroid (23187) 2000 PN9 with ground-based optical and radar observations - Appendix
L. Dover\({}^{1}\), S. C. Lowry\({}^{1}\), A. Rozek\({}^{2,1}\), B. Rozitis\({}^{3}\), S. L. Jackson\({}^{3}\), T. Zegmott\({}^{1}\), Yu. N. Krugly\({}^{4,5}\), I. N. Belskaya\({}^{4,6}\), A. Fitzsimmons\({}^{7}\), S. F. Green\({}^{3}\), C. Snodgrass\({}^{2}\), P. R. Weissman\({}^{8}\), M. Brozovic\({}^{9}\), L. A. M. Benner\({}^{9}\), M. W. Busch\({}^{10}\), V. R. Ayvazian\({}^{11,12}\), V. Chiorny\({}^{4}\), R. Ya. Inasaridze\({}^{11,12}\), M. Krugov\({}^{13}\), S. Mykhailova\({}^{4,5}\), I. Reva\({}^{13}\), and J. Hibbert\({}^{14}\)
\({}^{1}\) Centre for Astrophysics and Planetary Science, University of Kent, Canterbury, UK
\({}^{2}\)Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh, UK
\({}^{3}\)Planetary and Space Sciences, School of Physical Sciences, The Open University, Milton Keynes, UK
\({}^{4}\)Institute of Astronomy, V. N. Karazin Kharkiv National University, Kharkiv, Ukraine
\({}^{5}\)Astronomical Observatory Institute, Faculty of Physics, A. Mickiewicz University, Poznan, Poland
\({}^{6}\)LESIA, Observatoire de Paris, Universite PSL, CNRS, Universite Paris Cite, Sorbonne Universite, Meudon, France
\({}^{7}\)Astrophysics Research Centre, Queen's University Belfast, Belfast, UK
\({}^{8}\)Planetary Science Institute, Tucson, Arizona, USA
\({}^{9}\)Jet Propulsion Laboratory, California Institute of Technology, USA
\({}^{10}\)SETI Institute, Mountain View, California, USA
\({}^{11}\)E. Kharadze Georgian National Astrophysical Observatory, Abastumani, Georgia
\({}^{12}\)Samtskhe-Javakheti State University, Akhaltsikhe, Georgia
\({}^{13}\)Fesenkov Astrophysical Institute, Almaty, Kazakhstan
\({}^{14}\)Isaac Newton Group, Apartado de correos 321, 38700, Santa Cruz de La Palma, Canary Islands, Spain |
2301.05501 | The rebrightening of a ROSAT-selected tidal disruption event: repeated
weak partial disruption flares from a quiescent galaxy? | The ROSAT-selected tidal disruption event (TDE) candidate RX
J133157.6-324319.7 (J1331), was detected in 1993 as a bright (0.2-2 keV flux of
$(1.0 \pm 0.1) \times 10^{-12}$ erg s$^{-1}$ cm$^{-2}$), ultra-soft ($kT=0.11
\pm 0.03$ keV) X-ray flare from a quiescent galaxy ($z=0.05189$). During its
fifth All-Sky survey (eRASS5) in 2022, SRG/eROSITA detected the repeated
flaring of J1331, where it had rebrightened to an observed 0.2-2 keV flux of
$(6.0 \pm 0.7) \times 10^{-13}$ erg s$^{-1}$ cm$^{-2}$, with spectral
properties ($kT=0.115 \pm 0.007$ keV) consistent with the ROSAT-observed flare
$\sim$30 years earlier. In this work, we report on X-ray, UV, optical, and
radio observations of this system. During a pointed XMM observation $\sim$17
days after the eRASS5 detection, J1331 was not detected in the 0.2-2 keV band,
constraining the 0.2-2 keV flux to have decayed by a factor of $\gtrsim$40 over
this period. Given the extremely low probability ($\sim5\times 10^{-6}$) of
observing two independent full TDEs from the same galaxy over a 30 year period,
we consider the variability seen in J1331 to be likely caused by two partial
TDEs involving a star on an elliptical orbit around a black hole. J1331-like
flares show faster rise and decay timescales ($\mathcal{O}(\mathrm{days})$)
compared to standard TDE candidates, with negligible ongoing accretion at late
times post-disruption between outbursts. | A. Malyali, Z. Liu, A. Rau, I. Grotova, A. Merloni, A. J. Goodwin, G. E. Anderson, J. C. A. Miller-Jones, A. Kawka, R. Arcodia, J. Buchner, K. Nandra, D. Homan, M. Krumpe | 2023-01-13T12:04:24Z | http://arxiv.org/abs/2301.05501v1 | The rebrightening of a _Rosat_-selected tidal disruption event: repeated weak partial disruption flares from a quiescent galaxy?
###### Abstract
The _ROSAT_-selected tidal disruption event (TDE) candidate RX J133157.6-324319.7 (J1331), was detected in 1993 as a bright (0.2-2 keV flux of \((1.0\pm 0.1)\times 10^{-12}\) erg s\({}^{-1}\) cm\({}^{-2}\)), ultra-soft (\(kT=0.11\pm 0.03\) keV) X-ray flare from a quiescent galaxy (\(z=0.05189\)). During its fifth All-Sky survey (eRASS5) in 2022, _SRG_/eROSITA detected the repeated flaring of J1331, where it had rebrightened to an observed 0.2-2 keV flux of \((6.0\pm 0.7)\times 10^{-13}\) erg s\({}^{-1}\) cm\({}^{-2}\), with spectral properties (\(kT=0.115\pm 0.007\) keV) consistent with the _ROSAT_-observed flare \(\sim\)30 years earlier. In this work, we report on X-ray, UV, optical, and radio observations of this system. During a pointed _XMM_ observation \(\sim\)17 days after the eRASS5 detection, J1331 was not detected in the 0.2-2 keV band, constraining the 0.2-2 keV flux to have decayed by a factor of \(\gtrsim\)40 over this period. Given the extremely low probability (\(\sim 5\times 10^{-6}\)) of observing two independent full TDEs from the same galaxy over a 30 year period, we consider the variability seen in J1331 to be likely caused by two partial TDEs involving a star on an elliptical orbit around a black hole. J1331-like flares show faster rise and decay timescales (\(\mathcal{O}\)(days)) compared to standard TDE candidates, with negligible ongoing accretion at late times post-disruption between outbursts.
keywords: accretion, accretion discs - galaxies: nuclei - black hole physics - transients: tidal disruption events
## 1 Introduction
Benefitting from the latest generation of time-domain surveys, the past decade has seen a vast growth in the diversity of observed transients originating from galactic nuclei. These events can be crudely divided into, and described as, either 'one-off' or 'repeating' events, depending on the observed evolution of their lightcurves.
'One-off' events, characterised by a single epoch of major transient behaviour over an observed monitoring campaign, comprise the majority of newly reported nuclear transients. These include systems where the variability is likely linked to changes in the accretion process onto a supermassive black hole, such as has been reported in previously known AGN (e.g. changing-state AGN; Frederick et al., 2019; Trakhtenbrot et al., 2019; Ricci et al., 2020, 2021; Frederick et al., 2021; short-rise, slowed-decay Bowen accretion flares, Trakhtenbrot et al., 2019), or due to stellar tidal disruption events (TDEs) in quiescent galaxies1 (see Saxton et al., 2020; van Velzen et al., 2020, 2021; Alexander et al., 2020 for recent reviews of X-ray, optical, infrared and radio observations of TDEs, respectively). Other transients, which may occur so close to the centres of galaxies that they are astrometrically indistinguishable from SMBH accretion, have also been reported (e.g. supernovae exploding in the narrow-line region of AGN, Drake et al., 2011), or predicted to exist (e.g. stellar collisions in nuclear star clusters; Dale et al., 2009).
Footnote 1: Strong TDE candidates have also been reported in galaxies showing signs of previous AGN activity (e.g. Merloni et al., 2015; Blanchard et al., 2017; Liu et al., 2020).
Even more recently, the population of known 'repeating' events has expanded. Several TDE candidates have now shown multiple major outbursts, either through their strong, double-peaked optical lightcurves (AT 2019avd, Malyali et al., 2021; Chen et al., 2022), repeated X-ray outbursts (IC 3599, Grupe et al., 1995, 2001, 2015; Campana et al., 2015; eRASSt J045650.3-203750, Liu et al., 2022; AT 2018fyk, Wevers et al., 2022), or quasi-periodic optical outbursts potentially associated with repeated partial TDEs (ASASSN-14ko, Payne et al., 2021). Towards the more extreme end of known repeating transients lies the recently discovered class of quasi-periodic eruptions (QPEs; Miniutti et al., 2019; Giustini et al., 2020; Arcodia et al., 2021, 2022), which show large amplitude, ultra-soft X-ray outbursts, with flare duration of the order of hours, and which recur over timescales of hours to days.
In this work, we report on the _SRG_/eROSITA (Sunyaev et al., 2021; Predehl et al., 2021) detection of the repeated flaring of a previously reported, _ROSAT_-selected TDE candidate, RXJ133157.6-324319.7 (Reiprich & Greiner, 2001; Hampel et al., 2022), originating from a quiescent galaxy at \(z=0.05189\)(Moretti et al., 2017). In Section 2, we report on the detection of this system with eROSITA and follow-up observations performed with _NICER_ (Section 2.2), _XMM_ (Section 2.3), and _Swift_ XRT (Section 2.4), as well as archival X-ray
observations (Section 2.5), UV, optical and mid-infrared photometry (Section 2.6) and radio observations (Section 2.7). We discuss the nature of the system in Section 3, before providing a summary in Section 4.
All magnitudes are reported in the AB system and corrected for Galactic extinction using \(A_{\rm V}=0.142\) mag, obtained from (Schlafly & Finkbeiner, 2011), \(R_{\rm V}=3.1\) and a Cardelli extinction law (Cardelli et al., 1989), unless otherwise stated. The effective wavelength for each filter was retrieved from the SVO Filter Profile Service2. All dates/times will be reported in universal time (UT).
Footnote 2: [http://svo2.cab.inta-csic.es/theory/fps/](http://svo2.cab.inta-csic.es/theory/fps/)
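As a concrete illustration of this extinction correction, the following minimal Python sketch (not from the paper) evaluates a Cardelli et al. (1989) law with the quoted \(A_{\rm V}=0.142\) mag and \(R_{\rm V}=3.1\); the community `extinction` package and the UVM2 effective wavelength of roughly 2246 Å are assumptions made for illustration.

```python
# Minimal sketch of the Galactic extinction correction described above.
# The `extinction` package and the UVM2 effective wavelength (~2246 Angstrom)
# are assumptions for illustration, not values stated in the paper.
import numpy as np
import extinction  # pip install extinction

A_V, R_V = 0.142, 3.1          # extinction parameters quoted in the text
wave = np.array([2246.0])      # assumed UVM2 effective wavelength [Angstrom]

# Cardelli, Clayton & Mathis (1989) law; returns A(lambda) in magnitudes
A_uvm2 = extinction.ccm89(wave, A_V, R_V)[0]

def deredden(mag_observed: float) -> float:
    """Correct an observed AB magnitude for Galactic extinction."""
    return mag_observed - A_uvm2

print(f"A(UVM2) = {A_uvm2:.3f} mag")
```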
## 2 Re-Discovery and Follow-Up
eRASSt J133157.9-324321 (herein J1331) was detected on 2022-01-20 as a bright new X-ray point source in a systematic search for TDE candidates during the fifth eROSITA All-Sky survey (eRASS5). The _eROSITA Science Analysis Software_ (eSASS; Brunner et al., 2022) inferred source position was (RA\({}_{\rm J2000}\), Dec\({}_{\rm J2000}\))=(13h31m57.9s, -32\({}^{\circ}\)43\({}^{\prime}\)21.2\({}^{\prime\prime}\)), with a 1\(\sigma\) positional uncertainty of 1.6\({}^{\prime\prime}\). No X-ray point source was detected within 60\({}^{\prime\prime}\) of this position in each of the previous four eRASS. The eROSITA source position is consistent with a quiescent host galaxy at \(z=0.05189\), with total stellar mass, \(\log(M_{\star}/M_{\odot})=10.15\pm 0.09\), and an inferred black hole mass, \(\log(M_{\rm BH}/M_{\odot})=6.5\pm 0.2\) (appendix A). The quiescent nature of the host is suggested by both the optical spectrum of its host galaxy (appendix B; see also Hampel et al., 2022) and its _AllWISE_ (Wright et al., 2010; Mainzer et al., 2014) mid-infrared colour, W1-W2=\(0.05\pm 0.05\) mag, far below the threshold of \(\gtrsim\)0.7 for mid-infrared AGN selection (Stern et al., 2012; Assef et al., 2018). After selecting J1331 as a promising TDE candidate, it was also realised that the host galaxy of J1331 was the same as that identified for the _ROSAT_-selected TDE candidate, RX J133157.6-324319.7, first detected in outburst in 1993, and recently presented in Hampel et al. (2022), with the finder chart for these transients presented in Fig. 11. The eRASS5 detection of J1331 thus suggested the remarkable rebrightening of a previously known TDE candidate, \(\sim\)29 years after the outburst detected by _ROSAT_.
### eROSITA
Using the eSASS task SRCTOOL (eSASSusers_211214; Brunner et al., 2022), source (and background) spectra and lightcurves were extracted from a 60\({}^{\prime\prime}\) radius source region centred on the eRASS5 inferred position, with background counts extracted from a circular annulus with inner and outer radii of 140\({}^{\prime\prime}\) and 240\({}^{\prime\prime}\), respectively.
eROSITA scanned the position of J1331 eight times during eRASS5, with each scan separated by \(\sim\)4 hours, thus spanning a \(\sim\)28 hour window in total. During this time, J1331 was observed to be persistently bright (Fig. 12), as opposed to showing a short-lived flare, and was clearly detected above background in each observation.
The eRASS5 X-ray spectra were then fitted using the Bayesian X-ray Analysis software (BXA; Buchner et al., 2014), which connects the nested sampling algorithm UltraNest (Buchner, 2021) with the fitting environment XSPEC (Arnaud, 1996). The source and background spectra were jointly fit with a source plus background model, with the latter using the Principal Component Analysis (PCA) background modelling first described in Simmonds et al. (2018), and as also applied to AT 2019avd in Malyali et al. (2021). The eRASS5 spectrum is well fitted by a tbabs*zbbody model (Fig. 11), with the Galactic equivalent neutral hydrogen column density, \(N_{\rm H}\), fixed to \(3.84\times 10^{20}\) cm\({}^{-2}\), the value along the line of sight to J1331 in HI4PI Collaboration et al. (2016), and \(kT=0.115^{+0.007}_{-0.007}\) keV. A fit with a power-law (tbabs*zpowerlaw) leaves large residuals between the observed data and model above 1 keV. When using the best fitting tbabs*zbbody model described above, the eRASS5 observed (unabsorbed) 0.2-2 keV flux for J1331 is (6.0\(\pm\)0.7)\(\times 10^{-13}\) erg s\({}^{-1}\) cm\({}^{-2}\) (\((8\pm 1)\times 10^{-13}\) erg s\({}^{-1}\) cm\({}^{-2}\)), translating to an unabsorbed 0.2-2 keV luminosity of \((5.5\pm 0.7)\times 10^{42}\) erg s\({}^{-1}\).
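The quoted luminosity can be cross-checked directly from the unabsorbed flux; the sketch below assumes a flat Planck 2018 cosmology via `astropy` (the paper does not state its adopted cosmology, so the exact value depends on that choice).

```python
# Sanity check (not from the paper) of the quoted eRASS5 luminosity,
# assuming a flat Planck 2018 cosmology.
import numpy as np
import astropy.units as u
from astropy.cosmology import Planck18

z = 0.05189
flux_unabs = 8e-13 * u.erg / u.s / u.cm**2   # unabsorbed 0.2-2 keV flux (BXA fit)

d_L = Planck18.luminosity_distance(z)
L_x = (4 * np.pi * d_L**2 * flux_unabs).to(u.erg / u.s)
print(f"L(0.2-2 keV) ~ {L_x:.2e}")           # ~5e42 erg/s, consistent with the text
```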
J1331 was not detected in eRASS1-4, with 2\(\sigma\) upper limits on the 0.2-2 keV count rate of 0.016, 0.03, 0.07 and 0.03 cts s\({}^{-1}\) in each successive eRASS (see Table 11 for a full log of the X-ray observations of J1331). These count rate upper limits were then converted to 0.2-2 keV flux upper limits using the best fitting spectral parameters to the eRASS5 spectrum described above.
### _NICER_ XTI
Follow-up observations of J1331 were obtained with the X-ray Timing Instrument (XTI) on board the _Neutron Star Interior Composition Explorer_ observatory (_NICER_; Gendreau et al., 2016) through pre-approved ToOs (PI: Z. Liu). _NICER_ observations commenced \(\sim\)4 days after the last eRASS5 observation, and continued for the next 15 days on a near daily basis (Table 11). We first generated cleaned and screened event files using the nicerl2 task (with default recommended parameters).
out4. For XMM1 (XMM2), this resulted in only 4.1ks (25.7 ks), 12.8 ks (30.7 ks) and 11.8 ks (30.2 ks) of usable exposure time for PN, MOS1 and MOS2, respectively. In the subsequent analysis, only events with PATTERN<=4 and FLAG==0 were extracted for PN, whilst PATTERN<=12 and FLAG==0 filtering was applied for MOS1 and MOS2.
Footnote 4: [https://www.cosmos.esa.int/web/xmm-newton/sas-thread-epic-filterbackground](https://www.cosmos.esa.int/web/xmm-newton/sas-thread-epic-filterbackground)
For XMM1, no source is detected within 30" of the host galaxy position in PN and MOS1 with detection likelihood, DETML, above 3, when running the standard _XMM_ source detection pipeline in the 0.2-2 keV band on the PN, MOS1, and MOS2 images. However, a source was detected in MOS2 at (RA\({}_{\rm J2000}\), Dec\({}_{\rm J2000}\))=(13h31m58s, -32\({}^{\circ}\)43\({}^{\prime}\)19\({}^{\prime\prime}\)), with a 1\(\sigma\) positional uncertainty of 2\({}^{\prime\prime}\), consistent with the _ROSAT_ and eROSITA positions (Fig. 1A). The DETML for this source is low (10.3), and the estimated observed 0.2-2 keV flux in the emldetect output is \((8\pm 3)\times 10^{-15}\) erg s\({}^{-1}\) cm\({}^{-2}\), \(\sim\)75\(\times\) fainter than the eRASS5 observed flux.
Given the uncertain detection of the system across all three EPIC cameras, we computed a 2\(\sigma\) upper limit on the 0.2-2 keV count rate using the SAS task emupper. This was done using the 0.2-2 keV band images, exposure and background maps for each camera, and a 30" radius circular extraction region for the source counts (centred on the _Gaia_ position of the host of J1331). For XMM1, this yielded upper limits of 0.006 ct s\({}^{-1}\), 0.0014 ct s\({}^{-1}\) and 0.002 ct s\({}^{-1}\) for PN, MOS1 and MOS2, respectively. We conservatively estimate the upper limit for the _XMM_ observation to that inferred from the MOS2 data, which corresponds to a 0.2-2 keV observed (unabsorbed) flux of \(1\times 10^{-14}\) erg s\({}^{-1}\) cm\({}^{-2}\) (\(2\times 10^{-14}\) erg s\({}^{-1}\) cm\({}^{-2}\)), assuming the spectral model inferred from the eRASS5 observation. The same procedure was repeated for XMM2, where we inferred upper limits of 0.003 ct s\({}^{-1}\), 0.0014 ct s\({}^{-1}\) and 0.0010 ct s\({}^{-1}\) for PN, MOS1 and MOS2, respectively, translating to 2\(\sigma\) upper limits on the observed (unabsorbed) flux of 6\(\times 10^{-15}\) erg s\({}^{-1}\) cm\({}^{-2}\) (\(1\times 10^{-14}\) erg s\({}^{-1}\) cm\({}^{-2}\)).
### _Swift_ XRT
Additional _Swift_ XRT (Burrows et al., 2005) observations of J1331 were performed between 2022-02-27 and 2022-08-24. The XRT observations were performed in photon counting mode, with the data analysed using the UK Swift Science Data Centre's (UKSSDC) online XRT product building tool (Evans et al., 2007, 2009). No source was detected in the 0.3-2 keV band at the position of J1331 in any follow-up observation. The 0.3-2 keV count rates were converted to 0.2-2 keV fluxes using webPIMMS, assuming the same spectral model as from the eROSITA eRASS5 detection, with the fluxes presented in Table 2.
### Archival X-ray observations
A detailed analysis of the ultra-soft outburst from RX J133157.6-324319.7, detected by pointed _ROSAT_ PSPC observations in the early 1990s, was previously performed in Hampel et al. (2022). In summary, the flare was characterised by an
Figure 1: Long-term 0.2-2 keV lightcurve of J1331, with circular and triangle markers representing observed fluxes and 2\(\sigma\) upper limits, respectively. The initial outburst was detected by _ROSAT_ in 1993, before being observed by eROSITA in 2022 to have rebrightened to a similar 0.2–2 keV observed flux. The X-ray spectra remained ultra-soft in each observation where the source was detected. For plotting clarity, we include the time-averaged flux measurement for eRASS5, and omit the _NICER_ upper limits.
8x increase in the 0.1-2.4 keV flux, relative to a 2\(\sigma\) upper limit, over an 8 day period (and a net increase in the same band by a factor of at least 40 relative to the deepest upper limit available). The X-ray spectrum at peak observed brightness was well fitted by a blackbody with \(kT=0.11\pm 0.03\) keV. The system was then not detected in two PSPC observations \(\sim\)165 days later, where it had faded by a factor of at least 30 relative to the peak observed _ROSAT_ flux.
To construct a long-term 0.2-2 keV lightcurve, the 0.1-2.4 keV _ROSAT_ PSPC lightcurve data in Table 1 of Hampel et al. (2022) was converted into 0.2-2 keV band fluxes using webPIMMS, assuming the best fitting spectral model to the _ROSAT_ spectrum found in Hampel et al. (2022). Then, the 2\(\sigma\) upper limits from _ROSAT_ Survey, _XMM_ Slew and Swift XRT observations were computed using the _High-Energy Lightcurve Generator_ server (HILIGT; Saxton et al., 2021; Konig et al., 2021); the archival fluxes are presented in Fig. 1 and Table D1.
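The webPIMMS step above is essentially an energy-band conversion for the best-fitting blackbody; a rough, absorption-free sketch of that conversion factor for \(kT=0.11\) keV is given below. webPIMMS itself also folds in Galactic absorption and the instrument response, so this is only illustrative.

```python
# Rough, absorption-free sketch of the band conversion: ratio of the 0.2-2 keV
# to the 0.1-2.4 keV energy flux for a kT = 0.11 keV blackbody.
import numpy as np
from scipy.integrate import quad

kT = 0.11  # keV

def bb_energy_spectrum(E):
    # Blackbody energy spectrum ~ E^3 / (exp(E/kT) - 1), arbitrary normalisation
    return E**3 / np.expm1(E / kT)

f_02_20, _ = quad(bb_energy_spectrum, 0.2, 2.0)
f_01_24, _ = quad(bb_energy_spectrum, 0.1, 2.4)
print(f"0.2-2 keV flux / 0.1-2.4 keV flux ~ {f_02_20 / f_01_24:.2f}")
```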
### UV, optical and mid-infrared photometry
J1331 was observed both before (Section 2.5) and after (Section 2.4) the eRASS5-detected outburst by _Swift_ XRT and UVOT (UVM2 filter; Roming et al., 2005). To search for transient UV emission, aperture photometry was performed on the level 2 UVOT sky images (downloaded from the UKSSDC) using the uvotsource task (HEASOFT v6.29, CALDB v20201215). Source counts were extracted from a circular aperture of 5\({}^{\prime\prime}\) radius, centred on the _Gaia_ position of the host of J1331, and background counts were extracted from a source-free region of radius 15\({}^{\prime\prime}\). The measured UVM2 magnitudes in the follow-up observations are consistent with the archival measured UVM2 magnitudes on 2018-04-18, 2018-04-22, and 2018-04-26 (Table E1).
No significant optical variability is seen in the \(\sim\)6 years before the eRASS5 outburst (57500\(\leq\) MJD \(\leq\)59500) in the forced photometry lightcurve provided by ATLAS (Tonry et al., 2018) (Fig. E1). Lastly, we note that no major variability is detected above the host galaxy emission within the _NEOWISE_ mid-infrared lightcurve between MJD\(\sim\)56680 and 59400 (Fig. E1), which was generated using the procedure described in section 3.2 of Malyali et al. (2021).
### Radio
We observed the coordinates of J1331 on 2022 Mar 02 with the Australia Telescope Compact Array (ATCA) radio telescope in 6 km configuration, using the 4cm dual receiver with central frequencies 5.5/9 GHz, each with a 2 GHz bandwidth split into 2049\(\times\)1 MHz spectral channels, and for a total of 150 min on source. Data were reduced following standard procedures in the Common Astronomy Software Applications (McMullin et al., 2007; CASA-TEAM et al., 2022). We used 1934-638 for flux and bandpass calibration and 1336-260 for phase calibration. Images of the target field were created using the CASA task tclean. No source was detected at the location of J1331 at either frequency band with a 3\(\sigma\) upper limit of 73.5\(\mu\)Jy/bm at 5.5 GHz and 54\(\mu\)Jy/bm at 9 GHz. Additionally, no source was detected in a stacked 5.5 and 9 GHz image, with a 3\(\sigma\) upper limit of 57.9\(\mu\)Jy/bm at a central frequency of 7.3 GHz.
## 3 Discussion
Comparing the X-ray lightcurve of J1331 with other ultra-soft nuclear transients (Fig. D4) from galaxies that were recently quiescent, or hosted low luminosity AGN, J1331 decays faster than the majority of other X-ray bright TDEs7, but decays over much longer timescales than the bursts typically seen in QPEs (burst durations \(\lesssim\)30 ks, or \(\lesssim\)0.3 days; Miniutti et al., 2019; Giustini et al., 2020; Arcodia et al., 2021, 2022).
Footnote 7: Ignoring short timescale flaring behaviour seen in some TDE candidates, such as AT 2019ehz (van Velzen et al., 2021b).
Given the quiescent nature of the host galaxy, and the ultra-soft X-ray spectrum, an AGN origin for J1331 is disfavoured. We also rule out a mechanism similar to that producing the X-ray flares observed in Sgr A* (e.g. Neilsen et al., 2013; Ponti et al., 2015; Yuan and Wang, 2016; Ponti et al., 2017; Mossoux et al., 2020), as the latter are clearly observationally distinct from J1331, with respect to the flaring timescales (Sgr A* flare durations \(\lesssim 10^{4}\) s; Mossoux et al., 2020), spectral properties (flaring X-ray emission in Sgr A* is hard and likely synchrotron, e.g. Ponti et al., 2017), and peak observed luminosity (bolometric luminosity of Sgr A* is \(\sim 10^{36}\) erg s\({}^{-1}\); Genzel et al., 2010). Arguments against a Galactic origin for this system have previously been presented in Hampel et al. (2022).
Ultra-soft X-ray flares from quiescent galaxies have previously been considered as a reliable signature of a TDE (e.g. Zabludoff et al., 2021). However, the current theoretically predicted TDE rates are \(\gtrsim 10^{-4}\) yr\({}^{-1}\) galaxy\({}^{-1}\) (Stone et al., 2020), so it would be exceptionally unlikely to have observed two independent tidal disruption flares occurring within the same galaxy over a \(\sim\)30 year timescale (Poisson probability \(\sim 5\times 10^{-6}\); Fig. C2); a more exotic class of TDE would need to be invoked to explain J1331.
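The quoted probability follows directly from the rate estimate: with \(\lambda=10^{-4}\,\mathrm{yr^{-1}}\times 30\,\mathrm{yr}\), the chance of two or more independent events is \(P(N\geq 2)=1-e^{-\lambda}(1+\lambda)\approx\lambda^{2}/2\). A one-line numerical check:

```python
# Check of the quoted Poisson probability: P(N >= 2) for a per-galaxy TDE rate
# of ~1e-4 per year over a 30-year baseline.
import math

lam = 1e-4 * 30.0
print(f"P(N >= 2) = {1.0 - math.exp(-lam) * (1.0 + lam):.1e}")  # ~4.5e-6, i.e. ~5e-6
```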
One such possibility, discussed in Hampel et al. (2022), is that J1331 was produced by a TDE involving a supermassive black hole binary (SMBHB). This scenario was partly proposed in an attempt to explain the fast X-ray brightening observed by _ROSAT_, since such TDEs may have highly non-monotonic decays of their X-ray lightcurves. This stems from the gravitational interaction between the companion BH and the debris streams, which may cause large perturbations to the orbits of the less bound debris and cause their chaotic evolution, as well as a complex evolution of the accretion rate over time. Liu et al. (2014); Ricarte et al. (2016); Coughlin et al. (2017) predict these systems to show sharp dips and rises in the X-ray lightcurve (of \(\sim\)1-2 orders of magnitude), on timescales of the order of the binary orbital period (Liu et al., 2014; Ricarte et al., 2016), although Coughlin et al. (2017) find highly variable accretion rates between different simulation runs and over timescales shorter than the SMBHB orbital periods (i.e. there still seem to be quite large uncertainties in the theoretically predicted lightcurves of TDEs involving SMBHBs).
Under the SMBHB scenario, the eROSITA and _ROSAT_ observations would have had to sample a 'dipping' and a 'brightening from a dip' phase of the X-ray lightcurve, respectively. For binary orbital periods of the order of \(\sim\)months, assuming \(\sim\)mpc binary separation as in Liu et al. (2014), then it would be quite fortuitous for us to have observed such behaviour. Furthermore, there is importantly no evidence for late time X-ray brightening episodes in the months after each outburst, as seen by _XMM_ and _Swift_ (Fig. 1), which one might expect to have observed given that the accretion rate is predicted to eventually revert to the \(t^{-5/3}\) decay following 'dips' (e.g. Fig. 12 in Coughlin et al., 2017). We would therefore disfavour J1331 being caused by a full TDE around a SMBHB, given the fine tuning needed in order to match observations.
A more feasible scenario is that both outbursts were driven by a partial tidal disruption event (pTDE), potentially of the same object. Unless the pTDE rate is orders of magnitude larger than currently
estimated in the literature (Stone & Metzger, 2016; Chen & Shen, 2021; Zhong et al., 2022), then both outbursts would likely be related to the same star being disrupted by the same black hole (i.e. the star should have survived the initial encounter). Considering that the recurrence timescale of J1331 is \(\lesssim 30\) years, then it is also difficult to reconcile this with theoretical predictions for the recurrence timescales of flares in pTDEs where the star was initially scattered onto a parabolic orbit around the black hole (\(\gtrsim 400\) years, e.g. Ryu et al., 2020). Instead, the flaring may have been driven by the repeated stripping of a star on an elliptical orbit by the disrupting SMBH (see Hayasaki et al., 2013 for a discussion on potential origins for such stars). This scenario would be further supported by both the relatively small amount of inferred energy emitted in the eROSITA-detected outburst8 of \((5^{+6}_{-5})\times 10^{49}\) erg, corresponding to an accreted mass of \((5^{+7}_{-2})\times 10^{-4}(\epsilon/0.05)^{-1}\) M\({}_{\odot}\), where \(\epsilon\) is the radiative efficiency of accretion, and also by the extremely low \(L_{\rm X}\) at late-times (as suggested by the non-detection and deep upper limits in XMM2), since elliptical TDEs are predicted to produce short-lived, finite accretion bursts (Hayasaki et al., 2013). Given this, and that the radio observations were taken \(\sim\)40 days after the eRASS5 flare (section 2.7), then we note that we may have missed any associated jet or outflow launched in this event, as seen in other TDE candidates (e.g. Goodwin et al., 2022).
Footnote 8: Assuming a similar temporal evolution for both the eROSITA-detected and _ROSAT_-detected outbursts- see section C.
The case for a repeated pTDE is further enhanced by the fast rise and decay timescales seen with _ROSAT_ and eROSITA. Compared with full disruptions, pTDEs only strip the outermost layers of the star, with the specific energy distribution of the debris, \(\mathrm{d}M/\mathrm{d}E\), differing from full TDEs (e.g. Coughlin & Nixon, 2019; Miles et al., 2020; Ryu et al., 2020). Since the mass fallback rate, \(\dot{M}_{\rm B}(t)\), scales \(\propto\mathrm{d}M/\mathrm{d}E\), then \(\dot{M}_{\rm B}(t)\) is also predicted to differ between full and pTDEs. Ryu et al. (2020) find that the narrower spreads in \(\mathrm{d}M/\mathrm{d}E\) for pTDEs can yield \(\dot{M}_{\rm B}(t)\propto t^{-p}\), where \(p\sim 2-5\), more consistent with what is observed in J1331 (Fig. 2), and much steeper than a canonical \(t^{-5/3}\) decline predicted for the mass fallback rate in full TDEs (Rees, 1988; Phinney, 1989).
Lastly, although the mass fallback in weak pTDEs may evolve over shorter timescales relative to full TDEs, the viscous timescale, \(t_{\rm visc}\), still needs to be shorter than the minimum orbital period of the stellar debris so that the X-ray luminosity traces the mass fallback rate (assuming a constant radiative efficiency, negligible obscuration of the soft X-rays, and negligible disc cooling). Considering \(t_{\rm visc}\sim\alpha^{-1}(H/R)^{-2}\Omega^{-1}(r)\), where \(\alpha\) is the viscosity parameter (Shakura & Sunyaev, 1973), \(H\) and \(R\) the scale height and width of the disc, and \(\Omega^{-1}(r)\) the orbital period at distance \(r\) from the black hole, then \(t_{\rm visc}\sim 0.4(\alpha/0.1)^{-1}(H/R)^{-2}\) days at the circularisation radius (\(\sim 2R_{\rm tidal}/\beta\), where \(R_{\rm tidal}\) and \(\beta\) are the tidal radius and impact parameter for the disruption). A geometrically thick disc (\(H/R\sim 1\)), as may be expected to form for super-Eddington mass fallback rates, would be needed to reproduce accretion timescales of the order of days as seen in J1331. However, it is currently unclear how the stellar debris might circularise so efficiently in a weak pTDE (see Bonnerot & Stone, 2021 for a review on accretion flow formation in TDEs), and we also highlight here that similar concerns have recently been raised for explaining the short X-ray flare durations observed in QPEs via an accretion origin (e.g. Krolik & Linial, 2022; Lu & Quataert, 2022). Although future simulations would likely be needed to explore the debris circularisation in J1331-like events, alternative origins for the X-ray emission may be from compression shocks of the debris streams at pericentre (e.g. Steinberg & Stone, 2022), or circularisation shocks from debris stream collisions (Krolik & Linial, 2022; Lu & Quataert, 2022).
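The \(t_{\rm visc}\sim 0.4\) day estimate can be reproduced from the stated scalings; the sketch below assumes a solar-mass, solar-radius star, \(\beta=1\), and the inferred \(\log(M_{\rm BH}/M_{\odot})=6.5\) (the stellar parameters are assumptions for illustration).

```python
# Order-of-magnitude check (not from the paper) of the quoted viscous
# timescale at the circularisation radius ~2 R_tidal (beta = 1), assuming a
# sun-like star and log(M_BH/M_sun) = 6.5.
import numpy as np
import astropy.units as u
from astropy.constants import G, M_sun, R_sun

M_bh = 10**6.5 * M_sun
R_tidal = R_sun * (M_bh / M_sun)**(1 / 3)       # tidal radius, sun-like star
r_circ = 2 * R_tidal                            # circularisation radius, beta = 1
omega_inv = np.sqrt(r_circ**3 / (G * M_bh))     # inverse orbital frequency

alpha, H_over_R = 0.1, 1.0                      # viscosity parameter; thick disc
t_visc = (omega_inv / (alpha * H_over_R**2)).to(u.day)
print(f"t_visc ~ {t_visc:.2f}")                 # ~0.5 day, cf. ~0.4 day in the text
```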
## 4 Summary
J1331 is a repeating X-ray transient associated with a quiescent galaxy at \(z=0.05189\), which we consider to be consistent with a scenario involving two weak pTDEs. Whilst several previously reported pTDE candidates have occurred in galaxies hosting an AGN, we highlight that the host of J1331 is quiescent. The main properties of J1331 can be summarised as follows:
1. J1331 was first detected by _ROSAT_ in 1993 (Hampel et al., 2022), where it had shown an ultra-soft (\(kT=0.11\pm 0.03\) keV) flaring by a factor of at least 40 relative to a previous \(2\sigma\) upper limit. The outburst also showed a fast rise, where it had brightened by a factor of eight over an 8 day period. The system was subsequently not detected in a deep pointed _ROSAT_ observation \(\sim\)165 days afterwards, as well as in _XMM_ Slew, and _Swift_ XRT observations performed between 2006 and 2018 (Table 1).
2. After not being detected by eROSITA in its first four eRASS, J1331 was observed to have brightened in eRASS5 to a 0.2-2 keV flux of \((6.0\pm 0.7)\times 10^{-13}\) erg s\({}^{-1}\) cm\({}^{-2}\). The eRASS5 spectrum is ultra-soft (\(kT=0.115^{+0.007}_{-0.007}\) keV), and is consistent with the \(kT\) inferred from the _ROSAT_-observed flare in 1993.
3. J1331 was not detected during pointed _XMM_ observations and _Swift_ XRT observations when followed up after the eRASS5 detection; the first (second) _XMM_ observation constrains the 0.2-2 keV flux to decay by a factor of \(\gtrsim\)40 (\(\gtrsim\)100) over a 17 (\(\sim\)200) day period after the eRASS5 observation. The faint 0.2-2 keV X-ray luminosities (\(<7\times 10^{40}\) erg s\({}^{-1}\), unabsorbed) at \(\sim 200\) days post-peak brightness, inferred via the second _XMM_ observation (Table 1), may be due to a late-time drop off in the mass fallback rate once the disruption episode is over.
4. Combined with the fast rise timescale seen by _ROSAT_, then J1331-like outbursts are short lived (rise and decay timescales of \(6^{+1}_{-1}\) days and \(3.9^{+0.1}_{-0.1}\) days, respectively; appendix C) and evolve over shorter timescales relative to full TDEs.
5. J1331 has only been observed to show transient emission in
Figure 2: Zoom-in on the first eROSITA-detected outburst in 2022, along with multiple power-law decay slopes plotted in grey dashed lines. The decay slope appears to be much steeper than the canonical \(t^{-5/3}\) decay predicted for TDEs with a uniform distribution of specific energies, and appears more consistent with a \(t^{-4}\) decay, as predicted in Ryu et al. (2020). We assume a peak MJD of 59593 for the X-ray outburst, and roughly estimate the MJD of disruption to be 59581 (section C). The markers follow the same legend as for Fig. 1.
the 0.2-2 keV band, with no transient optical, UV, or radio emission observed in follow-up observations.
We conclude by noting that J1331 appears to fill in the continuum of observed soft X-ray outbursts from quiescent galaxies, lying in between QPEs and TDEs with respect to its rise and decay timescales (Fig. 4), although the recurrence timescales are much longer than in the current sample of QPEs. Additional follow-up observations will be scheduled in order to more tightly constrain the recurrence timescales of outbursts from J1331. Future planned X-ray missions geared towards exploiting the X-ray transient sky, such as the _Einstein Probe_ (Yuan et al., 2018), will likely be sensitive to detecting similar partial disruptions; for these missions, the eROSITA All-Sky survey data may play an important role by providing a long-term baseline against which new candidates can be identified. Given the faster decay timescales of J1331-like systems, we would advocate promptly triggering high-cadence X-ray follow-up in order to better constrain the evolution of the accretion rate in future candidates.
## Acknowledgements
AM thanks Taeho Ryu for very useful discussions whilst preparing the manuscript. AM acknowledges support by DLR under the grant 50 QR 2110 (XMM_NuTra, PI: Z. Liu). This work was supported by the Australian government through the Australian Research Council's Discovery Projects funding scheme (DP200102471). We would like to thank the referee for a constructive report that improved the quality of the paper.
This work is based on data from eROSITA, the soft X-ray instrument aboard SRG, a joint Russian-German science mission supported by the Russian Space Agency (Roskosmos), in the interests of the Russian Academy of Sciences represented by its Space Research Institute (IKI), and the Deutsches Zentrum fur Luft- und Raumfahrt (DLR). The SRG spacecraft was built by Lavochkin Association (NPOL) and its subcontractors, and is operated by NPOL with support from the Max Planck Institute for Extraterrestrial Physics (MPE).
The development and construction of the eROSITA X-ray instrument was led by MPE, with contributions from the Dr. Karl Remeis Observatory Bamberg & ECAP (FAU Erlangen-Nuernberg), the University of Hamburg Observatory, the Leibniz Institute for Astrophysics Potsdam (AIP), and the Institute for Astronomy and Astrophysics of the University of Tubingen, with the support of DLR and the Max Planck Society. The Argelander Institute for Astronomy of the University of Bonn and the Ludwig Maximilians Universitat Munich also participated in the science preparation for eROSITA.
The eROSITA data shown here were processed using the eSASS software system developed by the German eROSITA consortium.
This work made use of data supplied by the UK Swift Science Data Centre at the University of Leicester.
The Australia Telescope Compact Array is part of the Australia Telescope National Facility ([https://ror.org/05qajvd42](https://ror.org/05qajvd42)) which is funded by the Australian Government for operation as a National Facility managed by CSIRO. We acknowledge the Gomeroi people as the traditional owners of the Observatory site.
The Legacy Surveys consist of three individual and complementary projects: the Dark Energy Camera Legacy Survey (DECaLS; Proposal ID 2014B-0404; PIs: David Schlegel and Arjun Dey), the Beijing-Arizona Sky Survey (BASS; NOAO Prop. ID #2015A-0801; PIs: Zhou Xu and Xiaohui Fan), and the Mayall z-band Legacy Survey (MzLS; Prop. ID #2016A-0453; PI: Arjun Dey). DECaLS, BASS and MzLS together include data obtained, respectively, at the Blanco telescope, Cerro Tololo Inter-American Observatory, NSF's NOIRLab; the Bok telescope, Steward Observatory, University of Arizona; and the Mayall telescope, Kitt Peak National Observatory, NOIRLab. Pipeline processing and analyses of the data were supported by NOIRLab and the Lawrence Berkeley National Laboratory (LBNL). The Legacy Surveys project is honored to be permitted to conduct astronomical research on Iolkam Du'ag (Kitt Peak), a mountain with particular significance to the Tohono O'odham Nation.
NOIRLab is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. LBNL is managed by the Regents of the University of California under contract to the U.S. Department of Energy.
This project used data obtained with the Dark Energy Camera (DECam), which was constructed by the Dark Energy Survey (DES) collaboration. Funding for the DES Projects has been provided by the U.S. Department of Energy, the U.S. National Science Foundation, the Ministry of Science and Education of Spain, the Science and Technology Facilities Council of the United Kingdom, the Higher Education Funding Council for England, the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the Kavli Institute of Cosmological Physics at the University of Chicago, Center for Cosmology and Astro-Particle Physics at the Ohio State University, the Mitchell Institute for Fundamental Physics and Astronomy at Texas A&M University, Financiadora de Estudos e Projetos, Fundacao Carlos Chagas Filho de Amparo a Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Cientifico e Tecnologico and the Ministerio da Ciencia, Tecnologia e Inovacao, the Deutsche Forschungsgemeinschaft and the Collaborating Institutions in the Dark Energy Survey. The Collaborating Institutions are Argonne National Laboratory, the University of California at Santa Cruz, the University of Cambridge, Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas-Madrid, the University of Chicago, University College London, the DES-Brazil Consortium, the University of Edinburgh, the Eidgenossische Technische Hochschule (ETH) Zurich, Fermi National Accelerator Laboratory, the University of Illinois at Urbana-Champaign, the Institut de Ciencies de l'Espai (IEEC/CSIC), the Institut de Fisica d'Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig Maximilians Universitat Munchen and the associated Excellence Cluster Universe, the University of Michigan, NSF's NOIRLab, the University of Nottingham, the Ohio State University, the University of Pennsylvania, the University of Portsmouth, SLAC National Accelerator Laboratory, Stanford University, the University of Sussex, and Texas A&M University.
BASS is a key project of the Telescope Access Program (TAP), which has been funded by the National Astronomical Observatories of China, the Chinese Academy of Sciences (the Strategic Priority Research Program "The Emergence of Cosmological Structures" Grant # XDB09000000), and the Special Fund for Astronomy from the Ministry of Finance. The BASS is also supported by the External Cooperation Program of Chinese Academy of Sciences (Grant # 114A11KYSB20160057), and Chinese National Natural Science Foundation (Grant # 12120101003, # 11433005).
The Legacy Survey team makes use of data products from the Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE), which is a project of the Jet Propulsion Laboratory/California Institute of Technology. NEOWISE is funded by the National Aeronautics and Space Administration.
The Legacy Surveys imaging of the DESI footprint is supported by the Director, Office of Science, Office of High Energy Physics
of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231, by the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility under the same contract; and by the U.S. National Science Foundation, Division of Astronomical Sciences under Contract No. AST-0950945 to NOAO.
M.K. acknowledges support from DFG grant KR 3338/4-1. D.H. is supported by DLR grant FKZ 50OR2003.
## Data Availability
The eRASS1-4 data taken within the German half of the eROSITA sky is currently planned to be made public by Q2 2024, whilst the eRASS5 data is scheduled to become public by Q2 2026. The Swift data is available to download through the UK Swift Science Data Centre website9, whilst the NICER data is accessible through NASA's HEASARC interface10. Publicly available ATLAS data can be accessed through the ATLAS forced photometry service11, and _NEOWISE_ lightcurves can be accessed through the IRSA web portal12. ATCA data are stored in the Australia Telescope Online Archive13, and will become publicly accessible 18 months from the date of observation. The _XMM_ data will become public after the proprietary period expires (2023-08-30). Follow-up optical spectra will likely remain private at least until the release of the forthcoming eROSITA-selected TDE population paper, but could be made available upon reasonable request.
Footnote 9: [https://www.swift.ac.uk/archive/index.php](https://www.swift.ac.uk/archive/index.php)
Footnote 10: [https://heasarc.gsfc.nasa.gov/docs/nicer/nicer_archive.html](https://heasarc.gsfc.nasa.gov/docs/nicer/nicer_archive.html)
Footnote 11: [https://fallingstar-data.com/forcedphot/](https://fallingstar-data.com/forcedphot/)
Footnote 12: [https://irsa.ipac.caltech.edu/applications/wise/](https://irsa.ipac.caltech.edu/applications/wise/)
Footnote 13: [https://atoa.atnf.csiro.au/](https://atoa.atnf.csiro.au/)
|
2302.03420 | Inadmissibility of invariant estimator of function of scale parameter of
several exponential distributions | In various applied areas such as reliability engineering, molecular biology,
finance, etc., the measure of uncertainty of a probability distribution plays
an important role. In the present work, we consider the estimation of a
function of the scale parameter, namely entropy of many exponential
distributions having unknown and unequal location parameters with a common
scale parameter. For this estimation problem, we have considered bowl-shaped
location invariant loss functions. The inadmissibility of the minimum risk
invariant estimator (MRIE) is proved by proposing a non-smooth improved
estimator. Also, we have obtained a smooth estimator which improves upon the
MRIE. As an application, we have obtained explicit expressions of improved
estimators for two well-known loss functions namely squared error loss and
linex loss. Further, we have shown that these estimators can be derived for
other important censored sampling schemes. At first, we obtained the results
for the complete and i.i.d. sample. We have seen that the results can be
applied for (i) record values, (ii) type-II censoring, and (iii) progressive
Type-II censoring. Finally, a simulation study has been carried out to compare
the risk performance of the proposed improved estimators. | Lakshmi Kanta Patra, Shrajal Bajpai, Neeraj Misra | 2023-02-07T12:10:16Z | http://arxiv.org/abs/2302.03420v2 | Inadmissibility of invariant estimator of function of scale parameter of several exponential distributions
###### Abstract
In various applied areas such as reliability engineering, molecular biology, finance, etc., the measure of uncertainty of a probability distribution plays an important role. In the present work, we consider the estimation of a function of the scale parameter, namely entropy of many exponential distributions having unknown and unequal location parameters with a common scale parameter. For this estimation problem, we consider bowl-shaped location invariant loss functions. The inadmissibility of the minimum risk invariant estimator (MRIE) is proved by proposing a non-smooth improved estimator. Also, we have obtained a smooth estimator which improves upon the MRIE. As an application, we have obtained explicit expressions of improved estimators for two well-known loss functions, namely squared error loss and linex loss. Further, we have shown that these estimators can be derived for other important censored sampling schemes. At first, we obtained the results for the complete and i.i.d. sample. We have seen that the results can be applied for (i) record values, (ii) type-II censoring, and (iii) progressive Type-II censoring. Finally, a simulation study has been carried out to compare the risk performance of the proposed improved estimators.
**Keywords**: Decision theory; minimum risk invariant estimator; location invariant loss function; inadmissibility, Brewster-Zidek type estimator; censored sample, record values.
## 1 Introduction
Similar to the hazard rate, entropy of a lifetime distribution is an important characteristic. It measures the uncertainty of a probability distribution. Shannon's entropy is widely used in various areas of science and technology, such as ecology, hydrology, water resources, social studies, economics, biology, etc. In molecular sciences, estimation of the entropy of molecules plays an important role in understanding various chemical and biological processes [18]. In economics, entropy estimation often allows the researchers to use data for the improvement of the assumptions on the parameters in econometric models, see [9]. In reliability theory entropy is used in
to measure uncertainty [10]. If we want to estimate the uncertainty of a parallel or series system with several independent components, we need to predict uncertainty in individual components. A two-parameter exponential distribution is the most commonly used lifetime distribution in life testing experiments and reliability theory. In the case of the exponential distribution, entropy is a function of the scale parameter. The estimation of scale parameters and functions of scale parameters is a well-studied problem in statistical decision theory. The inadmissibility of the best affine invariant estimator of a normal variance was first established by [26]. This result motivated many statisticians to find improved estimators for scale parameters. One may refer to [14] for a detailed review. The result of [26] was extended by [8] to prove the inadmissibility of the best equivariant estimator of powers of the scale parameter for a wide class of location-scale densities and for a general invariant loss function. Two new techniques for obtaining improvements over equivariant estimators were developed by [6] for strictly bowl-shaped loss functions. In this paper, we have considered the estimation of a function of a common scale parameter, namely the entropy of several exponential distributions. Let \(X\) be a random variable with probability density function \(f(x|\theta).\) Then, the Shannon's and Renyi entropy are given as
\[H(\theta)=-E(\ln f(x|\theta)) \tag{1.1}\]
and
\[R_{\alpha}(\theta)=\frac{1}{1-\alpha}\ln\int_{-\infty}^{\infty}f^{\alpha}(x| \theta)dx=\frac{1}{1-\alpha}\ln E\Big{(}f^{\alpha-1}(X|\theta)\Big{)} \tag{1.2}\]
respectively, where \(\alpha\geq 0.\)
Entropy of a probability distribution is an important characteristic, like general moments, quantiles, mean, median and standard deviation. Entropy gives us information about the uncertainty of a probability distribution. In the recent past, many authors have investigated the estimation of entropy of various probability models from a decision-theoretic point of view. Now we will describe some previous work in this direction. Estimation of entropy of a multivariate normal distribution has been considered by [16]. They have shown the inadmissibility of usual estimators by deriving two improved estimators. Specifically, they have obtained Stein-type and Brewster-Zidek-type improved estimators. Finally, they have shown that the Brewster-Zidek-type improved estimator is generalized Bayes. Estimation of a measure of uncertainty, that is, the entropy of several exponential distributions, was investigated by [11]. They have shown that the BAEE is inadmissible under the squared error loss function. Renyi entropy gives an important measure of uncertainty which is more flexible than Shannon entropy. Estimation of the Renyi entropy of \(k\) exponential distributions with common location but different scales has been investigated by [12]. The authors proposed a sufficient condition under which affine and scale equivariant estimators are inadmissible. [20] discussed the problem of finding improved estimators of the entropy of a two-parameter exponential distribution with ordered location parameters. They have adopted the techniques of [26], [7], and [13] to find the improved estimators under a general location invariant loss function. [23] proved that the usual estimator of the common hazard rate of several exponential distributions is inadmissible. They have obtained improved estimators which dominate the best affine equivariant estimator. Recently, [19] studied the problem of estimating the entropy of an exponential population based on a doubly censored sample. He proved the inadmissibility of the best affine equivariant estimator under a general bowl-shaped location invariant loss function.
Let \(\boldsymbol{X}_{i}=(X_{i1},\ldots,X_{in})\) be a random sample taken from the population \(\Pi_{i}\), \(i=1,\ldots,k\) (\(k\geq 2\)). We assume that the samples are taken independently. The population \(\Pi_{i}\) is assumed to have density
\[f_{i}(x;\theta_{i},\sigma)=\left\{\begin{array}{ll}\frac{1}{\sigma}\exp \left(-\frac{x-\theta_{i}}{\sigma}\right),&\mbox{ if }x>\theta_{i},\theta_{i}\in\mathbb{R}, \sigma>0\\ \\ 0,&\mbox{ otherwise}\end{array}\right.. \tag{1.3}\]
For a population with probability density function (1.3), the Shannon's entropy is \(H(\sigma)=(1+\ln\sigma)\) and the Renyi entropy can be obtained as \(R(\sigma)=\ln\sigma-\frac{\ln\alpha}{1-\alpha}\). So estimation of \(H(\sigma)\) and \(R(\sigma)\) is equivalent to estimation of \(\theta=\ln\sigma\).
In this paper we have considered the estimation of \(\theta\) with respect to a location invariant loss function \(L(T-\theta)\), where \(L(t)\) satisfies the following conditions.
* \(L(t)\) is a real-valued, absolutely continuous, and non-monotone function.
* \(L(t)\) is decreasing for \(t<0\) and increasing for \(t>0\), with \(L(t)>0\) for all \(t\).
As a consequence of these conditions, \(L(t)\) is differentiable almost everywhere. Based on the \(i\)-th sample \((X_{i1},\ldots,X_{in})\), a complete sufficient statistic for \((\theta_{i},\sigma)\) is \((X_{i},Y_{i})\) with \(X_{i}=nX_{i(1)}\) and \(Y_{i}=\sum_{j=1}^{n}(X_{ij}-X_{i(1)})\), where \(X_{i(1)}=\min\{X_{i1},\ldots,X_{in}\}\). Let \(S=\sum_{i=1}^{k}Y_{i}\); then \(V=\frac{S}{\sigma}\) follows a \(Gamma(k(n-1),1)\) distribution. Based on all the samples, a complete sufficient statistic is \((\boldsymbol{X},S)\), where \(\boldsymbol{X}=(X_{1},\ldots,X_{k})\). The pdfs of \(X_{i}\) and \(V\) are
\[h_{i}(x_{i})=\frac{1}{\sigma}\exp\left\{-\frac{1}{\sigma}\left(x_{i}-n\theta_{ i}\right)\right\},x_{i}>n\theta_{i}\ \ \ \ g(v)=\frac{e^{-v}v^{k(n-1)-1}}{\Gamma(k(n-1))},v>0\]
respectively.
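These distributional facts are easy to check numerically. The following Monte Carlo sketch (illustrative only, with arbitrary parameter values not taken from the paper) verifies that \(V=S/\sigma\) follows a \(Gamma(k(n-1),1)\) distribution irrespective of the location parameters.

```python
# Monte Carlo sanity check that V = S/sigma ~ Gamma(k(n-1), 1), for arbitrary
# (and unequal) location parameters theta_i.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
k, n, sigma = 3, 10, 2.0
thetas = np.array([-1.0, 0.0, 1.5])   # arbitrary location parameters

reps = 20000
x = thetas[None, :, None] + rng.exponential(sigma, size=(reps, k, n))
S = (x - x.min(axis=2, keepdims=True)).sum(axis=(1, 2))   # S = sum_i Y_i
V = S / sigma

# Kolmogorov-Smirnov test against Gamma(k(n-1), 1): expect a large p-value
print(stats.kstest(V, "gamma", args=(k * (n - 1),)))
```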
Consider the group of transformations \(\mathcal{G}=\{g_{a,\boldsymbol{b}},a>0,\ \boldsymbol{b}\in\mathbb{R}^{k}\}\), where the transformation is given by \(g_{a,\boldsymbol{b}}(\boldsymbol{z})=a\boldsymbol{z}+\boldsymbol{b}\). Under this transformation the estimation problem remains invariant and we have
\[(\boldsymbol{\theta},\sigma)\rightarrow(a\boldsymbol{\theta}+\boldsymbol{b}, a\sigma)\ \ \ \ \mbox{and}\ \ \ \ \ (\boldsymbol{X},S)\rightarrow(a\boldsymbol{X}+\boldsymbol{b},aS)\,.\]
Consequently, we have
\[\ln\sigma\rightarrow\ln\sigma+\ln a\]
The loss function \(L(T-\theta)\) is also invariant under \(\mathcal{G}\). Hence an invariant estimator has the form
\[T_{c}=\ln S+c,\]
where \(c\) is a real constant.
The following lemma gives the minimum risk invariant estimator (MRIE). We denote \(\boldsymbol{\theta}=(\theta_{1},\theta_{2},\ldots,\theta_{k})\). From now on we use the notation \(E_{\boldsymbol{\theta},1}\) for \(E_{\boldsymbol{\theta},\sigma=1}\) and \(E_{\boldsymbol{0},1}\) for \(E_{\boldsymbol{\theta}=\boldsymbol{0},\sigma=1}\).
**Lemma 1.1**: _The MRIE of \(\theta\) with respect to a general location invariant loss function \(L(t)\) is_
\[T_{0}=\ln S+q_{0}, \tag{1.4}\]
_where \(q_{0}\) is the value of \(c\) that minimizes_
\[E_{\boldsymbol{\theta},1}\left(L\left(\ln S+c\right)\right). \tag{1.5}\]
**Proof:** The proof is simple and hence omitted for the sake of brevity. \(\Box\)
**Example 1.1**: _Let \(L(t)=t^{2}\). Then, we have \(q_{0}=-\psi(nk-k)\), where \(\psi(.)\) denotes the digamma function. So the MRIE of \(\theta\) is \(T_{01}=\ln S-\psi(kn-k)\)._
**Example 1.2**: _Let \(L(t)=e^{at}-at-1\), \(a\neq 0.\) Then, using (1.5), we obtain \(q_{0}=\frac{1}{a}\ln\left(\frac{\Gamma(nk-k)}{\Gamma(nk+a-k)}\right)\), where \(a>k(1-n)\). In this case, the MRIE of \(\theta\) is \(T_{02}=\ln S+\frac{1}{a}\ln\left(\frac{\Gamma(nk-k)}{\Gamma(nk+a-k)}\right)\)._
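Both constants can be verified numerically. The sketch below (with arbitrary illustrative choices of \(k\), \(n\), and \(a\)) minimizes a Monte Carlo estimate of (1.5) over \(c\) and compares the minimizer with the closed forms in Examples 1.1 and 1.2.

```python
# Numerical check of the MRIE constants: q0 minimises E[L(ln S + c)] with
# S ~ Gamma(k(n-1), 1) when sigma = 1. Parameter values are illustrative.
import numpy as np
from scipy.special import digamma, gammaln
from scipy.optimize import minimize_scalar

k, n, a = 3, 10, 0.5                   # note a > k(1 - n) holds
shape = k * (n - 1)                    # = nk - k
rng = np.random.default_rng(1)
lnS = np.log(rng.gamma(shape, size=500_000))

# Squared error loss: closed form q0 = -digamma(nk - k)
q0_sq = minimize_scalar(lambda c: np.mean((lnS + c) ** 2)).x
print(q0_sq, -digamma(shape))

# Linex loss L(t) = exp(a t) - a t - 1: q0 = ln(Gamma(nk-k)/Gamma(nk+a-k))/a
q0_lx = minimize_scalar(
    lambda c: np.mean(np.exp(a * (lnS + c)) - a * (lnS + c) - 1)).x
print(q0_lx, (gammaln(shape) - gammaln(shape + a)) / a)
```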
In the present work we aim to obtain estimators which improve upon the MRIE of \(\ln\sigma\). We have studied this for four important sampling schemes: (i) complete and i.i.d. sample, (ii) record values, (iii) type-II censoring, and (iv) progressive Type-II censoring. Here we have adopted the techniques of [26] and [7] for finding improved estimators. Several researchers have studied the problem of finding an improved estimator of a scale parameter in the presence of unknown location parameter using these techniques. For some nice applications of these techniques, we refer to [15; 17; 24; 12; 27; 25; 21; 22; 28], and references therein.
The rest of the paper is organized as follows. In Section 2, we have proved that the MRIE is inadmissible by deriving an improved estimator, which is not smooth, based on the i.i.d. sample. As an application, we have proposed improved estimators for the squared error loss and linex loss functions. A smooth estimator is derived in Section 3, which dominates the MRIE. For the squared error loss and linex loss functions, we have derived the explicit expressions of the smoothed improved estimators. A Bayes estimator has been given in Section 4. We have compared the risk performance of the improved estimators in Section 5. Section 6 considers special sampling schemes such as record values, type-II censoring, and progressive Type-II censoring. For each sampling scheme, improved estimators are obtained using the result for the i.i.d. sampling scheme. A simulation study has been carried out for the record values. Finally, in Section 7, we have given concluding remarks.
## 2 Inadmissibility of MRIE
In the previous section, we obtained the MRIE of \(\ln\sigma\). Now we will find an estimator which improves the MRIE under loss function \(L(t)\). [23] proved a similar result to find improved estimators of the common hazard rate of several exponential distributions. We have adopted the techniques
similar to [23]. Also, this result extends a result of [20] from one dimension to \(k\) dimensions. For proving the inadmissibility of MRIE, we consider the following class of scale invariant estimators.
\[T_{\zeta}(\mathbf{X},S)=\ln S+\zeta(\mathbf{Z}), \tag{2.1}\]
where \(\mathbf{Z}=(Z_{1},\ldots,Z_{k})\), \(Z_{i}=\frac{X_{i}}{S}\) and \(\zeta\) is a measurable real valued function. The theorem below proves that the MRIE \(T_{0}\) is inadmissible by proposing a non smooth dominating estimator.
**Theorem 2.1**: _Let \(p_{0}\) be the unique solution of the equation_
\[E\left(L^{\prime}\left(\ln Y+p_{0}\right)\right)=0. \tag{2.2}\]
_where \(Y\sim Gamma(kn,1)\). Define an estimator_
\[T_{\zeta_{0}}(\mathbf{X},S)=\left\{\begin{array}{ll}\ln S+\min \left\{q_{0},p_{0}+\ln\left(\sum_{i=1}^{k}Z_{i}+1\right)\right\},&Z_{i}>0,i=1,\ldots,k\\ \\ \ln S+q_{0},&\mbox{otherwise}\end{array}\right., \tag{2.3}\]
_where \(q_{0}\) is the minimizer in (1.5). Then \(T_{\zeta_{0}}(\mathbf{X},S)\) dominates the MRIE \(T_{0}\) under a general location invariant loss function \(L(t)\), provided_

\[E\left[L^{\prime}\left(\ln V+p_{0}\right)\right]<0, \tag{2.4}\]

_where \(V\sim Gamma(kn-k,1)\)._
**Proof:** It is easy to see that the risk of the estimator \(T_{\zeta}\) depends on the unknown parameter \((\boldsymbol{\theta},\sigma)\) only through \(\theta_{1}/\sigma,\ldots,\theta_{k}/\sigma\). So we consider \(\sigma=1\) without any loss of generality. Writing \(V=S\), the expected loss of \(T_{\zeta}(\mathbf{X},S)\) is

\[R(\boldsymbol{\theta},T_{\zeta})=E_{\boldsymbol{\theta}}\left[E_{\boldsymbol{\theta}}\left[L(\ln V+\zeta(\boldsymbol{Z}))\,\big{|}\,\boldsymbol{Z}\right]\right].\]
Let us denote the conditional risk as
\[R_{1}(\mathbf{\theta},q)=E_{\mathbf{\theta}}\left[L(\ln V+q )\big{|}\mathbf{Z}=\mathbf{z}\right]. \tag{2.5}\]
Let \(\mathbf{z}=(z_{1},\ldots,z_{k})\) be such that \(z_{i}>0\) for all \(i\), and suppose there exists \(j\) with \(1\leq j\leq k\) such that \(\theta_{j}>0\). Define \(\beta_{i}=n\theta_{i}\) and denote \(\mu=\max_{1\leq i\leq k}\{\beta_{i}/z_{i}\}\). The conditional density of \(V\) given \(\boldsymbol{Z}=\mathbf{z}\) is
\[f_{V}(v|\mathbf{Z}=\mathbf{z})\propto e^{-\left(\sum_{i=1}^ {k}z_{i}+1\right)v}v^{kn-1},\ \ v>\mu. \tag{2.6}\]
Based on the assumption on the loss function, it is easy to see that \(R_{1}(\boldsymbol{\theta},q)\) is strictly bowl shaped in \(q\). Suppose \(q_{\boldsymbol{\theta}}(\boldsymbol{Z})\) minimizes \(R_{1}(\boldsymbol{\theta},q)\). Then \(q_{\boldsymbol{\theta}}(\boldsymbol{Z})\) is the unique solution of

\[E\left(L^{\prime}\left(\ln V+q_{\boldsymbol{\theta}}(\boldsymbol{Z})\right)\,\big{|}\,\boldsymbol{Z}=\mathbf{z}\right)=0. \tag{2.7}\]
It can be seen that
\[E\left(L^{\prime}\left(\ln V+q_{\mathbf{\theta}}(\mathbf{Z})\right)\big{|}\mathbf{Z}=\mathbf{z} \right)>0,\]
provided \(e^{-q_{\mathbf{\theta}}(\mathbf{Z})}<\mu\), which is a contradiction to (2.7). From this we can conclude that
\[e^{-q_{\mathbf{\theta}}(\mathbf{Z})}>\mu. \tag{2.8}\]
Again, \(q_{\boldsymbol{0}}(\boldsymbol{Z})\) is the unique solution of

\[E_{\boldsymbol{\theta}=\boldsymbol{0}}\left(L^{\prime}\left(\ln V+q_{\boldsymbol{0}}(\boldsymbol{Z})\right)\big{|}\boldsymbol{Z}=\mathbf{z}\right)=0. \tag{2.9}\]
It can be seen that
\[E_{\boldsymbol{\theta}=\boldsymbol{0}}\left(L^{\prime}\left(\ln V+q_{\boldsymbol{\theta}}(\boldsymbol{Z})\right)\big{|}\boldsymbol{Z}=\mathbf{z}\right)<0 \tag{2.10}\]

provided \(e^{-q_{\boldsymbol{\theta}}(\boldsymbol{Z})}>\mu\), which holds by (2.8). Further, \(R_{1}(\boldsymbol{0},q)\) is strictly bowl shaped in \(q\). So from equations (2.9) and (2.10), it follows that
\[q_{\mathbf{\theta}}(\mathbf{z})<q_{\mathbf{0}}(\mathbf{z}). \tag{2.11}\]
If \(\theta_{i}\leq 0\) for all \(i=1,\ldots,k\), then it is easy to see that

\[E_{\boldsymbol{\theta}}\left[L\left(\ln V+q\right)\big{|}\boldsymbol{Z}=\mathbf{z}\right]=E_{\boldsymbol{0}}\left[L\left(\ln V+q\right)\big{|}\boldsymbol{Z}=\mathbf{z}\right],\]

which gives \(q_{\boldsymbol{\theta}}(\mathbf{z})=q_{\boldsymbol{0}}(\mathbf{z})\). Making the change of variable \(\mathfrak{u}=v\left(\sum_{i=1}^{k}z_{i}+1\right)\), we get from (2.9)
\[\int_{0}^{\infty}L^{\prime}\left(\ln\mathfrak{u}+q_{\mathbf{0}}( \mathbf{z})-\ln\left(\sum_{i=1}^{k}z_{i}+1\right)\right)e^{-\mathfrak{u}} \mathfrak{u}^{kn-1}d\mathfrak{u}=0. \tag{2.12}\]
Comparing (2.12) with (2.2), we get \(q_{\boldsymbol{0}}(\mathbf{z})=p_{0}+\ln\left(\sum_{i=1}^{k}z_{i}+1\right)\). Again, (1.5) and (2.4) give us
\[p_{0}<q_{0}. \tag{2.13}\]
Now we consider a function of the form
\[\zeta_{0}(\mathbf{Z})=\left\{\begin{array}{ll}\min\left\{q_{0},p_{0 }+\ln\left(\sum_{i=1}^{k}z_{i}+1\right)\right\},\quad z_{i}>0,\ i=1,\ldots,k\\ \\ q_{0},\hskip 113.811024pt\text{otherwise}\end{array}\right.. \tag{2.14}\]
From (2.11) and (2.13), we have \(q_{\boldsymbol{\theta}}(\boldsymbol{Z})<\zeta_{0}(\boldsymbol{Z})\leq q_{0}\), with strict inequality \(\zeta_{0}(\boldsymbol{Z})<q_{0}\) on a set of positive probability, since \(\ln(\sum_{i=1}^{k}z_{i}+1)<q_{0}-p_{0}\) together with \(z_{i}>0\), \(i=1,\ldots,k\), is satisfied on a set of positive probability for all \(\boldsymbol{\theta}\). As \(R_{1}(\boldsymbol{\theta},q)\) is a strictly bowl shaped function of \(q\), it is increasing for \(q>q_{\boldsymbol{\theta}}(\mathbf{z})\), and hence \(R_{1}(\boldsymbol{\theta},\zeta_{0}(\boldsymbol{Z}))\leq R_{1}(\boldsymbol{\theta},q_{0})\). Hence we get
\[R(\mathbf{\theta},T_{\zeta_{0}})\leq R(\mathbf{\theta},T_{0}).\]
Hence the theorem is proved. \(\Box\)
**Example 2.1**: _Let \(L(t)=t^{2}\). Then, from (2.2), we obtain \(p_{0}=-\psi(kn)\). Thus, the estimator_

\[T_{01}^{*}=\left\{\begin{array}{ll}\ln S+\min\left\{-\psi(kn-k),\ln(1+\sum_{i=1}^{k}z_{i})-\psi(kn)\right\},&z_{i}>0,i=1,\ldots,k\\ \ln S-\psi(kn-k),&\mbox{otherwise}\end{array}\right.\]
_dominates \(T_{01}\)._
**Example 2.2**: _Let \(L(t)=e^{at}-at-1,\ a\neq 0\). Then, from (2.2), we obtain \(p_{0}=\frac{1}{a}\ln\left(\frac{\Gamma(nk)}{\Gamma(a+nk)}\right)\). Thus, the estimator_
\[T_{02}^{*}=\left\{\begin{array}{ll}\ln S+\min\left\{\frac{1}{a}\ln\left( \frac{\Gamma(nk-k)}{\Gamma(a+nk-k)}\right),\ln(1+\sum_{i=1}^{k}z_{i})+\frac{1} {a}\ln\left(\frac{\Gamma(nk)}{\Gamma(a+nk)}\right)\right\},&z_{i}>0\\ \ln S+\frac{1}{a}\ln\left(\frac{\Gamma(nk-k)}{\Gamma(a+nk-k)}\right),&\mbox{ otherwise}\end{array}\right.\]
_dominates \(T_{02}\) for \(a>k(1-n)\)._
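The estimators \(T_{01}\) and \(T_{01}^{*}\) are easy to compute from the data. The sketch below is our own (the helper names are hypothetical); it assumes, as in the i.i.d. set-up, that \(X_{i}=nX_{i(1)}\) and \(S=\sum_{i=1}^{k}\sum_{j=1}^{n}(X_{ij}-X_{i(1)})\):

```python
# Illustrative sketch (not the paper's code): computing T_01 and the
# Stein-type improved estimator T_01^* on data simulated from k = 2
# exponential E(theta_i, sigma) populations.
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(1)

def sufficient_stats(samples):
    # samples: list of k arrays of length n; returns (X_1, ..., X_k) and S
    n = len(samples[0])
    X = np.array([n * s.min() for s in samples])
    S = sum((s - s.min()).sum() for s in samples)
    return X, S

def T01(S, n, k):
    # MRIE under squared error loss (Example 1.1)
    return np.log(S) - digamma(k * n - k)

def T01_star(X, S, n, k):
    # Stein-type improved estimator of Example 2.1
    z = X / S
    if np.all(z > 0):
        return np.log(S) + min(-digamma(k * n - k),
                               np.log(1 + z.sum()) - digamma(k * n))
    return np.log(S) - digamma(k * n - k)

theta, n = (0.1, 0.2), 4          # true ln(sigma) = 0
samples = [t + rng.exponential(1.0, n) for t in theta]
X, S = sufficient_stats(samples)
print(T01(S, n, 2), T01_star(X, S, n, 2))
```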
## 3 Brewster-Zidek type improved estimator
In the last section, we proposed a non-smooth estimator. In this section, we prove the inadmissibility of the MRIE \(T_{0}\) by proposing an estimator which is smooth on \(\{\boldsymbol{Z}=(Z_{1},\ldots,Z_{k}):Z_{i}\in(0,\infty),i=1,2,\ldots,k\}\). [23] studied the problem of finding a smooth improved estimator for the common hazard rate. Here we consider finding a smooth estimator of the entropy of several exponential distributions. For this purpose, consider an estimator of the form
\[T_{d}(\mathbf{X},S)=\left\{\begin{array}{ll}\ln S+d,&\mathbf{Z}\in\mathcal{B}_{\mathbf{ r}}\\ &\\ \ln S+q_{0}&\mbox{otherwise}\end{array}\right., \tag{3.1}\]
where \(\mathcal{B}_{\mathbf{r}}=(0,r_{1}]\times(0,r_{2}]\times\cdots\times(0,r_{k}]\) with \(r_{i}>0\) for all \(i=1,\ldots,k\). Without loss of generality, we again take \(\sigma=1\). Define \(\boldsymbol{\beta}=(\beta_{1},\ldots,\beta_{k})\), where \(\beta_{i}=n\theta_{i}\) and \(\eta=\max_{1\leq i\leq k}\{\beta_{i}/r_{i}\}\). The results in this section extend a result of [20] from a single exponential distribution to several exponential distributions.
To propose an improved estimator, we will analyze the conditional risk function
\[\mathcal{F}(d,\mathbf{\beta})=E_{\mathbf{\beta}}\left[L\left(\ln S+d\right)\left|\mathbf{Z }\in\mathcal{B}_{\mathbf{r}}\right]\propto\int_{0}^{\infty}L(\ln v+d)f_{\mathbf{\beta }}(v,\mathbf{r})dv,\]
where
\[f_{\mathbf{\beta}}(v,\mathbf{r})\propto v^{kn-k-1}e^{-v}\prod_{i=1}^{k}\left(e^{\beta_{i}I _{(\beta_{i}>0)}}-e^{-vr_{i}}\right)I_{(v>\eta)}.\]
In the following lemma, we study the properties of the conditional risk function; this is useful for proving the inadmissibility of the MRIE.
**Lemma 3.1**:
1. _For every_ \(r_{i}>0\)_,_ \(i=1,\ldots,k\) _the conditional risk_ \(\mathcal{F}(d,\mathbf{\beta})\) _is a strictly bowl shaped function in_ \(d\)_._
2. _Let_ \(d(\mathbf{r},\boldsymbol{\beta})\) _be the minimizer of_ \(\mathcal{F}(d,\boldsymbol{\beta})\) _and_ \(d(\mathbf{r},\boldsymbol{0})\) _be the minimizer of_ \(\mathcal{F}(d,\boldsymbol{0})\)_. Then for all_ \(\boldsymbol{\beta}\) _we have_ \(d(\mathbf{r},\boldsymbol{\beta})\leq d(\mathbf{r},\boldsymbol{0})\)_._
3. _The function_ \(d(\mathbf{r},\mathbf{0})\) _is non decreasing in_ \(r_{i}\) _for_ \(i=1,\ldots,k\)_._
**Proof: (1)** To prove that \(\mathcal{F}(d,\boldsymbol{\beta})\) is strictly bowl shaped, by Lemma 2.1 of [7] it suffices to show that \(\frac{\Lambda_{\boldsymbol{\beta}}(y-d_{2},\mathbf{r})}{\Lambda_{\boldsymbol{\beta}}(y-d_{1},\mathbf{r})}\) is increasing in \(y\) for given \(0<d_{1}<d_{2}\), where

\[\Lambda_{\boldsymbol{\beta}}(y,\mathbf{r})\propto(e^{y})^{kn-k}e^{-e^{y}}\prod_{i=1}^{k}\left(e^{\beta_{i}I_{(\beta_{i}>0)}}-e^{-r_{i}e^{y}}\right)I_{(e^{y}>\eta)}.\]
Now we have
\[\frac{\Lambda_{\boldsymbol{\beta}}(y-d_{2},\mathbf{r})}{\Lambda_{\boldsymbol{\beta}}(y-d_{1},\mathbf{r})}=\frac{(e^{y-d_{2}})^{kn-k}e^{-e^{y-d_{2}}}\prod_{i=1}^{k}\left(e^{\beta_{i}I_{(\beta_{i}>0)}}-e^{-r_{i}e^{y-d_{2}}}\right)I_{(e^{y-d_{2}}>\eta)}}{(e^{y-d_{1}})^{kn-k}e^{-e^{y-d_{1}}}\prod_{i=1}^{k}\left(e^{\beta_{i}I_{(\beta_{i}>0)}}-e^{-r_{i}e^{y-d_{1}}}\right)I_{(e^{y-d_{1}}>\eta)}}.\]
Using Lemma 6.1 of [20] it can be easily proved that \(\frac{\Lambda_{\mathbf{\beta}}(y-d_{2},\mathbf{r})}{\Lambda_{\mathbf{\beta}}(y-d_{1},\mathbf{ r})}\) is increasing in \(y\) for \(0<d_{1}<d_{2}\).
**(2)** By part (1), \(\mathcal{F}(d,\boldsymbol{\beta})\) is a strictly bowl shaped function in \(d\). Hence there is a unique minimizer, and we have
\[\mathcal{F}^{\prime}(d(\mathbf{r},\mathbf{\beta}),\mathbf{\beta})=0,\ \ \text{for all}\ \mathbf{\beta}. \tag{3.2}\]
Suppose \(d(\mathbf{r},\boldsymbol{\beta})>d(\mathbf{r},\boldsymbol{0})\); then we have

\[\mathcal{F}^{\prime}(d(\mathbf{r},\boldsymbol{\beta}),\boldsymbol{\beta})>\mathcal{F}^{\prime}(d(\mathbf{r},\boldsymbol{0}),\boldsymbol{\beta})>\mathcal{F}^{\prime}(d(\mathbf{r},\boldsymbol{0}),\boldsymbol{0})=0, \tag{3.3}\]

which is a contradiction to (3.2). This proves that \(d(\mathbf{r},\boldsymbol{\beta})\leq d(\mathbf{r},\boldsymbol{0})\); the last inequality in the above expression follows from the fact that \(\frac{\Lambda_{\boldsymbol{\beta}}(y,\mathbf{r})}{\Lambda_{\boldsymbol{0}}(y,\mathbf{r})}\) is increasing in \(y\).
**(3)** Fix \(i\), let \(r_{i}=t\), and denote \(d(\mathbf{r}_{t},\boldsymbol{0})=d((r_{1},\ldots,r_{i-1},t,r_{i+1},\ldots,r_{k}),\boldsymbol{0})\). Now for \(0<t<t_{1}\), by Lemma 6.2 of [20] it can easily be seen that \(\frac{\Lambda_{\boldsymbol{0}}(y,\mathbf{r}_{t_{1}})}{\Lambda_{\boldsymbol{0}}(y,\mathbf{r}_{t})}\) is nondecreasing in \(y\), which implies that \(d(\mathbf{r}_{t},\boldsymbol{0})\) is nondecreasing in \(t\). This proves that \(d(\mathbf{r},\boldsymbol{0})\) is nondecreasing in \(r_{i}\) for \(i=1,\ldots,k\).
\(\Box\)
As a consequence of the above lemma, we have the following dominance result.
**Theorem 3.2**: _The estimator \(T_{d(\mathbf{r},\mathbf{0})}=\zeta_{\mathbf{r}}(\mathbf{Z})+\ln S\) dominates \(T_{0}\) under a general location invariant loss function \(L(t)\), where_
\[\zeta_{\mathbf{r}}(\mathbf{Z})=\left\{\begin{array}{ll}d(\mathbf{r},\mathbf{0}),&\mathbf{Z}\in \mathcal{B}_{\mathbf{r}}\\ \\ q_{0},&\mbox{otherwise}.\end{array}\right. \tag{3.4}\]
Let \(\mathbf{r}^{\prime}=(r_{1}^{\prime},\ldots,r_{k}^{\prime})\) be such that \(r_{i}^{\prime}<r_{i}\) for \(i=1,\ldots,k\), and denote \(\mathcal{B}_{\mathbf{r}^{\prime},\mathbf{r}}=(r_{1}^{\prime},r_{1}]\times\cdots\times(r_{k}^{\prime},r_{k}]\). Define an estimator as below:
\[T_{d,\mathbf{r},\mathbf{r}^{\prime}}(\mathbf{X},S)=\left\{\begin{array}{ll}\ln S+d,&\bm {Z}\in\mathcal{B}_{\mathbf{r}^{\prime}}\\ \\ \ln S+d(\mathbf{r},\mathbf{0}),&\mathbf{Z}\in\mathcal{B}_{\mathbf{r}^{\prime},\mathbf{r}}\\ \\ \ln S+q_{0},&\mbox{otherwise}.\end{array}\right.,\]
Proceeding as above, we can easily prove that the estimator \(T_{d(\mathbf{r}^{\prime},\boldsymbol{0}),\mathbf{r},\mathbf{r}^{\prime}}=\ln S+\zeta_{\mathbf{r},\mathbf{r}^{\prime}}(\boldsymbol{Z})\) dominates \(T_{d(\mathbf{r},\boldsymbol{0})}\) and hence the MRIE \(T_{0}\), where
\[\zeta_{\mathbf{r},\mathbf{r}^{\prime}}(\mathbf{Z})=\left\{\begin{array}{ll}d(\mathbf{r}^{ \prime},\mathbf{0}),&\mathbf{Z}\in\mathcal{B}_{\mathbf{r}^{\prime}}\\ \\ d(\mathbf{r},\mathbf{0}),&\mathbf{Z}\in\mathcal{B}_{\mathbf{r}^{\prime},\mathbf{r}}\\ \\ q_{0},&\mbox{otherwise}\end{array}\right..\]
Similar to [23], we now select, for each \(i=1,2,\ldots,k\), a partition of \([0,\infty)\):

\[0=r_{j,0}^{i}<r_{j,1}^{i}<r_{j,2}^{i}<\cdots<r_{j,m_{j}-1}^{i}<r_{j,m_{j}}^{i}<\infty.\]
Let \(\mathbf{r}_{j,l}=(r^{1}_{j,l},\ldots,r^{k}_{j,l}),\,l=0,\ldots,m_{j}\) and define
\[\zeta_{j}(\mathbf{Z})=\left\{\begin{array}{ll}d\left(\mathbf{r}_{j,1},\mathbf{0}\right),& \mbox{ if }\quad r^{i}_{j,0}<z_{i}\leq r^{i}_{j,1},i=1,\ldots,k\\ \\ d\left(\mathbf{r}_{j,2},\mathbf{0}\right),&\mbox{ if }\quad r^{i}_{j,1}<z_{i}\leq r^{i}_{j,2},i=1, \ldots,k\\ \\ d\left(\mathbf{r}_{j,3},\mathbf{0}\right),&\mbox{ if }\quad r^{i}_{j,2}<z_{i}\leq r^{i}_{j,3},i=1, \ldots,k\\ \\.&\\ d\left(\mathbf{r}_{j,m_{j}},\mathbf{0}\right),&\mbox{ if }\quad r^{i}_{j,m_{j}-1}<z_{i}\leq r^{i}_{j,m_{j}},i=1, \ldots,k\\ \\ q_{0},&\mbox{ otherwise}\end{array}\right.\]
Assume that
\[\max_{1\leq\kappa\leq m_{j}}\left|r^{\nu}_{j,\kappa}-r^{\nu}_{j,\kappa-1} \right|\to 0\mbox{ as }j\rightarrow\infty\mbox{ for }\nu=1,\ldots,k.\]
Then we have \(\zeta_{j}(\boldsymbol{Z})\rightarrow\zeta_{*}(\boldsymbol{Z})\) pointwise as \(j\rightarrow\infty\). Since, for \(j=1,2,\ldots\), the estimator \(T_{j}(\mathbf{X},S)=\ln S+\zeta_{j}(\boldsymbol{Z})\) has smaller risk than that of \(T_{0}\), an application of Fatou's lemma gives the following dominance result.
**Theorem 3.3**: _Define a function of the form_
\[\zeta_{*}(\mathbf{Z})=\left\{\begin{array}{ll}d(\mathbf{Z},\mathbf{0}),&z_{1}>0,\ldots,z_ {k}>0\\ q_{0},&\mbox{ otherwise}\end{array}\right.. \tag{3.5}\]
_Then the estimator \(T_{BZ}(\mathbf{X},S)=\ln S+\zeta_{*}(\mathbf{Z})\) dominates \(T_{0}\) with respect to a general location invariant loss function \(L(t)\)._
Now we consider two special loss functions and derive the smooth improved estimators which have uniformly smaller risk than \(T_{0}\).
**Example 3.1**: _Consider the squared error loss function \(L(t)=t^{2}\). Then from Theorem 3.3 we get the Brewster-Zidek type estimator as_
\[T_{BZ1}(\mathbf{X},S)=\left\{\begin{array}{ll}\ln S+d(\mathbf{Z},\mathbf{0}),&z_{1}>0, \ldots,z_{k}>0\\ \\ \ln S-\psi(nk-k),&\mbox{ otherwise}\end{array}\right.. \tag{3.6}\]
_where_
\[d(\mathbf{z},\mathbf{0})=-\frac{\int_{0}^{\infty}\ln ve^{-v}v^{kn-k-1}\prod_{i=1}^{k}(1-e^ {-vz_{i}})dv}{\int_{0}^{\infty}e^{-v}v^{kn-k-1}\prod_{i=1}^{k}(1-e^{-vz_{i}})dv}\]
_For \(k=2\)_
\[d(\mathbf{z},\mathbf{0})=-\frac{\psi(2n-2)\left[1-(z_{1}+1)^{2-2n}-(z_{2}+1)^{2-2n}+(z_ {1}+z_{2}+1)^{2-2n}\right]+D}{1-(z_{1}+1)^{2-2n}-(z_{2}+1)^{2-2n}+(z_{1}+z_{2}+ 1)^{2-2n}},\]
_where_
\[D=\frac{\ln(1+z_{1})}{(1+z_{1})^{2n-2}}+\frac{\ln(1+z_{2})}{(1+z_{2})^{2n-2}}- \frac{\ln(1+z_{1}+z_{2})}{(1+z_{1}+z_{2})^{2n-2}}\]
**Example 3.2**: _Consider the linear loss function \(L(t)=e^{at}-at-1,a\neq 0\). Then from Theorem 3.3 we get the Brewster-Zidek type estimator as_
\[T_{BZ2}(\mathbf{X},S)=\left\{\begin{array}{ll}\ln S+d(\mathbf{Z},\mathbf{0}),&\quad Z_{1 }>0,\ldots,Z_{k}>0\\ \\ \ln S+\frac{1}{a}\ln\left(\frac{\Gamma(nk-k)}{\Gamma(nk+a-k)}\right),&\quad \mbox{otherwise}\end{array}\right.. \tag{3.7}\]
_where \(d(\mathbf{z},\mathbf{0})=-\frac{1}{a}\ln H(\mathbf{z})\) with_
\[H(\mathbf{z})=\int_{0}^{\infty}e^{-v}v^{kn+a-k-1}\prod_{i=1}^{k}(1-e^{-vz_{i}})dv\]
_For \(k=2\)_
\[H(\mathbf{z})=\Gamma(a+2n-2)\left(1-(z_{1}+1)^{2-2n-a}-(z_{2}+1)^{2-2n-a}+(z_{1}+ z_{2}+1)^{2-2n-a}\right)\]
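Because \(d(\mathbf{z},\boldsymbol{0})\) is a ratio of gamma-type integrals, it can be evaluated numerically for any \(k\). The following sketch (ours, for the squared error case) computes it by quadrature and checks it against the closed form displayed in Example 3.1 for \(k=2\):

```python
# Illustrative sketch: the Brewster-Zidek smoothing constant d(z, 0)
# of Example 3.1, by numerical integration and in closed form for k = 2.
import numpy as np
from scipy.integrate import quad
from scipy.special import digamma

def d_z0(z, n):
    k = len(z)
    w = lambda v: np.exp(-v) * v**(k * n - k - 1) * np.prod([1 - np.exp(-v * zi) for zi in z])
    num, _ = quad(lambda v: np.log(v) * w(v), 0, np.inf)
    den, _ = quad(w, 0, np.inf)
    return -num / den

def d_z0_closed_k2(z1, z2, n):
    # the closed form displayed above for k = 2
    p = lambda x: (x + 1.0)**(2 - 2 * n)
    base = 1 - p(z1) - p(z2) + p(z1 + z2)
    D = (np.log(1 + z1) * p(z1) + np.log(1 + z2) * p(z2)
         - np.log(1 + z1 + z2) * p(z1 + z2))
    return -(digamma(2 * n - 2) * base + D) / base

print(d_z0([0.5, 0.8], n=4))           # the two values should agree
print(d_z0_closed_k2(0.5, 0.8, n=4))
```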
## 4 Bayes estimator
In this section, we consider the Bayes estimation of \(\ln\sigma\). For this purpose, we take the prior distribution
\[\Pi(\mathbf{\theta},\sigma)=\left(\prod_{i=1}^{k}\frac{1}{\sigma}\exp\left(\frac{ \theta_{i}-\mu_{0}}{\sigma}\right)I(\theta_{i}<\mu_{0})\right)\frac{\sigma_{0 }^{\nu+1}}{\Gamma(\nu)\sigma^{\nu+1}}\exp\left(-\frac{\sigma_{0}}{\sigma} \right). \tag{4.1}\]
Then the posterior distribution of \((\boldsymbol{\theta},\sigma)\) is obtained as
\[\Pi(\mathbf{\theta},\sigma|\mathbf{X})=\left(\prod_{i=1}^{k}\frac{1+n_{i}}{ \sigma}\exp\left(\frac{n_{i}+1}{\sigma}(\theta_{i}-\min\{x_{i(1)},\mu_{0}\}) \right)I(\theta_{i}<\min\{x_{i(1)},\mu_{0}\})\right)\] \[\frac{1}{\sigma^{\nu+1+nk}}\exp\left(-\frac{1}{\sigma}(k\mu_{0}+ \sigma_{0}+\sum_{i}\sum_{j}x_{ij})\right).\]
Now, for given \(\sigma\), the parameters \(\theta_{1},\theta_{2},\ldots,\theta_{k}\) are independent with \(\frac{n_{i}+1}{\sigma}\left(\min\{x_{i(1)},\mu_{0}\}-\theta_{i}\right)\sim Exp(1)\). Also, \(\frac{1}{\sigma}(k\mu_{0}+\sigma_{0}+\sum_{i}\sum_{j}x_{ij})\sim Gamma(nk+\nu,1)\).
So the Bayes estimator with respect to the squared error loss function is obtained as

\[T_{\Pi}=E(\ln\sigma|\mathbf{X})=\ln\left(k\mu_{0}+\sigma_{0}+\sum_{i}\sum_{j}x_{ij}\right)-\psi(nk+\nu),\]

where \(\psi(\cdot)\) is the digamma function.
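A direct transcription of this formula into code (our own sketch, assuming equal sample sizes \(n_{i}=n\); the hyperparameters \(\mu_{0},\sigma_{0},\nu\) are user-chosen):

```python
# Illustrative sketch: the Bayes estimate T_Pi of ln(sigma) under squared
# error loss, transcribing the formula above.
import numpy as np
from scipy.special import digamma

def bayes_ln_sigma(samples, mu0, sigma0, nu):
    n, k = len(samples[0]), len(samples)
    c = k * mu0 + sigma0 + sum(s.sum() for s in samples)
    return np.log(c) - digamma(n * k + nu)

rng = np.random.default_rng(4)
samples = [0.2 + rng.exponential(1.0, 6) for _ in range(2)]
print(bayes_ln_sigma(samples, mu0=0.5, sigma0=1.0, nu=2.0))  # estimates ln(sigma) = 0
```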
## 5 Simulation study
Above, we discussed the inadmissibility of the MRIE of \(\ln\sigma\). To prove the inadmissibility, we obtained two improved estimators: one is the Stein-type non-smooth estimator, and the other is the Brewster-Zidek-type smooth improved estimator of the entropy of several exponential distributions. In this section, we study the risk performance of the proposed improved estimators with respect to the squared error loss function by simulation. For this purpose, we generated 20,000 random samples from two exponential distributions with location parameters \(\theta_{1}\), \(\theta_{2}\), and scale parameter 1, of sizes \(n=4,6,8\). In Table 1, we have tabulated the percentage risk improvement (PRI) of the improved estimators with respect to the MRIE \(T_{0}\) for \(n=4\), \(6\), and \(8\). For the simulation study, we considered different values of \(\theta_{1}\) and \(\theta_{2}\). The PRI of an estimator \(T\) with respect to \(T_{0}\) is defined as
\[PRI(T)=\frac{Risk(T_{0})-Risk(T)}{Risk(T_{0})}\times 100\]
From the simulation, we have the following observations.
1. The PRI of \(T_{01}^{*}\) decreases as the values of \(\theta_{1}\) and \(\theta_{2}\) increase. The risk performance of \(T_{01}^{*}\) is better than that of \(T_{BZ1}\) when the values of \(\theta_{1}\) and \(\theta_{2}\) are near zero.

2. The PRI of \(T_{BZ1}\) first increases and then decreases as the values of \(\theta_{1}\) and \(\theta_{2}\) increase. The risk performance of \(T_{BZ1}\) is better than that of \(T_{01}^{*}\) for larger values of \(\theta_{1}\) and \(\theta_{2}\).

3. As \(n\) increases, the PRI of both \(T_{01}^{*}\) and \(T_{BZ1}\) decreases. For large values of \(n\), the performance of \(T_{01}^{*}\) and the MRIE are the same, and the risk performance of \(T_{BZ1}\) is better than that of \(T_{01}^{*}\).

4. From the simulation, we conclude that overall \(T_{BZ1}\) performs better than the other estimators.

Similar observations hold for the linear loss function.
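The simulation design described above can be reproduced along the following lines (our own sketch for \(T_{01}^{*}\) with \(k=2\); exact PRI values will vary slightly with the random seed):

```python
# Illustrative Monte Carlo sketch of the PRI computation of Section 5,
# squared error loss, k = 2, true ln(sigma) = 0.
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(2023)

def pri_T01_star(theta1, theta2, n, reps=20_000):
    k, loss0, loss1 = 2, 0.0, 0.0
    for _ in range(reps):
        xs = [t + rng.exponential(1.0, n) for t in (theta1, theta2)]
        S = sum((x - x.min()).sum() for x in xs)
        z = np.array([n * x.min() for x in xs]) / S
        t0 = np.log(S) - digamma(k * n - k)           # MRIE T_01
        t1 = t0 if not np.all(z > 0) else np.log(S) + min(
            -digamma(k * n - k), np.log(1 + z.sum()) - digamma(k * n))
        loss0 += t0**2
        loss1 += t1**2
    return 100 * (loss0 - loss1) / loss0

print(pri_T01_star(0.1, 0.1, n=4))   # compare with the first cell of Table 1
```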
## 6 Special sampling schemes
In the previous sections, we studied the estimation of \(\ln\sigma\) based on an i.i.d. sample. Here we discuss the same estimation problem under three special sampling schemes, namely (i) record values, (ii) type-II censoring, and (iii) progressive Type-II censoring. Under these sampling schemes, we derive estimators improving upon the MRIE, and we will observe that the results follow from the i.i.d. sampling scheme. [22] studied the problem of estimating the hazard rate under these sampling schemes.
### Record Values
Various applications of the record model have been found in several areas such as sports analysis, hydrology, meteorology and stock market analysis. Several authors have investigated record values because of their importance; for a detailed literature review in this direction, we refer to [1], [2] and [3]. Let \(V_{1},V_{2},V_{3},\dots\) be a sequence of i.i.d. random variables taken from an exponential population \(E(\mu,\sigma)\). For \(m\geq 2\), define \(u(1)=1\) and \(u(m)=\min\{j\,|\,j>u(m-1),V_{j}>V_{u(m-1)}\}\); then
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{\(n\)} & \multicolumn{2}{|c|}{4} & \multicolumn{2}{|c|}{6} & \multicolumn{2}{|c|}{8} \\ \hline \(\theta_{2}\) & \(\theta_{1}\) & \(T_{01}^{*}\) & \(T_{BZ1}\) & \(T_{01}^{*}\) & \(T_{BZ1}\) & \(T_{01}^{*}\) & \(T_{BZ1}\) \\ \hline \multirow{6}{*}{\(0.1\)} & 0.1 & 10.47579 & 0.665244 & 5.23372 & 2.720082 & 2.196369 & 3.563242 \\ \cline{2-9} & 0.2 & 8.177379 & 3.837873 & 2.352601 & 4.995069 & 0.370062 & 5.044513 \\ \cline{2-9} & 0.5 & 2.071238 & 6.912677 & 0.02876744 & 5.791525 & 0 & 4.552717 \\ \cline{2-9} & 0.6 & 1.128674 & 7.042573 & 0.0055315 & 5.500053 & 0 & 4.190413 \\ \cline{2-9} & 0.7 & 0.564448 & 7.004393 & 0.0004631 & 5.185024 & 0 & 3.894156 \\ \cline{2-9} & 0.8 & 0.2685096 & 6.872046 & 0 & 4.888716 & 0 & 3.665736 \\ \hline \multirow{6}{*}{\(0.2\)} & 0.1 & 8.177379 & 3.888181 & 2.352601 & 5.0272 & 0.370062 & 5.054616 \\ \cline{2-9} & 0.2 & 5.611596 & 6.894204 & 0.7650419 & 7.086941 & 0.02789947 & 6.307205 \\ \cline{2-9} & 0.5 & 1.128674 & 9.681164 & 0.0055315 & 7.550914 & 0 & 5.507273 \\ \cline{2-9} & 0.6 & 0.564448 & 9.752206 & 0.0004630 & 7.200807 & 0 & 5.099963 \\ \cline{2-9} & 0.7 & 0.2685096 & 9.665549 & 0 & 6.840881 & 0 & 4.772228 \\ \cline{2-9} & 0.8 & 0.1268464 & 9.492634 & 0 & 6.509755 & 0 & 4.521561 \\ \hline \multirow{6}{*}{\(0.4\)} & 0.1 & 3.523156 & 6.578196 & 0.1702922 & 6.007922 & 0.001014834 & 4.944141 \\ \cline{2-9} & 0.2 & 2.071238 & 9.369283 & 0.02876744 & 7.813017 & 0 & 5.953738 \\ \cline{2-9} & 0.5 & 0.2685096 & 11.74199 & 0 & 7.835906 & 0 & 4.784317 \\ \cline{2-9} & 0.6 & 0.1268464 & 11.72101 & 0 & 7.400792 & 0 & 4.317633 \\ \cline{2-9} & 0.7 & 0.05596951 & 11.5567 & 0 & 6.974144 & 0 & 3.947277 \\ \cline{2-9} & 0.8 & 0.02279446 & 11.31756 & 0 & 6.59024 & 0 & 3.665847 \\ \hline \multirow{6}{*}{\(0.7\)} & 0.1 & 0.564448 & 7.089663 & 0.0004630 & 5.211967 & 0 & 4.476075 \\ \cline{2-9} & 0.2 & 0.2685096 & 9.700335 & 0 & 6.835372 & 0 & 5.686527 \\ \cline{2-9} & 0.5 & 0.02279446 & 11.68276 & 0 & 6.5038 & 0 & 4.685471 \\ \cline{2-9} & 0.6 & 0.009141 & 11.5671 & 0 & 5.993729 & 0 & 4.14633 \\ \cline{2-9} & 0.7 & 0.00295038 & 11.32069 & 0 & 5.506559 & 0 & 3.682823 \\ \cline{2-9} & 0.8 & 0.0008821 & 11.00996 & 0 & 5.073649 & 0 & 3.304135 \\ \hline \end{tabular}
\end{table}
Table 1: Percentage risk improvement with respect to squared error loss function
\(\{X_{m}=V_{u(m)},m\geq 1\}\) gives a sequence of (maximal) record statistics. The sequence \(u(m),m\geq 1\) is called the sequence of record times. Consider the record samples \(Y_{i1},\ldots,Y_{in}\) from \(Exp(\theta_{i},\sigma)\), \(i=1,2,\ldots,k\). Then \((Y_{11},Y_{21},\ldots,Y_{k1},S)\) is a sufficient statistic for \((\theta_{1},\theta_{2},\ldots,\theta_{k},\sigma)\), where \(S=\sum_{i=1}^{k}(Y_{in}-Y_{i1})\sim Gamma(k(n-1),\sigma)\) and \(Y_{i1}\sim Exp(\theta_{i},\sigma)\).
The MRIE of \(\ln\sigma\) is
\[T_{0}^{R}=\ln S+q_{0},\]
where \(q_{0}\) minimizes

\[E_{\boldsymbol{\theta},1}\left[L(\ln S+c)\right].\]
Now define \(z_{i}=\frac{Y_{i1}}{S}\), \(i=1,2,\ldots,k\), and denote \(\boldsymbol{Y}=(Y_{11},Y_{21},\ldots,Y_{k1})\). Then, using Theorem 2.1, we can prove that the estimator
\[T_{\zeta_{0}}^{R}(\boldsymbol{Y},S)=\left\{\begin{array}{ll}\ln S+\min\left\{ q_{0},p_{0}+\ln\left(\sum_{i=1}^{k}z_{i}+1\right)\right\},\quad z_{i}>0,i=1, \ldots,k\\ \\ \ln S+q_{0},\hskip 56.905512pt\mbox{otherwise}\end{array}\right., \tag{6.1}\]
has uniformly smaller risk than that of the MRIE \(T_{0}^{R}\), where \(q_{0}\) and \(p_{0}\) are given as in Theorem 2.1. The estimator \(T_{\zeta_{0}}^{R}(\boldsymbol{Y},S)\) is non-smooth. We now propose a smooth estimator of \(\ln\sigma\) based on record values which dominates the MRIE \(T_{0}^{R}\). By Theorem 3.3, the dominating estimator is obtained as
\[T_{BZ}^{R}(\boldsymbol{Y},S)=\left\{\begin{array}{ll}\ln S+d(\boldsymbol{Z},\boldsymbol{0}),&\quad z_{1}>0,\ldots,z_{k}>0\\ \\ \ln S+q_{0},&\quad\mbox{otherwise}\end{array}\right.. \tag{6.2}\]
where \(d(\boldsymbol{Z},\boldsymbol{0})\) is given in Example 3.1 for the squared error loss function and in Example 3.2 for the linear loss function. We denote by \(T_{01}^{*R}\) and \(T_{BZ1}^{R}\) the improved estimators for the squared error loss function. For the linear loss function, the improved estimators are denoted by \(T_{02}^{*R}\) and \(T_{BZ2}^{R}\).
#### 6.1.1 Simulation study
In this section, we compare the risk performance of the improved estimators based on record values generated from two exponential distributions. For the simulation, 20,000 record samples of sizes \(n=4,6,8\) are generated from two exponential distributions with location parameters \(\theta_{1}\), \(\theta_{2}\), and scale parameter \(1\). We present the percentage risk improvement (PRI) of the improved estimators with respect to the MRIE \(T_{0}^{R}\) for \(n=4\), \(6\), and \(8\) in Table 2. The risk of \(T_{0}^{R}\) is constant, independent of \(\theta_{1}\) and \(\theta_{2}\); its values for \(n=4,6,8\) are 0.183962, 0.1065479 and 0.07516415, respectively. From the simulated values, we have the following observations.
1. The performance of \(T_{01}^{*R}\) and \(T_{BZ1}^{R}\) is better than that of \(T_{01}^{*}\) and \(T_{BZ1}\), respectively.

2. The interval of improvement of \(T_{01}^{*R}\) and \(T_{BZ1}^{R}\) is larger than that of \(T_{01}^{*}\) and \(T_{BZ1}\), respectively.

3. The PRI of \(T_{01}^{*R}\) and \(T_{BZ1}^{R}\) decreases slowly as \(\theta_{1}\) and \(\theta_{2}\) increase.
Similar observations can be made for the linear loss function.
### Type-II censoring
Researchers often encounter reliability and life-testing experiments in which experimental units are lost or removed from the experiment before failure. For example, experimental units may break down accidentally in many industrial experiments, an individual may withdraw from a clinical trial, or the experiment may be terminated due to lack of funds. The experimenter may also intentionally terminate the experiment to save the time and cost associated with testing. Data obtained from such experiments are called censored data. One such censoring scheme is Type-II censoring, in which the experimenter decides to terminate the experiment after a specified number \(r\leq n\) of items fail. For further details on this topic, one may refer to [5].
Let a sample of size \(n\) be drawn from an exponential distribution \(E(\theta_{i},\sigma)\), and let the observations be available in order, that is, \(X_{i(1)}\leq X_{i(2)}\leq\cdots\leq X_{i(n)}\) for \(i=1,2,\ldots,k\), where \(X_{i(j)}\) is the \(j\)th smallest observation in the sample of \(n\) observations taken from the \(E(\theta_{i},\sigma)\) population. Now consider the first \(r\) ordered observations \(X_{i(1)},X_{i(2)},\ldots,X_{i(r)}\), \(r\leq n\), \(i=1,2,\ldots,k\). We consider the estimation of \(\ln\sigma\) based on the censored sample under a bowl-shaped location invariant loss function \(L(t)\). Define \(S=\sum_{j=1}^{k}\left[\sum_{i=1}^{r}(X_{j(i)}-X_{j(1)})+(n-r)(X_{j(r)}-X_{j(1)})\right]\). Then, in this set-up, \((X_{1(1)},X_{2(1)},\ldots,X_{k(1)},S)\) is a minimal sufficient statistic, where, for \(i=1,2,\ldots,k\), \(X_{i}=nX_{i(1)}\) and \(S\) follow the exponential distribution \(E(n\theta_{i},\sigma)\) and the gamma distribution \(Gamma(k(r-1),\sigma)\), respectively. Consequently, improved estimators of \(\ln\sigma\) can be derived using Theorems 2.1 and 3.3.
### Progressive Type-II censoring
Censored lifetime data are common due to wide applications in science, engineering, the social sciences, public health and medicine. There are several censoring schemes; one important scheme is progressive Type-II censoring. Data are progressively Type-II censored when a prefixed number of surviving units is removed each time an individual unit fails. This process continues until a fixed number of failures has occurred, at which stage the remaining surviving individuals are also removed/censored. For details, one can see [29], [5], [4].
Now we describe the progressive Type-II censoring scheme; the description here is similar to [22]. Let \(X_{i1},X_{i2},\ldots,X_{im}\) be the lifetimes of \(m\) independent units placed on a life-testing experiment, with \(X_{ij}\) following an exponential distribution \(E(\theta_{i},\sigma)\), \(i=1,2,\ldots,k\). For \(r=1,2,\ldots,n\), \(n\leq m\), at the time of the \(r\)th failure, a prefixed number \(R_{r}\) of surviving units are withdrawn from the experiment, where \(R_{n}=m-n-R_{1}-R_{2}-\cdots-R_{n-1}\). Let \(X_{i1:n:m}\leq X_{i2:n:m}\leq\cdots\leq X_{in:n:m}\) be the corresponding progressive Type-II censored sample for \(i=1,2,\ldots,k\). We consider the estimation of \(\ln\sigma\) based on the progressive Type-II censored sample. Define \(S=\sum_{i=1}^{k}\sum_{j=1}^{n}\left[(R_{j}+1)(X_{ij:n:m}-X_{i1:n:m})\right]\). In this case, \((X_{11:n:m},X_{21:n:m},\ldots,X_{k1:n:m},S)\) is a minimal sufficient statistic. Define \(X_{i}=mX_{i1:n:m}\), \(i=1,2,\ldots,k\). Then \(X_{i}\) follows an exponential distribution \(E(m\theta_{i},\sigma)\) and \(S\) follows a gamma distribution \(Gamma(k(n-1),\sigma)\). Consequently, the improved estimators of \(\ln\sigma\) can be found by Theorems 2.1 and 3.3.
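For readers who wish to experiment with this scheme, the following sketch (ours, not from the paper) simulates a progressive Type-II censored sample from \(E(\theta,\sigma)\), using the standard fact that the normalized spacings of exponential order statistics are i.i.d. \(Exp(\sigma)\), and forms the statistic \(S\):

```python
# Illustrative sketch: simulating progressive Type-II censored exponential
# samples and the statistic S defined above.
import numpy as np

rng = np.random.default_rng(7)

def progressive_type2_sample(theta, sigma, m, R):
    # R = (R_1, ..., R_n) with sum(R) + n = m.
    alive, t, times = m, theta, []
    for Rj in R:
        t += rng.exponential(sigma) / alive    # waiting time to the next failure
        times.append(t)
        alive -= 1 + Rj                        # one failure + R_j withdrawals
    return np.array(times)

def statistic_S(samples, R):
    # S = sum_i sum_j (R_j + 1)(X_{ij:n:m} - X_{i1:n:m})
    Rp1 = np.asarray(R) + 1
    return sum((Rp1 * (x - x[0])).sum() for x in samples)

R = [1, 0, 2, 1]                               # m = 8 units, n = 4 failures
xs = [progressive_type2_sample(0.5, 1.0, 8, R) for _ in range(2)]
print(statistic_S(xs, R))                      # distributed Gamma(k(n-1), sigma)
```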
## 7 Conclusions
In several areas of applied statistics, such as reliability engineering, molecular biology, finance, information theory and statistical physics, the measure of uncertainty of a probability distribution plays an important role. The Shannon and Rényi entropies are widely used measures of uncertainty. Like the mean, standard deviation, variance and quantiles, entropy is an important characteristic of a parametric family of distributions. In the present manuscript, we dealt with the problem of estimating the entropy of several exponential distributions with respect to a bowl-shaped location invariant loss function. At first, we derived the MRIE based on \(S\). Then, using the information contained in \((\mathbf{X},S)\), we derived estimators which improve upon the MRIE of the entropy \(\ln\sigma\). The techniques of [26] and [7] were adopted to derive the improved estimators. As an application, we derived the improved estimators for the squared error and linear loss functions. We observed that the improved estimators for (i) record values, (ii) type-II censoring and (iii) progressive Type-II censoring can be obtained using the results for i.i.d. sampling. Finally, we conducted a simulation study to compare the risk performance of the proposed estimators numerically. From the simulation, it is seen that the performance of the improved estimators is better for the record sample.
|
2307.06386 | Narayana numbers as product of three repdigits in base $g$ | In this paper, we show that there are only finitely many Narayana numbers
which can be written as a product of three repdigits in base $g$ with $g \geq 2$.
Moreover, for $2 \leq g \leq 10$, we determine all these numbers. | Pagdame Tiebekabe, K. R. Kakanou, H. Ben Yakkou | 2023-07-10T16:07:30Z | http://arxiv.org/abs/2307.06386v1 | # Narayana numbers as product of three repdigits in base \(g\)
###### Abstract
In this paper, we show that there are only finitely many Narayana numbers which can be written as a product of three repdigits in base \(g\) with \(g\geq 2\). Moreover, for \(2\leq g\leq 10\), we determine all these numbers.
**Keywords and phrases**: Narayana numbers, repdigits, linear forms in logarithms, Baker's method, reduction method.
**2020 Mathematics Subject Classification**: 11B39, 11J86, 11Y50, 11D61.
## 1 Introduction
The problem of determining the terms of linear recurrence sequences that can be written as products of repdigits in an arbitrary base has been intensively studied by several researchers in Number Theory. In this article, we consider the third-order linear recurrence sequence of Narayana's cows numbers, defined as follows:
\[\mathcal{N}_{n}=\mathcal{N}_{n-1}+\mathcal{N}_{n-3}\quad\text{for}\quad n\geq 3 \quad\text{with}\quad\mathcal{N}_{0}=0,\mathcal{N}_{1}=\mathcal{N}_{2}=1.\]
For more details on work related to terms of linear recurrence sequences which are repdigits in an arbitrary base, we refer the reader to the recent results [1]-[4].
The concept of Narayana's cows numbers, derived from Indian mythology and Hinduism, holds a significant place in mathematics. These numbers have been extensively studied due to their properties and relationships with other mathematical sequences, and their important applications in various fields such as cryptography, coding theory, and graph theory. In this paper, we delve into a fascinating aspect of Narayana numbers by examining their representation as products of three repdigits in base \(g\) with \(g\geq 2\).
Repdigits, which consist of repeated digits, have garnered attention for their mathematical properties and patterns. In a fixed base \(g\geq 2\), a repdigit has the following form,
\[\sum_{i=0}^{n-1}d\times g^{i}=d\times\frac{g^{n}-1}{g-1},\]
where \(1\leq d\leq g-1\) and \(n\) is a positive integer.
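As a small illustration (ours, not part of the paper), the base-\(g\) repdigits up to a given bound can be enumerated directly from this formula:

```python
def repdigits(g, max_value):
    """All base-g repdigits d * (g**n - 1) // (g - 1) up to max_value."""
    out, n = [], 1
    while (block := (g**n - 1) // (g - 1)) <= max_value:
        out.extend(d * block for d in range(1, g) if d * block <= max_value)
        n += 1
    return sorted(out)

print(repdigits(10, 100))   # [1, ..., 9, 11, 22, ..., 99]
```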
The proofs of our main results are based on a double application of Baker's method and on a reduction algorithm using computations based on continued fractions. The method used to determine the Narayana numbers which are products of three repdigits is similar to that used by Adedji [2] and by Adedji et al. [3].
The present paper is organized as follows: in Section 2, we present our main results; Section 3 is devoted to recalling the results needed for their proofs; and in Section 4, we prove our results.
## 2 Statement of main results
In this section, we state all the main results obtained in this paper.
**Theorem 1**.: _Let \(g\geq 2\) be an integer. Then the Diophantine equation_
\[\mathcal{N}_{k}=d_{1}\frac{g^{\ell}-1}{g-1}\cdot d_{2}\frac{g^{m}-1}{g-1}\cdot d _{3}\frac{g^{n}-1}{g-1} \tag{1}\]
_has only finitely many solutions in integers \(k,d_{1},d_{2},d_{3},\ell,m,n\) such that \(1\leq d_{i}\leq g-1\) for \(i=1,2,3\) and \(n\geq m\geq\ell\geq 1\). Further, we have_
\[n<5.91\times 10^{49}\log^{9}g\quad\text{and}\quad k<4.73\times 10^{50}\log^{10}g.\]
Under the notation and assumptions of Theorem 1, if (1) holds for \(\left(k,d_{1},d_{2},d_{3},\ell,m,n\right)\), then we write

\[\mathcal{N}_{k}=[a,b,c]_{g}=a\times b\times c,\]

where

\[a=d_{1}\times\frac{g^{\ell}-1}{g-1}=\underbrace{\overline{d_{1}\cdots d_{1}}}_{\ell\ \text{times}},\qquad b=d_{2}\times\frac{g^{m}-1}{g-1}=\underbrace{\overline{d_{2}\cdots d_{2}}}_{m\ \text{times}},\qquad c=d_{3}\times\frac{g^{n}-1}{g-1}=\underbrace{\overline{d_{3}\cdots d_{3}}}_{n\ \text{times}},\]

all written in base \(g\). Our second result determines all such representations in small bases.

**Theorem 2**.: _Let \(2\leq g\leq 10\). Then the Narayana numbers \(\mathcal{N}_{k}\) which can be written as a product of three repdigits in base \(g\) are exactly those listed in Table 1._
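Theorem 2 can be checked by a brute-force search over a finite range. The script below is our own verification sketch; the bound `LIMIT` is an assumption that comfortably covers the solutions of Table 1, whose largest entry is \(\mathcal{N}_{16}=189\). The triples are printed as ordinary integers; their base-\(g\) expansions are the repdigits of Table 1.

```python
# Brute-force check of Table 1 (our own sketch, not the paper's code).
from itertools import combinations_with_replacement

def narayana_upto(limit):
    N = [0, 1, 1, 1]
    while N[-1] <= limit:
        N.append(N[-1] + N[-3])
    return {x for x in N if 0 < x <= limit}

def repdigits_upto(g, bound):
    reps, n = [], 1
    while (block := (g**n - 1) // (g - 1)) <= bound:
        reps += [d * block for d in range(1, g) if d * block <= bound]
        n += 1
    return sorted(reps)

LIMIT = 10**6      # assumed search bound; ample for the solutions in Table 1
for g in range(2, 11):
    targets = narayana_upto(LIMIT)
    reps = repdigits_upto(g, LIMIT)
    hits = sorted({(a, b, c) for a, b, c in combinations_with_replacement(reps, 3)
                   if a * b * c in targets})
    print(g, hits)
```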
## 3 Preliminary Results
In this section, we give some notation and recall certain definitions and results required for the proofs of our main results.
### Some properties of Narayana sequence
The Narayana's cows sequence comes from a problem about cows proposed by the Indian mathematician Narayana in the 14th century. In this problem, we assume that there is one cow at the beginning, and each cow produces a calf every year from its fourth year onward; Narayana's cow problem counts the number of cows present each year [5].
The characteristic polynomial of Narayana's cows sequence \(\left\{\mathcal{N}_{n}\right\}_{n\geq 0}\) is
\[\varphi(x)=x^{3}-x^{2}-1.\]
\begin{table}
\begin{tabular}{|c|c|c|} \hline
\(k\) & \(\mathcal{N}_{k}\) & \(\left[a,b,c\right]_{g}\) \\ \hline
\(1,2,3\) & \(1\) & \(\left[1,1,1\right]_{g}\) for \(g=2,\ldots,10\). \\ \hline
\(4\) & \(2\) & \(\left[1,1,2\right]_{g}\) for \(g=3,\ldots,10\). \\ \hline
\(5\) & \(3\) & \(\left[1,1,11\right]_{2}\), \(\left[1,1,3\right]_{g}\) for \(g=4,\ldots,10\). \\ \hline
\(6\) & \(4\) & \(\left[1,1,11\right]_{3}\), \(\left[1,1,4\right]_{g}\) for \(g=5,\ldots,10\), \(\left[1,2,2\right]_{g}\) for \(g=3,\ldots,10\). \\ \hline
\(7\) & \(6\) & \(\left[1,1,11\right]_{5}\), \(\left[1,2,3\right]_{g}\) for \(g=4,\ldots,10\), \(\left[1,1,6\right]_{g}\) for \(g=7,\ldots,10\). \\ \hline
\(8\) & \(9\) & \(\left[1,11,11\right]_{2}\), \(\left[1,1,11\right]_{8}\), \(\left[1,1,9\right]_{10}\), \(\left[1,3,3\right]_{g}\) for \(g=4,\ldots,10\). \\ \hline
\(9\) & \(13\) & \(\left[1,1,111\right]_{3}\). \\ \hline
\(11\) & \(28\) & \(\left[1,1,44\right]_{6}\), \(\left[1,2,22\right]_{6}\), \(\left[1,4,11\right]_{6}\), \(\left[2,2,11\right]_{6}\), \(\left[1,4,7\right]_{g}\) for \(g=8,\ldots,10\), \(\left[2,2,7\right]_{g}\) for \(g=8,\ldots,10\). \\ \hline
\(13\) & \(60\) & \(\left[2,2,33\right]_{4}\), \(\left[2,3,22\right]_{4}\), \(\left[1,1,66\right]_{9}\), \(\left[1,2,33\right]_{9}\), \(\left[1,3,22\right]_{9}\), \(\left[1,6,11\right]_{9}\), \(\left[2,3,11\right]_{9}\), \(\left[2,5,6\right]_{g}\) for \(g=7,\ldots,10\), \(\left[3,4,5\right]_{g}\) for \(g=6,\ldots,10\). \\ \hline
\(14\) & \(88\) & \(\left[1,1,88\right]_{10}\), \(\left[1,2,44\right]_{10}\), \(\left[1,4,22\right]_{10}\), \(\left[1,8,11\right]_{10}\), \(\left[2,2,22\right]_{10}\), \(\left[2,4,11\right]_{10}\). \\ \hline
\(15\) & \(129\) & \(\left[1,1,333\right]_{6}\), \(\left[1,3,111\right]_{6}\). \\ \hline
\(16\) & \(189\) & \(\left[1,11,111111\right]_{2}\), \(\left[1,3,333\right]_{4}\), \(\left[3,3,111\right]_{4}\), \(\left[3,3,33\right]_{6}\), \(\left[1,3,77\right]_{8}\), \(\left[1,7,33\right]_{8}\), \(\left[3,7,11\right]_{8}\), \(\left[3,7,9\right]_{10}\). \\ \hline
\end{tabular}
\end{table}
Table 1: Narayana numbers which are a product of three repdigits in base \(g\), \(2\leq g\leq 10\).
Furthermore, the zeros of \(\varphi(x)\) are
\[\alpha_{\mathcal{N}} =\frac{1}{3}\bigg{(}\sqrt[3]{\frac{1}{2}(29-3\sqrt{93})}+\sqrt[3]{ \frac{1}{2}(3\sqrt{93}+29)}+1\bigg{)},\] \[\beta_{\mathcal{N}} =\frac{1}{3}-\frac{1}{6}\bigg{(}1-i\sqrt{3}\bigg{)}\sqrt[3]{\frac{ 1}{2}(29-3\sqrt{93})}-\frac{1}{6}\bigg{(}1+i\sqrt{3}\bigg{)}\sqrt[3]{\frac{1} {2}(3\sqrt{93}+29)},\] \[\gamma_{\mathcal{N}} =\frac{1}{3}-\frac{1}{6}\bigg{(}1+i\sqrt{3}\bigg{)}\sqrt[3]{\frac{ 1}{2}(29-3\sqrt{93})}-\frac{1}{6}\bigg{(}1-i\sqrt{3}\bigg{)}\sqrt[3]{\frac{1} {2}(3\sqrt{93}+29)}.\]
Then, the Narayana sequence can be obtained by Binet formula
\[\mathcal{N}_{n}=a_{\mathcal{N}}\alpha_{\mathcal{N}}^{n}+b_{\mathcal{N}}\beta_ {\mathcal{N}}^{n}+c_{\mathcal{N}}\gamma_{\mathcal{N}}^{n}. \tag{2}\]
From the three initial values of the Narayana sequence, and using Vieta's theorem, one has
\[a_{\mathcal{N}}=\frac{\alpha_{\mathcal{N}}^{2}}{\alpha_{\mathcal{N}}^{3}+2}, \quad b_{\mathcal{N}}=\frac{\beta_{\mathcal{N}}^{2}}{\beta_{\mathcal{N}}^{3}+ 2},\quad\text{and}\quad c_{\mathcal{N}}=\frac{\gamma_{\mathcal{N}}^{2}}{ \gamma_{\mathcal{N}}^{3}+2} \tag{3}\]
The minimal polynomial of \(a_{\mathcal{N}}\) over \(\mathbb{Z}\) is \(31x^{3}-3x-1\).
Setting \(\Pi(n)=\mathcal{N}_{n}-a_{\mathcal{N}}\alpha_{\mathcal{N}}^{n}=b_{\mathcal{N}} \beta_{\mathcal{N}}^{n}+c_{\mathcal{N}}\gamma_{\mathcal{N}}^{n}\), we notice that
\[\bigg{|}\Pi(n)\bigg{|}<\frac{1}{\alpha_{\mathcal{N}}^{n/2}}\quad\text{for all $n\geq 1$}. \tag{4}\]
We note that the characteristic polynomial has a real zero \(\alpha_{\mathcal{N}}(>1)\) and two complex conjugate zeros \(\beta_{\mathcal{N}}\) and \(\gamma_{\mathcal{N}}\) with \(|\beta_{\mathcal{N}}|=|\gamma_{\mathcal{N}}|<1\). In fact, \(\alpha_{\mathcal{N}}\approx 1.46557\). We also have the following property of \((\mathcal{N}_{n})_{n\geq 0}\).
**Lemma 1**.: _For the sequence \((\mathcal{N}_{n})_{n\geq 0}\), we have_

\[\alpha_{\mathcal{N}}^{n-3}\leq\mathcal{N}_{n}\leq\alpha_{\mathcal{N}}^{n-1},\quad\text{for}\quad n\geq 1.\]
**Proof 1**.: One can easily prove Lemma 1 using induction on \(n\).
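A quick numerical check of the recurrence and of the bounds of Lemma 1 (our own illustration):

```python
# Narayana numbers and the bounds of Lemma 1.
alpha = 1.465571231876768   # real zero of x^3 - x^2 - 1

N = [0, 1, 1]
for _ in range(3, 31):
    N.append(N[-1] + N[-3])

print(N[:15])   # [0, 1, 1, 1, 2, 3, 4, 6, 9, 13, 19, 28, 41, 60, 88]
for n in range(1, 31):
    assert alpha**(n - 3) <= N[n] <= alpha**(n - 1), n
```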
Let \(\mathbb{K}_{\varphi}:=\mathbb{Q}(\alpha_{\mathcal{N}},\beta_{\mathcal{N}})\) be the splitting field of the polynomial \(\varphi\) over \(\mathbb{Q}\). Then \([\mathbb{K}_{\varphi}:\mathbb{Q}]=6\). Furthermore, \([\mathbb{Q}(\alpha_{\mathcal{N}}):\mathbb{Q}]=3\). The Galois group of \(\mathbb{K}_{\varphi}\) over \(\mathbb{Q}\) is given by
\[\mathcal{G}_{\varphi}:=\text{Gal}(\mathbb{K}/\mathbb{Q})\cong\{(1),(\alpha_{ \mathcal{N}}\beta_{\mathcal{N}}),(\alpha_{\mathcal{N}}\gamma_{\mathcal{N}}),( \beta_{\mathcal{N}}\gamma_{\mathcal{N}}),(\alpha_{\mathcal{N}}\beta_{ \mathcal{N}}\gamma_{\mathcal{N}}),(\alpha_{\mathcal{N}}\gamma_{\mathcal{N}} \beta_{\mathcal{N}})\}\cong S_{3}.\]
Thus, we identify the automorphisms of \(\mathcal{G}_{\varphi}\) with the permutations of the zeros of the polynomial \(\varphi\). For example, the permutation \((\alpha_{\mathcal{N}}\beta_{\mathcal{N}})\) corresponds to the automorphism \(\sigma_{\varphi}:\alpha_{\mathcal{N}}\to\beta_{\mathcal{N}},\ \beta_{\mathcal{N}}\to\alpha_{\mathcal{N}},\ \gamma_{\mathcal{N}}\to\gamma_{\mathcal{N}}\).
### Linear forms in logarithms
We begin this subsection with a few reminders about the logarithmic height of an algebraic number. Let \(\eta\) be an algebraic number of degree \(d\), \(a_{0}>0\) be the leading coefficient of its minimal polynomial over \(\mathbb{Z}\) and let \(\eta=\eta^{(1)},\ldots,\eta^{(d)}\) denote its conjugates. The quantity defined by
\[h(\eta)=\frac{1}{d}\bigg{(}\log|a_{0}|+\sum_{j=1}^{d}\log\max(1,|\eta^{(j)}|) \bigg{)}\]
is called the logarithmic height of \(\eta\). Some properties of height are as follows. For \(\eta_{1},\eta_{2}\) algebraic numbers and \(m\in\mathbb{Z}\), we have
\[h(\eta_{1}\pm\eta_{2}) \leq h(\eta_{1})+h(\eta_{2})+\log 2,\] \[h(\eta_{1}\eta_{2}^{\pm 1}) \leq h(\eta_{1})+h(\eta_{2}),\] \[h(\eta_{1}^{m}) =|m|h(\eta_{1}).\]
In particular, if \(\eta=p/q\in\mathbb{Q}\) is a rational number in its reduced form with \(q>0\), then \(h(\eta)=\log(\max\{|p|,q\})\).
We can now present the famous result of Matveev used in this study. Let \(\mathbb{L}\) be a real number field of degree \(d_{\mathbb{L}}\), \(\eta_{1},\ldots,\eta_{s}\in\mathbb{L}\) and \(b_{1},\ldots,b_{s}\in\mathbb{Z}\setminus\{0\}\). Let \(B\geq\max\{|b_{1}|,\ldots,|b_{s}|\}\) and
\[\Lambda=\eta_{1}^{b_{1}}\cdots\eta_{s}^{b_{s}}-1.\]
Let \(A_{1},\ldots,A_{s}\) be real numbers such that
\[A_{i}\geq\max\{d_{\mathbb{L}}h(\eta_{i}),|\log\eta_{i}|,0.16\},\quad i=1, \ldots,s.\]
With the above notation, Matveev [6] proved the following result.
**Theorem 3**.: _Assume that \(\Lambda\neq 0\). Then_
\[\log|\Lambda|>-1.4\cdot 30^{s+3}\cdot s^{4.5}\cdot d_{\mathbb{L}}^{2}\cdot(1+ \log d_{\mathbb{L}})\cdot(1+\log B)\cdot A_{1}\cdots A_{s}.\]
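Matveev's lower bound is a purely numerical quantity once \(s\), \(d_{\mathbb{L}}\), \(B\) and the \(A_{i}\) are fixed. The helper below is ours (the names are hypothetical); as a sanity check, it reproduces the constant \(5.6\times 10^{13}\) appearing in the treatment of \(\Gamma_{1}\) in the proof of Theorem 1:

```python
# Illustrative helper: the numerical value of Matveev's bound (Theorem 3).
import math

def matveev_log_lower_bound(s, dL, B, A):
    """log|Lambda| > -1.4 * 30**(s+3) * s**4.5 * dL**2
       * (1 + log dL) * (1 + log B) * A_1 * ... * A_s."""
    C = 1.4 * 30**(s + 3) * s**4.5 * dL**2 * (1 + math.log(dL))
    return -C * (1 + math.log(B)) * math.prod(A)

log_alpha = math.log(1.465571231876768)
g, n = 2, 100
A = [18 * math.log(g), log_alpha, 3 * math.log(g)]
b = matveev_log_lower_bound(3, 3, 8 * n * math.log(g), A)
print(b / ((1 + math.log(8 * n * math.log(g))) * math.log(g)**2))  # ~ -5.6e13
```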
We also need the following result from Sanchez and Luca [7].
**Lemma 2**.: _Let \(r\geq 1\) and \(H>0\) be such that \(H>(4r^{2})^{r}\) and \(H>L/(\log L)^{r}\). Then_
\[L<2^{r}H(\log H)^{r}.\]
### Reduction method
The bounds on the variables obtained via Baker's theory [8] are too large for any computational purposes. To reduce the bounds, we use the reduction method due to Dujella and Pethő [9, Lemma 5a]. For a real number \(X\), \(\|X\|:=\min\{|X-n|:n\in\mathbb{Z}\}\) stands for the distance from \(X\) to the nearest integer.
**Lemma 3**.: _Let \(M\) be a positive integer, \(p/q\) be a convergent of the continued fraction expansion of an irrational number \(\tau\) such that \(q>6M\), and \(A,B,\mu\) be some real numbers with \(A>0\) and \(B>1\). Furthermore, let_
\[\varepsilon:=\|\mu q\|-M\cdot\|\tau q\|.\]
_If \(\varepsilon>0\), then there is no solution to the inequality_
\[0<|u\tau-v+\mu|<AB^{-w} \tag{5}\]
_in positive integers \(u,v\) and \(w\) with_
\[u\leq M\text{ and }w\geq\frac{\log(Aq/\varepsilon)}{\log B}.\]
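In practice, Lemma 3 is applied with a high-precision rational approximation of \(\tau\) and successive convergents of its continued fraction, exactly as done in the proofs below (where the computations were carried out in _Mathematica_). The following Python sketch with exact rational arithmetic is our own illustration; `tau` and `mu` are assumed to be `Fraction` approximations of the corresponding real numbers:

```python
# Illustrative sketch of the Dujella-Petho reduction (Lemma 3).
from fractions import Fraction
import math

def convergents(x, count):
    """First `count` continued-fraction convergents p/q of a Fraction x > 0."""
    p0, p1, q0, q1, out = 0, 1, 1, 0, []
    while len(out) < count:
        a = x.numerator // x.denominator
        p0, p1 = p1, a * p1 + p0
        q0, q1 = q1, a * q1 + q0
        out.append(Fraction(p1, q1))
        if x == a:            # rational x: the expansion terminates
            break
        x = 1 / (x - a)
    return out

def dist_nearest_int(x):
    return abs(x - round(x))  # ||x|| for a Fraction x

def reduced_bound(tau, mu, M, A, B, max_conv=300):
    """Lemma 3: the reduced bound on w, or None if no convergent works."""
    for pq in convergents(tau, max_conv):
        q = pq.denominator
        if q > 6 * M:
            eps = dist_nearest_int(mu * q) - M * dist_nearest_int(tau * q)
            if eps > 0:
                return math.floor(math.log(A * q / float(eps)) / math.log(B))
    return None
```

Since here \(M\approx 2\times 10^{54}\), the rational approximations of \(\tau\) and \(\mu\) must carry on the order of 120 decimal digits for the convergent denominators \(q>6M\) to be meaningful.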
## 4 Proofs of main results
### Proof of Theorem 1
To prove Theorem 1, we will use the following lemma which provides a relation on the size of \(k\) versus \(n\) and \(g\).
**Lemma 4**.: _All solutions of the Diophantine equation (1) satisfy_
\[k<8n\log g.\]
**Proof 2**.: From (1), we have
\[\alpha_{\mathcal{N}}^{k-3}\leq\mathcal{N}_{k}=d_{1}\frac{g^{\ell}-1}{g-1}\cdot d_{2}\frac{g^{m}-1}{g-1}\cdot d_{3}\frac{g^{n}-1}{g-1}\leq(g^{n}-1)^{3}<g^{3n}.\]

Taking logarithms on both sides, we get \((k-3)\log\alpha_{\mathcal{N}}<3n\log g\). Since \(n\geq 2\) and \(g\geq 2\), we obtain the desired inequality. This ends the proof.
**Proof 3** (Proof of Theorem 1).: If \(n=1\), then \(\ell=m=1\). So, equation (1) becomes
\[\mathcal{N}_{k}=d_{1}d_{2}d_{3}\]
which implies
\[\alpha_{\mathcal{N}}^{k-3}\leq\left(g-1\right)^{3}\]

which leads to

\[k<3+3\frac{\log g}{\log\alpha_{\mathcal{N}}}.\]
Now, suppose \(n\geq 2\). From (1) and (2), we have
\[\mathcal{N}_{k}=a_{\mathcal{N}}\alpha_{\mathcal{N}}^{k}+b_{\mathcal{N}}\beta_ {\mathcal{N}}^{k}+c_{\mathcal{N}}\gamma_{\mathcal{N}}^{k}=d_{1}\frac{g^{\ell} -1}{g-1}\cdot d_{2}\frac{g^{m}-1}{g-1}\cdot d_{3}\frac{g^{n}-1}{g-1}.\]
Which implies
\[\begin{split} a_{\mathcal{N}}\alpha_{\mathcal{N}}^{k}-\frac{d_{1 }d_{2}d_{3}g^{\ell+m+n}}{(g-1)^{3}}&=-\frac{d_{1}d_{2}d_{3} \bigg{(}g^{\ell+m}+g^{\ell+n}+g^{m+n}\bigg{)}}{(g-1)^{3}}\\ &+\frac{d_{1}d_{2}d_{3}\bigg{(}g^{l}+g^{m}+g^{n}\bigg{)}}{(g-1)^ {3}}-\frac{d_{1}d_{2}d_{3}}{(g-1)^{3}}-\Pi(k).\end{split} \tag{6}\]
Taking the absolute values of both sides of (6) and using (4), we get
\[\begin{split}\bigg{|}a_{\mathcal{N}}\alpha_{\mathcal{N}}^{k}- \frac{d_{1}d_{2}d_{3}g^{\ell+m+n}}{(g-1)^{3}}\bigg{|}&<\frac{d_{ 1}d_{2}d_{3}\bigg{(}g^{\ell+m}+g^{\ell+n}+g^{m+n}\bigg{)}}{(g-1)^{3}}\\ &+\frac{d_{1}d_{2}d_{3}\bigg{(}g^{l}+g^{m}+g^{n}\bigg{)}}{(g-1)^ {3}}+\frac{d_{1}d_{2}d_{3}}{(g-1)^{3}}+\frac{1}{\alpha_{\mathcal{N}}^{k/2}}. \end{split} \tag{7}\]
Multiplying both sides of (7) by \(\dfrac{(g-1)^{3}}{d_{1}d_{2}d_{3}g^{\ell+n+m}}\) and noticing the fact that \(\ell\leq m\leq n\), we get the inequality
\[\bigg{|}\dfrac{(g-1)^{3}\cdot a_{\mathcal{N}}\alpha_{\mathcal{N}} ^{k}\cdot g^{-(\ell+n+m)}}{d_{1}d_{2}d_{3}}-1\bigg{|} <\dfrac{1}{g^{\ell}}+\dfrac{1}{g^{m}}+\dfrac{1}{g^{n}}+\dfrac{1}{ g^{\ell+m}}+\dfrac{1}{g^{\ell+n}}\] \[+\dfrac{1}{g^{m+n}}+\dfrac{1}{g^{\ell+m+n}}+\dfrac{(g-1)^{3}}{ \alpha_{\mathcal{N}}^{k/2}d_{1}d_{2}d_{3}g^{\ell+n+m}}\] \[<8\cdot g^{-\ell}.\]
So, we get
\[\bigg{|}\dfrac{a_{\mathcal{N}}(g-1)^{3}}{d_{1}d_{2}d_{3}}\cdot \alpha_{\mathcal{N}}^{k}\cdot g^{-(\ell+n+m)}-1\bigg{|}<8\cdot g^{-\ell}. \tag{8}\]
We put
\[\Gamma_{1}:=\dfrac{a_{\mathcal{N}}(g-1)^{3}}{d_{1}d_{2}d_{3}}\cdot\alpha_{ \mathcal{N}}^{k}\cdot g^{-(\ell+n+m)}-1.\]
Let us show that \(\Gamma_{1}\neq 0\). We proceed by contradiction. Assume that \(\Gamma_{1}=0\). Then
\[a_{\mathcal{N}}\alpha_{\mathcal{N}}^{k}=\dfrac{d_{1}d_{2}d_{3}}{(g-1)^{3}} \cdot g^{\ell+m+n}\]
which implies
\[\sigma_{\varphi}\bigg{(}a_{\mathcal{N}}\alpha_{\mathcal{N}}^{k}\bigg{)}=b_{ \mathcal{N}}\beta_{\mathcal{N}}^{k}=\dfrac{d_{1}d_{2}d_{3}}{(g-1)^{3}}\cdot g^ {\ell+m+n}.\]
Taking the absolute value, we get
\[\bigg{|}b_{\mathcal{N}}\beta_{\mathcal{N}}^{k}\bigg{|}=\bigg{|}\dfrac{d_{1}d_{ 2}d_{3}}{(g-1)^{3}}\cdot g^{\ell+m+n}\bigg{|}.\]
We have \(\left|b_{\mathcal{N}}\beta_{\mathcal{N}}^{k}\right|<1\), whereas \(\left|\frac{d_{1}d_{2}d_{3}}{(g-1)^{3}}\cdot g^{\ell+m+n}\right|>1\) since \(1\leq\ell\leq m\leq n\), which leads to a contradiction. Hence \(\Gamma_{1}\neq 0\).
In order to apply Matveev's result to \(\Gamma_{1}\), set
\[t:=3,\quad\eta_{1}:=\dfrac{a_{\mathcal{N}}(g-1)^{3}}{d_{1}d_{2} d_{3}},\quad\eta_{2}:=\alpha_{\mathcal{N}},\quad\eta_{3}:=g,\] \[b_{1}:=1,\quad b_{2}:=k,\quad b_{3}:=-(\ell+m+n),\]
and \(\mathbb{K}:=\mathbb{Q}(\eta_{1},\eta_{2},\eta_{3})=\mathbb{Q}(\alpha_{ \mathcal{N}})\) which is a real number field of degree \(d_{\mathbb{K}}=3\).
Using properties of the logarithmic height, we get
\[h(\eta_{2})=h(\alpha_{\mathcal{N}})=\dfrac{\log\alpha_{\mathcal{N}}}{3},\quad h (\eta_{3})=h(g)=\log g\]
and
\[h(\eta_{1})=h\bigg{(}\frac{a_{\mathcal{N}}(g-1)^{3}}{d_{1}d_{2}d_{3}}\bigg{)}\leq h(a_{\mathcal{N}})+h\bigg{(}\frac{(g-1)^{3}}{d_{1}d_{2}d_{3}}\bigg{)}\leq\frac{1}{3}\log 31+\log\Big{(}\max\big{\{}(g-1)^{3},d_{1}d_{2}d_{3}\big{\}}\Big{)}<3\log(g)+2<6\log g\quad\text{since }g\geq 2,\]

where we used that \(a_{\mathcal{N}}\) has minimal polynomial \(31x^{3}-3x-1\) and all of its conjugates lie inside the unit circle, so that \(h(a_{\mathcal{N}})=\frac{1}{3}\log 31\).
Thus, we can take
\[A_{1}=18\log(g),\quad A_{2}=\log\alpha_{\mathcal{N}},\quad\text{and }A_{3}=3\log g.\]
By Lemma 4, we have \(k<8n\log g\), so we put \(B=8n\log g\).
Using Theorem 3, we see that
\[\log|\Gamma_{1}|> -1.4\times 30^{6}\times 3^{4.5}\times 3^{2}(1+\log 3)(1+\log(8n\log g))\] \[\qquad\times(18\log(g))(3\log g\log\alpha_{\mathcal{N}})\] \[>-5.6\times 10^{13}(1+\log(8n\log g))(\log^{2}g).\]
Comparing the above inequality with (8), we obtain that
\[\ell\log g-\log 8<5.6\times 10^{13}(1+\log(8n\log g))(\log^{2}g).\]
Since \(g\geq 2\) and \(n\geq 2\), we have
\[1+\log(8n\log g)<8\log(n\log g)\]
so we get
\[\ell<4.5\times 10^{14}\log n\log^{2}g.\]
Rewriting (1), we get
\[\frac{a_{\mathcal{N}}\alpha_{\mathcal{N}}^{k}(g-1)}{d_{1}(g^{\ell}-1)}+\frac {\Pi(k)(g-1)}{d_{1}(g^{\ell}-1)}=\frac{d_{2}d_{3}}{(g-1)^{2}}\bigg{(}g^{n+m}- g^{m}-g^{n}+1\bigg{)},\]
which implies
\[\frac{a_{\mathcal{N}}\alpha_{\mathcal{N}}^{k}(g-1)}{d_{1}(g^{\ell}-1)}-\frac {d_{2}d_{3}g^{n+m}}{(g-1)^{2}}=-\frac{\Pi(k)(g-1)}{d_{1}(g^{\ell}-1)}-\frac{ d_{2}d_{3}g^{m}}{(g-1)^{2}}-\frac{d_{2}d_{3}g^{n}}{(g-1)^{2}}+\frac{d_{2}d_{3}}{(g-1)^ {2}}. \tag{9}\]
Taking the absolute values of both sides of (9), we have
\[\bigg{|}\frac{a_{\mathcal{N}}\alpha_{\mathcal{N}}^{k}(g-1)}{d_{1}(g^{\ell}-1 )}-\frac{d_{2}d_{3}g^{n+m}}{(g-1)^{2}}\bigg{|}<\frac{(g-1)}{d_{1}(g^{\ell}-1) \alpha_{\mathcal{N}}^{k/2}}+\frac{d_{2}d_{3}g^{m}}{(g-1)^{2}}+\frac{d_{2}d_{3 }g^{n}}{(g-1)^{2}}+\frac{d_{2}d_{3}}{(g-1)^{2}}.\]
Dividing both sides of the inequality above by \(\frac{d_{2}d_{3}g^{n+m}}{(g-1)^{2}}\) and using the fact that \(n\geq 2\), we see that
\[\bigg{|}\frac{(g-1)^{3}}{d_{1}d_{2}d_{3}(g^{\ell}-1)}\cdot a_{ \mathcal{N}}\alpha_{\mathcal{N}}^{k}\cdot g^{-(n+m)}-1\bigg{|} \leq\frac{(g-1)^{3}}{d_{1}d_{2}d_{3}(g^{\ell}-1)\alpha_{ \mathcal{N}}^{k/2}g^{n+m}}+\frac{1}{g^{n}}+\frac{1}{g^{m}}+\frac{1}{g^{n+m}}\] \[<4\cdot g^{-m}.\]
Then, we have
\[\bigg{|}\frac{(g-1)^{3}}{d_{1}d_{2}d_{3}(g^{\ell}-1)}\cdot a_{ \mathcal{N}}\alpha_{\mathcal{N}}^{k}\cdot g^{-(n+m)}-1\bigg{|}<\frac{4}{g^{m}}. \tag{10}\]
We put
\[\Gamma_{2}=\frac{(g-1)^{3}}{d_{1}d_{2}d_{3}(g^{\ell}-1)}\cdot a_{ \mathcal{N}}\alpha_{\mathcal{N}}^{k}\cdot g^{-(n+m)}-1.\]
One can check that \(\Gamma_{2}\neq 0\), proceeding as we do for \(\Gamma_{1}\). Let us apply Matveev's result for \(\Gamma_{2}\). Let
\[t:=3,\quad\eta_{1}:=\frac{(g-1)^{3}}{d_{1}d_{2}d_{3}(g^{\ell}-1)} \cdot a_{\mathcal{N}},\quad\eta_{2}:=\alpha_{\mathcal{N}},\quad\eta_{3}:=g,\] \[b_{1}:=1,\quad b_{2}:=k,\quad b_{3}:=-(m+n),\]
and \(\mathbb{K}:=\mathbb{Q}(\eta_{1},\eta_{2},\eta_{3})=\mathbb{Q}(\alpha_{ \mathcal{N}})\) of degree \(d_{\mathbb{K}}=3\). By Lemma 4, we have \(k<8n\log g\), so we put \(B=8n\log g\). We have
\[h(\eta_{2})=h(\alpha_{\mathcal{N}})=\frac{\log\alpha_{\mathcal{N}}}{3},\quad h (\eta_{3})=h(g)=\log g,\]
and
\[h(\eta_{1})=h\bigg{(}\frac{(g-1)^{3}}{d_{1}d_{2}d_{3}(g^{\ell}-1)}\cdot a_{\mathcal{N}}\bigg{)}\leq h(a_{\mathcal{N}})+h\bigg{(}\frac{(g-1)^{3}}{d_{1}d_{2}d_{3}}\bigg{)}+h\bigg{(}\frac{1}{g^{\ell}-1}\bigg{)}\leq\frac{1}{3}\log 31+\log\Big{(}\max\big{\{}(g-1)^{3},d_{1}d_{2}d_{3}\big{\}}\Big{)}+\log(g^{\ell}-1)<2+3\log(g-1)+\log(g^{\ell}-1)<(3+\ell)\log(g)+2<(6+\ell)\log g\quad\text{since }g\geq 2.\]
Thus, we can take
\[A_{1}=(18+3\ell)\log(g),\quad A_{2}=\log\alpha_{\mathcal{N}},\quad\text{and $A_{3}=3\log g$}.\]
Using Theorem 3, we see that
\[\log|\Gamma_{2}|> -1.4\times 30^{6}\times 3^{4.5}\times 3^{2}(1+\log 3)(1+\log(8n \log g))\] \[\quad\times((18+3\ell)\log(g))(3\log g\log\alpha_{\mathcal{N}})\] \[>-3.1\times 10^{12}(1+\log(8n\log g))(\log^{2}g)(18+3\ell).\]
Comparing with (10), we get
\[m\log g-\log 4<3.1\times 10^{12}(1+\log(8n\log g))(\log^{2}g)(18+3\ell).\]
We have
\[1+\log(8n\log g)<8\log n\log g\quad\text{and}\quad\ell<4.5\times 10^{14}\log n \log^{2}g.\]
So,
\[m<3.8\times 10^{28}\log^{2}n\log^{4}g.\]
Reorganizing (1), we get
\[\frac{d_{3}g^{n}}{g-1}-\frac{(g-1)^{2}\cdot a_{\mathcal{N}}\alpha_{\mathcal{N }}^{k}}{d_{1}d_{2}(g^{\ell}-1)(g^{m}-1)}=\frac{d_{3}}{g-1}+\frac{\Pi(k)(g-1)^{ 2}}{d_{1}d_{2}(g^{\ell}-1)(g^{m}-1)}.\]
We have
\[\left|\frac{d_{3}g^{n}}{g-1}-\frac{(g-1)^{2}\cdot a_{\mathcal{N}}\alpha_{ \mathcal{N}}^{k}}{d_{1}d_{2}(g^{\ell}-1)(g^{m}-1)}\right|<\frac{d_{3}}{g-1}+ \frac{(g-1)^{2}}{\alpha_{\mathcal{N}}^{k/2}d_{1}d_{2}(g^{\ell}-1)(g^{m}-1)}\]
by taking the absolute values of both sides of the above equality. Dividing both sides of this inequality by \(d_{3}g^{n}/(g-1)\) and using the fact that \(n\geq 2\), we see that
\[\left|1-\frac{a_{\mathcal{N}}(g-1)^{3}}{d_{1}d_{2}d_{3}(g^{\ell}-1) (g^{m}-1)}\cdot g^{-n}\cdot\alpha_{\mathcal{N}}^{k}\right| <\frac{1}{g^{n}}+\frac{1}{g^{n-1}}<\frac{2}{g^{n-1}}.\] \[<2\cdot g^{1-n}.\]
Then, we have
\[\left|\frac{a_{\mathcal{N}}(g-1)^{3}}{d_{1}d_{2}d_{3}(g^{\ell}-1) (g^{m}-1)}\cdot g^{-n}\cdot\alpha_{\mathcal{N}}^{k}-1\right|<2\cdot g^{1-n}. \tag{11}\]
We put
\[\Gamma_{3}=\frac{a_{\mathcal{N}}(g-1)^{3}}{d_{1}d_{2}d_{3}(g^{ \ell}-1)(g^{m}-1)}\cdot g^{-n}\cdot\alpha_{\mathcal{N}}^{k}-1.\]
One can verify that \(\Gamma_{3}\neq 0\), proceeding as for \(\Gamma_{1}\). Let us now apply Matveev's result to \(\Gamma_{3}\). Let
\[t:=3,\quad\eta_{1}:=\frac{a_{\mathcal{N}}(g-1)^{3}}{d_{1}d_{2}d _{3}(g^{\ell}-1)(g^{m}-1)},\quad\eta_{2}:=\alpha_{\mathcal{N}},\quad\eta_{3}:=g,\] \[b_{1}:=1,\quad b_{2}:=k,\quad b_{3}:=-n,\]
and \(\mathbb{K}:=\mathbb{Q}(\eta_{1},\eta_{2},\eta_{3})=\mathbb{Q}(\alpha_{ \mathcal{N}})\) of degree \(d_{\mathbb{K}}=3\). By Lemma 4, we have \(k<8n\log g\), so we put \(B=8n\log g\). We have
\[h(\eta_{2})=h(\alpha_{\mathcal{N}})=\frac{\log\alpha_{\mathcal{N}}}{3},\quad h (\eta_{3})=h(g)=\log g,\]
and
\[h(\eta_{1})=h\bigg{(}\frac{a_{\mathcal{N}}(g-1)^{3}}{d_{1}d_{2}d_{3}(g^{\ell}-1)(g^{m}-1)}\bigg{)}\leq\frac{1}{3}\log 31+\log\Big{(}\max\big{\{}(g-1)^{3},d_{1}d_{2}d_{3}\big{\}}\Big{)}+\log(g^{\ell}-1)+\log(g^{m}-1)<(3+\ell+m)\log(g)+2<(6+\ell+m)\log g\quad\text{since }g\geq 2.\]
Thus, we can take
\[A_{1}=3(6+\ell+m)\log\biggl{(}g\biggr{)},\quad A_{2}=\log\alpha_{\mathcal{N}},\quad\text{and $A_{3}=3\log g$}.\]
Using Theorem 3, we see that
\[\log\biggl{|}\Gamma_{3}\biggr{|} >-1.4\times 30^{6}\times 3^{4.5}\times 3^{3}(1+\log 3)(1+\log(8n \log g))\] \[\qquad\times((6+\ell+m)\log\biggl{(}g\biggr{)})(3\log g\log\alpha _{\mathcal{N}})\] \[>-9.31\times 10^{12}(1+\log(8n\log g))(\log^{2}g)(6+\ell+m).\]
Comparing with (11), we get
\[(n-1)\log g-\log 2<9.31\times 10^{12}(1+\log(8n\log g))(\log^{2}g)(6+\ell+m)\]
We have
\[1+\log(8n\log g)<8\log n\log g,\quad m<3.8\times 10^{28}\times \log^{2}n\log^{4}g\] \[\text{and}\quad\ell<4.5\times 10^{14}\log n\log^{2}g.\]
Thus
\[6+\ell+m <4.51\times 10^{14}\log n\log^{2}g+3.8\times 10^{28}\times\log^{2}n \log^{4}g\] \[<4\times 10^{28}\times\log^{2}n\log^{4}g.\]
So, we have
\[n<3\times 10^{42}\log^{3}n\log^{6}g.\]
Now we apply Lemma 2, by setting
\[r:=3,\quad L:=n\quad\text{and}\quad H:=3\times 10^{42}\cdot\log^{6}g,\]
we get
\[n <2^{3}\cdot 3\times 10^{42}\cdot\log^{6}g\times\log^{3}\!\left(3 \times 10^{42}\cdot\log^{6}g\right)\] \[<2.4\times 10^{43}\cdot\log^{6}g\cdot(95.6+6\log\log g)^{3}\] \[<5.91\times 10^{49}\log^{9}g.\]
Notice that we have used the inequality \(95.6+6\log\log g<135\log g\) which holds since \(g\geq 2\).
### Proof of Theorem 2
Since \(2\leq g\leq 10\), according to Theorem 1, we have
\[\ell\leq m\leq n<1.08\times 10^{53}\quad\text{and}\quad k<1.99\times 10^{54}.\]
Consequently, the next step is to reduce the above upper bounds in order to identify the range in which the possible solutions of (1) lie. To do this, we proceed in three steps.
**Step 1.**
Using (8), let
\[\Lambda_{1}:=-\log(\Gamma_{1}+1)=(\ell+m+n)\log g-k\log\alpha_{\mathcal{N}}- \log\!\left(\frac{(g-1)^{3}a_{\mathcal{N}}}{d_{1}d_{2}d_{3}}\right)\!.\]
Notice that (8) can be rewritten as
\[\left|e^{-\Lambda_{1}}-1\right|<\frac{8}{g^{\ell}}\]
Observe that \(\Lambda_{1}\neq 0\), since \(e^{-\Lambda_{1}}-1=\Gamma_{1}\neq 0\). Assume that \(\ell\geq 5\). Then
\[\left|e^{-\Lambda_{1}}-1\right|<\frac{8}{g^{\ell}}<\frac{1}{2}.\]
Since \(|x|<2\left|e^{x}-1\right|\) holds whenever \(|x|<\frac{1}{2}\), it follows that
\[\left|\Lambda_{1}\right|<\dfrac{16}{g^{\ell}}.\]
Substituting \(\Lambda_{1}\) by its value in the above inequality and dividing through by \(\log\alpha_{\mathcal{N}}\), we get

\[\left|(\ell+m+n)\bigg{(}\frac{\log g}{\log\alpha_{\mathcal{N}}}\bigg{)}-k-\frac{\log\bigg{(}\frac{(g-1)^{3}a_{\mathcal{N}}}{d_{1}d_{2}d_{3}}\bigg{)}}{\log\alpha_{\mathcal{N}}}\right|<\frac{16}{g^{\ell}\log\alpha_{\mathcal{N}}}.\]
Then, we can apply Lemma 3 with the data
\[\tau:=\frac{\log g}{\log\alpha_{\mathcal{N}}},\quad\mu:=-\frac{\log\bigg{(}\frac{(g-1)^{3}a_{\mathcal{N}}}{d_{1}d_{2}d_{3}}\bigg{)}}{\log\alpha_{\mathcal{N}}},\quad A:=\frac{16}{\log\alpha_{\mathcal{N}}},\]
\[B:=g,\quad\text{and}\quad w:=\ell\quad\text{with}\quad 1\leq d_{1}\leq d_{2}\leq d_{3}\leq g-1.\]
We can take \(M:=1.99\times 10^{54}\), since \(k<8n\log g<1.99\times 10^{54}\). For the remainder of the proof, we use _Mathematica_ to apply Lemma 3. In the computations, if the first convergent \(q_{t}\) with \(q_{t}>6M\) does not satisfy the condition \(\varepsilon>0\), then we use the next convergent until we find one that does. The results are collected in Table 2.
Therefore
\[1\leq\ell\leq\dfrac{\log((16/\log\alpha_{\mathcal{N}})\cdot q_{118}/0.36)}{ \log 2}\leq 194.\]
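For readers who wish to reproduce this reduction, the sketch below outlines the computation, assuming Lemma 3 is the usual Dujella-Petho variant of the Baker-Davenport reduction: if \(q\) is the denominator of a convergent of \(\tau\) with \(q>6M\) and \(\varepsilon:=\|\mu q\|-M\|\tau q\|>0\) (where \(\|\cdot\|\) denotes the distance to the nearest integer), then \(|w\tau-k+\mu|<A\cdot B^{-w}\) forces \(w<\log(Aq/\varepsilon)/\log B\). The precision setting and the continued-fraction routine are illustrative choices; the paper's computations were done in _Mathematica_.

```python
# Hedged Python/mpmath sketch of the Baker-Davenport reduction used in
# Steps 1-3, under the assumed statement of Lemma 3 described above.
from mpmath import mp, mpf, log, nint, floor

mp.dps = 250  # working precision must exceed the ~55 digits of M

def dist_to_nearest_int(x):
    return abs(x - nint(x))

def convergent_denominators(tau, count=200):
    """Denominators q_t of the continued-fraction convergents of tau."""
    qs, q1, q2, x = [], mpf(0), mpf(1), tau   # seeds q_{-1} = 0, q_{-2} = 1
    for _ in range(count):
        a = floor(x)
        q = a * q1 + q2
        qs.append(q)
        q2, q1 = q1, q
        if x == a:
            break
        x = 1 / (x - a)
    return qs

def reduced_bound(tau, mu, A, B, M):
    """Upper bound on w produced by Lemma 3, or None if no convergent works."""
    for q in convergent_denominators(tau):
        if q > 6 * M:
            eps = dist_to_nearest_int(mu * q) - M * dist_to_nearest_int(tau * q)
            if eps > 0:
                return log(A * q / eps) / log(B)
    return None  # try more convergents or adjust the data
```

For Step 1 one would call `reduced_bound` for each \(g\) with \(\tau=\log g/\log\alpha_{\mathcal{N}}\), \(\mu\) as above, \(A=16/\log\alpha_{\mathcal{N}}\), \(B=g\), and \(M=1.99\times 10^{54}\), taking the maximum over the digit choices \(d_{1},d_{2},d_{3}\).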
**Step 2.**
In this step, we have to reduce the upper bound on \(m\). To do this, let us consider
\[\Lambda_{2}:=-\log(\Gamma_{2}+1)=(m+n)\log g-k\log\alpha_{\mathcal{N}}+\log \biggl{(}\dfrac{(g-1)^{3}a_{\mathcal{N}}}{d_{1}d_{2}d_{3}\cdot(g^{\ell}-1)} \biggr{)}.\]
Thus inequality (10) becomes
\[\left|e^{-\Lambda_{2}}-1\right|<\dfrac{4}{g^{m}}<\dfrac{1}{2},\]
\begin{table}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|} \hline \(g\) & \(2\) & \(3\) & \(4\) & \(5\) & \(6\) & \(7\) & \(8\) & \(9\) & \(10\) \\ \hline \hline \(q_{t}\) & \(q_{118}\) & \(q_{100}\) & \(q_{110}\) & \(q_{115}\) & \(q_{90}\) & \(q_{106}\) & \(q_{112}\) & \(q_{102}\) & \(q_{96}\) \\ \hline \(\varepsilon\geq\) & \(0.36\) & \(0.26\) & \(0.03\) & \(0.01\) & \(0.06\) & \(0.001\) & \(0.0019\) & \(0.005\) & \(0.01\) \\ \hline \(\ell\leq\) & \(194\) & \(121\) & \(99\) & \(87\) & \(76\) & \(72\) & \(67\) & \(62\) & \(59\) \\ \hline \end{tabular}
\end{table}
Table 2: Upper bound on \(\ell\)
which holds for \(m\geq 4\). It follows that
\[\bigg{|}(m+n)\frac{\log g}{\log\alpha_{\mathcal{N}}}-k+\frac{\log\biggl{(}\frac{( g-1)^{3}a_{\mathcal{N}}}{d_{1}d_{2}d_{3}\cdot(g^{\ell}-1)}\biggr{)}}{\log\alpha_{ \mathcal{N}}}\bigg{|}<\frac{8}{g^{m}\log\alpha_{\mathcal{N}}}. \tag{12}\]
So, the conditions of Lemma 3 are satisfied. Applying this lemma to Inequality (12) with the following data
\[\tau:=\frac{\log g}{\log\alpha_{\mathcal{N}}},\quad\mu:=\frac{\log\biggl{(} \frac{(g-1)^{3}a_{\mathcal{N}}}{d_{1}d_{2}d_{3}\cdot(g^{\ell}-1)}\biggr{)}}{ \log\alpha_{\mathcal{N}}},\quad A:=\frac{8}{\log\alpha_{\mathcal{N}}},\quad B: =g,\quad\text{and}\quad w:=m\]
with \(1\leq d_{1}\leq d_{2}\leq d_{3}\leq g-1\) and \(1\leq\ell\leq 194\).
As \(k<8n\log g<1.99\times 10^{54}\), we can take \(M:=1.99\times 10^{54}\). With _Mathematica_ we get the results shown in Table 3. In all cases, we can conclude that
\[1\leq m\leq\frac{\log((8/\log\alpha_{\mathcal{N}})\cdot q_{115}/0.0009)}{\log 2 }\leq 200.\]
**Step 3.**
Finally, to reduce the bound on \(n\) we have to choose
\[\Lambda_{3}:=-\log(\Gamma_{3}+1)=n\log g-k\log\alpha_{\mathcal{N}}+\log\biggl(\frac{(g-1)^{3}a_{\mathcal{N}}}{d_{1}d_{2}d_{3}\cdot(g^{\ell}-1)(g^{m}-1)}\biggr).\]
We have,
\[\bigg{|}e^{-\Lambda_{3}}-1\bigg{|}<\frac{2}{g^{n-1}}<\frac{1}{2},\]
which is valid for \(n\geq 4\) and \(g\geq 2\). It follows that
\[\bigg|n\frac{\log g}{\log\alpha_{\mathcal{N}}}-k+\frac{\log\Bigl(\frac{(g-1)^{3}a_{\mathcal{N}}}{d_{1}d_{2}d_{3}\cdot(g^{\ell}-1)(g^{m}-1)}\Bigr)}{\log\alpha_{\mathcal{N}}}\bigg|<\frac{4}{g^{n-1}\log\alpha_{\mathcal{N}}}. \tag{13}\]
Now we have to apply Lemma 3 to (13) by taking the following parameters
\[\tau:=\frac{\log g}{\log\alpha_{\mathcal{N}}},\quad\mu:=\frac{\log\Bigl(\frac{(g-1)^{3}a_{\mathcal{N}}}{d_{1}d_{2}d_{3}\cdot(g^{\ell}-1)(g^{m}-1)}\Bigr)}{\log\alpha_{\mathcal{N}}},\quad A:=\frac{4}{\log\alpha_{\mathcal{N}}},\quad B:=g,\quad\text{and}\quad w:=n-1\]
\begin{table}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|} \hline \(g\) & \(2\) & \(3\) & \(4\) & \(5\) & \(6\) & \(7\) & \(8\) & \(9\) & \(10\) \\ \hline \hline \(q_{t}\) & \(q_{118}\) & \(q_{100}\) & \(q_{110}\) & \(q_{115}\) & \(q_{90}\) & \(q_{106}\) & \(q_{112}\) & \(q_{102}\) & \(q_{96}\) \\ \hline \(\varepsilon\geq\) & \(0.004\) & \(0.0007\) & \(0.0003\) & \(0.001\) & \(0.0002\) & \(0.0005\) & \(0.0001\) & \(0.005\) & \(0.001\) \\ \hline \(m\leq\) & \(199\) & \(125\) & \(102\) & \(88\) & \(78\) & \(72\) & \(68\) & \(62\) & \(60\) \\ \hline \end{tabular}
\end{table}
Table 3: Upper bound on \(m\)
with \(1\leq d_{1}\leq d_{2}\leq d_{3}\leq g-1\), \(1\leq\ell\leq 194\) and \(1\leq m\leq 200\).
Using the fact that \(k<8n\log g<1.99\times 10^{54}\), we can take \(M:=1.99\times 10^{54}\) and proceed as in the previous steps. It follows that
\[1\leq n\leq\frac{\log((4/\log\alpha_{\mathcal{N}})\cdot q_{118}/10^{-6})}{\log 2 }\leq 205,\]
which is valid for all \(g\) such that \(2\leq g\leq 10\). In light of the above results, we need to check equation (1) in the cases \(2\leq g\leq 10\) for \(1\leq d_{1},d_{2},d_{3}\leq 9\), \(1\leq n\leq 205\), \(1\leq m\leq 200\), \(1\leq\ell\leq 194\) and \(1\leq k\leq 11500\). A quick inspection using _Sagemath_ reveals that the Diophantine equation (1) in the range \(2\leq g\leq 10\) has only the solution listed in the statement of Theorem 2. This completes the proof of Theorem 2.
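The final verification is a finite computation. The sketch below is a hedged Python stand-in for the _Sagemath_ inspection, assuming equation (1) asks for Narayana numbers \(N_{k}\) (with \(N_{1}=N_{2}=N_{3}=1\) and \(N_{k}=N_{k-1}+N_{k-3}\), a normalization assumed here) that equal a product of three base-\(g\) repdigits. The loop is deliberately unoptimized; a practical run would add further pruning or precomputed repdigit tables.

```python
# Hedged brute-force sketch of the final check of equation (1); the exact
# normalization of the sequence and of equation (1) is assumed, and the
# search ranges come from the reduced bounds proved above.

def narayana_upto(k_max):
    seq = [1, 1, 1]                      # N_1 = N_2 = N_3 = 1 (assumed)
    while len(seq) < k_max:
        seq.append(seq[-1] + seq[-3])    # N_k = N_{k-1} + N_{k-3}
    return seq

narayana = set(narayana_upto(11500))
biggest = max(narayana)

def repdigit(d, e, g):                   # the repdigit dd...d with e digits
    return d * (g**e - 1) // (g - 1)

for g in range(2, 11):
    for l in range(1, 195):
        for m in range(l, 201):
            for n in range(m, 206):
                # smallest possible product for this (l, m, n); it grows
                # with n, so once it exceeds the largest N_k we can stop
                if repdigit(1, l, g) * repdigit(1, m, g) * repdigit(1, n, g) > biggest:
                    break
                for d1 in range(1, g):
                    for d2 in range(1, g):
                        for d3 in range(1, g):
                            p = (repdigit(d1, l, g) * repdigit(d2, m, g)
                                 * repdigit(d3, n, g))
                            if p in narayana:
                                print(g, (d1, l), (d2, m), (d3, n), p)
```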
## 5 Discussions
In addition to Baker's method and linear forms in logarithms, there are other so-called "classical" methods and techniques for solving exponential Diophantine equations. These include the modular arithmetic method, \(p\)-adic analysis, Fermat's method of infinite descent, the factorization method, solving via inequalities, the mathematical induction method, the parametric method, and so on. It would be interesting to treat the problems approached in this article with methods other than linear forms in logarithms. For instance, the modular arithmetic method could be used to determine the Narayana numbers that are products of three repdigits in base \(g\) with \(g\geq 2\), owing to the interesting divisibility properties possessed by repdigits.
## 6 Acknowledgments
The authors express their gratitude to the anonymous reviewer for the instructive suggestions. The first author is partially supported by the Université de Kara (Togo) and the second author is supported by UCAD, the Université Cheikh Anta Diop de Dakar. This project was initiated when the first author visited UCAD on a research stay. He thanks the authorities for the warm hospitality and the working environment.
|
2303.03709 | Bootstrap The Original Latent: Learning a Private Model from a Black-box
Model | In this paper, considering the balance of data/model privacy of model owners
and user needs, we propose a new setting called Back-Propagated Black-Box
Adaptation (BPBA) for users to better train their private models via the
guidance of the back-propagated results of a Black-box foundation/source model.
Our setting can ease the usage of foundation/source models as well as prevent
the leakage and misuse of foundation/source models. Moreover, we also propose a
new training strategy called Bootstrap The Original Latent (BTOL) to fully
utilize the foundation/source models. Our strategy consists of a domain adapter
and a freeze-and-thaw strategy. We apply our BTOL under BPBA and Black-box UDA
settings on three different datasets. Experiments show that our strategy is
efficient and robust in various settings without manual augmentations. | Shuai Wang, Daoan Zhang, Jianguo Zhang, Weiwei Zhang, Rui Li | 2023-03-07T07:47:41Z | http://arxiv.org/abs/2303.03709v4 | # Bootstrap The Original Latent: Learning a Private Model from a Black-box Model
###### Abstract
In this paper, considering the balance of data/model privacy of model owners and user needs, we propose a new setting called _Back-Propagated Black-Box Adaptation (BPBA)_ for users to better train their private models via the guidance of the back-propagated results of a Black-box foundation/source model. Our setting can ease the usage of foundation/source models as well as prevent the leakage and misuse of foundation/source models. Such leakage and misuse concerns are even more severe and common in medical image analysis. To better deal with these problems, we propose a new paradigm called _Bootstrap The Original Latent (BTOL)_ to fully utilize the foundation/source models. Our strategy consists of a trainable adapter and a freeze-and-thaw strategy. We apply our _BTOL_ under the BPBA and Black-box UDA settings on three different medical image segmentation datasets. Experiments show that our paradigm is efficient and robust under various settings.
## 1 Introduction
Foundation models such as ChatGPT [20] and BEIT-3 [32] are robust and efficient, with an increasing power to generalize across different types of data. There is a growing trend of users adapting their private data by training a tiny task-specific model under the guidance of a foundation model [32][20][21]. However, making such a foundation model available to everyone is a luxury. On the one hand, training such a big model is expensive even for big-name companies. On the other hand, the owners must take commercial considerations and data misuse into account. Therefore, foundation models such as GPT-3\({}^{1}\) and ERNIE 3.0 [25] are only provided as a service, where only a black-box API can be accessed.
Footnote 1: These authors contributed equally to this work.
Under the so-called Model-as-a-Service (MaaS) [24] scenario, users can adapt their data via zero-shot or black-box source-free domain adaptation. These techniques work well on language tasks but are inefficient on visual tasks because the distribution gaps in vision tasks are large [35]. Even foundation vision models cannot generalize across such a variety of data. Thus, foundation models can help users improve the efficiency of their task-specific models [27, 28], especially for users who only have little or unannotated data. However, a tuning process is still required for users to adapt their data [33, 34].
These phenomena and needs are even more severe and common in medical image analysis scenarios [10, 17, 38]. The performance of the source model can be substantially degraded when transferred to a different distribution. Also, the need for data/model privacy protection in hospitals is stricter than in other scenarios [29, 30].
Thus, a common question in both MaaS and medical image analysis scenarios has emerged: **Can we balance the privacy of the foundation/source models and the practical needs of users to train their own models?** A recently proposed strategy called black-box domain adaptation [14] provides a solution for model owners by exposing only the forward-propagated logits to users. In this paper, we extend this strategy to provide both the forward- and backward-propagated results. This makes it easier for users to adapt their data without affecting the privacy of the source data and models. This proposed paradigm is called _Back-Propagated Black-Box Adaptation (BPBA)_.
For a better illustration, as shown in Table 1, conventional UDA [8] (Unsupervised Domain Adaptation) allows users to access both the source data and the source model. White-box UDA [13] bans the user's access to the source data to preserve data privacy. Black-box UDA [37] only allows users to access the forward-propagation results (logits, etc.) to preserve model privacy. Our BPBA additionally allows users to utilize the back-propagated information (usually gradients) compared to Black-box UDA. This relieves the burden on users of establishing a tiny model on their private data while preserving the model privacy of source model owners.
### BPBA and Self-supervised learning
To better understand BPBA, we give an analysis and comparison of BPBA and self-supervised learning [6, 9, 11, 36]. Since no source data and no ground truth are provided, we consider the adaptation problem as a **conditional** self-supervised **alignment** problem. Both settings aim to learn good representations from unlabeled data for downstream tasks. Nevertheless, in BPBA, we have a well-pre-trained model that contains a great deal of implicit information, which can be inherited to optimize the target model. This is the crucial difference from the conventional self-supervised problem. How to leverage the hidden information in the source model must therefore be considered in the model design.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & Source Data & Param. Avail. & For-propa. & Back-propa. \\ \hline Conventional UDA [8] & ✓ & ✓ & ✓ & ✓ \\ White-box UDA [13] & \(\times\) & ✓ & ✓ & ✓ \\ Black-box UDA [37] & \(\times\) & \(\times\) & ✓ & \(\times\) \\ BPBA (Ours) & \(\times\) & \(\times\) & ✓ & ✓ \\ \hline \end{tabular}
\end{table}
Table 1: **Comparison of Different Existing Paradigms.** UDA indicates unsupervised domain adaptation; Param. Avail. indicates the availability or updating of the parameters of source model; For-propa. indicates forward-propagation; Back-propa. indicates back-propagation outputs.
Moreover, taking a brief look at traditional self-supervised learning, the critical properties behind the success of self-supervised strategies are the alignment and uniformity of representations [31]. Alignment means that the representations of a positive pair are close in the latent space. Uniformity means that the representations should be distributed as evenly as possible on the unit sphere. If we fail to align the positive pairs, the model will learn sub-optimal representations. If we lose uniformity, the model will suffer from mode collapse. In BPBA, we also need to consider both properties when adapting unlabeled data. However, the pre-trained model provides a strict mapping for unlabeled data, thus maintaining a stable and uniform distribution in the latent space. Therefore, besides unleashing the potential of the source model, another critical point is to build a better strategy to align the unlabeled target data with the invisible source data.
### Bootstrap The Original Latent
To better utilize the user's data and guarantee the alignment, we propose a _Freeze-and-thaw Adapter_ strategy called _BTOL_, as shown in Fig. 1. Considering the potential domain gap between the unseen source data/model and the target data, we design a novel cross-supervised paradigm that implicitly aligns the distributions via an adapter. To fully exploit the source model and unlabeled data, a freeze-and-thaw strategy is developed for the adapter and target model to teach each other. Notice that our _BTOL_ can also be utilized in the Black-box UDA task via a simulator, as proposed in the _lower_ part of Fig. 1. Under both Black-box UDA and BPBA settings, our _BTOL_ can make full use of the source model and target data.
The contributions of this paper can be summarized into three folds:
(1) We find a better solution to balance the model privacy and user needs in both MaaS and medical image analysis by allowing users access to back-propagated information. We extend the Black-box UDA to Back-Propagated Black-Box Adaptation (BPBA) which can benefit both model owners and users.
(2) We propose a novel training strategy called _BTOL_ under the proposed setting. Our solution consists of an adapter and a freeze-and-thaw strategy. We then broaden our strategy to the Black-box UDA task, and our model also achieves state-of-the-art performance.
(3) Our paradigm outperforms all the Black-box UDA and White-box UDA methods across different settings and datasets.
## 2 Methodology
In this section, we introduce the novel approach _BTOL_ to better uncover the potential of the source model without any manual regularizations. In our method, adapters are used to narrow the distribution gap between the unseen source data and the target data, while the freeze-and-thaw strategy provides a constraint to avoid representation collapse. To better exploit the potential of our method, we utilize _BTOL_ in both the BPBA and Black-box UDA settings. **Without** any manual augmentations, our method can easily outperform all existing black-box UDA methods and even some white-box UDA methods. Detailed methods and analysis are presented below.
### BTOL in the BPBA
We first build the _BTOL_ strategy for BPBA, as shown in the _upper_ part of Fig. 1. In BPBA, a foundation/source model resides on a cloud server and only allows users to obtain the output logits and back-propagated gradients. Users can utilize these two feedbacks to train their own models.
As the source model is efficient only on source-domain data, we aim to transfer the target domain to a source-domain-like distribution to satisfy the source model, so that the source model can produce more accurate results. We use an adapter to handle this domain transfer and align the distributions between the unseen source and target domains.
We then introduce a dual EM [18] approach called the _freeze-and-thaw_ strategy to deal with the alignment. In the conventional EM algorithm, the expectation and maximization steps are interleaved by recalculating the parameters and estimating the distribution. In our method, we extend and adapt this idea to the BPBA problem: the updates of the adapter and the target model are interleaved by freezing one module and training the other. The pseudo-code is presented in Algorithm 1.
The whole pipeline of our strategy is straightforward and easy to understand. We first train the target model using the pseudo labels generated by the source model. Then we train the adapter and target model via the freeze-and-thaw strategy. All results are reported in the experiments.
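Since Algorithm 1 is referenced but not reproduced here, the following PyTorch-style sketch illustrates one plausible reading of the freeze-and-thaw loop under BPBA. The loss choices (a KL consistency term for the adapter phase and pseudo-label cross-entropy for the target phase), the optimizers, and the schedule lengths are assumptions rather than the paper's exact recipe; `source_model` stands for the remote black box, which under BPBA returns logits and lets gradients flow back.

```python
import torch
import torch.nn.functional as F

def set_frozen(module, frozen):
    for p in module.parameters():
        p.requires_grad = not frozen

def freeze_and_thaw(adapter, target_model, source_model, loader,
                    opt_adapter, opt_target, T=4, E1=10, E2=30):
    for _ in range(T):
        # Phase 1: thaw the adapter, freeze the target model. The adapter
        # learns to map target images into a source-like distribution so that
        # the black-box source model agrees with the current target model.
        set_frozen(adapter, False)
        set_frozen(target_model, True)
        for _ in range(E1):
            for x in loader:
                src_logits = source_model(adapter(x))   # back-prop allowed in BPBA
                with torch.no_grad():
                    tgt_logits = target_model(x)
                loss = F.kl_div(F.log_softmax(src_logits, dim=1),
                                F.softmax(tgt_logits, dim=1),
                                reduction="batchmean")
                opt_adapter.zero_grad(); loss.backward(); opt_adapter.step()
        # Phase 2: freeze the adapter, thaw the target model, and supervise
        # the target model with the better-aligned source predictions.
        set_frozen(adapter, True)
        set_frozen(target_model, False)
        for _ in range(E2):
            for x in loader:
                with torch.no_grad():
                    pseudo = source_model(adapter(x)).argmax(dim=1)
                loss = F.cross_entropy(target_model(x), pseudo)
                opt_target.zero_grad(); loss.backward(); opt_target.step()
```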
### BTOL in the Black-box UDA
In the Black-box UDA setting, the back-propagated results are unavailable. Existing methods can only utilize manual augmentations, label-denoising techniques, and regularizations. They all fail to bootstrap the inner latent information of the original source model. Under this tougher condition, we introduce a simulator to simulate the source data distribution; then, we can apply our freeze-and-thaw strategy under this setting.
The simulation step is presented in the second and third steps of the _lower_ part of Fig. 1, after training the target model via the pseudo labels. We copy the weights of the target model to initialize the simulator. In the second step, we train the adapter to satisfy both the source and target models. This step encourages the adapter to output a mixture of the source and target distributions. In the third step, we freeze the adapter and train the simulator. This lets the simulator become familiar with the output of the trained adapter, i.e., the mixed distribution of the source and target domains. In the simulation procedure, the ideal distribution the simulator can imitate is the average mixture of the source and target
distributions. After the simulation, we can use the trained simulator to replace the invisible source model to execute the freeze-and-thaw strategy.
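As a companion sketch, the third simulation step (freeze the adapter, train the simulator to mimic the black-box outputs on adapter-transformed inputs) might look as follows. The initialization from the target model follows the text, while the loss, optimizer, and epoch count are illustrative assumptions.

```python
import copy
import torch
import torch.nn.functional as F

def build_simulator(adapter, target_model, source_model, loader, epochs=10):
    simulator = copy.deepcopy(target_model)   # initialized from the target model
    opt = torch.optim.Adam(simulator.parameters(), lr=1e-4)
    for _ in range(epochs):
        for x in loader:
            with torch.no_grad():
                z = adapter(x)                               # adapter is frozen
                teacher = F.softmax(source_model(z), dim=1)  # forward pass only
            loss = F.kl_div(F.log_softmax(simulator(z), dim=1),
                            teacher, reduction="batchmean")
            opt.zero_grad(); loss.backward(); opt.step()
    return simulator  # then stands in for the source model in freeze-and-thaw
```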
## 3 Experiments
### Dataset and Evaluation Metric.
We evaluate our method on three tasks: fundus segmentation, cardiac structure segmentation, and prostate segmentation. For the fundus segmentation task, we use the training set of the REFUGE challenge [19] as the source domain and the RIM-ONE-r3 dataset [7] as the target domain. Following [4], we split the target domain into 99/60 images for training and testing, respectively. We resize each image to \(512\times 512\) to feed the network. For the cardiac structure segmentation task, we choose the public ACDC dataset [3], which contains 200 volumes, as the source domain. For the target domain, we use the LGE dataset from the Multi-sequence Cardiac MR segmentation Challenge (MSCMR2019) [39]. We split it into 80%/20% for training and testing. We use 2D slices for training, and all images are resized to \(192\times 192\) as the network input. As for prostate segmentation, we use the
Figure 1: **Overview of our _BTOL_ under different settings. Upper: BPBA setting; lower: Black-box UDA setting.**
MSD05 [1] as the source domain and Promise12 [15] as the target domain. We also split it into 80%/20% for training and testing. For evaluation, we use two commonly-used metrics in the medical segmentation field: Dice Score (DSC) and Average Surface Distance (ASD). DSC measures the overlap between prediction and ground truth, and ASD represents the performance at the object boundary. Higher DSC and lower ASD mean better performance.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Methods & Type & \multicolumn{3}{c|}{Dice \(\uparrow\)} & \multicolumn{3}{c|}{ASD\(\downarrow\)} \\ \hline & & Disc & Cup & Avg. & Disc & Cup & Avg. \\ \hline Source Model & - & \(88.34^{\pm 4.48}\) & \(71.35^{\pm 22.75}\) & \(79.85^{\pm 12.59}\) & \(10.65^{\pm 4.27}\) & \(10.75^{\pm 0.34}\) & \(10.7^{\pm 4.18}\) \\ Baseline Model & - & \(91.34^{\pm 4.49}\) & \(71.85^{\pm 17.00}\) & \(81.59^{\pm 9.83}\) & \(7.62^{\pm 3.65}\) & \(10.64^{\pm 6.71}\) & \(9.00^{\pm 3.87}\) \\ SRDA [2] & W & \(89.37^{\pm 2.70}\) & \(77.61^{\pm 13.58}\) & \(83.49^{\pm 8.14}\) & \(9.91^{\pm 2.45}\) & \(10.15^{\pm 5.75}\) & \(10.03^{\pm 4.10}\) \\ AdvEnt [26] & W & \(89.73^{\pm 3.66}\) & \(77.99^{\pm 21.08}\) & \(83.86^{\pm 12.37}\) & \(9.84^{\pm 3.86}\) & \(7.57^{\pm 4.24}\) & \(8.71^{\pm 8.10}\) \\ DAE [12] & W & \(89.08^{\pm 3.32}\) & \(79.01^{\pm 12.82}\) & \(84.41^{\pm 8.07}\) & \(11.63^{\pm 6.84}\) & \(10.31^{\pm 8.45}\) & \(10.97^{\pm 7.65}\) \\ DPL [4] & W & \(90.13^{\pm 3.06}\) & \(79.78^{\pm 11.05}\) & \(84.95^{\pm 7.06}\) & \(9.43^{\pm 3.46}\) & \(9.01^{\pm 5.59}\) & \(9.22^{\pm 4.53}\) \\ \hline EMD [16] & B & \(90.50^{\pm 3.78}\) & \(73.50^{\pm 11.56}\) & \(82.00^{\pm 8.76}\) & \(10.52^{\pm 4.18}\) & \(7.12^{\pm 4.15}\) & \(8.82^{\pm 2.59}\) \\ \(BTOL\)(Ours) & B & \(92.54^{\pm 4.08}\) & \(72.86^{\pm 13.46}\) & \(82.70^{\pm 8.24}\) & \(8.45^{\pm 3.32}\) & \(9.14^{\pm 3.64}\) & \(8.80^{\pm 3.02}\) \\ \(BTOL\)(Ours) & P & \(91.52^{\pm 4.19}\) & \(78.73^{\pm 11.87}\) & \(\mathbf{85.13^{\pm 7.14}}\) & \(7.53^{\pm 3.36}\) & \(8.76^{\pm 3.66}\) & \(\mathbf{8.14^{\pm 2.74}}\) \\ \hline \end{tabular}
\end{table}
Table 2: Experiments of fundus segmentation. The source model is trained on the REFUGE challenge and the target data is RIM-ONE-r3. The baseline model is the target model trained on the pseudo labels. Type “W” indicates a white-box UDA method; Type “B” indicates the black-box UDA setting; Type “P” indicates the BPBA setting.
### Implementation Details
For all experiments, we use DeepLabV3+ [5] with MobileNetV2 [23] backbone as the segmentation model (e.g., Target model or Simulator in Figure 1). For the adapter, we use a neural network consisting of three blocks, and each block
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Methods & Type & \multicolumn{3}{c|}{Dice \(\uparrow\)} & \multicolumn{3}{c|}{ASD\(\downarrow\)} \\ \hline & & Disc & Cup & Avg. & Disc & Cup & Avg. \\ \hline Baseline Model & - & \(46.39^{\pm 34.86}\) & \(75.06^{\pm 27.79}\) & \(60.73^{\pm 29.42}\) & \(14.90^{\pm 11.52}\) & \(15.61^{\pm 14.45}\) & \(15.26^{\pm 8.95}\) \\ \hline \(BTOL\)(Ours) & B & \(47.40^{\pm 31.20}\) & \(76.48^{\pm 26.55}\) & \(61.94^{\pm 28.47}\) & \(14.56^{\pm 10.76}\) & \(13.00^{\pm 12.45}\) & \(13.78^{\pm 8.62}\) \\ \(BTOL\)(Ours) & P & \(49.70^{\pm 32.68}\) & \(78.15^{\pm 26.80}\) & \(\mathbf{63.93^{\pm 27.67}}\) & \(13.56^{\pm 9.80}\) & \(12.17^{\pm 10.64}\) & \(\mathbf{12.87^{\pm 9.15}}\) \\ \hline \end{tabular}
\end{table}
Table 4: Experiments of fundus segmentation with different backbones. The source model is DeepLabV3+ and the target model is UNet [22]. The source model is trained on the REFUGE challenge and the target data is RIM-ONE-r3. Experiments show that our method works with different target-model backbones.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{5}{|c|}{**Cardiac Structure Segmentation**} \\ \hline Methods & Type & \multicolumn{3}{c|}{Dice \(\uparrow\)} \\ \hline & & RV & Myo & LV & Avg. \\ \hline Source Model & - & \(40.28^{\pm 26.73}\) & \(48.83^{\pm 10.84}\) & \(76.45^{\pm 10.21}\) & \(55.19^{\pm 14.19}\) \\ Baseline Model & - & \(46.67^{\pm 30.49}\) & \(53.95^{\pm 9.70}\) & \(76.01^{\pm 9.11}\) & \(58.87^{\pm 13.37}\) \\ DPL [4] & W & \(48.14^{\pm 29.45}\) & \(53.76^{\pm 9.11}\) & \(78.42^{\pm 8.64}\) & \(60.11^{\pm 15.73}\) \\ \hline EMD [16] & B & \(47.59^{\pm 28.46}\) & \(53.67^{\pm 9.79}\) & \(75.48^{\pm 9.58}\) & \(58.91^{\pm 13.48}\) \\ \(BTOL\)(Ours) & B & \(47.12^{\pm 29.45}\) & \(53.85^{\pm 10.15}\) & \(78.45^{\pm 9.66}\) & \(59.81^{\pm 12.86}\) \\ \(BTOL\)(Ours) & P & \(49.78^{\pm 25.45}\) & \(54.12^{\pm 9.46}\) & \(76.52^{\pm 10.40}\) & \(\mathbf{60.14^{\pm 13.04}}\) \\ \hline \end{tabular} \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{5}{|c|}{**Cardiac Structure Segmentation**} \\ \hline Methods & Type & \multicolumn{3}{c|}{ASD \(\downarrow\)} \\ \hline & & RV & Myo & LV & Avg. \\ \hline Source Model & - & \(4.50^{\pm 3.42}\) & \(4.60^{\pm 2.51}\) & \(5.78^{\pm 2.02}\) & \(4.96^{\pm 1.74}\) \\ Baseline Model & - & \(2.03^{\pm 1.61}\) & \(4.14^{\pm 1.87}\) & \(5.52^{\pm 2.36}\) & \(3.90^{\pm 1.28}\) \\ DPL [4] & W & \(1.55^{\pm 1.24}\) & \(4.75^{\pm 2.04}\) & \(4.95^{\pm 2.23}\) & \(3.75^{\pm 1.20}\) \\ \hline EMD [16] & B & \(2.12^{\pm 1.47}\) & \(4.25^{\pm 1.95}\) & \(5.13^{\pm 2.78}\) & \(3.83^{\pm 1.41}\) \\ \(BTOL\)(Ours) & B & \(1.76^{\pm 1.45}\) & \(4.55^{\pm 1.34}\) & \(5.32^{\pm 2.78}\) & \(3.88^{\pm 1.24}\) \\ \(BTOL\)(Ours) & P & \(1.32^{\pm 1.11}\) & \(4.24^{\pm 1.92}\) & \(5.45^{\pm 2.18}\) & \(\mathbf{3.67^{\pm 1.14}}\) \\ \hline \end{tabular}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{5}{|c|}{**Prostate Segmentation**} \\ \hline Methods & Type & Dice \(\uparrow\) & ASD \(\downarrow\) \\ \hline Source Model & - & \(47.50^{\pm 26.21}\) & \(9.80^{\pm 8.84}\) \\ Baseline Model & - & \(51.68^{\pm 24.56}\) & \(8.99^{\pm 8.25}\) \\ DPL [4] & W & \(52.95^{\pm 23.14}\) & \(7.98^{\pm 7.26}\) \\ \hline EMD [16] & B & \(52.47^{\pm 23.18}\) & \(8.11^{\pm 7.85}\) \\ \(BTOL\)(Ours) & B & \(52.40^{\pm 23.18}\) & \(7.88^{\pm 7.02}\) \\ \(BTOL\)(Ours) & P & \(\mathbf{54.16^{\pm 22.75}}\) & \(\mathbf{6.58^{\pm 6.98}}\) \\ \hline \end{tabular}
\end{table}
Table 3: Experiments of cardiac structure segmentation and prostate segmentation. Due to the page limitation, we choose the previous sota method DPL for a fair comparison.
includes a 2D convolution layer and a ReLU activation function. We use the Adam optimizer with a learning rate of \(1e^{-4}\) and a batch size of 8, without specific tuning. We train the target model from scratch for 100 epochs in the target-model initialization stage. After that, we set \(T=4\), \(E_{1}=10\) and \(E_{2}=30\) in Figure 1.
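For concreteness, the adapter described above might be implemented as follows; the channel widths, kernel size, and padding are assumptions, since the text only specifies three convolution-plus-ReLU blocks.

```python
import torch.nn as nn

class Adapter(nn.Module):
    """Three conv + ReLU blocks mapping an image to a same-sized image."""
    def __init__(self, in_channels=3, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, in_channels, kernel_size=3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)
```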
### Experimental Results
We test our method on medical image datasets from different body parts to demonstrate its efficacy and robustness, including fundus segmentation, cardiac structure segmentation, and prostate segmentation. As presented in Tables 2 and 3, when applying _BTOL_ to black-box UDA tasks, our model can easily outperform previous methods **without** any augmentations, which means our method can fully exploit the potential information in the source model. When applying our method to the BPBA setting, without accessing or updating the source model parameters, our model outperforms all white-box UDA methods under all settings. This shows that our model makes more efficient use of the source model, even with fewer operations. We believe that adding more augmentations and regularizations would yield even more impressive results.
We also test our strategy by changing the backbone of the target model, as shown in Table 4. We changed the target model backbone from DeepLabV3+ to UNet [22]. Our strategy still stably outperforms the baseline model, which shows that our strategy is robust and easy to use with various types of backbones.
## 4 Discussion
From the view of self-supervised learning, our _BTOL_ can be thought of as a conditional dual SimSiam [6]. SimSiam attributes the avoidance of collapsing solutions to the stop-gradient operation. In our method, since there is a well-trained source model, we can broaden the unilateral stop-gradient operation into the dual freeze-and-thaw strategy without worrying about model collapse. This EM-like approach can adequately utilize the source model. Moreover, our model does not need a pair of augmented inputs like SimSiam, because the source model provides a fixed distribution to maintain uniformity. If more augmentation procedures like those in SimSiam were added, we believe our model could achieve a further improvement.
## 5 Conclusion
In this paper, we address a new setting, BPBA, to deal with the balance between model privacy and user needs when adapting black-box foundation/source models. Compared to Black-box UDA, we release the back-propagated information for users to train their own models. We also propose a new strategy called _BTOL_ to fully utilize the implicit information from the source model to optimize the target model. _BTOL_ contains an adapter and a freeze-and-thaw strategy to cross-teach the adapter and the target model. Our paradigm outperforms all compared methods under different settings. Our setting and solution can benefit nearly all fields that utilize foundation/source models. We hope our work can provide a new perspective and solution in the new era of foundation models.
2301.03170 | Cluster Formation and Relaxation in Particle Rafts under Uniform Radial
Expansion | Particle rafts floating at an air-liquid interface exhibit a variety of
behaviors when interfacial flow is introduced. Motivated by previous
experimental observations, the failure pattern of particle rafts under uniform
radial expansion is reported in this paper. The expansion process is
specifically designed to expand the system affinely in the radial direction and
to keep the velocity gradient constant throughout. A strong resemblance to the
results of particle rafts under uniform uniaxial expansion[1] is found. The
size of the cluster emerging as the rafts are pulled apart scales inversely
with the pulling velocity. This is a result of two competing velocities: the
inter-particle separation speed provided by the flow and a size-dependent
relaxation speed for clustering. A model, generalized from a one-dimensional
linear (in)stability calculation, is in agreement with the failure morphology
found for this radially expanded system. Nonlinear relaxation and particle
rearrangement is observed after the initial clustering occurs. This is a
feature unique to a two-dimensional system. With its easily accessible particle
dynamics at the microscopic level, this system provides insights into the
morphology controlled by two competing mechanisms in two or higher dimensions
and across different scales. | Khá-Î Tô | 2023-01-09T05:07:17Z | http://arxiv.org/abs/2301.03170v1 | # Cluster Formation and Relaxation in Particle Rafts under Uniform Radial Expansion
###### Abstract
Particle rafts floating at an air-liquid interface exhibit a variety of behaviors when interfacial flow is introduced. Motivated by previous experimental observations, the failure pattern of particle rafts under uniform radial expansion is reported in this paper. The expansion process is specifically designed to expand the system affinely in the radial direction and to keep the velocity gradient constant throughout. A strong resemblance to the results of particle rafts under uniform uniaxial expansion [1] is found. The size of the cluster emerging as the rafts are pulled apart scales inversely with the pulling velocity. This is a result of two competing velocities: the inter-particle separation speed provided by the flow and a size-dependent relaxation speed for clustering. A model, generalized from a one-dimensional linear (in)stability calculation, is in agreement with the failure morphology found for this radially expanded system. Nonlinear relaxation and particle rearrangement is observed after the initial clustering occurs. This is a feature unique to a two-dimensional system. With its easily accessible particle dynamics at the microscopic level, this system provides insights into the morphology controlled by two competing mechanisms in two or higher dimensions and across different scales.
## I Introduction
Sub-millimeter particles floating at an air-liquid interface can aggregate into particle rafts. The inter-particle attraction arises from the coupling between the particles and the fluid interface [2; 3; 4]. The resulting particle rafts can be treated like two-dimensional solids [5]. For example, compressing the raft from two of its opposing ends at small strain can induce an instability in which the particles buckle out of the plane [6; 7; 8]. Likewise, pulling the raft from its opposing ends quasi-statically can induce ductile deformation [9]. The underlying fluid plays a crucial role. Not only does it produce the capillary forces that create the inter-particle attractions, but it also provides substantial expansion or compression forces when the fluid surface area is varied sufficiently rapidly. Thus the fluid interface is a valuable tool with which to control particle-raft dynamics when the flow is relevant.
Different regimes of particle-raft behavior have been investigated. A previous study [1] explored the failure of particle rafts under expansion with a uniformly expanding metric in one dimension. This produced cracks throughout the material. By keeping the velocity gradient uniform along the extensional axis, the failure was found to be homogeneous with no single crack dominating the resultant morphology. A smooth change in the distance between neighboring cracks was observed as the pulling speed was varied; the failure pattern evolved from an almost unperturbed, densely-packed raft at low shear rates to a uniformly distributed set of rifts separating small clusters of particles at high shear rates. In these experiments, the orientations of the particle clusters were found to be nearly isotropic; despite having a specific pulling direction, the clusters were oriented with no strong variation with respect to the extension axis. This suggests that the direction of the applied shear plays no significant role in determining the failure pattern of particle rafts.
In other studies, the rafts were pulled apart radially so that the extension was in all directions simultaneously. In one case, this expansion was applied by adding surfactant near its center in order to impose a surface-tension gradient across the raft [10; 11]. In another study, advection was generated by pumping water into a funnel so that the raft expanded as the water level rose [12]. In both cases, as the systems evolved under the applied forces, wide cracks appeared preferentially near the center of the rafts. The morphology in these cases is thus significantly different from what was found in the one-dimensional experiments.
This raises a puzzle as to what aspects of the expansion gives rise to the differences observed between the radial and linear pulling geometries. To address this puzzle, this paper examines the failure morphology of particle rafts pulled apart under radial expansion using a protocol that more closely aligns with that used in the one-dimensional experiments. In particular, the experiments are designed to produce a nearly uniform velocity gradient everywhere on the air-water interface. I verify the velocity field using Particle Image Velocimetry (PIV). The results of these experiments are more similar to the one-dimensional pulling studies suggesting the importance of minimizing velocity gradients in the expansion dynamics.
In order to analyze these results, I generalize two dimensions to an (in)stability calculation used to determine the cluster sizes, _i.e._, the distance between rifts, in the one-dimensional (1D) situation [1]. With no further assumptions on the elastic properties of the aggregate, the calculation showed that the morphology was determined by a cross-over phenomenon due to the competition between the rate of separation (caused by the pulling velocity) and the rate of relaxation (caused by the inter-particle interactions) at different cluster sizes. In this model, the average cluster length scales inversely with
the velocity, in good agreement with the experimental results.
In addition to exploring the failure pattern of particle rafts under a different expansion metric, the present study is also inspired by the recognition that the study of structure formation under isotropic competing mechanisms is relevant in many systems across different scales [13]. At the largest scale, the competition between gravity and cosmological expansion leads to cluster formation in the universe [14]. In particle rafts, the capillary attraction is asymptotically inversely proportional to the distance between particles, which has the same form as two-dimensional gravity. Therefore, working with a radial-expansion metric provides a two-dimensional table-top experiment for studying this type of phenomenon. At much smaller scales, the clustering of colloids [15; 16] and microbial growth on liquid substrates [17] both involve isotropic competing interactions/flows. Two-dimensional particle aggregates under radial expansion can offer a macroscopic platform (with access to the microscopic entities involved) to study the morphology controlled by these competing processes.
## II Experimental details
The particle rafts consist of polyethylene spheres floating at an air-water interface. The polyethylene particles have density \(1080\ kgm^{-3}\) and are a mixture of two diameter ranges: \(d=550\pm 50\mu m\) (small) and \(d=655\pm 55\mu m\) (large). Polydisperse packings are made by mixing roughly equal volumes of these two sizes. The particles naturally aggregate into a raft due to the lateral capillary attraction, also known as the "Cheerios" effect [2; 3; 4]. A monolayer of densely-packed particles is prepared by gathering the particles manually into the center of the apparatus as shown in the left panel of Fig. 1(A). The rafts are compacted by slight vibrations of the interface and have an initial packing fraction of \(0.72\pm 0.01\). The particle raft floats on deionized water which has density \(\rho_{w}=998\ kgm^{-3}\); dynamic viscosity \(\eta_{w}=0.95\ mPas\) and surface tension, \(\sigma_{w}=0.073\ Nm^{-1}\) at \(22^{\circ}C\).
To create an isotropic flow, a two-dimensional radial expander is custom-built to create a coordinated movement of twelve nodes going radially outward simultaneously. A schematic of the experimental apparatus is shown in Fig. 1A. The nodes are connected by twelve moving Mylar strips to form an expanding twelve-sided polygon. The Mylar strips are inserted vertically into the water as shown. Because Mylar is slightly hydrophilic, the particle rafts are enclosed inside the boundaries but do not make _direct_ contact with the boundaries. This is shown in the middle panel of Fig. 1A. The rafts are thus pulled only by the flow of the fluid, which rises incrementally and uniformly everywhere inside the boundaries as the Mylar boundaries are extended outward radially. The fluid interface can appropriately be considered as an expanding metric on which the particles float. As the nodes simultaneously move outward, as shown in "Top View" in Fig. 1A, an approximately isotropic flow is generated by the moving Mylar strips. (See Appendix A about how the Mylar strips are attached and how the expander is driven.)
The distance from the midpoint of each side to the center of the dodecagon is \(L_{0}/2\), with \(L_{0}\approx 61\ mm\). Each side moves outward away from the center at the same pulling speed \(V/2\). The Particle Image Velocimetry (PIV) measurements at various \(V\) are shown in Appendix B. This allows the flow field \(\mathbf{u}(x,y)\) to be described as an affine expansion:
\[\mathbf{u}(x,y)=\frac{V}{L}(x\mathbf{\hat{x}}+y\mathbf{\hat{y}}) \tag{1}\]
where the origin is the center of the raft and \(x\) and \(y\) are the two orthogonal coordinates in the plane. The flow is radially outward away from the center.
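As a quick illustration of Eq. 1, the short numpy check below (with arbitrary illustrative values of \(V\) and \(L\)) confirms that this affine field has a spatially constant velocity gradient \(V/L\), which is precisely the property the expander is designed to achieve.

```python
# Numeric check that the field of Eq. 1 has a constant gradient V/L
# everywhere; the values of V and L are arbitrary and for illustration only.
import numpy as np

V, L = 26.0, 61.0                                   # mm/s and mm (illustrative)
coords = np.linspace(-25.0, 25.0, 101)
X, Y = np.meshgrid(coords, coords, indexing="xy")
ux, uy = (V / L) * X, (V / L) * Y                   # Eq. 1

dux_dx = np.gradient(ux, coords, axis=1)            # x varies along axis 1
duy_dy = np.gradient(uy, coords, axis=0)            # y varies along axis 0
assert np.allclose(dux_dx, V / L) and np.allclose(duy_dy, V / L)
```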
The pulling speed, \(V\), is varied from \(2.8\,\mathrm{mm/s}\) to \(260\,\mathrm{mm/s}\). The raft diameter increases as \(L=L_{0}+Vt\) and the stretch ratio is defined as \(\lambda=L/L_{0}\). The initial rate of stretch, \(\dot{\lambda}_{0}=V/L_{0}\), is varied from \(5\times 10^{-2}\,\mathrm{s}^{-1}\) to \(4.6\,\mathrm{s}^{-1}\). The experiments are stopped at a maximum stretch ratio \(\lambda_{max}\sim 1.75\).
I use the method described in [1] to measure the cluster length, \(\ell\), inside each raft. This requires counting the number of pixels between rifts on each line of the raster-scanned image. The only complication is due to the difference in pixelization at angles different from the rectangular coordinates of the images produced by the camera.
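A hedged sketch of this measurement is given below: it assumes a binarized image with particles as 1 and rifts as 0, counts runs of particle pixels along each scan line, and converts run lengths to physical units. The thresholding step and the pixel scale `mm_per_px` are assumptions, and the angle-dependent pixelization mentioned above is not handled here.

```python
import numpy as np

def cluster_lengths(binary_img, mm_per_px):
    """Run lengths of 1-pixels (particles) along each raster line, in mm."""
    lengths = []
    for row in binary_img:
        padded = np.concatenate(([0], row, [0]))
        edges = np.flatnonzero(np.diff(padded))   # +1 at run starts, -1 at ends
        starts, ends = edges[0::2], edges[1::2]
        lengths.extend((ends - starts) * mm_per_px)
    return np.asarray(lengths)

# The average cluster length <l> is then the mean of the collected run
# lengths, with scan lines taken at all angles before averaging.
```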
## III Experimental results
As the twelve-sided boundary moves outward so that the diameter increases at velocity \(V\), the raft of particles is stretched by the underlying flow so that micro-cracks, or rifts, begin to form as shown in Fig. 1B. At intermediate to high \(V\), the rifts emerge soon after the onset of expansion. They are distributed diffusely throughout the entire raft and grow larger with time. The experiments are stopped at a stretch ratio (\(\lambda_{max}\sim 1.75\)). After their formation and initial growth, the same rifts remain up to a larger stretch ratio; once a rift is formed, it does not collapse at later times. This can be observed in the movies included in the Supplementary Information (SI). Likewise, the particles in between the initially-formed rifts remain connected. These clusters of particles maintain their configurations and are distributed homogeneously throughout the entire system with a characteristic size. The average size of these clusters of particles, \(\ell\), is strongly dependent on the pulling speed \(V\), as shown in Fig. 1B, C.
When \(V\) is small enough, the raft shape is only very slightly perturbed at the initiation of expansion and remains unchanged afterwards. There is a small drift in the
position due to secondary flows but the internal structure is undisturbed. In this low-velocity regime, no significant rifts can be observed and the cluster width, \(\ell\), remains close to the initial system size, \(L_{0}\).
With increasing \(V\), the number of rifts increases while the cluster size, \(\ell\), decreases. The cluster size, \(\ell\) decreases until it reaches the size of a single particle at large velocities, as shown in the right column of Fig. 1B. Figure 1C shows how the internal features change with increasing \(V\): a single relatively closely-packed structure at low \(V\), evolves upon increasing \(V\) into separate clusters consisting of only a few (sometimes only one) particles.
The average cluster length, \(\langle\ell\rangle\), is obtained by averaging the measurements over all angles. This is because there is no specific tensile axis in this radial-expansion system. The percentage standard deviation of this average quantity is smaller than 3% for all velocities, which is comparable to the fluctuation due to rotating pixelated images. This indicates that the variation in cluster orientations is very low and below the uncertainty level of detection.
Figure 2 shows the angular-averaged distribution of
Figure 1: Failure morphology at different pulling velocities \(V\). (A) A schematic of the experimental apparatus. Left: The side view of the whole apparatus. A radial expander inserted into water creates a nearly-radial flow to stretch the particle raft enclosed inside. Middle: The side view of the Mylar strip boundary. Right: The top view of the expander at early and late times. The gray arms are connected and move coherently to make the twelve orange inner nodes move outward. The inner nodes are connected by Mylar strips painted in blue, which set the boundaries of the system. The dotted black lines show the weights tied to the Mylar strips to keep them taut during the motion. (B, C) Snapshots and zoomed-in images of experiments at a fixed stretch ratio \(\lambda=1.5\) for different pulling velocities \(V\).
cluster sizes at a liquid stretch ratio \(\lambda=1.5\) for different pulling velocities. The ordinate is the most dominant cluster length, \(\ell P(\ell)/\langle\ell\rangle\), where \(P(\ell)\) is the probability of finding a cluster of length \(\ell\) and the average cluster length \(\langle\ell\rangle=\sum\ell P(\ell)\). Since the statistics with small cluster lengths always outnumber those with larger \(\ell\), I plot the most dominant length, that is, how much material has cluster length \(\ell\). This helps better identify the cluster distribution at each velocity, \(V\). At small \(V\), a large portion of the distribution remains close to the initial size of the raft, \(L_{0}\). The distribution shifts to smaller \(\ell\) as \(V\) increases. At the highest pulling speed, most clusters have lengths between \(d\) and \(2d\).
Figure 3A shows the average cluster length \(\langle\ell\rangle\) versus pulling speed \(V\) at a fixed value of \(\lambda=1.5\), for all the experiments. One can see that \(\langle\ell\rangle\) decreases monotonically with increasing \(V\) and levels off at high velocity. This is because a cluster cannot be significantly larger than the initial system size, \(L_{0}\), or smaller than a particle diameter, \(d\). The data can be interpolated between these two extremes using a form similar to the one used in the one-dimensional pulling experiments [1]:
\[\langle\ell\rangle=\frac{1}{aV^{b}+1/(L_{0}/\sqrt{2}-d_{max})}+d_{max}. \tag{2}\]
This assumes a power-law dependence of the average cluster length away from the two extremes. The difference from the one-dimensional fitting form is that initial raft size \(L_{0}\) is scaled by \(\sqrt{2}\) because the cluster lengths are measured in a square box of size \(L_{0}/\sqrt{2}\) located in the center of the raft to avoid complexity caused by the round shape of the raft.
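A minimal fitting sketch for Eq. 2 is shown below. The data arrays are synthetic stand-ins generated from the model itself, with illustrative planted values of \(a\) and \(b\) and 5% multiplicative noise, just so the example runs end to end; the planted \(a\) is not the paper's fitted value, whose units depend on the measurement conventions. With real \((V,\langle\ell\rangle)\) measurements, the same `curve_fit` call applies.

```python
import numpy as np
from scipy.optimize import curve_fit

L0, d_max = 61.0, 0.71          # mm; d_max assumed = largest particle diameter

def eq2(V, a, b):               # Eq. (2)
    return 1.0 / (a * V**b + 1.0 / (L0 / np.sqrt(2) - d_max)) + d_max

# Synthetic stand-in for measured (V, <l>) pairs, only so the sketch runs;
# the planted (a, b) = (0.004, 1.2) are illustrative.
V_data = np.logspace(np.log10(2.8), np.log10(260.0), 12)
rng = np.random.default_rng(1)
ell_data = eq2(V_data, 0.004, 1.2) * rng.normal(1.0, 0.05, V_data.size)

(a_fit, b_fit), _ = curve_fit(eq2, V_data, ell_data,
                              p0=(0.01, 1.0), bounds=(0, np.inf))
print(f"a = {a_fit:.3g}, b = {b_fit:.3g}")  # the paper reports b = 1.20 +/- 0.05
```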
The fit gives us \(a=(9\pm 2)\times 10^{3}\) and \(b=1.20\pm 0.05\). This result is similar to the results in uniaxial expansion. The average cluster length, \(\langle\ell\rangle\), scales inversely with the pulling velocity \(V\).
## IV Discussion
Figure 3B compares the results from these radial expansion experiments with those found in the 1D experiments. Both data sets are shown at a similar time in the expansion: the liquid strain \(\varepsilon=0.5\) for the 1D experiment and \(\lambda=1.5\) for the radial experiment both occur when the system size is 1.5 times the original size of the raft. The experiments under radial, isotropic expansion give very similar results to the 1D experiments.
Figure 2: Angle-averaged distribution of cluster lengths, \(\ell\), for different velocities at fixed stretch ratio, \(\lambda=1.5\). The most dominant length, that is the probability of cluster length \(\ell\) multiplied by \(\ell\) normalized by \(\langle\ell\rangle=\sum\ell P(\ell)\), is plotted versus \(\ell\) with three pulling speeds, \(V\): \(2.8\ mm/s\) (blue), \(26\ mm/s\) (green) and \(260\ mm/s\) (purple).
This suggests that the experiments with isotropic expansion can in fact be understood by generalizing the analysis used in one dimension.
To understand this similarity, let us reexamine the flow field described in Eq. 1. The velocity field can be expanded around any arbitrary point, \((x_{0},y_{0})\), within the boundaries:
\[\mathbf{u}(x,y)=\mathbf{u_{0}}+\frac{V}{L}((x-x_{0})\mathbf{\hat{x}}+(y-y_{0})\mathbf{\hat{y}})\]
where \(\mathbf{u_{0}}\) is the velocity at \((x_{0},y_{0})\). Independent of the position, \((x_{0},y_{0})\), the velocity field expands radially outward from that point; all points are equivalent and every particle feels the surrounding fluid retreating at the same constant velocity in all directions. Thus, the local motion can be reduced to multiple one-dimensional chains as treated in the case of uniaxial expansion.
The large cracks that were observed near the raft centers in previous experiments, where the rafts were pulled radially in two dimensions [10; 11; 12], are not present in the data presented here. The difference in the way the pulling of the raft is accomplished is likely the reason for this difference. The present experiment was designed to minimize higher-order gradients in the flow. When using the funnel method [12], I also observed the formation of large cracks, which I attributed to gradients in the velocity field produced by the influx of liquid near the center of the raft.
To understand the two-dimensional effects intrinsic to this system, I analyze both the uniaxial and radial expansion datasets. As shown in Fig. 4A, the time evolution of the cluster sizes shows a similar trend in both cases. Here I focus on the time evolution in the uniaxial expansion experiments because they have the greater range. Only the behavior for liquid strain \(\varepsilon\geq 1.0\) is examined, because the measurement of cluster sizes at early times is complicated by limitations of image processing. A square observation window of size \(L_{x0}\) located at the center of the rafts is also chosen to avoid dealing with the irregular edges of the rafts.
At low \(V\), the cluster size is nearly independent of \(\varepsilon\). At higher pulling speed, however, the average cluster sizes consistently decrease with increasing \(\varepsilon\). This suggests non-linear, late-time relaxation of the raft.
One possibility for this is that, after the initial expansion, the particles which are further apart experience a smaller velocity gradient. However, the particles that are initially more compact continue to experience strong attraction from their neighbors; they rearrange so the particle clusters become more compact. Another possibility is that the clusters simply continue to break up as time goes on.
To understand the cause of the decrease in cluster sizes, I calculate the particle ratio inside the clusters that have been identified. The particle ratio is defined as the area of _particles_ inside the clusters divided by the _total_ area of the clusters. This quantity indicates whether the voids inside
Figure 4: Time evolution of cluster sizes in both radial and uniaxial experiments. (A) Average cluster length, \(\langle\ell\rangle\), versus liquid stretch ratio \(\lambda\) or strain \(\varepsilon\) at all \(V\). The results from radial and uniaxial expansion experiments are shown as crosses and dots, respectively. The colors from dark purple to light orange represent results from fast to slow \(V\) and span the entire pulling velocity range in both experiments. (B) The ratio of particles occupied in the cluster versus strain \(\varepsilon\) for uniaxial experiments. The results from the four fastest pulling speeds, \(V=20\)\(mm/s\) (orange) to \(200\)\(mm/s\) (purple), are presented. (C) The change in \(\langle\ell\rangle\), divided by \(\langle\ell\rangle\) at \(\varepsilon=1.0\) (solid circles), and the inverse of the normalized number of clusters, \(n_{c}\), scaled by \(n_{c}\) at \(\varepsilon=1.0\) (empty circles), are plotted versus \(\varepsilon\).
the clusters are shrinking in time. As shown in Fig. 4B, for the four highest pulling velocities, the particle ratios all increase in time. This suggests that rearrangement of neighboring particles inside a cluster is indeed taking place.
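One plausible implementation of the particle-ratio measurement uses scikit-image, as sketched below: connected clusters are labeled in the binary particle image, and each region's solidity (area divided by convex-hull area) serves as that cluster's particle ratio. Identifying the "total area of the cluster" with the convex-hull area, and the minimum-area threshold, are assumptions of this sketch.

```python
import numpy as np
from skimage.measure import label, regionprops

def mean_particle_ratio(binary_img, min_area_px=50):
    """Mean (particle area / convex-hull area) over all labeled clusters."""
    regions = regionprops(label(binary_img))
    ratios = [r.solidity for r in regions if r.area >= min_area_px]
    return float(np.mean(ratios)) if ratios else float("nan")
```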
On the other hand, Fig. 4C examines whether the clusters are breaking up with time. The solid dots represent the cluster lengths \(\langle\ell\rangle\), normalized by the measurement at \(\varepsilon=1.0\), \(\langle\ell_{\varepsilon=1.0}\rangle\), versus strain. I also plot the inverse of the normalized number of clusters, \(n_{c}\) (scaled by \(n_{c,\varepsilon=1.0}\)), as empty circles. \(n_{c}\) is the number of clusters in the observation region normalized by the total area of particles. This quantity shows how many clusters are formed per unit area of material. If the inverse of \(n_{c}\) matches the change in cluster length, this implies that the change in the number of clusters is the main contributor to the change in cluster sizes, which is the case at \(V=20\ mm/s\). Moreover, the overall change in \(\langle\ell\rangle\) is small because the particles are already at a lower-energy configuration to start with. The slight increase in \(n_{c}^{-1}\) after \(\varepsilon=1.7\), that is, the decrease in the number of clusters, suggests that a larger-scale coalescence has occurred as the global shear became very weak.
At \(V=200\ mm/s\), a significant difference between \(n_{c}^{-1}\) and \(\langle\ell\rangle\) can be observed. The average cluster length decreases by more than 20% while the inverse of \(n_{c}\) only changed by 10%. A substantial rearrangement between neighboring particles is therefore responsible for this change, as shown in Fig. 4B as well.
## V Conclusions
In this paper, the morphology of particle rafts floating at an air-water interface is studied as the underlying fluid is forced to flow radially outward at a uniform constant velocity gradient. The experiments are designed to minimize any higher-order velocity gradients in the system. By examining the flow field of the radial expansion with particle image velocimetry (PIV), the local expansion metric is found to be uniform inside the apparatus. As was found in the case of uniaxial expansion described in Ref. [1], the failure morphology due to the outward flow varies smoothly as a function of the fluid expansion velocity. When the velocity is low, the rafts remain intact throughout the expansion without the formation of visible internal cracks. For larger expansion velocities, rifts open up inside the rafts. The rifts are distributed diffusely with a spacing between them that decreases with increasing velocity. At the largest velocities used in the experiment, the separation decreases until the cluster size, that is, the length between rifts, becomes only slightly larger than the size of a single particle. The measured average cluster sizes are very close to those of the one-dimensional expansion experiment and follow the same scaling behavior, despite the fact that the geometry of the expansion metric is different. This indicates that the 1D (in)stability model that was used to describe the relaxation and cluster formation in [1] can be generalized to two dimensions and is sufficient to explain the radial experiment.
By measuring the time evolution of the clusters, a nonlinear relaxation is found at later times; after the initial formation of clusters, the clusters start to shrink in size. The decrease in the particle ratio inside the clusters shows that the particles continue to rearrange and become more compact even while the overall outward expansion continues. This can be understood because, once the initial expansion rate sets the dominant cluster length, the velocity gradient between clusters is no longer that of the initial state. Because the particles are farther apart (in the rifts), the rifts will increase in size faster than the spaces between the more densely packed particles. Moreover, the global velocity gradient is also decreasing monotonically as the system size becomes larger. The particles inside an individual cluster will rearrange according to the competition between the local relaxation rate and the local shear rate set by neighboring particles. This later-time aspect of the relaxation is not captured by the one-dimensional linear (in)stability analysis.
This experiment provides a flexible platform for studying how morphology can form through the competition between two isotropic mechanisms. There are many other systems that share this common attribute. As pointed out in [1], an obvious analog is cosmological expansion [14]. In this table-top experiment, the interaction between particles has the same form as gravity in two dimensions. Other examples include gel formation [15] and structure formation by colloids in polymer solutions [16]. This two-dimensional system gives us direct access to the clustering morphology with minimal imaging complexity, which would be troublesome in three-dimensional systems. In addition, without tuning the details of the potentials chemically or at a small scale, one of the competing processes can be easily manipulated by changing the mechanical driving protocol, while the other can be controlled by changing specific particle-liquid or particle-particle interactions.
## VI Acknowledgement
I thank T. A. Witten, V. Vitelli and M. M. Bandi for insightful discussion. I am deeply grateful to S. R. Nagel for his support and mentorship. This work was supported by the National Science Foundation (MRSEC program NSF-DMR 2011854), by the Simons Foundation for the collaboration Cracking the Glass Problem Award #348125 and by Government Scholarship to Study Abroad by the Ministry of Education in Taiwan.
## Appendix A Details of the Radial Expander
The design of the radial expander used to create the flow with minimal velocity gradients is shown in Fig. 5.
The twenty-four arms are connected by binding posts (green nodes), which allow them to rotate freely, as shown by the gray pieces in Fig. 5A. Half of them are on the top (light gray) and half are at the bottom (dark gray). Two of the opposite inner nodes are each connected to opposite sides of a belt. As the belt is driven by a motor, a motion of the arms is created that moves the twelve inner (orange) nodes outward. Ball bearings are glued underneath the binding posts (shown in Fig. 5B) to hold the expander and make it glide smoothly on the platform. The inner nodes are long screws that go below the platform, and a 3D-printed chip is attached to the tip of each. This chip, as shown in Fig. 5B, has two threaded cylinders with a \(1.22~{}mm\) slit in between them. Each of the cylinders serves as a column for one side of a Mylar strip. Each Mylar strip forms a triangle and is attached to a weight at the corner against the side connecting adjacent nodes. This provides sufficient tension to the strip during the motion to create an outward-going boundary.
## Appendix B Particle Image Velocimetry
To measure the expansion of the liquid surface due to the extension of the pullers, Particle Image Velocimetry (PIV) measurements were performed on water at \(V=2.8\), \(26\) and \(260\)\(mm/s\). The result is shown in Fig. 6. A layer of light, non-interacting floating particles is spread sparsely on the water surface prior to expansion. By tracking the motion of these particles and computing the correlation between adjacent frames, the underlying fluid flow is shown to lead to a uniform radial expansion of the surface, as described in Eq. (1); the spacing of the particles increases in proportion to their relative position. Fig. 6 shows that the velocity gradient is constant radially.
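The following minimal sketch (not part of the original work; the particle data, names and parameter values are illustrative) shows how the affine-flow check of Fig. 6 can be carried out numerically: radial velocities are estimated from tracked particle positions in two frames, and the velocity gradient is obtained from a linear fit of \(u(r)\) against \(r\).

```python
import numpy as np

def radial_velocity_profile(pos0, pos1, dt):
    """Estimate radial velocities u(r) from tracked particle positions
    in two frames separated by dt (pos0, pos1: (N, 2) arrays, same ordering)."""
    mid = 0.5 * (pos0 + pos1)                    # midpoint positions
    r = np.linalg.norm(mid, axis=1)              # distance from the expansion centre
    v = (pos1 - pos0) / dt                       # finite-difference velocities
    u_r = np.einsum("ij,ij->i", v, mid) / np.clip(r, 1e-12, None)  # radial component
    return r, u_r

# Synthetic test: a purely affine radial expansion, the flow the expander aims to create.
rng = np.random.default_rng(0)
V, R, dt = 2.8, 100.0, 0.1                       # pulling speed (mm/s), raft radius (mm), frame gap (s)
pos0 = rng.uniform(-R / 2, R / 2, size=(500, 2))
pos1 = pos0 * (1 + V / R * dt)                   # every spacing grows by the same factor

r, u_r = radial_velocity_profile(pos0, pos1, dt)
slope = np.polyfit(r, u_r, 1)[0]                 # constant gradient => linear u(r)
print(f"fitted velocity gradient {slope:.4f} 1/s vs imposed {V / R:.4f} 1/s")
```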
Figure 5: Details of the radial expander design. (A) Schematics of the top view of the radial expander. The inner nodes are shown as orange and the outer nodes are green. Two of the inner (orange) nodes are connected to the opposite sides of a belt which is driven by a motor. The movement of the belt drives those two nodes in opposite directions. This opens (closes) the entire structure when the belt is rotated clockwise (counter-clockwise). Because of the geometry connecting all the nodes, pulling on just two of the inner nodes is sufficient for the entire structure to open smoothly. (B) A side view of the green and orange nodes. The green nodes are placed to ensure that the entire structure is stable and does not wobble vertically during the opening of the expander. The bottom of a green node is attached to a ball bearing so that it slides smoothly on a clear acrylic platform. Each orange node is a screw with a 3D-printed attachment at its tip. This attachment is composed of two closely-spaced columns that are threaded by two adjacent Mylar ribbons. These ribbons slide smoothly around the two columns as the expander is moving. Each ribbon is held in place by the attachments at two adjacent orange nodes. To keep the Mylar ribbon taut during movement of the expander, a weight is attached that pulls the ribbon outwards on the opposite side from where the particles are located. The Mylar ribbons are partly submerged below the liquid surface. It is these ribbons which create the boundary for the particles as shown in Fig. 1A. (C) Top: A photograph of the angled top view of the expander. Bottom: The side view of the chips and Mylar strips before inserting into water.
Figure 6: The Particle Image Velocimetry results of the radial expander across the whole velocity range. The magnitude of the velocity \(\mathbf{u}\) normalized by the pulling speed \(V\) is plotted versus the position relative to the center, normalized by the radius of the raft, \(L_{0}/2\). A linear relation can be observed for all \(V\), indicating that a radially affine flow field can be created by the expander. The measurements near the boundaries are excluded due to the difficulty of image processing. |
2307.07098 | Eliciting Informative Priors by Modelling Expert Decision Making | This article introduces a new method for eliciting prior distributions from
experts. The method models an expert decision-making process to infer a prior
probability distribution for a rare event $A$. More specifically, it assumes
there exists a decision-making process closely related to $A$ which forms a
decision $Y$, for which a history of decisions has been collected. By modelling
the data observed to make the historic decisions, using a Bayesian model, an
analyst can infer a distribution for the parameters of the random variable $Y$.
This distribution can be used to approximate the prior distribution for the
parameters of the random variable for event $A$. This method is novel in the
field of prior elicitation and has the potential to improve upon current
methods, both because it uses real-life decision-making processes that can
carry real-life consequences and because it does not require an expert to have
statistical knowledge. Future decision making can also be improved using this
method, as it highlights variables that are impacting the decision-making
process. An
application for eliciting a prior distribution of recidivism, for an
individual, is used to explain this method further. | Julia R. Falconer, Eibe Frank, Devon L. L. Polaschek, Chaitanya Joshi | 2023-07-14T00:06:16Z | http://arxiv.org/abs/2307.07098v1 | # Eliciting Informative Priors by Modelling Expert Decision Making
###### Abstract
This article introduces a new method for eliciting prior distributions from experts. The method models an expert decision-making process to infer a prior probability distribution for a rare event \(A\). More specifically, it assumes there exists a decision-making process closely related to \(A\) which forms a decision \(Y\), for which a history of decisions has been collected. By modelling the data observed to make the historic decisions, using a Bayesian model, an analyst can infer a distribution for the parameters of the random variable \(Y\). This distribution can be used to approximate the prior distribution for the parameters of the random variable for event \(A\). This method is novel in the field of prior elicitation and has the potential to improve upon current methods, both because it uses real-life decision-making processes that can carry real-life consequences and because it does not require an expert to have statistical knowledge. Future decision making can also be improved using this method, as it highlights variables that are impacting the decision-making process. An application for eliciting a prior distribution of recidivism, for an individual, is used to explain this method further.
Bayesian Methods, Prior Elicitation, Subjective
## 1 Introduction
Beginning with some prior knowledge (a prior probability distribution), Bayesian inference updates the prior by taking information from observed data (a likelihood) to build a posterior distribution over the parameters of interest, \(\theta\):
\[p(\theta|y)\propto p(\theta)p(y|\theta), \tag{1}\]
A prior distribution that has minimal influence on the posterior distribution, a 'non-informative' prior, is often used. Where there are large amounts of data, the choice of prior is largely irrelevant, since the likelihood dominates the posterior distribution. However, if there is limited data, the influence from the likelihood becomes minimal, producing a posterior that relies heavily on the prior information. For such instances, an informative prior distribution could be used [1].
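A minimal sketch of this point (not from the paper; the prior parameters and "true" rate are illustrative) using the conjugate Beta-Bernoulli pair: by Equation (1), a \(Beta(\alpha,\beta)\) prior combined with \(n\) Bernoulli observations gives a \(Beta(\alpha+\sum y_{i},\beta+n-\sum y_{i})\) posterior, so with little data the posterior stays close to the prior and with plenty of data the likelihood dominates.

```python
import numpy as np
from scipy import stats

def beta_bernoulli_posterior(alpha, beta, y):
    """Conjugate update of Eq. (1): Beta prior x Bernoulli likelihood -> Beta posterior."""
    k, n = int(np.sum(y)), len(y)
    return alpha + k, beta + n - k

rng = np.random.default_rng(1)
alpha, beta = 8.0, 2.0            # informative prior with mean 0.8
true_p = 0.3

for n in (0, 5, 5000):            # no data, limited data, plentiful data
    y = rng.binomial(1, true_p, size=n)
    a, b = beta_bernoulli_posterior(alpha, beta, y)
    print(f"n={n:5d}: posterior mean {a / (a + b):.3f}, "
          f"95% interval {stats.beta.ppf([0.025, 0.975], a, b).round(3)}")
# With n=5 the posterior is still pulled strongly towards the prior mean 0.8;
# with n=5000 the likelihood dominates and the posterior concentrates near 0.3.
```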
We consider scenarios exhibiting an event, \(A\), that is of serious consequence and where data on \(A\) is limited as the event rarely occurs. An analyst (see Table 1 for definitions used throughout this paper) wishes to obtain an informative prior distribution for \(A\). Although there may not be any data on \(A\), there may be some other related source of information that can be used to obtain a prior for \(A\). The most common way to do this is to elicit a distribution from an expert in the relevant field of interest. Methods to obtain an informative prior distribution from an expert are described in [2], which assigns methods to three categories: 1) Direct Interrogation Methods, 2) Indirect Interrogation Methods, and 3) Graphical/Visual Methods. Direct Interrogation methods [3, 4, 5] involve asking experts about the probability distributions directly. This can be challenging because experts must first have a firm grasp of probability theory and distributions. There are circumstances where an expert can first be taught key probability concepts [3, 6, 7], but this can prove difficult and create inaccurate prior distributions [8, 9]. This issue can also be seen in some graphical/visual methods [2]. Indirect Interrogation methods have been introduced to help combat the requirement of experts needing knowledge of probability theory. Indirect Interrogation methods involve asking the expert questions that are not directly based on the probability distributions themselves, but instead are easy for the expert to comprehend. From there, an analyst will use mathematical logic to infer a prior distribution. Two examples of Indirect Interrogation that display the simplicity of questioning are: getting the expert to place bets on which event they think is more likely [10] and getting the expert to rank the likelihood of events [11, 12, 9]. As highlighted in [2], some Indirect Interrogation methods can be thought of as hypothetical decision-making tasks. Hypothetical decision-making implies that whether the decision is correct or incorrect has no real consequence for the expert. Therefore, a prior elicited in this way may not accurately reflect the expert's thinking in real life.
The use of experts during the process of elicitation has the added complexity of introducing cognitive and motivational biases. In Direct Interrogation elicitation, the simple mistake of asking a question a certain way can produce cognitive biases which influence the experts response (e.g., anchoring and adjusting [13], where values in the questions are used by an expert to anchor their response value). Prior elicitation methods that use experts may also have cognitive biases based on an expert's work experience (e.g., judgment by availability [13], where an expert will put more weight on an event just because the expert witnessed that event more recently) or, to put it more generally, an expert's life experience, that includes biases they have formed over time (e.g., gender bias, racial bias). Using a group of experts to elicit one prior distribution can help an analyst gain a wider view of the whole field of interest [14]. A common way to do this is to get a group of experts to discuss opinions to form a consensus, however, this method can also come with cognitive biases that an analyst should be aware of, such as _groupthink_[15]. Groupthink is where the need to reach a consensus, while maintaining harmony within the group, means individuals do not voice alternative perspectives
| **Name** | **Description** |
| --- | --- |
| _Prior Elicitation_ | The process of obtaining knowledge from a source to form a prior distribution that can be used for further Bayesian analysis. |
| _Expert_ | An individual (or a group of individuals) who has extensive knowledge on a certain subject matter. The expert is also referred to as the decision maker in this text. |
| _Decision Maker_ | The individual who performs a decision-making task. In most cases, the Decision Maker and the Expert will be the same individual. |
| _Analyst_ | An individual who performs the task of forming a prior distribution using prior elicitation techniques. |
| _Facilitator_ | An individual who performs the task of eliciting knowledge. In some cases, the Facilitator and the Analyst may be the same individual. |

Table 1: Definitions expanded from a table from [2]
that may be outside the social "norm" or may be against the perspective of a strongly influential individual, skewing the group's elicited prior in one direction [15]. Instead of having experts reach a consensus, some methods allow analysts to combine experts' individual priors mathematically [3] to avoid cognitive biases that are formed from group consensus, such as groupthink. Some methods can elicit a prior distribution without an expert's input by using historical data (e.g., using the posterior from a similar historical study [16]); however, in most cases this historical data will not exist. Also, historical data is not immune from the effects of biases, and it is not just an individual expert's cognitive biases that an analyst must be aware of. Sometimes available data might encompass societal biases [17]. A famous example is the Correctional Offender Management Profiling for Alternative Sanctions, COMPAS [18]. COMPAS was a risk assessment tool that was used to obtain a recidivism score for defendants. Although ethnicity was not a factor in the model, the tool was still more likely to classify black individuals as high risk than other individuals [18]. This was because the model had learned from historic discriminatory court cases and enhanced the prejudices in the judicial system [17]. Another example is a tool that was used to rank the top five applicants based on their resumes for job vacancies at Amazon; it was found to be penalising applicants who were women and favouring those who were male [19, 17]. This was because the model learned patterns from historic data where women were not hired for positions at tech companies [17]. The societal biases that black individuals are more likely to commit crimes and that women are less suited to specific jobs were present in the data applied to these models and influenced the outputs. A lack of information, or inadequate information, can also produce a bias [20]. If the available information is heavily dominated by information on one group, then it is obvious that the results produced with this information could be considered biased, as in the COMPAS example. Often the available information is tabular data, which may be missing key information that is needed to give accurate outputs. Tabular data variables may also represent multiple factors of interest that are not directly collected in the data (confounding variables), making it hard to understand which variables are truly influencing the output. Reducing the impact of biases on the elicited prior is a key interest in prior elicitation [14, 3].
### Motivation
We believe the key limitations of current methods are: a) the statistical knowledge required of experts to perform elicitation by Direct Interrogation methods, b) the "hypothetical" decision-making tasks in Indirect Interrogation methods that have no real-life impact and could affect the accuracy of the elicited prior, and c) the difficulty of identifying biases when eliciting an expert prior. We introduce a method that eliminates some of these limitations by obtaining an approximation of a prior distribution through modelling an expert's past decision-making tasks. Our method eliminates the need for statistical knowledge by utilising a decision-making task that an expert performs as part of their duties. Often this decision-making task has real-life implications, meaning more importance is placed on the decision, and the experts will strive to be more accurate in their decisions. Thus, by modelling their past decisions, we may be able to capture their thinking more accurately than methods that rely on hypothetical decision-making. Also, modelling past real-life decisions eliminates biases that could be introduced in Direct Interrogation methods. However, because we are using experts, there may still be cognitive biases affecting the elicited distribution. Modelling data from the past decision-making tasks may allow analysts to identify variables that may be considered to be inducing bias in the decision-making process. The method is explained further in Section 2. We discuss ways to assess model behaviour in Section 3, with Section 4 outlining a simple example application. Finally, we close in Section 5 with a summary of conclusions and further work.
## 2 Eliciting Uncertainty from Decision Making
We introduce a method that combines concepts from Indirect Interrogation methods as well as those that use historical data, by forming a prior distribution from an expert's past decision-making task. We are concerned with an undesirable future event \(A\). The expert wishes to prevent \(A\) from occurring and considers a (preventative) decision \(Y\). Let \(X\) be the information that is available to the decision maker at the time. The expert is interested in being able to quantify the prior probability on \(A|X\), that is, the probability that \(A\) will occur given the available information \(X\). Using the expert's past decisions, the decision process \(Y|X\) can be modelled. We conjecture that, given \(X\), the uncertainty in the outcome of \(Y\) reflects the expert's uncertainty on whether \(A\) would occur or not if no preventative measures were taken. Therefore, \(A|X\) and \(Y|X\) are intimately related. For simplicity, we assume that the event \(A\) is binary (_occurs or not_), as is the preventative decision \(Y\) (_prevention put in place or not_). Let \(Y|X\sim Bernoulli(p)\) and \(A|X\sim Bernoulli(q)\). To be able to model the decision-making process \(Y|X\) accurately, the process should be repetitive (carried out often), and its outcomes and the information used to make the decisions should be available.
Let \(Y_{i}\) denote the decision made at the \(i^{th}\) instance (hereafter referred to as a _case_) and \(X_{i}\) be the information used by the decision maker to make that decision. Suppose that the data on \(n\) cases is available so that we have
\(\mathbf{Y}=\{Y_{1},Y_{2},\ldots,Y_{n}\}\) and \(\mathbf{X}=\{X_{1},X_{2},\ldots,X_{n}\}\). Let \(\boldsymbol{\theta}\) be model parameters that link the decisions \(\mathbf{Y}\) to the available information \(\mathbf{X}\) such that \(\mathbf{Y}\sim f(\mathbf{Y}|\mathbf{X},\boldsymbol{\theta})\). Given, \(\mathbf{Y}\), \(\mathbf{X}\) and a prior distribution \(\pi(\boldsymbol{\theta})\), we can find the posterior distribution \(\pi(\boldsymbol{\theta}|\mathbf{Y},\mathbf{X})\). Assuming information on a sufficient number \(n\) of similar cases and an appropriate model \(f\), it is reasonable to believe that using the information \(X^{*}\) for the next case, we could accurately predict the decision \(Y^{*}\) that the decision maker is likely to make using the posterior predictive distribution.
\[P(Y^{*}|X^{*},\mathbf{Y},\mathbf{X})=\int P(Y^{*}|X^{*},\boldsymbol{\theta}) \pi(\boldsymbol{\theta}|\mathbf{Y},\mathbf{X})\,d\boldsymbol{\theta}. \tag{2}\]
Let \(A_{i}\) be the undesirable consequence for the \(i^{th}\) case, which may or may not materialize. The data on (some or all of) past \(A_{i}\) may be available, but that is not considered here at this stage. Since \(Y_{i}\) is the preventative decision to mitigate the risk of \(A_{i}\), it is clear that \(Y_{i}\) reflects the decision maker's prediction on \(A_{i}\). That is, the fact that a preventative decision was put in place implies that the decision maker believes that \(A_{i}\) is likely to occur. Similarly, if the preventative measures were not put in place, this would reflect the decision maker's belief that \(A_{i}\) is unlikely to occur. That is,
\[A_{i}|X_{i}\overset{\text{d}}{\approx}Y_{i}|X_{i}. \tag{3}\]
Therefore, given the information \(X^{*}\) for the next case, the conditional predictive prior for \(A^{*}\) can be approximated using the posterior predictive distribution in Equation (2). That is,
\[\pi(A^{*}|X^{*})\approx P(Y^{*}|X^{*},\mathbf{Y},\mathbf{X}). \tag{4}\]
See the accompanying influence diagram (Figure 1) that depicts the relationship between the variables.
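In practice, Equation (2) is rarely available in closed form; given draws from the posterior \(\pi(\boldsymbol{\theta}|\mathbf{Y},\mathbf{X})\), it is approximated by Monte Carlo averaging. The sketch below (ours, not the paper's; the stand-in posterior draws and a logistic likelihood of the form in Equation (5) are purely illustrative) shows this averaging, which by Equation (4) also approximates \(\pi(A^{*}|X^{*})\).

```python
import numpy as np

def posterior_predictive(theta_samples, x_star, likelihood):
    """Monte Carlo approximation of Equation (2): average P(Y*=1 | X*, theta)
    over posterior draws of theta; by Equation (4) this also approximates
    the elicited prior probability that A* occurs given X*."""
    return np.mean([likelihood(x_star, th) for th in theta_samples])

def bernoulli_logit(x, theta):
    """Logistic likelihood of the form in Equation (5); theta = (theta0, theta1)."""
    return 1.0 / (1.0 + np.exp(-(theta[0] + theta[1] * x)))

# Stand-in posterior draws (in practice these come from fitting Y | X with MCMC).
rng = np.random.default_rng(2)
theta_samples = rng.normal([0.2, 1.0], [0.3, 0.2], size=(2000, 2))
print(posterior_predictive(theta_samples, 0.5, bernoulli_logit))
```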
As an illustrative example, let \(A\) be the event that a property in an industrial area will be burgled. This threat could be potentially mitigated by employing the services of a security consultant who would review the relevant information, \(X\), make an assessment, \(Y\), about the imminent risk and provide recommendations of security features that could be installed to prevent the threat from eventuating. If the data on \(n\) recent property evaluations by the same consultant are available, then we can model the consultant's risk perception using a statistical model. The goal is to obtain the probability distribution of a new property being burgled using the relevant information available \(X\). This probability distribution can be considered as an approximation to the consultant's prior probability distribution on whether the event \(A\) will occur given \(X\).
Note that our goal is not to accurately predict \(A\). Instead, we want the model to accurately mimic the experts' decision-making process, and capture the experts' uncertainty about the event \(A\), by considering the uncertainty in the model for the surrogate event \(Y\). To ascertain whether the model is accurately mimicking the expert's decision-making process, an analyst can observe at least one of the measures of central tendency of the probability distribution of the
Figure 1: Influence Diagram for Eliciting Prior Distributions from Expert Decision Making
parameter \(p_{i}\), and assess whether it correctly predicts \(Y_{i}\) in most of the cases (see Section 3). Moreover, we conjecture that the aleatory uncertainty captured by the model reflects the aleatory uncertainty of the expert on whether \(A\) will occur or not given \(X\). Our conjecture assumes that the decision maker recognizes that due to natural variability, an event may or may not occur even when it is very likely to occur and vice versa.
We will illustrate the use of this method with an example in Section 4 using Bayesian logistic regression. Given \(Y_{i}|X_{i}\sim Bernoulli(p_{i})\), the logistic regression model, with a link function \(g(.)\), is represented as,
\[g(p_{i})=\theta_{0}+\theta_{1}x_{1i}+\ldots\]
For example, with a simple logit link function and one predictor variable,
\[logit(p_{i})=log(\frac{p_{i}}{1-p_{i}})=\theta_{0}+\theta_{1}x_{i}\]
\[\Rightarrow p_{i}=\frac{exp(\theta_{0}+\theta_{1}x_{i})}{1+exp(\theta_{0}+ \theta_{1}x_{i})} \tag{5}\]
A Bayesian approach is implemented by placing prior distributions on the model parameters, \(\boldsymbol{\theta}=\{\theta_{0},\theta_{1},...\}\). Sampling methods, such as MCMC methods, can be used to approximate the posterior distribution of \(\boldsymbol{\theta}\). An analyst can select the prior distribution for the model parameters and the sampling method and adjust them to build the most appropriate model (Section 3). To approximate the probability distribution for \(p_{i}\) from this model, we can sample from the posteriors of the model parameters, \(\pi(\boldsymbol{\theta}|\mathbf{Y},\mathbf{X})\). These samples will be used in the model equation (for example, Equation 5) to obtain samples of \(p_{i}\). An approach such as the method of moments can then be used to fit a Beta distribution to these samples, which forms the elicited prior distribution of \(q_{i}\) for the model \(A_{i}|X_{i}\sim Bernoulli(q_{i})\).
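A minimal sketch of this elicitation step (ours, not the paper's; the stand-in posterior draws and the single predictor are illustrative): posterior draws of \(\boldsymbol{\theta}\) are pushed through the logit model of Equation (5) to obtain samples of \(p_{i}\), and a Beta distribution is fitted by matching their mean and variance.

```python
import numpy as np

def p_samples_from_posterior(theta_samples, x):
    """Samples of p_i for a single case x via the logit model of Equation (5)."""
    eta = theta_samples[:, 0] + theta_samples[:, 1] * x
    return 1.0 / (1.0 + np.exp(-eta))

def beta_from_moments(p_samples):
    """Method-of-moments Beta fit to the samples of p_i; the fitted Beta(a, b)
    is the elicited prior for q_i in A_i | X_i ~ Bernoulli(q_i)."""
    m, v = np.mean(p_samples), np.var(p_samples)
    common = m * (1 - m) / v - 1          # requires v < m * (1 - m)
    return m * common, (1 - m) * common

# Stand-in posterior draws of (theta0, theta1); real draws come from the fitted model.
rng = np.random.default_rng(3)
theta_samples = rng.normal([-1.0, 0.8], [0.25, 0.10], size=(5000, 2))
p = p_samples_from_posterior(theta_samples, x=1.5)
a, b = beta_from_moments(p)
print(f"elicited prior: Beta({a:.1f}, {b:.1f}), mean {a / (a + b):.3f}")
```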
There are many models that are used to predict rare or undesirable events, including Bayesian logistic regression models (e.g., for predicting recidivism [21, 22, 23, 24]). However, these models, to the best of our knowledge, have not yet been used to model expert decision-making or to elicit an expert's prior distribution using the posterior predictive distribution. We reiterate that our goal is not to predict a rare or undesirable event; instead, we wish to capture the uncertainty surrounding said event occurring.
## 3 Model Selection Diagnostics
To be able to elicit expert uncertainty accurately, we expect our model to behave like a decision maker. We want it to be more uncertain when it sees data it has never seen before (wider distributions of \(p_{i}\) that could be centered around 0.5) and less uncertain when it encounters familiar data (narrower distributions). Looking at the accuracy of the model is standard practice when assessing model performance (how accurately the model predicts the response variable, \(Y\), for a given test data set). If we wish to obtain the accuracy of a model which predicts the probability, \(p_{i}\), of a binary decision, \(Y_{i}\), labels are typically assigned as follows: if \(p_{i}\) is less than 0.5 then the decision is labelled "no", and if \(p_{i}\) is greater than 0.5 then the decision is labelled "yes" (or whatever the labels may be). When we are taking samples of \(p_{i}\), it is common practice to take the mean of those samples as our estimate of \(p_{i}\) for assigning labels. However, to assess how well the model captures the expert's thinking, model accuracy is not the only diagnostic of importance, as we must also take into consideration the variability of the elicited distributions and the uncertainty that they capture.
It is easy to show that using the mean of the sampled \(p_{i}\) values to assign labels for model accuracy may not give a fair representation of the variability of the distributions. For example, Figure 2 shows three distributions where the model would assign the same label if the means of \(p_{i}\) were used for assigning labels. However, we can see that using the mean does not accurately capture the difference in variability of the distributions and that using the median or the mode of the posterior predictive distribution would have assigned labels differently. We could also gain further insights by looking at the credible intervals of the distributions and assigning labels based on whether the value of \(p_{i}\) needed to assign a certain label lies within the credible interval. The credible interval also allows an analyst to assess the uncertainty of the elicited distributions, which is of importance when selecting an appropriate model. If the credible interval is wide and contains 0.5, then we can assume that our expert is fairly uncertain, and if it is narrow and on either side of 0.5, we can assume they are fairly certain. In the same way, the _area under the curve_ (AUC) of the distribution can be used. To further assess the model's capability to capture uncertainty, an analyst can observe the entropy of the elicited distributions. If the entropy value is close to zero, then we assume the expert is fairly certain; if it is close to one, then we assume they are fairly uncertain. Assessing whether or not the model is behaving appropriately is case-specific. If the analyst knows the decision-making task has a lot of uncertainty, then they would expect high entropy values and will need to assess the trade-off between high entropy and high accuracy values. However, if the task is fairly certain, involving black-and-white responses, then we would expect low entropy values and aim for high accuracy from our model.
These suggested diagnostics help an analyst assess the performance of the model, without looking at every single distribution produced. We advise analysts to look at multiple different model diagnostics to make sure the model is suitable for the task of prior elicitation and, also, to ensure they have a well-fitted model (Table 2). The analyst's goal should be to maximise the model's accuracy (how well it is predicting the response for a given data set) while also producing distributions that accurately capture uncertainty.
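A minimal sketch of a few of these diagnostics (ours, not the paper's), computed from per-case samples of \(p_{i}\). Two choices below are assumptions that the text leaves open: decisions are coded so that 1 is the label favoured when \(p_{i}\) is large, and "entropy" is implemented as the binary Shannon entropy of the mean prediction, one plausible measure lying in \([0,1]\).

```python
import numpy as np

def diagnostics(p_samples, y_true, lo=0.025, hi=0.975):
    """A few Table 2 diagnostics from per-case samples of p_i.

    p_samples : (n_cases, n_draws) array of sampled probabilities.
    y_true    : (n_cases,) array of decisions, coded so that 1 is the label
                favoured when p_i is large (an assumed convention).
    """
    mean_p = p_samples.mean(axis=1)
    mean_acc = np.mean((mean_p > 0.5).astype(int) == y_true)

    ci_lo = np.quantile(p_samples, lo, axis=1)
    ci_hi = np.quantile(p_samples, hi, axis=1)
    contains_half = (ci_lo <= 0.5) & (ci_hi >= 0.5)   # either label counts as correct
    one_sided_ok = np.where(ci_lo > 0.5, y_true == 1, y_true == 0)
    ci_acc = np.mean(contains_half | one_sided_ok)

    # Assumed entropy measure: binary Shannon entropy of the mean prediction,
    # close to 0 for confident cases and close to 1 for uncertain ones.
    eps = 1e-12
    entropy = -(mean_p * np.log2(mean_p + eps) + (1 - mean_p) * np.log2(1 - mean_p + eps))
    return mean_acc, ci_acc, entropy

# Toy check on synthetic samples of p_i.
rng = np.random.default_rng(4)
p = rng.beta(8, 3, size=(200, 100))
y = rng.binomial(1, p.mean(axis=1))
mean_acc, ci_acc, _ = diagnostics(p, y)
print(f"mean accuracy {mean_acc:.3f}, 95% CI accuracy {ci_acc:.3f}")
```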
## 4 Example
Let \(A\) be the event that a prisoner commits a crime upon release from prison. Information on a specific prisoner re-offending is limited and often censored, as we only know that a released prisoner has committed a crime if they were caught. However, there exists an expert decision-making process that can be used to infer a prior distribution on the event \(A\): the parole board hearing process. The parole board considers a report from a prisoner's case worker and decides whether or not to give a prisoner parole. When making a decision, the parole board is already taking into consideration the risk of the prisoner re-offending upon release, so this decision-making process can be used to infer a prior distribution on \(A\). For example, if parole is not granted, this implies that the risk of re-committing a crime for that individual is high.
### Data
We use a publicly available data set from the New York State Parole Board's interview calendar made available by The Parole Hearing Data Project\({}^{2}\). This data set contains information on the prisoner, the hearing process, and the final decision\({}^{3}\). It has 46 variables in total. We choose to take a subset of this data set by only looking at the initial parole board interviews, that is, the first time a prisoner appears before the parole board. The final data set has 9580 observations (Not Granted - 6962, Granted - 2618). The variables selected for our model are shown in Table 3. Variables were selected based on their perceived relevance to the decision, and if a variable had no impact on model performance it was removed. The posterior of each variable was also observed to see if the 95% credible interval contained zero (meaning it has little to no impact on the model).
Footnote 2: Data source [https://github.com/crackerman/parole-hearing-data](https://github.com/crackerman/parole-hearing-data)
Footnote 3: Data library [https://publicapps.doccs.ny.gov/ParoleBoardCalendar/About?form=datadef#Decision](https://publicapps.doccs.ny.gov/ParoleBoardCalendar/About?form=datadef#Decision)
### Model
We wish to model the Parole Board Decision (response variable) using all other variables as explanatory variables (Table 3). Numeric variables are standardised and categorical variables are changed to dummy variables. The model is fitted and posterior distributions are found on a training data set that consists of 80% of the full data set (7664 observations). The performance measures are assessed for a test data set of observations the model has never seen. The
Figure 2: Distributions of \(p_{i}\) for three different individuals that would obtain the same label assigned based on mean probability prediction.
test data set consists of the remaining 20% of the full data set (1916 observations). For a more accurate picture of how the model behaves, we randomly sampled five different testing and training sets and fitted the model separately in each case. We then took the average of the five different accuracy readings produced to get the final values. The structure of the model is shown in Equation 6.
\[Decision_{i} = \beta_{0}+\beta_{1}\times gender\_male_{i}+\beta_{2}\times age_{i}+\beta_{3}\times num\_years\_release_{i}+\beta_{4}\times num\_years\_parole_{i} \tag{6}\] \[+\beta_{5}\times crime\_count_{i}+\beta_{6}\times agg\_min\_sent_{i}+\beta_{7}\times agg\_max\_sent_{i}+\beta_{8}\times eth\_hispanic_{i}\] \[+\beta_{9}\times eth\_white_{i}+\beta_{10}\times eth\_other_{i}+\beta_{11}\times crime\_class\_B_{i}+\beta_{12}\times crime\_class\_C_{i}\] \[+\beta_{13}\times crime\_class\_D_{i}+\beta_{14}\times crime\_class\_E_{i}+\beta_{15}\times crime\_conviction\_assault_{i}\] \[+\beta_{16}\times crime\_conviction\_burglary_{i}+\ldots\] \[p_{i} = \frac{1}{1+e^{-Decision_{i}}}\]
| **Name** | **Description** |
| --- | --- |
| _Mean Accuracy_ | Percentage of correct predictions the model makes by using the mean of the sampled probabilities \(p_{i}\). |
| _Mode Accuracy_ | Percentage of correct predictions the model makes by using the mode of the sampled probabilities \(p_{i}\). |
| _Median Accuracy_ | Percentage of correct predictions the model makes by using the median of the sampled probabilities \(p_{i}\). |
| _Area Under Curve (AUC) Accuracy_ | Percentage of correct predictions the model makes by taking the largest area on either side of 0.5 as the measure to form the model prediction. |
| _95% Credible Interval (CI) Accuracy_ | Percentage of correct predictions the model makes by observing the 95% CI of \(p_{i}\). If the 95% CI contains 0.5, then the assigned label can be either "Accept" or "Reject" and it is a correct prediction. If the 95% CI is contained below 0.5 and the true label is "Accept", then it is a correct prediction. If the 95% CI is contained above 0.5 and the true label is "Reject", then it is a correct prediction. |
| _Percentage of the 95% CI correct predictions that contain 0.5_ | This allows the analyst to see how many central distributions are elicited. |
| _Percentage of the 95% CI correct predictions that are either side of 0.5_ | This allows the analyst to see how many skewed distributions are elicited. |
| _F-Score [25]_ | A measure which shows the specificity (true negative rate) and sensitivity (true positive rate) of the model. The mean of the samples of \(p_{i}\) is used to assign labels. The highest possible value of an F-score is 1.0, indicating perfect specificity and sensitivity, and the lowest possible value is 0, if either the specificity or the sensitivity is zero. |
| _Confusion Matrix [26]_ | Shows the percentage of the mean predictions by whether the prediction is a true negative, true positive, false negative or false positive, showing the specificity and sensitivity of the model. The mean of the samples of \(p_{i}\) is used to assign labels. |
| _Entropy [27]_ | A measure of the amount of uncertainty in a distribution. A narrow distribution will give a value close to zero (showing a certain prediction), and a wide distribution will give a value close to one (showing an uncertain distribution). To make sure the model is behaving correctly, it is helpful to observe a histogram of all entropy values for the training set, as well as the histograms of the entropy values of correct and incorrect predictions separately. |
| _Calibration Plot_ | A calibration plot shows how well the prediction probabilities match the true percentage probabilities of the data. The mean of the samples of \(p_{i}\) is used as the prediction probabilities. |

Table 2: Model diagnostics we suggest to help select an appropriate model for prior elicitation.
All parameters were initialised with a \(Normal(0,0.001)\) prior. All trace plots of the parameters were acceptable.
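The paper does not state which sampler was used, nor whether \(Normal(0,0.001)\) is parameterised by variance or by precision; the sketch below (ours) uses a random-walk Metropolis sampler and assumes the 0.001 is a precision (i.e. a vague prior with variance 1000), with synthetic data standing in for the parole records.

```python
import numpy as np

def log_posterior(theta, X, y, prior_prec=0.001):
    """Log posterior for the Bayesian logistic model; assumes the Normal(0, 0.001)
    prior is parameterised by precision, i.e. a vague Normal with variance 1000."""
    eta = X @ theta
    loglik = np.sum(y * eta - np.log1p(np.exp(eta)))   # Bernoulli-logit log likelihood
    return loglik - 0.5 * prior_prec * np.sum(theta ** 2)

def metropolis(X, y, n_iter=20000, step=0.05, seed=0):
    """Random-walk Metropolis sampler (one illustrative MCMC choice)."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(X.shape[1])
    lp = log_posterior(theta, X, y)
    draws = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.shape)
        lp_prop = log_posterior(prop, X, y)
        if np.log(rng.uniform()) < lp_prop - lp:       # accept/reject
            theta, lp = prop, lp_prop
        draws.append(theta.copy())
    return np.array(draws[n_iter // 2:])               # discard burn-in

# Toy run on synthetic standardised data with an intercept column.
rng = np.random.default_rng(5)
X = np.column_stack([np.ones(400), rng.standard_normal((400, 2))])
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ np.array([-0.5, 1.0, -0.8])))))
post = metropolis(X, y)
print(post.mean(axis=0).round(2))                      # roughly (-0.5, 1.0, -0.8)
```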
### Model Diagnostics
Accuracy readings were taken for the five different test sets and can be found in Table 4. The model obtains about 79% classification accuracy overall. The CI accuracy is approximately 84%, with 87% of the CIs being on either side of 0.5, showing that the model is making more certain predictions than predictions that could be either "Granted" or "Not Granted" (corresponding to CIs containing 0.5). The F-score is around 0.87, which is close to one, showing that the model has relatively good specificity and sensitivity. Figure 3a shows the entropy of all observations in a single test set\({}^{4}\). There are two peaks, one around zero and another around 0.5. From this, we can conclude that our model has some very certain predictions (peak around zero) and some less certain or very uncertain predictions (peak around 0.5). To gain further insight into the behaviour of our model in terms of entropy, Figure 3b displays the entropy of correct predictions the model made and Figure 3c displays the entropy of incorrect predictions. We can see that for incorrect predictions the large peak at zero is not present (Figure 3c), whereas it is still present for correct predictions (Figure 3b), meaning our model is less certain in its predictions when it is incorrect. The model looks relatively well calibrated to the data (Figure 4a). The confusion matrix (Figure 4b) shows that the model has a high true positive rate, that is, the model is predicting "Not Granted" well, which is to be expected due to the disproportionate amount of "Not Granted" versus "Granted" parole decisions in the data set. Overall, we believe the model shows acceptable behaviour for the proposed task.
| **Variable Name** | **Variable Description** |
| --- | --- |
| _Parole Board Decision_ | Simplified labels to a binary response: Granted = {Open Date, Granted, Paroled}, Not Granted = {Denied, Not Granted}. |
| _Gender_ | Male, Female |
| _Ethnicity_ | Black, White, Hispanic, Other |
| _Age_ | Years from birth date to interview date. |
| _Crime 1 Class_ | Felony codes A, B, C, D and E, with A felonies being the most serious and E felonies being the least serious. |
| _Number of Years to Release Date_ | Years from interview date to release date. |
| _Number of Years to Parole Date_ | Years from interview date to parole eligibility date. |
| _Aggregated Maximum Sentence_ | Maximum aggregated amount of time a prisoner must serve for the crimes they are convicted of. |
| _Aggregated Minimum Sentence_ | Minimum aggregated amount of time a prisoner must serve for the crimes they are convicted of. |
| _Crime Count_ | Number of crimes a prisoner was convicted of under the given sentence (not all criminal history, just crimes for the current prison stay). |
| _Crime 1 Conviction_ | Simplified down to the following set: {Possession: crimes involving possession of an illegal substance or firearm; Grand Larceny: taking of goods in excess of $1000; Assault: crimes involving assault, excl. sexual assault; DWI: driving under the influence of drugs or alcohol; Court: crimes involving court proceedings (e.g., perjury, contempt); Sale: crimes involving sale of an illegal substance or firearm; Sexual: any sex-related crime (e.g., sexual assault, rape); Fake: crimes where an individual has faked something (e.g., forgery, identity theft); Death: any crime where an individual has caused death excl. murder (e.g., manslaughter, homicide); Stalking: including surveillance and harassment; Conspiracy; Murder; Robbery; Arson; Fraud; Kidnapping; Other: all other crimes which do not come under any of the other labels}. Reducing categories in this way is common practice in statistics and is done throughout crime modelling [28]. |

Table 3: Variable names and descriptions
### Elicited Prior Distribution
After selecting the appropriate model, we can now obtain the elicited prior distribution for a new case. To produce a distribution of expert uncertainty for a single case, we obtain samples of \(p_{i}\) using the available information on the prisoner. We do this by sampling 100 times from the posterior distributions of the model parameters. These samples are then used to calculate samples of \(p_{i}\), the probability of a decision \(Y_{i}|X_{i}\). Then, the method of moments is used to fit a Beta distribution to the samples of \(p_{i}\), producing a final distribution capturing the uncertainty surrounding a prisoner re-committing a crime. An analyst can also choose to fit other distributions to the data by MLE. They can then select the best distribution by the Kolmogorov-Smirnov test [29].
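A minimal sketch of this distribution-selection step (ours, not the paper's; the candidate families and the synthetic samples, chosen to resemble Prisoner 1's elicited \(Beta(74.111,266.202)\) below, are illustrative): each candidate is fitted by MLE with scipy and the best Kolmogorov-Smirnov fit is kept. Note that applying the KS test with parameters estimated from the same data is known to be optimistic, so it is used here only to rank candidates.

```python
import numpy as np
from scipy import stats

def select_distribution(p_samples, candidates=("beta", "gamma", "norm")):
    """Fit each candidate family by MLE and keep the best Kolmogorov-Smirnov fit."""
    best = None
    for name in candidates:
        params = getattr(stats, name).fit(p_samples)   # MLE fit
        ks = stats.kstest(p_samples, name, args=params)
        if best is None or ks.statistic < best[2]:
            best = (name, params, ks.statistic, ks.pvalue)
    return best

# Synthetic samples resembling Prisoner 1's elicited distribution below.
rng = np.random.default_rng(6)
p = rng.beta(74.111, 266.202, size=5000)
name, params, D, pval = select_distribution(p)
print(name, np.round(params, 3), f"KS D={D:.4f}, p={pval:.3f}")
```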
Consider three prisoners: Prisoner 1, Prisoner 2 and Prisoner 3 (the prisoners' attributes are found in Table 5). The elicited prior distributions are shown in Figure 5. Prisoner 1 yielded a \(Beta(74.111,266.202)\) prior distribution (Figure 5a). Prisoner 2 yielded a \(Beta(382.491,154.224)\) prior distribution (Figure 5b). Prisoner 3 yielded a \(Beta(1181.395,7.210)\) prior distribution (Figure 5c). These elicited distributions can now be used as prior distributions for recidivism for the given individuals and can be used to aid further decision-making.
### Influential Variables
For this example, we have shown how an analyst can elicit a prior distribution from an expert decision-making process using tabular data. However, can an analyst trust that this elicited distribution is reliable? Can they trust the expert's decisions? Could some variables be wrongly influencing decisions? We chose to consider these questions by exploring variables seen in the decision-making process that should not have a cause-effect relationship with the decision. The variables we chose to explore are ethnicity, gender, and age. To explore the effect of these variables, we first created models without these variables and compared them to the original model. Each model was run five times with different testing and training data sets to produce an average of all accuracy measures.
The model without ethnicity obtained the lowest average accuracy; in fact, all five testing data sets gave lower accuracy than the full model (Table 6). It is also interesting to see that the model without ethnicity has a higher percentage of 95% CI correct predictions that contain 0.5. The model without age behaves roughly similarly to the
| **Accuracy Measure** | **Average** |
| --- | --- |
| _Mean Accuracy_ | 79.538% |
| _Mode Accuracy_ | 79.498% |
| _Median Accuracy_ | 79.51% |
| _AUC Accuracy_ | 79.488% |
| _95% CI Accuracy_ | 84.542% |
| _Percentage of the 95% CI correct predictions that contain 0.5_ | 12.832% |
| _Percentage of the 95% CI correct predictions that are either side of 0.5_ | 87.164% |
| _F-Score_ | 0.867 |

Table 4: Average performance measures from five models.
Figure 3: Entropy Plots
full model, and the model without gender is only slightly less accurate. We also look at the behaviour of the elicited distribution of a test point from each model (Figure 6). It can be seen that for each prisoner the full model and the models without age or gender perform similarly; however, the model without ethnicity produces a different distribution (Figure 6). This finding is consistent for all prisoners considered. We can further explore the impact of the variable ethnicity by using the full model, looking at a single prisoner, and changing their ethnicity (Figure 7). Again, there is a clear difference between the elicited distributions for the different ethnicities. This shows us that ethnicity has an impact on the decision. Removing ethnicity from the model may reduce bias in the elicited prior distribution, but it should be noted that variables in tabular form can sometimes represent other information that may be valuable for eliciting an accurate prior distribution (confounding variables). For example, the variable ethnicity may be a proxy for socioeconomic status [30]. This is a limitation of incomplete tabular data, as an analyst can only assume what this other information is. In this context, it is worth noting that there may be other methods that can go beyond tabular data and allow an analyst to use all the information a decision maker considered, so that all the necessary information is kept in the model when eliciting a prior distribution.
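A minimal sketch of this drop-one-variable comparison (ours, not the paper's; it uses a MAP point summary rather than full posterior sampling to keep the example short, and the synthetic features are illustrative): each variant removes one column, refits, and reports mean-prediction accuracy.

```python
import numpy as np
from scipy.optimize import minimize

def fit_map(X, y, prec=0.001):
    """MAP point estimate for the Bayesian logistic model (a cheap stand-in
    for full posterior sampling, used here only to compare accuracies)."""
    def nlp(theta):
        eta = X @ theta
        return -(np.sum(y * eta - np.log1p(np.exp(eta))) - 0.5 * prec * theta @ theta)
    return minimize(nlp, np.zeros(X.shape[1]), method="BFGS").x

def with_intercept(F):
    return np.column_stack([np.ones(len(F)), F])

def ablation(F_tr, y_tr, F_te, y_te, names):
    """Mean-prediction accuracy of the full model and of drop-one-variable models."""
    variants = [("full", list(range(F_tr.shape[1])))]
    variants += [(f"without {n}", [k for k in range(F_tr.shape[1]) if k != j])
                 for j, n in enumerate(names)]
    results = {}
    for label, cols in variants:
        theta = fit_map(with_intercept(F_tr[:, cols]), y_tr)
        p = 1 / (1 + np.exp(-(with_intercept(F_te[:, cols]) @ theta)))
        results[label] = float(np.mean((p > 0.5) == y_te))
    return results

# Toy demonstration: dropping the influential feature should hurt accuracy.
rng = np.random.default_rng(7)
F = rng.standard_normal((1000, 3))
y = rng.binomial(1, 1 / (1 + np.exp(-(1.2 * F[:, 0] - 0.1 * F[:, 1]))))
print(ablation(F[:800], y[:800], F[800:], y[800:], ["x0", "x1", "x2"]))
```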
Figure 4: Other performance plots
| **Attribute** | **Prisoner 1** | **Prisoner 2** | **Prisoner 3** |
| --- | --- | --- | --- |
| Age | 34 years | 23 years | 29 years |
| Number of years to release date | 0 years | 0 years | 1 year |
| Number of years to parole date | 0 years | 0 years | 0 years |
| Aggregated Maximum Sentence | 3 years | 3 years | 4 years |
| Aggregated Minimum Sentence | 1 year | 1 year | 1 year |
| Gender | Male | Male | Male |
| Ethnicity | White | Black | White |
| Crime Count | 1 | 1 | 2 |
| Crime 1 Conviction | Burglary | Possession | DWI |
| Crime 1 Class | D | E | E |
| Decision | Granted | Not Granted | Not Granted |

Table 5: The prisoners' attributes used in Example 1
### Summary
This example shows how to elicit the expert uncertainty present when considering whether a prisoner will re-commit a crime upon release, using a Bayesian logistic regression model to model parole board decision-making. The proposed process also enables an analyst to observe the impact of variables that may be influencing the decisions. The example has limitations: the parole board usually makes its decisions based on a report submitted by a prisoner's case worker, and the only available data considered in the example was tabular data, which does not provide all the information that would be in the report. It would be interesting to see if modelling the report data would provide different results to those obtained above. Also, as with any elicitation method, there may be questions regarding the accuracy of the elicited prior distributions. The accuracy of elicited prior distributions is an ongoing concern of the prior elicitation field [31] and should be a continuing path for future research.
## 5 Conclusions and Future Work
We introduce a new method to elicit prior distributions for an event by modelling an expert decision-making task. We assume that a decision, \(Y\), is closely related to the event \(A\), so that samples from \(P(Y|X,\theta)\), for different values of \(\theta\), can be used to approximate the prior distribution for \(A\) given a particular case \(X\). This method allows an analyst to elicit a prior distribution from a real-world expert decision-making process, without the expert needing knowledge of probability concepts. This method can also be easily implemented for multiple experts where a decision is made in consensus, because it models a single decision regardless of whether an individual or a group makes it. We introduced this method with an example of recidivism using tabular data. This example used Bayesian logistic regression to model the parole board decision-making process. Once an appropriate model was fitted, samples from the posterior distributions of the parameters were taken to form a distribution that can be used as a prior distribution for recidivism.
| **Accuracy Measure** | **Full Model** | **Model without Ethnicity** | **Model without age** | **Model without gender** |
| --- | --- | --- | --- | --- |
| _Mean Accuracy_ | 79.538% | 78.286% | 79.77% | 78.988% |
| _Mode Accuracy_ | 79.498% | 78.288% | 79.488% | 78.904% |
| _Median Accuracy_ | 79.51% | 78.298% | 79.72% | 78.988% |
| _AUC Accuracy_ | 79.488% | 78.298% | 79.72% | 78.978% |
| _95% CI Accuracy_ | 84.542% | 85.564% | 84.394% | 84.3% |
| _Percentage of the 95% CI correct predictions that contain 0.5_ | 12.832% | 17.608% | 12.074% | 14.168% |
| _Percentage of the 95% CI correct predictions that are either side of 0.5_ | 87.164% | 82.388% | 87.904% | 85.83% |
| _F-Score_ | 0.867 | 0.857 | 0.868 | 0.86356 |

Table 6: Accuracy measures of models where the variables of interest are removed
Figure 5: Prior distributions for three different prisoners
Using this method also enables an analyst to explore variables that may be strongly influencing the decision-making process. What to do with this information should be a topic of future research: should an analyst remove this information, or should it be shared with the experts to help train for future decision-making? A limitation of the logistic regression example considered in this paper is that the use of tabular data makes it challenging for an analyst to truly ascertain what is influencing the decision-making, as this type of data only provides limited information and is often not what an expert would use to make their decisions. It would be interesting to explore modelling decision-making tasks that involve more complex data, such as images or reports. Basic statistical models cannot perform these tasks; instead, machine-learning approaches will have to be implemented. Another limitation of the scenario considered in this paper is that we only consider decisions that have a binary outcome; however, there are many circumstances where decisions are not binary. There are ways to extend Bayesian logistic regression to the multinomial case, which should be explored further for prior elicitation. A concern in the field of prior elicitation is how accurate the elicited prior distribution is; further research could be undertaken to see how accurate this method of prior elicitation is and whether there is a method to calibrate the elicited distribution (see the example in [31]). If there exist cases where the outcome of the event \(A\) has been observed, these could potentially be used to calibrate the elicited prior distribution. Overall, although we hope to have argued successfully that the proposed method is a promising candidate for prior elicitation in practical applications, further research should be performed to improve the practicality and generality of the approach.
|
2306.08082 | Uniqueness for 2D Euler and transport equations via extrapolation | Using extrapolation theory, we develop a new framework to prove the
uniqueness of solutions for transport equations. We apply our methodology to
unify and extend the classical results of Yudovich and Vishik for 2D Euler
equations. In particular, we establish the uniqueness for the Euler flow whose
vorticity belongs to new scales of function spaces that contain both Yudovich
spaces and BMO. We give a self-contained presentation. | Oscar Dominguez, Mario Milman | 2023-06-13T18:55:47Z | http://arxiv.org/abs/2306.08082v1 | # Uniqueness for 2D Euler and transport equations via extrapolation
###### Abstract.
Using extrapolation theory, we develop a new framework to prove the uniqueness of solutions for transport equations. We apply our methodology to unify and extend the classical results of Yudovich and Vishik for 2D Euler equations. In particular, we establish the uniqueness for the Euler flow whose vorticity belongs to new scales of function spaces that contain both Yudovich spaces and BMO. We give a self-contained presentation.
Key words and phrases: 2D incompressible Euler equations, transport equations, Yudovich's uniqueness theorem, Vishik's uniqueness theorem, BMO, extrapolation. 2020 Mathematics Subject Classification: Primary 76B03, 46M35. Secondary 46E30, 46E35. _Acknowledgements._ Part of this research was carried out while the first-named author was a postdoc at the Institut Camille Jordan, Lyon, supported by the Labex Milyon and the French National Research Agency (ANR-10-LABX-0070), (ANR-11-IDEX-0007). It is our pleasure to thank Prof. Petru Mironescu for some helpful comments that helped to improve the presentation of the paper.
###### Contents
* 1 Introduction
## 1 Introduction

The uniqueness of weak solutions of the Cauchy problem for the Euler equations of an ideal incompressible fluid on \(\Omega=\mathbb{R}^{2}\) or \(\mathbb{T}^{2}\) still presents challenging open questions. In 2D the Euler equations can be formulated in terms of a transport equation
* 2.39 The uniqueness of weak solutions to the Euler equations of an ideal incompressible fluid on \(\Omega=\mathbb{R}^{2}\) or \(\mathbb{T}^{2}\) still presents challenging open questions. In 2D the Euler equations can be formulated in terms of a transport equation
* 2.39 The uniqueness of weak solutions to the Euler equations of an ideal incompressible fluid on \(\Omega=\mathbb{R}^{2}\) or \(\mathbb{T}^{2}\) still presents challenging open questions. In 2D the Euler equations can be formulated in terms of a transport equation
* 2.40 The uniqueness of weak solutions to the Euler equations of an ideal incompressible fluid on \(\Omega=\mathbb{R}^{2}\) or \(\mathbb{T}^{2}\) still presents challenging open questions. In 2D the Euler equations can be formulated in terms of a transport equation
* 2.41 The uniqueness of weak solutions to the Euler equations of an ideal incompressible fluid on \(\Omega=\mathbb{R}^{2}\) or \(\mathbb{T}^{2}\) still presents challenging open questions. In 2D the Euler equations can be formulated in terms of a transport equation
* 2.42 The uniqueness of weak solutions to the Euler equations of an ideal incompressible fluid on \(\Omega=\mathbb{R}^{2}\) or \(\mathbb{T}^{2}\) still presents challenging open questions. In 2D the Euler equations can be formulated in terms of a transport equation
* 2.43 The uniqueness of weak solutions to the Euler equations of an ideal incompressible fluid on \(\Omega=\mathbb{R}^{2}\) or \(\mathbb{T}^{2}\) still presents challenging open questions. In 2D the Euler equations can be formulated in terms of a transport equation
* 2.44 The uniqueness of weak solutions to the Euler equations of an ideal incompressible fluid on \(\Omega=\mathbb{R}^{2}\) or \(\mathbb{T}^{2}\) still presents challenging open questions. In 2D the Euler equations can be formulated in terms of a transport
The uniqueness results for \(Y^{\Theta}_{p_{0}}\) vorticities are formulated in terms of functions associated with a growth function \(\Theta\), which we shall term _Yudovich functions4_\(y_{\Theta}\),
Footnote 4: In [36], the function \(y_{\Theta}\) is defined in slightly different way for \(r\in(0,1)\). However, this minor modification will not play a role in what follows, cf. (1.7).
\[y_{\Theta}(r):=\inf_{p>p_{0}}\{\Theta(p)r^{1/p}\},\qquad r>0. \tag{1.5}\]
In the theory of [36], Yudovich functions appear naturally in the crucial differential inequality connecting \(Y^{\Theta}_{p_{0}}\) solutions with the energy method. Indeed, let \((v_{i},\omega_{i})\), \(i=1,2\), be two solutions of (1.1) and set \(E(t)=\left\|v_{1}(t,\cdot)-v_{2}(t,\cdot)\right\|_{L^{2}(\Omega)}^{2}\). From the definitions of weak solution and of the norm of \(Y^{\Theta}_{p_{0}}(\Omega)\), combined with the Biot-Savart law (1.2) and the sharp \(L^{p}\)-norm estimates for CZO, one has
\[\frac{dE(t)}{dt}\leq c\left\|\omega_{1}\right\|_{Y^{\Theta}_{p_{0}}(\Omega)}E (t)\,y_{\Theta_{1}}(E(t)^{-1}),\]
where \(y_{\Theta_{1}}\) is the Yudovich function associated with the growth function \(\Theta_{1}\),
\[\Theta_{1}(p):=p\,\Theta(p). \tag{1.6}\]
Uniqueness (i.e., \(v_{1}=v_{2}\)) is then achieved under the following Osgood condition on \(y_{\Theta_{1}}\)(cf. [36])
\[\int_{0}^{1}\frac{dr}{ry_{\Theta_{1}}(\frac{1}{r})}=\infty. \tag{1.7}\]
It is easy to verify that (1.7) holds for \(\Theta(p)\approx 1\); therefore, from (1.4), we obtain the classical uniqueness result of [35]. It also holds for \(\Theta(p)\approx\log p\), whose corresponding space \(Y^{\Theta}_{p_{0}}\) can be seen to include unbounded vorticities of the form \(\omega(x)\approx\left|\log\left|\log|x|\right|\right|\). However, (1.7) places a severe restriction on \(\Theta\) and, indeed, it fails for linear growth \(\Theta(p)\approx p\), which corresponds to vorticities \(\omega\) in the Orlicz space \(e^{L}\) of exponentially integrable functions. It follows that, by this method, uniqueness cannot be guaranteed for vorticities of the form \(\omega(x)\approx\left|\log|x|\right|\), the prototype of an unbounded function in BMO (cf. [21]). We mention that an alternative and elementary approach to the well-posedness of 2D Euler equations in \(Y^{\Theta}_{p_{0}}\) has been recently proposed by Crippa and Stefani [12].
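To make the dichotomy explicit, here is a sketch of the two computations, granting the asymptotics \(y_{\Theta_{1}}(r)\approx\Theta_{1}(\log r)\) for large \(r\) (valid for these growths; cf. Lemma 2 below). For \(\Theta(p)\approx\log p\) we have \(\Theta_{1}(p)\approx p\log p\), hence, substituting \(u=\log\frac{1}{r}\) and taking \(\varepsilon>0\) small,
\[\int_{0}^{\varepsilon}\frac{dr}{r\,\log\frac{1}{r}\,\log\log\frac{1}{r}}=\int_{\log\frac{1}{\varepsilon}}^{\infty}\frac{du}{u\log u}=\infty,\]
so (1.7) holds; whereas for \(\Theta(p)\approx p\) we have \(\Theta_{1}(p)\approx p^{2}\), and the same substitution gives
\[\int_{0}^{\varepsilon}\frac{dr}{r\,(\log\frac{1}{r})^{2}}=\int_{\log\frac{1}{\varepsilon}}^{\infty}\frac{du}{u^{2}}<\infty,\]
so (1.7) fails.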
Working in the full plane, the expected uniqueness result for vorticities with logarithmic singularities was obtained somewhat later by Vishik [32], using a different method. Interestingly, Vishik's method is also constructive and relies on the introduction of the "Vishik spaces" \(B_{\Pi}(\mathbb{R}^{d})\) associated with "growth functions" \(\Pi\), that control the growth of partial sums of the \(L^{\infty}\)-norm of dyadic frequency localizations \(\{\Delta_{j}f\}_{j\geq 0}\) of \(f\in\mathcal{S}^{\prime}(\mathbb{R}^{d})\). Specifically,
\[\left\|f\right\|_{B_{\Pi}(\mathbb{R}^{d})}:=\sup_{N\geq 0}\frac{1}{\Pi(N)} \,\sum_{j=0}^{N}\left\|\Delta_{j}f\right\|_{L^{\infty}(\mathbb{R}^{d})}<\infty. \tag{1.8}\]
Note that \(\Pi(N)\approx 1\) gives \(B_{\Pi}(\mathbb{R}^{d})=B^{0}_{\infty,1}(\mathbb{R}^{d})\), a classical Besov space.
In this approach the Osgood condition on \(\Pi\), controlling uniqueness, is given by
\[\int_{1}^{\infty}\frac{dr}{\Pi(r)}=\infty. \tag{1.9}\]
Under this assumption, uniqueness of the Euler flow is guaranteed provided that5
Footnote 5: The result holds for \(d\) arbitrary, under the assumption \(p_{0}\in(1,d)\).
\[\omega\in L^{\infty}([0,T];B_{\Pi}(\mathbb{R}^{2})\cap L^{p_{0}}(\mathbb{R}^{ 2})) \tag{1.10}\]
for some \(p_{0}\in(1,2)\).
In particular, \(\Pi(p)\approx p\) satisfies (1.9) and uniqueness for vorticities satisfying6
Footnote 6: \(\operatorname{bmo}(\mathbb{R}^{d})\) refers to the local (a.k.a. inhomogeneous) version of \(\operatorname{BMO}(\mathbb{R}^{d})\). Recall that \(\operatorname{bmo}(\mathbb{R}^{d})\subseteq\operatorname{BMO}(\mathbb{R}^{d})\).
\[\omega\in L^{\infty}([0,T];\operatorname{bmo}(\mathbb{R}^{2})\cap L^{p_{0}}( \mathbb{R}^{2})),\qquad p_{0}\in(1,2), \tag{1.11}\]
can be deduced from the embeddings
\[\operatorname{bmo}(\mathbb{R}^{2})\hookrightarrow B^{0}_{\infty,\infty}( \mathbb{R}^{2})\hookrightarrow B_{\Pi}(\mathbb{R}^{2}). \tag{1.12}\]
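Incidentally, the second embedding in (1.12) is a one-line check (a sketch, with the harmless proviso that the growth function \(\Pi\) is positive and nondecreasing): for \(\Pi(r)\approx r\),
\[\frac{1}{\Pi(N)}\sum_{j=0}^{N}\|\Delta_{j}f\|_{L^{\infty}(\mathbb{R}^{2})}\leq\frac{N+1}{\Pi(N)}\,\sup_{j\geq 0}\|\Delta_{j}f\|_{L^{\infty}(\mathbb{R}^{2})}\lesssim\|f\|_{B^{0}_{\infty,\infty}(\mathbb{R}^{2})},\]
and taking the supremum over \(N\geq 0\) yields \(\|f\|_{B_{\Pi}(\mathbb{R}^{2})}\lesssim\|f\|_{B^{0}_{\infty,\infty}(\mathbb{R}^{2})}\).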
Another approach to uniqueness in \(\operatorname{BMO}\) is due to Azzam and Bedrossian [3]. In fact, these authors deal with a large class of active scalar equations, including Euler and Navier-Stokes equations and modified SQG equations. Their method (built on \(H^{-1}\) norms) may be considered as a further refinement of the energy method of [35], one that takes into account not only the integrability, but also the inherited regularity properties of \(\operatorname{BMO}\) functions.
In what concerns larger classes of vorticities, a challenging open problem in the theory is to decide whether uniqueness of solutions for (1.1) can be achieved for vorticities in \(L^{p}.\) In recent remarkable work [33, 34], Vishik (cf. also the lecture notes [1]) shows that, for any \(2<p<\infty\), there exist \(\omega_{0}\in L^{1}(\mathbb{R}^{2})\cap L^{p}(\mathbb{R}^{2})\), and a force \(f\), such that there exist infinitely many weak solutions of
\[\omega_{t}+v\cdot\nabla\omega=f,\]
with \(\omega\in L^{1}(\mathbb{R}^{2})\cap L^{p}(\mathbb{R}^{2})\), uniformly in time.
In view of the above discussion, an outstanding question in the area is to obtain sufficiently large classes of vorticities between \(L^{p}\) and \(\operatorname{BMO}\) that still guarantee uniqueness for the 2D Euler flow. In this paper, we address such a question by introducing the _sharp Yudovich space_\(Y_{p_{0}}^{\#\Theta}(\Omega)\) (the precise definition will be postponed to Section 1.7, Definition 1). This space has some remarkable features. Indeed, we show that the Yudovich functions \(y_{\Theta_{1}}\) (cf. (1.5) and (1.6)) associated with \(Y_{p_{0}}^{\Theta}\) and \(Y_{p_{0}}^{\#\Theta}\) are the same, but now, for every growth7\(\Theta\),
Footnote 7: The space \(\operatorname{BMO}(\Omega)\) in (1.13) should be replaced by \(\operatorname{BMO}(\Omega)\cap L^{p_{0}}(\Omega)\) if \(\Omega=\mathbb{R}^{2}\).
\[Y_{p_{0}}^{\Theta}(\Omega)\cup\operatorname{BMO}(\Omega)\subset Y_{p_{0}}^{ \#\Theta}(\Omega). \tag{1.13}\]
As a consequence, we establish the following uniqueness assertion for \(Y_{p_{0}}^{\#\Theta}\).
**Theorem 1.1**.: _Let \(\Omega=\mathbb{R}^{2},\mathbb{T}^{2}\). Assume that the growth function \(\Theta\) satisfies the Osgood type condition (1.7). Then a (Lagrangian) weak solution \(\omega\) of (1.1), such that_
\[\omega\in L^{\infty}([0,\infty);Y_{p_{0}}^{\#\Theta}(\Omega))\qquad\text{for some}\qquad p_{0}\in(2,\infty), \tag{1.14}\]
_is uniquely determined by its initial value \(\omega_{0}\)._
The previous result tells us that the uniqueness vorticity classes \(Y_{p_{0}}^{\Theta}\) and \(\operatorname{BMO}\) considered in [36] and [32] can be considerably enlarged. Note that (1.14) holds provided that \(\omega_{0}\in Y_{p_{0}}^{\Theta}(\Omega)\) (cf. (1.13)), while Theorem 1.1 with \(\Theta(p)\approx 1\) recovers uniqueness in
\[Y_{p_{0}}^{\#\Theta}(\Omega)=L^{p_{0}}(\Omega)\cap\operatorname{BMO}(\Omega). \tag{1.15}\]
Hence, we improve the classical result (1.4) in the sense that \(L^{\infty}(\Omega)\) is now replaced by \(\operatorname{BMO}(\Omega)\). However, the applicability of Theorem 1.1 goes further beyond \(Y_{p_{0}}^{\Theta}\) and \(\operatorname{BMO}\). Indeed, the admissible growth \(\Theta(p)\approx\log p\) in Theorem 1.1 provides uniqueness for vorticities of type \(\omega(x)\approx(1+|\log|x||)\log(1+|\log|x||)\) (cf. Section 2.3, Example 4). Note that these vorticities do not belong to \(Y_{p_{0}}^{\Theta}\) nor to \(\operatorname{BMO}\) (they do not even belong to the larger space8\(e^{L}\)). In fact, they grow to
infinity faster than both \(\log(1+|\log|x||)\) and \(1+|\log|x||\), which are the prototypes of vorticity in Yudovich's and Vishik's methods, respectively. Furthermore, for a general growth \(\Theta\), we propose a simple approach to construct elements in \(Y_{p_{0}}^{\#\Theta}(\Omega)\) that are not in the classical scale \(Y_{p_{0}}^{\Theta}(\Omega)\); cf. Section 2.3. The limiting case for the \(Y_{p_{0}}^{\#\Theta}\) scale is once again the linear growth \(\Theta(p)\approx p.\) This suggests a possible route to settle the \(L^{p}\) uniqueness problem by means of showing a counterexample belonging to \(Y_{p_{0}}^{\#\Theta}\) with \(\Theta(p)\approx p.\)
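As a quick check of the claim above that such vorticities escape \(e^{L}\) (a sketch): near the origin \(\omega(x)\approx\log\frac{1}{|x|}\,\log\log\frac{1}{|x|}\), so for every \(\lambda>0\),
\[e^{\lambda\omega(x)}\approx\Big(\frac{1}{|x|}\Big)^{\lambda\log\log\frac{1}{|x|}},\]
and since the exponent \(\lambda\log\log\frac{1}{|x|}\) eventually exceeds \(2\), the function \(e^{\lambda\omega}\) fails to be integrable near \(x=0\); hence no multiple of \(\omega\) is exponentially integrable.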
In the forthcoming sections, we will adequately motivate and explain how the spaces \(Y_{p_{0}}^{\#\Theta}(\Omega)\) arise naturally in the study of Euler equations via techniques coming from extrapolation theory.
### A novel methodology via extrapolation
In this paper, we apply the extrapolation theory of Jawerth-Milman [19] to develop a new methodology9 that allows us to significantly enlarge the known uniqueness classes of vorticities for Euler equations and, at the same time, establish new uniqueness results for a large class of active scalar equations, including SQG equations and their generalizations. Furthermore, our framework includes the Yudovich and Vishik spaces as particular examples.
Footnote 9: Our presentation was inspired by the Colombian writer García Márquez and his novel “Chronicle of a Death Foretold” (Vintage, 2003), which starts with a murder and then develops the plot backwards. In particular, further explanations and documentation on extrapolation theory will be given in due time.
Indeed, using extrapolation we will construct new uniqueness spaces that contain BMO. Our new understanding of Yudovich spaces (cf. Section 1.3) motivated the construction of extrapolation spaces, where the role played by \((L^{p_{0}},L^{\infty})\) in Yudovich's theory is now replaced by the larger interpolation pair \((L^{p_{0}},\text{BMO}).\) In concrete terms, new spaces \(Y_{p_{0}}^{\#\Theta}(\Omega)\) (cf. Definition 1 below) are introduced by means of replacing \(\|\omega\|_{L^{p}(\Omega)}\) in (1.3) by \(\|M_{\Omega}^{\#}\omega\|_{L^{p}(\Omega)},\) where \(M_{\Omega}^{\#}\omega\) denotes the _Stromberg-Jawerth-Torchinsky maximal operator_[20]
\[M_{\Omega}^{\#}\omega(x):=\sup_{\Omega\supset Q\ni x}\inf_{c\in\mathbb{R}}((f- c)\chi_{Q})^{*}(|Q|/\alpha). \tag{1.16}\]
Here, \(f^{*}\) is the _non-increasing rearrangement_ of \(f\), \(Q\) is a cube with sides parallel to the axes of coordinates, and \(\alpha>0\) is a sufficiently small fixed parameter. While this change worsens the \(L^{p}\) norm by a factor of10 "\(p\)", this is compensated by the fact that CZO are bounded on BMO. However, there are obstacles when trying to implement this idea using the delicate energy method of [36]. We therefore avoid the energy method and apply extrapolation directly, exploiting the operators involved in the Biot-Savart law, to obtain a priori estimates for the modulus of continuity of the flow. This strategy naturally leads to the proof of uniqueness, in the Lagrangian formulation, under the assumption that a certain Osgood condition (cf. (1.7) and (1.9)) is satisfied. In our approach, the interplay between two extrapolation spaces arises: on the one hand, the extrapolation space related to the vorticities (say \(Y_{p_{0}}^{\#\Theta}\)) and, on the other hand, the extrapolation space in the Besov scale (e.g. associated with the pair \((L^{\infty},\dot{W}_{\infty}^{1})\)). The latter controls the modulus of continuity of the flow and informs the corresponding Osgood condition. Breaking up the argument in this fashion allows us to treat uniqueness not only for Euler equations, but also for a large class of active scalar equations, for which SQG equations and their generalizations are distinguished examples.
Footnote 10: Recall that the classical Fefferman–Stein inequality [16] says, loosely speaking, that \(L^{p}\)-norms of \(\omega\) and \(M_{\Omega}^{\#}\omega\) are comparable, but the equivalence constant deteriorates as \(p\) when \(p\to\infty.\)
### Extrapolation and the role of the Yudovich functions: a preview
To explain the connection of \(Y_{p_{0}}^{\#\Theta},\) BMO, and CZO with extrapolation we need to develop some background information. This subsection contains some basic definitions with more details and documentation
to follow (cf. Appendix A). Two important aspects regarding the level of generality that we require should be emphasized here:
* Since it is crucial for us to consider a variety of scales of function spaces (e.g., \(L^{p}\), BMO, Besov, Sobolev, Yudovich, Vishik) it will be necessary to formulate the definitions in a sufficiently general context.
* To deal with different types of scales of function spaces, which measure different characteristics of their elements (smoothness, integrability, oscillations, etc.), while at the same time achieving a unified description, the Peetre \(K\)-functional _associated with each scale_ (cf. (1.17) below) will be an invaluable tool. Indeed, using the \(K\)-functional the format of the formulae for the interpolation norms is the same for _all_ the scales under consideration, since it is the particular \(K\)-functional that contains the quantitative information associated with the given scale. This explains the "universal" characterization of extrapolation spaces (cf. (1.21) below).
In an informal manner, in interpolation we start with a pair \(\bar{A}=(A_{0},A_{1})\) of compatible11 Banach spaces and we wish to extract as much information as possible on intermediate spaces from the end-points \(A_{0}\) and \(A_{1}\). For \((\theta,p)\in(0,1)\times[1,\infty]\), the _real interpolation space_ \(\bar{A}_{\theta,p}\) is the set of all \(f\in A_{0}+A_{1}\) such that
Footnote 11: Informally speaking, this means that \(A_{0}+A_{1}\) “makes sense”.
\[\left\|f\right\|_{\bar{A}_{\theta,p}}:=\bigg{\{}\int_{0}^{\infty}[t^{-\theta} K(t,f;A_{0},A_{1})]^{p}\frac{dt}{t}\bigg{\}}^{1/p}<\infty\]
(with the usual modification if \(p=\infty\)), where
\[K(t,f;A_{0},A_{1}):=\left\|f\right\|_{A_{0}+tA_{1}}=\inf\left\{\left\|f_{0} \right\|_{A_{0}}+t\left\|f_{1}\right\|_{A_{1}}:f=f_{0}+f_{1},\quad f_{i}\in A _{i},\quad i=0,1\right\} \tag{1.17}\]
is the _Peetre \(K\)-functional_ (cf. [5, 6]). It is convenient to normalize the spaces in order to have a continuous scale with respect to \(\theta\). This is achieved by letting \(\bar{A}_{\theta,p}^{\bullet}=c_{\theta,p}\,\bar{A}_{\theta,p}\), where12\(c_{\theta,p}:=(\theta(1-\theta)p)^{1/p}\) (cf. [19]), and
Footnote 12: If \(p=\infty\), we let \(c_{\theta,p}=1.\) Then \(\bar{A}_{\theta,\infty}^{\bullet}=\bar{A}_{\theta,\infty}\) with equality of norms.
\[\left\|f\right\|_{\bar{A}_{\theta,p}^{\bullet}}:=c_{\theta,p}\left\|f\right\| _{\bar{A}_{\theta,p}}. \tag{1.18}\]
The philosophy behind extrapolation may be viewed as the converse of interpolation: in extrapolation we start with a family of intermediate spaces and we wish to extract as much information as possible on their end-points. The rigorous definition is as follows. Given a growth function \(\Theta\), the \(\Delta\)-_extrapolation space_ \(\Delta_{\theta\in(0,1)}\{\frac{\bar{A}_{\theta,p(\theta)}^{\bullet}}{\Theta(\frac{1}{1-\theta})}\}\) is defined as the set of all \(f\in\cap_{\theta\in(0,1)}\bar{A}_{\theta,p(\theta)}^{\bullet}\) such that (cf. [18, 19])
\[\left\|f\right\|_{\Delta_{\theta\in(0,1)}\{\frac{\bar{A}_{\theta,p(\theta)}^{ \bullet}}{\Theta(\frac{1}{1-\theta})}\}}:=\sup_{\theta\in(0,1)}\frac{\left\|f \right\|_{\bar{A}_{\theta,p(\theta)}^{\bullet}}}{\Theta(\frac{1}{1-\theta})}<\infty. \tag{1.19}\]
The characterization of these spaces hinges upon the fact that the second index \(p(\theta)\) can be replaced by \(\infty\); in other words, the second index is not important at the level of normalized norms (cf. [19])
\[\Delta_{\theta\in(0,1)}\bigg{\{}\frac{(A_{0},A_{1})_{\theta,p(\theta)}^{ \bullet}}{\Theta(\frac{1}{1-\theta})}\bigg{\}}=\Delta_{\theta\in(0,1)}\bigg{\{} \frac{(A_{0},A_{1})_{\theta,\infty}^{\bullet}}{\Theta(\frac{1}{1-\theta})} \bigg{\}}. \tag{1.20}\]
Thus, the commutation of the underlying suprema ("Fubini"!) yields13
Footnote 13: See the discussion in Appendix A.2.
\[\Delta_{\theta\in(0,1)}\bigg{\{}\frac{(A_{0},A_{1})_{\theta,p(\theta)}^{\bullet}} {\Theta(\frac{1}{1-\theta})}\bigg{\}}=\bigg{\{}f:\sup_{t\in(0,\infty)}\frac{K(t,f;A_{0},A_{1})}{t\varphi_{\Theta}\big{(}\frac{1}{t}\big{)}}<\infty\bigg{\}}, \tag{1.21}\]
where
\[\varphi_{\Theta}(t):=\inf_{\theta\in(0,1)}\bigg{\{}\Theta\bigg{(}\frac{1}{1- \theta}\bigg{)}\,t^{1-\theta}\bigg{\}}. \tag{1.22}\]
For the remainder of this section, we shall place ourselves under the conditions of [36] and focus our attention on the spaces \(Y_{p_{0}}^{\Theta}(\Omega),\,\Omega=\mathbb{T}^{2}\) (although there are analogous statements for \(\Omega=\mathbb{R}^{2}\) or even general domains \(\Omega\subset\mathbb{R}^{d}\)). It is known that, with equivalence of norms independent of \(\theta\),
\[(L^{p_{0}}(\Omega),L^{\infty}(\Omega))_{\theta,p(\theta)}^{\bullet}=L^{p( \theta)}(\Omega),\qquad\frac{1}{p(\theta)}=\frac{1-\theta}{p_{0}} \tag{1.23}\]
(cf. [23, eq. (31), p. 61]). Using this fact, together with the monotonicity properties of the Lebesgue scale and (1.20), yields
\[Y_{p_{0}}^{\Theta}(\Omega)=\Delta_{\theta\in(0,1)}\bigg{\{}\frac{(L^{p_{0}}( \Omega),L^{\infty}(\Omega))_{\theta,p(\theta)}^{\bullet}}{\Theta(\frac{p_{0}} {1-\theta})}\bigg{\}}=\Delta_{\theta\in(0,1)}\bigg{\{}\frac{(L^{p_{0}}(\Omega ),L^{\infty}(\Omega))_{\theta,\infty}^{\bullet}}{\Theta(\frac{p_{0}}{1-\theta })}\bigg{\}}. \tag{1.24}\]
Furthermore, comparing (1.5) and (1.22) via the change \(p\leftrightarrow\frac{p_{0}}{1-\theta}\), we find that, with constants depending only on \(p_{0}\),
\[\varphi_{\Theta}(t)\approx y_{\Theta}(t). \tag{1.25}\]
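For the reader's convenience, a sketch of the change of variables behind (1.25): writing \(p=\frac{p_{0}}{1-\theta}\), so that \(r^{1/p}=(r^{1/p_{0}})^{1-\theta}\),
\[y_{\Theta}(r)=\inf_{p>p_{0}}\{\Theta(p)\,r^{1/p}\}=\inf_{\theta\in(0,1)}\bigg\{\Theta\bigg(\frac{p_{0}}{1-\theta}\bigg)(r^{1/p_{0}})^{1-\theta}\bigg\},\]
which exhibits \(y_{\Theta}\) as an infimum of the same type as (1.22); the doubling property of \(\Theta\) then absorbs the discrepancy between \(\Theta(\frac{1}{1-\theta})\) and \(\Theta(\frac{p_{0}}{1-\theta})\), which is why the constants in (1.25) depend only on \(p_{0}\).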
To give a full characterization of \(Y_{p_{0}}^{\Theta}\) we recall that (cf. (A.9))
\[K(t,f;L^{1}(\Omega),L^{\infty}(\Omega))=\int_{0}^{t}f^{*}(s)\,ds.\]
Consider the maximal function \(f^{**}(t):=\frac{1}{t}\int_{0}^{t}f^{*}(s)\,ds\). This information combined with (1.20), (1.21), (1.24) and (1.25) yields14
Footnote 14: Since \(|\Omega|<\infty\), it is not hard to see that we can restrict to \(t\in(0,1).\) Indeed, observe that \(\int_{0}^{t}f^{*}(s)\,ds=\int_{0}^{1}f^{*}(s)\,ds\) if \(t>1\) and \(\sup_{t>1}\frac{1}{t\varphi_{\Theta}(\frac{1}{t})}<\infty\) because \(\Theta\) is a non-decreasing function.
\[\|f\|_{Y_{p_{0}}^{\Theta}(\Omega)}\approx\sup_{t\in(0,\infty)}\frac{\int_{0} ^{t}f^{*}(s)\,ds}{t\varphi_{\Theta}(\frac{1}{t})}\approx\sup_{t\in(0,1)}\, \frac{f^{**}(t)}{y_{\Theta}(\frac{1}{t})}\approx\sup_{t\in(0,1)}\,\frac{f^{ *}(t)}{y_{\Theta}(\frac{1}{t})}, \tag{1.26}\]
where the last equivalence follows from (2.16) below. In this way Yudovich spaces are identified with the more familiar Marcinkiewicz spaces, that have been extensively studied in the literature (cf. [25, 5]).
**Example 1**.: Let \(\Theta(p)\approx p\), then \(Y_{p_{0}}^{\Theta}(\mathbb{T}^{2})=e^{L}(\mathbb{T}^{2})\) (cf. [18, 19]) with
\[\|f\|_{Y_{p_{0}}^{\Theta}(\mathbb{T}^{2})}\approx\sup_{t\in(0,1)}\,\frac{f^{** }(t)}{1-\log t}\approx\sup_{t\in(0,1)}\,\frac{f^{*}(t)}{1-\log t}.\]
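The elementary calculus behind Example 1 may be worth recording (a sketch): for \(\Theta(p)=p\) and \(r>e^{p_{0}}\), the function \(g(p)=p\,r^{1/p}\) satisfies
\[g'(p)=r^{1/p}\Big(1-\frac{\log r}{p}\Big),\]
so the infimum in (1.5) is attained at \(p=\log r\), with value \(e\log r\). Hence \(y_{\Theta}(r)\approx\log r\) for large \(r\), i.e. \(y_{\Theta}(\frac{1}{t})\approx 1-\log t\) for \(t\in(0,1)\), in agreement with the displayed norms.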
**Example 2**.: More generally, suppose that \(\Theta\) is a growth function such that \(p\in(p_{0},\infty)\mapsto e^{p_{0}/p}\Theta(p)\) is a quasi-decreasing function15 (i.e., equivalent to a decreasing function), then
Footnote 15: Some examples are \(\Theta(p)\approx p^{\alpha}(\log p)^{\alpha_{1}}(\log_{2}p)^{\alpha_{2}}\cdots( \log_{m}p)^{\alpha_{m}}\), where \(\alpha,\alpha_{i}\in\mathbb{R}\) and \(\log_{m}p=\underbrace{\log\ldots\log}_{\text{m times}}p\) for \(m\geq 2\).
\[\|f\|_{Y^{\Theta}_{p_{0}}(\mathbb{T}^{2})}\approx\sup_{t\in(0,1)}\;\frac{f^{** }(t)}{\Theta(1-\log t)}\approx\sup_{t\in(0,1)}\;\frac{f^{*}(t)}{\Theta(1-\log t )}.\]
This is a consequence of (1.26) and Lemma 2 below.
### A priori estimates via extrapolation
It will be instructive to revisit the main uniqueness result of [36] using our method. As a first step we use the Biot-Savart law to obtain a priori estimates for the modulus of continuity of the solutions.
To simplify the exposition, we work with \(\Omega=\mathbb{T}^{2}\) (but similar results hold for \(\Omega=\mathbb{R}^{2}\) or \(\Omega\) a smooth domain in \(\mathbb{R}^{2}\)). Fix \(p_{0}>2\). By (1.23), (1.2), the sharp \(L^{p}\) norm inequalities for CZO, and the definition of \(Y^{\Theta}_{p_{0}}(\Omega)\), we derive
\[\|\nabla v\|_{(L^{p_{0}}(\Omega),L^{\infty}(\Omega))_{\theta,p(\theta)}^{ \bullet}}\leq c_{p_{0}}\,\frac{1}{1-\theta}\,\Theta\!\left(\frac{1}{1-\theta} \right)\|\omega\|_{Y^{\Theta}_{p_{0}}(\Omega)}\,. \tag{1.27}\]
The interpolation theory of Sobolev spaces (cf. [13]) allows us to rewrite the left-hand side as
\[\|\nabla v\|_{(L^{p_{0}}(\Omega),L^{\infty}(\Omega))_{\theta,p(\theta)}^{ \bullet}}\approx\|v\|_{(\dot{W}^{1}_{p_{0}}(\Omega),\dot{W}^{1}_{\infty}( \Omega))_{\theta,p(\theta)}^{\bullet}}\,. \tag{1.28}\]
Then, inserting this information in (1.27), and rewriting the right-hand side using the definition of \(\Theta_{1}\) (cf. (1.6)), we obtain
\[\|v\|_{(\dot{W}^{1}_{p_{0}}(\Omega),\dot{W}^{1}_{\infty}(\Omega))_{\theta,p( \theta)}^{\bullet}}\leq c_{p_{0}}\Theta_{1}\!\left(\frac{1}{1-\theta}\right) \|\omega\|_{Y^{\Theta}_{p_{0}}(\Omega)}\,.\]
Consequently,
\[\|v\|_{\Delta_{\theta\in(0,1)}\big\{\frac{(\dot{W}^{1}_{p_{0}}(\Omega),\dot{W}^{1}_{\infty}(\Omega))_{\theta,p(\theta)}^{\bullet}}{\Theta_{1}(\frac{1}{1-\theta})}\big\}}=\sup_{\theta\in(0,1)}\frac{\|v\|_{(\dot{W}^{1}_{p_{0}}(\Omega),\dot{W}^{1}_{\infty}(\Omega))_{\theta,p(\theta)}^{\bullet}}}{\Theta_{1}(\frac{1}{1-\theta})}\lesssim\|\omega\|_{Y^{\Theta}_{p_{0}}(\Omega)}\,. \tag{1.29}\]
The Sobolev embedding theorem (recall \(p_{0}>d=2\), where \(d\) denotes the dimension of the ambient space) combined with (1.29) yields
\[\|v\|_{\Delta_{\theta\in(0,1)}\big\{\frac{(L^{\infty}(\Omega),\dot{W}^{1}_{\infty}(\Omega))_{\theta,p(\theta)}^{\bullet}}{\Theta_{1}(\frac{1}{1-\theta})}\big\}}\lesssim\|\omega\|_{Y^{\Theta}_{p_{0}}(\Omega)}\,.\]
The extrapolation norm that appears on the left-hand side can be computed explicitly using (1.21), (1.25) and the well-known fact that (cf. (A.11))
\[K(t,v;L^{\infty}(\Omega),\dot{W}^{1}_{\infty}(\Omega))\approx\sup_{|x-y|\leq t }\left|v(x)-v(y)\right|. \tag{1.30}\]
It then follows that
\[\left|v(x)-v(y)\right|\lesssim\left|x-y\right|y_{\Theta_{1}}\!\left(\frac{1} {\left|x-y\right|}\right)\|\omega\|_{Y^{\Theta}_{p_{0}}(\Omega)}\,, \tag{1.31}\]
where
\[y_{\Theta_{1}}(t)=\inf_{\theta\in(0,1)}\;\bigg{\{}\Theta_{1}\!\left(\frac{1}{1 -\theta}\right)t^{1-\theta}\bigg{\}}. \tag{1.32}\]
This is the main estimate of the modulus of continuity in [36, Theorem 2]. To obtain the corresponding result of [35], let \(\Theta(p)\approx 1.\) Then, \(Y_{p_{0}}^{\Theta}(\Omega)=L^{p_{0}}(\Omega)\cap L^{\infty}(\Omega)\) (cf. (1.4)), and \(y_{\Theta_{1}}(t)\approx\log t\) if \(t>1\), yielding
\[|v(x)-v(y)|\lesssim|x-y|\,|\log|x-y||\,\|\omega\|_{L^{\infty}(\Omega)}\]
if \(|x-y|<1\). See also [26, Section 4.1].
### Uniqueness of Lagrangian weak solutions
The proof of uniqueness now appeals to the Lagrangian formulation, cf. [12]. Recall that \((\omega,v)\) is said to be a _Lagrangian weak solution_ to (1.1) if \(\omega\) is obtained via the usual _push-forward_
\[\omega(t,x)=\omega_{0}(\phi^{-1}(t,x)),\]
where \(\phi\) is the flow map relative to \(v\) according to
\[\frac{d}{dt}\phi(t,x)=v(t,\phi(t,x)),\qquad\phi(0,x)=x. \tag{1.33}\]
To avoid unnecessary technical issues, throughout this paper, we shall restrict our attention to this class of solutions. However, this is not restrictive at all since, following [2], [10] and [7], every integrable weak solution of (1.1) is Lagrangian.
The uniqueness result then follows from the following well-known lemma.
**Lemma 1**.: _Let \(X\) be a function space and assume that there exists a continuous nondecreasing function \(L:(0,\varepsilon_{L})\to(0,\infty)\), where \(\varepsilon_{L}\in(0,\infty]\), such that_
\[|v(t,x)-v(t,y)|\leq L(|x-y|)\,\|\omega(t)\|_{X}\qquad\text{for a.e.}\quad t \in(0,\varepsilon_{L}),\]
_where_
\[\|\omega(t)\|_{X}\in L^{1}_{loc}(0,\varepsilon_{L}),\]
_and \(L\) satisfies the Osgood condition_
\[\int_{0}^{\varepsilon_{L}}\frac{dr}{L(r)}=\infty.\]
_Then, the solution \(\phi\) to (1.33) must be unique._
Proof.: Indeed, if \(\phi_{1}\) and \(\phi_{2}\) are two flows related to (1.33), then
\[|\phi_{1}(t,x)-\phi_{2}(t,x)| \leq\int_{0}^{t}|v(s,\phi_{1}(s,x))-v(s,\phi_{2}(s,x))|\,ds\] \[\leq\int_{0}^{t}L(|\phi_{1}(s,x)-\phi_{2}(s,x)|)\,\|\omega(s)\|_{ X}\,ds\]
and the desired uniqueness follows immediately as an application of Osgood's lemma in the form stated in [4, Lemma 3.4].
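To illustrate the quantitative content of Lemma 1 in the classical case (a sketch): take \(L(r)=r(1-\log r)\) on \((0,1)\) (the log-Lipschitz modulus arising for \(\Theta\approx 1\); cf. (1.31) and the display after it) and \(\|\omega(t)\|_{X}\leq M\). Writing \(\delta(t)=|\phi_{1}(t,x)-\phi_{2}(t,x)|\) and \(u=1-\log\delta\), the differential inequality \(\delta'\leq M\delta(1-\log\delta)\) becomes \(u'\geq-Mu\), whence
\[1-\log\delta(t)\geq(1-\log\delta(0))\,e^{-Mt},\qquad\text{i.e.}\qquad\delta(t)\leq e^{1-e^{-Mt}}\,\delta(0)^{e^{-Mt}}\quad\text{while}\quad\delta(t)<1.\]
Letting \(\delta(0)\to 0\) forces \(\delta(t)=0\), which is the uniqueness assertion; for \(\delta(0)>0\) one recovers the familiar fact that the Yudovich flow is Hölder continuous with a time-decaying exponent.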
### Summary of our approach to uniqueness
The method described above can be summarized as follows.
**Step 1:** Compute the extrapolation spaces involved.
**Step 2:** Use the Biot-Savart laws to obtain a priori estimates of the smoothness of solutions via extrapolation.
**Step 3:** Prove uniqueness via Osgood conditions.
### Extending Yudovich's uniqueness theorem beyond BMO
As already announced in Theorem 1.1 (and, in particular, the subsequent discussion), in this paper we extend BMO uniqueness to spaces that contain _functions of unbounded mean oscillation_. This will be done through the introduction of the new (extrapolation) spaces \(Y_{p_{0}}^{\#\Theta}\).
**Definition 1** (Sharp Yudovich spaces).: Let16\(\Omega=\mathbb{R}^{2},\,\mathbb{T}^{2}\) and let \(p_{0}\in[1,\infty)\). Given a growth function \(\Theta\), the _sharp Yudovich space_\(Y_{p_{0}}^{\#\Theta}(\Omega)\) is defined to be the set of all17\(f\in\cap_{p>p_{0}}(L^{p})^{\#}(\Omega)\) such that
Footnote 16: The definition can obviously be given in a more general context.
Footnote 17: \(\|f\|_{(L^{p})^{\#}(\Omega)}:=\|M_{\Omega}^{\#}f\|_{L^{p}(\Omega)}.\) In particular, \((L^{\infty})^{\#}(\Omega)=\mathrm{BMO}(\Omega)\) and \((L^{p})^{\#}(\Omega)=L^{p}(\Omega)\), \(p\in(1,\infty)\), suitably interpreted modulo constants; cf. [20, Corollaries 2.5 and 2.6].
\[\|f\|_{Y_{p_{0}}^{\#\Theta}(\Omega)}:=\sup_{p>p_{0}}\frac{\|f\|_{(L^{p})^{\#} (\Omega)}}{\Theta(p)}<\infty.\]
As in the case of Yudovich spaces, the definition of \(Y_{p_{0}}^{\#\Theta}(\Omega)\), with \(\Omega=\mathbb{T}^{2}\), does not depend on \(p_{0}\). When \(\Omega=\mathbb{R}^{2}\), we only have the trivial embeddings
\[Y_{p_{0}}^{\#\Theta}(\Omega)\hookrightarrow Y_{p_{1}}^{\#\Theta}(\Omega), \qquad\text{if}\qquad p_{1}>p_{0}.\]
For example, when \(\Theta(p)\approx 1\),
\[Y_{p_{0}}^{\#\Theta}(\mathbb{T}^{2})=\mathrm{BMO}(\mathbb{T}^{2})\qquad\text {and}\qquad Y_{p_{0}}^{\#\Theta}(\mathbb{R}^{2})=\mathrm{BMO}(\mathbb{R}^{2}) \cap L^{p_{0}}(\mathbb{R}^{2}),\]
cf. (1.15). On the other hand, since \(\|f\|_{(L^{p})^{\#}(\Omega)}\lesssim\|f\|_{L^{p}(\Omega)}\), \(1<p_{0}<p\leq\infty\), we have
\[Y_{p_{0}}^{\Theta}(\Omega)\hookrightarrow Y_{p_{0}}^{\#\Theta}(\Omega). \tag{1.34}\]
In Section 2.1 we show that \(Y_{p_{0}}^{\#\Theta}\) fits into the abstract extrapolation framework proposed above. In this context, BMO plays the same role as \(L^{\infty}\) in connection with \(Y_{p_{0}}^{\Theta}\). In particular, we obtain characterizations of \(Y_{p_{0}}^{\#\Theta}\) corresponding to those for \(Y_{p_{0}}^{\Theta}\) (cf. Table 1).
\begin{table}
\begin{tabular}{|c|c|} \hline \(Y_{p_{0}}^{\Theta}\) & \(Y_{p_{0}}^{\#\Theta}\) \\ \hline \(\left\{f\in\bigcap_{p>p_{0}}L^{p}:\sup_{p>p_{0}}\frac{\|f\|_{L^{p}}}{\Theta(p)}< \infty\right\}\) & \(\left\{f\in\bigcap_{p>p_{0}}(L^{p})^{\#}:\sup_{p>p_{0}}\frac{\|f\|_{(L^{p})^{ \#}}}{\Theta(p)}<\infty\right\}\) \\ \hline \(\Delta_{\theta\in(0,1)}\left\{\frac{(L^{p_{0}},L^{\infty})_{\theta,p(\theta)}^ {\bullet}}{\Theta(\frac{p_{0}}{1-\theta})}\right\}\) & \(\Delta_{\theta\in(0,1)}\left\{\frac{(L^{p_{0}},\mathrm{BMO})_{\theta,p(\theta)}^ {\bullet}}{\Theta(\frac{p_{0}}{1-\theta})}\right\}\) \\ \hline \(\left\{f:\sup_{t\in(0,\infty)}\frac{K(t^{1/p_{0}},f;L^{p_{0}},L^{\infty})}{t^ {1/p_{0}}y_{\Theta}(\frac{1}{t})}<\infty\right\}\) & \(\left\{f:\sup_{t\in(0,\infty)}\frac{K(t^{1/p_{0}},f;L^{p_{0}},\mathrm{BMO})}{t^ {1/p_{0}}y_{\Theta}(\frac{1}{t})}<\infty\right\}\) \\ \hline \(\left\{f:\sup_{t\in(0,\infty)}\frac{(|f|^{p_{0}})^{\ast\ast}(t)^{1/p_{0}}}{y_{ \Theta}(\frac{1}{t})}<\infty\right\}\) & \(\left\{f:\sup_{t\in(0,\infty)}\frac{(|M^{\#}f|^{p_{0}})^{\ast\ast}(t)^{1/p_{0}}} {y_{\Theta}(\frac{1}{t})}<\infty\right\}\) \\ \hline \end{tabular}
\end{table}
Table 1. Yudovich vs. sharp Yudovich
The usefulness of \(Y_{p_{0}}^{\#\Theta}\) emerges when establishing a priori estimates for the modulus of continuity of the velocity.
**Theorem 1.2**.: _Assume that \(\omega\in Y_{p_{0}}^{\#\Theta}(\Omega)\) for some \(p_{0}\in(2,\infty)\). Then_
\[|v(x)-v(y)|\lesssim|x-y|\,y_{\Theta_{1}}\bigg{(}\frac{1}{|x-y|}\bigg{)}\,\| \omega\|_{Y_{p_{0}}^{\#\Theta}(\Omega)}, \tag{1.35}\]
_where \(y_{\Theta_{1}}\) is given by (1.32)._
The proof of this result will be given in Section 2.2. In particular, (1.35) extends the classical Yudovich's estimate (1.31) from \(Y_{p_{0}}^{\Theta}\) to \(Y_{p_{0}}^{\#\Theta}\). As a consequence (cf. Section 1.5), we achieve the desired uniqueness result for \(Y_{p_{0}}^{\#\Theta}\) stated in Theorem 1.1.
### Vishik's uniqueness theorem
As already claimed in Section 1.2, we show that Vishik spaces \(B_{\Pi}(\Omega)\) (cf. (1.8)) are also special examples of extrapolation constructions in the sense of (1.19). In fact, we obtain several characterizations of Vishik spaces in terms of a variety of means (extrapolation, interpolation, Yudovich functions, and growths of classical Besov norms) showing the full analogy between \(B_{\Pi}\) and \(Y_{p_{0}}^{\Theta}\). The results are collected in Table 2 (where, for simplicity again, we let \(p_{0}=1\) in the definition of \(Y_{p_{0}}^{\Theta}\)) and the proofs may be found in Section 3.1 below.
Having at hand the information contained in Table 2, we are in a position to apply the extrapolation approach to uniqueness developed in Section 1.6. Our results are formulated in terms of homogeneous function spaces, rather than their inhomogeneous counterparts. As we will see later, this will result in several improvements.
**Theorem 1.3**.: _Assume that the growth function \(\Pi\in\mathcal{P}_{1}\) (cf. Definition 3) and \(\omega\in\dot{B}_{\Pi}(\mathbb{R}^{2})\cap\dot{B}_{\infty,1}^{-1}(\mathbb{R}^{2})\). Then_
\[|v(x)-v(y)|\lesssim|x-y|\,y_{\Pi}\bigg{(}\frac{1}{|x-y|}\bigg{)}\,\|\omega\| _{\dot{B}_{\Pi}(\mathbb{R}^{2})\cap\dot{B}_{\infty,1}^{-1}(\mathbb{R}^{2})},\]
\begin{table}
\begin{tabular}{|c|c|} \hline \(Y_{1}^{\Theta}\) & \(B_{\Pi}\) \\ \hline \(\Big{\{}f\in\bigcap_{p>1}L^{p}:\sup_{p>1}\frac{\|f\|_{LP}}{\Theta(p)}<\infty \Big{\}}\) & \(\Big{\{}f\in\bigcap_{\alpha\in(-1,0)}B_{\infty,1}^{\alpha}:\sup_{\alpha\in(-1,0 )}\frac{\|f\|_{B_{\infty,1}^{\alpha}}}{\Pi(-\frac{1}{\alpha})}<\infty\Big{\}}\) \\ \hline \(\Delta_{\theta\in(0,1)}\left\{\frac{(L^{1},L^{\infty})_{\theta,\frac{1}{1-\theta }}^{\bullet}}{\Theta(\frac{1}{1-\theta})}\right\}\) & \(\Delta_{\theta\in(0,1)}\bigg{\{}\frac{(B_{\infty,1}^{-1},B_{\infty,1}^{0})_{ \theta,\frac{1}{1-\theta}}^{\bullet}}{\Pi(\frac{1}{1-\theta})}\bigg{\}}\) \\ \hline \(\Big{\{}f:\sup_{t\in(0,\infty)}\frac{K(t,f;L^{1},L^{\infty})}{ty_{\Theta}( \frac{1}{t})}<\infty\Big{\}}\) & \(\Big{\{}f:\sup_{t\in(0,\infty)}\frac{K(t,f;B_{\infty,1}^{-1},B_{\infty,1}^{0}) }{ty_{\Pi}(\frac{1}{t})}<\infty\Big{\}}\) \\ \hline \(\Big{\{}f:\sup_{0<t<\infty}\frac{f^{**}(t)}{y_{\Theta}(\frac{1}{t})}<\infty \Big{\}}\) & \(\Big{\{}f:\sup_{N\geq 0}\frac{1}{\Pi(N)}\sum\limits_{j=0}^{N}\|\Delta_{j}f\|_{L^{ \infty}}<\infty\Big{\}}\) \\ \hline \end{tabular}
\end{table}
Table 2. Yudovich vs. Vishik
_where \(y_{\Pi}\) is the Yudovich function associated to \(\Pi\) (cf. (1.5))._
As a by-product, we arrive at the following uniqueness statement for Vishik spaces.
**Theorem 1.4**.: _Assume that the growth function \(\Pi\in\mathcal{P}_{1}\) satisfies the Osgood condition_
\[\int_{1}^{\infty}\frac{dr}{ry_{\Pi}(r)}=\infty. \tag{1.36}\]
_Then, a Lagrangian weak solution \(\omega\) of (1.1), such that_
\[\omega\in L^{\infty}([0,T];\dot{B}_{\Pi}(\mathbb{R}^{2})\cap\dot{B}_{\infty,1 }^{-1}(\mathbb{R}^{2})) \tag{1.37}\]
_is uniquely determined by its initial value \(\omega_{0}\)._
_Remark 1.5_.: It follows from the trivial estimate \(y_{\Pi}(r)\lesssim\Pi(\log r)\) that
\[\int_{1}^{\infty}\frac{dr}{\Pi(r)}\lesssim\int_{1}^{\infty}\frac{dr}{ry_{\Pi} (r)}.\]
Therefore, the validity of the Osgood condition (1.9) implies18 (1.36). Furthermore, the formulation (1.36) shows a connection with (1.7) via the exchange \(\Theta_{1}\leftrightarrow\Pi\).
Footnote 18: In fact, under natural assumptions on \(\Pi\), we have \(y_{\Pi}(r)\approx\Pi(\log r)\); cf. Lemma 2 below. Then, a simple change of variables gives \(\int_{1}^{\infty}\frac{dr}{\Pi(r)}\approx\int_{1}^{\infty}\frac{dr}{ry_{\Pi}( r)}\).
In Section 3.4 we will show that Theorem 1.4 gives an improvement of Vishik's uniqueness theorem [32, Theorem 7.1] (cf. also (1.10)) in the following sense
\[B_{\Pi}(\mathbb{R}^{2})\cap L^{p_{0}}(\mathbb{R}^{2})\hookrightarrow\dot{B}_ {\Pi}(\mathbb{R}^{2})\cap\dot{B}_{\infty,1}^{-1}(\mathbb{R}^{2}),\]
for every growth \(\Pi\) and \(p_{0}\in(1,2)\); cf. Proposition 1. In particular, if \(\Pi(r)\approx r\) (so that \(y_{\Pi}(r)\approx|\log r|\) and thus (1.36) holds) then
\[\operatorname{BMO}(\mathbb{R}^{2})\hookrightarrow\dot{B}_{\infty,\infty}^{0} (\mathbb{R}^{2})\hookrightarrow\dot{B}_{\Pi}(\mathbb{R}^{2});\]
compare with the inhomogeneous statement (1.12). According to Theorem 1.4, uniqueness of the Euler flow is guaranteed provided that
\[\omega\in L^{\infty}([0,T];\operatorname{BMO}(\mathbb{R}^{2})\cap\dot{B}_{ \infty,1}^{-1}(\mathbb{R}^{2})). \tag{1.38}\]
As a consequence, we are able to extend Vishik's uniqueness condition (1.11) from \(\operatorname{bmo}(\mathbb{R}^{2})\) to \(\operatorname{BMO}(\mathbb{R}^{2})\). Moreover, we remove (at least, working with Lagrangian solutions) the decay at infinity of the vorticity inherited from the \(L^{p_{0}}(\mathbb{R}^{2})\) assumption. Specifically, in Proposition 2 we prove the following embedding
\[\operatorname{bmo}(\mathbb{R}^{2})\cap L^{p_{0}}(\mathbb{R}^{2})\hookrightarrow \operatorname{BMO}(\mathbb{R}^{2})\cap\dot{B}_{\infty,1}^{-1}(\mathbb{R}^{2}), \qquad p_{0}\in(1,2).\]
### Active scalar equations
So far, we have discussed the role played by a variety of function spaces in connection with 2D Euler equations. However, it is also important to deal with the more general class of active scalar equations modelled by
\[\omega_{t}+TS\omega\cdot\nabla\omega=0, \tag{1.39}\]
and equipped with the abstract Biot-Savart law described (at least, formally) as
\[TS:\omega\mapsto v=TS\omega. \tag{1.40}\]
Here, \(\omega=\omega(t,x)\), \(x\in\mathbb{R}^{d},t>0\), is a scalar function and \(T\) and \(S\) are (linear) operators such that \(TS=ST\).
Distinguished examples of (1.39)-(1.40) in 2D are given by
\[T=R=(R_{l})_{l=1,\ldots,d}\quad\text{(Riesz-type transforms)},\qquad S=(-\Delta)^{ \frac{\beta-1}{2}}, \tag{1.41}\]
for19\(\beta\in\mathbb{R}\). In particular, if \(\beta=0\) then one recovers the classical 2D Euler equations in its vorticity form (cf. (1.1)), while \(\beta=1\) corresponds to _surface quasi-geostrophic (SQG) equations_, and the range \(\beta\in(0,1)\) refers to the so-called _intermediate SQG equations_. The SQG equations and their generalizations arise from applications in atmospheric science and have attracted a great deal of attention in recent times. In particular, Constantin, Majda and Tabak [11] established a remarkable connection between the SQG equation and the 3D Euler equation. We also refer the reader to the survey paper [24] (and the references therein), where the outstanding issue of singularity formation in SQG equations is discussed. We mention that singular20 regimes \(\beta>2\) are also of interest, e.g. \(\beta=3\) is connected to Hall-magnetohydrodynamics (see e.g. [8]).
Footnote 19: Recall that \(R=\nabla^{\perp}(-\Delta)^{-\frac{1}{2}}.\) Then \(v=\nabla^{\perp}(-\Delta)^{\frac{\beta}{2}-1}\omega\).
Footnote 20: Note that \(TS=\nabla^{\perp}(-\Delta)^{\frac{\beta}{2}-1}\) and \(\lim_{|\xi|\to\infty}(-\Delta)^{\frac{\beta}{2}-1}(\xi)=\infty\) if \(\beta>2\).
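For orientation, unraveling (1.41) at the two endpoint values of \(\beta\), using the footnoted identity \(v=\nabla^{\perp}(-\Delta)^{\frac{\beta}{2}-1}\omega\):
\[\beta=0:\quad v=\nabla^{\perp}(-\Delta)^{-1}\omega\ \ \text{(2D Euler, i.e. the Biot-Savart law (1.2))};\qquad\beta=1:\quad v=R\,\omega=\nabla^{\perp}(-\Delta)^{-\frac{1}{2}}\omega\ \ \text{(SQG)}.\]
Thus the velocity gains a full derivative over \(\omega\) when \(\beta=0\), but is merely of order zero in \(\omega\) when \(\beta=1\); this gain is exactly what the operator \(S^{-1}=(-\Delta)^{\frac{1-\beta}{2}}\) quantifies in the observation below.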
One of the advantages of the extrapolation method, described in previous sections for Euler equations, is its flexibility. Indeed, it allows us to consider in a unified fashion a variety of function spaces, and also works for the class of active scalar equations (1.39)-(1.41). In this regard, the basic observation is that \(S^{-1}=(-\Delta)^{\frac{1-\beta}{2}}\) plays the role21 of \(\nabla\) when applying the Sobolev embedding theorem with \(\beta=0\) (i.e., the Eulerian setting) in Section 1.4.
Footnote 21: Recall the informal assertion \(\nabla``=”(-\Delta)^{1/2}.\)
Based on the above considerations, in Section 3.2 we show that Theorem 1.4 is a special case (for the 2D Euler equations) of a more general phenomenon related to generalized SQG equations (or more generally, (1.39)-(1.40)). Our results will be formulated in terms of the Vishik spaces \(\dot{B}_{\Pi}^{\beta}\), which are constructed in the same way as \(\dot{B}_{\Pi}\) (cf. (1.8)), but now with \(\dot{B}_{\infty,1}^{\beta}\) playing the role previously assigned to \(\dot{B}_{\infty,1}^{0}\) (cf. Definition 2).
In order to facilitate the reading for non-experts, we close the paper with an Atlas on Interpolation and Extrapolation (cf. Appendix A), where we collect documentation and supplementary material.
We believe that the techniques developed in this paper could be useful in other related contexts.
### Road map
We simply remark that the previous discussion and the table of contents show the local organization of the paper.
## 2. The spaces \(Y_{p_{0}}^{\#\Theta}\)
### Characterizations
We establish several characterizations of \(Y_{p_{0}}^{\#\Theta}\) (cf. Definition 1) in terms of extrapolation of interpolation scales, \(K\)-functionals involving BMO, and maximal functions.
**Theorem 2.1** (Characterization via extrapolation).: _Let \(p_{0}\in[1,\infty)\), \(\theta\in(0,1),\) and let \(\Omega=\mathbb{R}^{2},\mathbb{T}^{2}\). We have_
\[(L^{p_{0}}(\Omega),\mathrm{BMO}(\Omega))_{\theta,p(\theta)}^{\bullet}=(L^{p( \theta)})^{\#}(\Omega),\qquad\frac{1}{p(\theta)}=\frac{1-\theta}{p_{0}}, \tag{2.1}\]
_with equivalence constants independent of \(\theta\). As a consequence,_
\[Y_{p_{0}}^{\#\Theta}(\Omega)=\Delta_{\theta\in(0,1)}\bigg{\{}\frac{(L^{p_{0}}( \Omega),\mathrm{BMO}(\Omega))_{\theta,p(\theta)}^{\bullet}}{\Theta(\frac{p_{0} }{1-\theta})}\bigg{\}}. \tag{2.2}\]
Proof.: Using the Jawerth-Torchinsky formula (cf. (A.10))
\[K(t,f;L^{p_{0}}(\Omega),\mathrm{BMO}(\Omega))\approx\bigg{(}\int_{0}^{t^{p_{0}}}[ (M_{\Omega}^{\#}f)^{*}(\xi)]^{p_{0}}\,d\xi\bigg{)}^{1/p_{0}}, \tag{2.3}\]
where the maximal function \(M_{\Omega}^{\#}\) is defined by (1.16), we have
\[\|f\|_{(L^{p_{0}}(\Omega),\mathrm{BMO}(\Omega))_{\theta,p(\theta)}}\approx \bigg{\{}\int_{0}^{\infty}\bigg{(}\frac{1}{t}\int_{0}^{t}[(M_{\Omega}^{\#}f)^{ *}(\xi)]^{p_{0}}\,d\xi\bigg{)}^{p(\theta)/p_{0}}dt\bigg{\}}^{1/p(\theta)}. \tag{2.4}\]
Applying the sharp version of Hardy's inequality (note that \(p(\theta)=\frac{p_{0}}{1-\theta}>p_{0}\)) stated in [31, Appendix A.4, page 272], we can estimate (2.4) as follows
\[\|f\|_{(L^{p_{0}}(\Omega),\mathrm{BMO}(\Omega))_{\theta,p(\theta)}}\lesssim \theta^{-1/p_{0}}\|f\|_{(L^{p(\theta)})^{\#}(\Omega)}. \tag{2.5}\]
Conversely, one can invoke the reverse Hardy inequality (cf. [28] and the references therein). Indeed
\[\|f\|_{(L^{p(\theta)})^{\#}(\Omega)} =\|(M^{\#}f)^{*}\|_{L^{p(\theta)}(0,\infty)}=\|(M^{\#}f)^{p_{0}*} \|_{L^{p(\theta)/p_{0}}(0,\infty)}^{1/p_{0}}\] \[\lesssim\theta^{1/p(\theta)}\,\|[(M^{\#}f)^{p_{0}}]^{**}\|_{L^{p( \theta)/p_{0}}(0,\infty)}^{1/p_{0}}\approx\theta^{1/p(\theta)}\,\|f\|_{(L^{p_ {0}}(\Omega),\mathrm{BMO}(\Omega))_{\theta,p(\theta)}}, \tag{2.6}\]
where in the last estimate we have used (2.4). Combining (2.5) and (2.6) we obtain
\[\|f\|_{(L^{p_{0}}(\Omega),\mathrm{BMO}(\Omega))_{\theta,p(\theta)}^{\bullet}}\lesssim c_{\theta,p(\theta)}\,\theta^{-1/p_{0}}\|f\|_{(L^{p(\theta)})^{\#}(\Omega)}\lesssim\theta^{-\theta/p_{0}}\,\|f\|_{(L^{p_{0}}(\Omega),\mathrm{BMO}(\Omega))_{\theta,p(\theta)}^{\bullet}}\approx\|f\|_{(L^{p_{0}}(\Omega),\mathrm{BMO}(\Omega))_{\theta,p(\theta)}^{\bullet}}.\]
Now, (2.1) follows readily since \(c_{\theta,p(\theta)}=(\theta(1-\theta)p(\theta))^{1/p(\theta)}\approx\theta^{1/p(\theta)}\approx\theta^{1/p_{0}}\).
**Theorem 2.2** (Characterization via \(K\)-functional and maximal function).: _Let \(p_{0}\in[1,\infty)\) and \(\Omega=\mathbb{R}^{2},\mathbb{T}^{2}\)._
1. _We have_ (2.7) \[\|f\|_{Y_{p_{0}}^{\#\Theta}(\Omega)}\approx\sup_{t\in(0,\infty)}\frac{K(t^{1/ p_{0}},f;L^{p_{0}}(\Omega),\mathrm{BMO}(\Omega))}{t^{1/p_{0}}y_{\Theta}(\frac{1}{t })}\approx\sup_{t\in(0,\infty)}\frac{[(M_{\Omega}^{\#}f)^{p_{0}}]^{**}(t)^{1/p _{0}}}{y_{\Theta}(\frac{1}{t})},\] _where_ \(y_{\Theta}\) _is given by (_1.5_)._
2. _Assume that the map_ (2.8) \[p\in(p_{0},\infty)\mapsto e^{p_{0}/p}\Theta(p)\qquad\text{is quasi-decreasing}.\] _Then_ (2.9) \[\|f\|_{Y_{p_{0}}^{\#\Theta}(\Omega)} \approx\|f\|_{(L^{p_{0}})^{\#}(\Omega)}+\sup_{t\in(0,e^{-1})} \frac{K(t,f;L^{p_{0}}(\Omega),\mathrm{BMO}(\Omega))}{t\,\Theta(-\log t)}\] \[\approx\|f\|_{(L^{p_{0}})^{\#}(\Omega)}+\sup_{t\in(0,e^{-1})} \frac{[(M_{\Omega}^{\#}f)^{p_{0}}]^{**}(t)^{1/p_{0}}}{\Theta(-\log t)}.\]
In the proof of Theorem 2.2 we will make use of the following lemma, which provides an explicit characterization of \(y_{\Theta}\) in terms of \(\Theta\).
**Lemma 2**.: _Let \(y_{\Theta}\) be the Yudovich function relative to \(\Theta\) (cf. (1.5))._
1. _We have_ (2.10) \[y_{\Theta}(r)=\Theta(p_{0})r^{1/p_{0}},\qquad\text{if}\qquad r\in(0,1).\]
2. _Suppose that_ \(\Theta\) _satisfies (_2.8_). Then_ \[y_{\Theta}(r)\approx\Theta(\log r),\qquad\text{if}\qquad r>e^{p_{0}}.\]
Proof.: (i) The formula (2.10) is an immediate consequence of the fact that for each \(r\in(0,1)\), the map \(p\in(p_{0},\infty)\mapsto\Theta(p)r^{1/p}\) is increasing.
(ii) Note that \(y_{\Theta}(r)\leq\Theta(\log r)\,r^{1/\log r}\approx\Theta(\log r)\). Conversely, given any \(p>p_{0}\),
\[\Theta(p)r^{1/p}\geq\Theta(\log r)\qquad\text{if}\qquad p>\log r.\]
If \(p\in(p_{0},\log r)\), then
\[\Theta(p)r^{1/p}=e^{p_{0}/p}\Theta(p)\bigg{(}\frac{r}{e^{p_{0}}}\bigg{)}^{1/p }\gtrsim e^{p_{0}/\log r}\Theta(\log r)\bigg{(}\frac{r}{e^{p_{0}}}\bigg{)}^{1/ \log r}\approx\Theta(\log r).\]
Hence,
\[y_{\Theta}(r)=\inf_{p>p_{0}}\{\Theta(p)r^{1/p}\}\gtrsim\Theta(\log r).\]
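For orientation, with the model growths of footnote 15, say \(\Theta(p)=p^{\alpha}(\log p)^{\alpha_{1}}\) (with \(\alpha,\alpha_{1}\) such that \(\Theta\) is a growth function satisfying (2.8)), part (ii) of Lemma 2 gives, for large \(r\),
\[y_{\Theta}(r)\approx\Theta(\log r)=(\log r)^{\alpha}(\log\log r)^{\alpha_{1}},\]
which for \(\alpha=1\), \(\alpha_{1}=0\) recovers \(y_{\Theta}(r)\approx\log r\), i.e. the exponential class \(e^{L}\) of Example 1.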
Proof of Theorem 2.2.: (i) By the reiteration property of the \(\Delta\)-extrapolation method (cf. (A.3)), we have
\[\Delta_{\theta\in(0,1)}\bigg{\{}\frac{(L^{p_{0}}(\Omega),\mathrm{BMO}(\Omega) )_{\theta,p(\theta)}^{\bullet}}{\Theta(\frac{p_{0}}{1-\theta})}\bigg{\}}= \Delta_{\theta\in(0,1)}\bigg{\{}\frac{(L^{p_{0}}(\Omega),\mathrm{BMO}(\Omega) )_{\theta,\infty}^{\bullet}}{\Theta(\frac{p_{0}}{1-\theta})}\bigg{\}}. \tag{2.11}\]
Then (cf. (A.4) and (A.5))
\[\|f\|_{\Delta_{\theta\in(0,1)}\Big{\{}\frac{(L^{p_{0}}(\Omega),\mathrm{BMO}( \Omega))_{\theta,p(\theta)}^{\bullet}}{\Theta(\frac{p_{0}}{1-\theta})}\Big{\}} }\approx\sup_{t\in(0,\infty)}\frac{K(t^{1/p_{0}},f;L^{p_{0}}(\Omega),\mathrm{BMO }(\Omega))}{t^{1/p_{0}}y_{\Theta}(\frac{1}{t})}. \tag{2.12}\]
Combining with (2.2) we obtain the first equivalence in (2.7). The second equivalence now follows from (2.3).
(ii) Suppose now that \(\Theta\) satisfies (2.8). By Lemma 2 and (2.3), we have
\[\sup_{t\in(0,\infty)}\frac{K(t^{1/p_{0}},f;L^{p_{0}}(\Omega), \mathrm{BMO}(\Omega))}{t^{1/p_{0}}y_{\Theta}(\frac{1}{t})}\approx\] \[\sup_{t\in(0,e^{-p_{0}})}\frac{K(t^{1/p_{0}},f;L^{p_{0}}(\Omega),\mathrm{BMO}(\Omega))}{t^{1/p_{0}}y_{\Theta}(\frac{1}{t})}+\sup_{t\in(1, \infty)}\frac{K(t^{1/p_{0}},f;L^{p_{0}}(\Omega),\mathrm{BMO}(\Omega))}{t^{1/p_ {0}}y_{\Theta}(\frac{1}{t})}\] \[\approx\sup_{t\in(0,e^{-p_{0}})}\frac{K(t^{1/p_{0}},f;L^{p_{0}}( \Omega),\mathrm{BMO}(\Omega))}{t^{1/p_{0}}\Theta(-\log t)}+\sup_{t\in(1, \infty)}K(t^{1/p_{0}},f;L^{p_{0}}(\Omega),\mathrm{BMO}(\Omega)) \tag{2.13}\] \[\approx\sup_{t\in(0,e^{-1})}\frac{K(t,f;L^{p_{0}}(\Omega), \mathrm{BMO}(\Omega))}{t\Theta(-\log t)}+\|M_{\Omega}^{\#}f\|_{L^{p_{0}}( \Omega)}.\]
Similarly, one can show that
\[\sup_{t\in(0,\infty)}\frac{[(M_{\Omega}^{\#}f)^{p_{0}}]^{**}(t)^{1/p_{0}}}{y_ {\Theta}(\frac{1}{t})}\approx\sup_{t\in(0,e^{-1})}\frac{[(M_{\Omega}^{\#}f)^{ p_{0}}]^{**}(t)^{1/p_{0}}}{\Theta(-\log t)}+\|M_{\Omega}^{\#}f\|_{L^{p_{0}}( \Omega)}. \tag{2.14}\]
Combining (2.7) with (2.13) and (2.14), we complete the proof of (2.9).
When \(\Omega=\mathbb{T}^{2}\), the characterizations provided by Theorem 2.2 can be supplemented as follows.
**Theorem 2.3**.: _Let \(p_{0}\in[1,\infty)\) and suppose that \(\Theta\) satisfies (2.8). Then_
\[\|f\|_{Y_{p_{0}}^{\#\Theta}(\mathbb{T}^{2})}\approx\sup_{t\in(0,e^{-1})}\frac{K(t,f;L^{p_{0}}(\mathbb{T}^{2}),\operatorname{BMO}(\mathbb{T}^{2}))}{t\,\Theta(-\log t)}\approx\sup_{t\in(0,e^{-1})}\frac{[(M_{\mathbb{T}^{2}}^{\#}f)^{p_{0}}]^{**}(t)^{1/p_{0}}}{\Theta(-\log t)}\approx\sup_{t\in(0,e^{-1})}\frac{(M_{\mathbb{T}^{2}}^{\#}f)^{*}(t)}{\Theta(-\log t)}. \tag{2.15}\]
_Remark 2.4_.: Note that the last expression in (2.15) is independent of \(p_{0}\). This is in accord with the fact that the definition of \(Y_{p_{0}}^{\#\Theta}(\mathbb{T}^{2})\) does not depend on \(p_{0}\).
Proof of Theorem 2.3.: Using (2.3) we can write
\[K(e^{-1},f;L^{p_{0}}(\mathbb{T}^{2}),\operatorname{BMO}(\mathbb{T}^{2}))\approx[(M_{\mathbb{T}^{2}}^{\#}f)^{p_{0}}]^{**}(e^{-1})^{1/p_{0}}\approx\|f\|_{(L^{p_{0}})^{\#}(\mathbb{T}^{2})}.\]
Accordingly, the first and second equivalences in (2.15) are consequences of (2.9).
We now prove the last equivalence in (2.15). We claim that
\[\int_{0}^{t}y_{\Theta}\bigg{(}\frac{1}{s}\bigg{)}^{p_{0}}\,ds\lesssim ty_{ \Theta}\bigg{(}\frac{1}{t}\bigg{)}^{p_{0}},\qquad t\in(0,1). \tag{2.16}\]
Assuming momentarily the validity of (2.16), we have
\[[(M_{\mathbb{T}^{2}}^{\#}f)^{p_{0}}]^{**}(t)^{1/p_{0}} =\bigg[\frac{1}{t}\int_{0}^{t}\bigg[\frac{(M_{\mathbb{T}^{2}}^{\#}f)^{*}(s)}{y_{\Theta}(\frac{1}{s})}\,y_{\Theta}\bigg(\frac{1}{s}\bigg)\bigg]^{p_{0}}\,ds\bigg]^{1/p_{0}}\] \[\leq\bigg[\frac{1}{t}\int_{0}^{t}y_{\Theta}\bigg(\frac{1}{s}\bigg)^{p_{0}}\,ds\bigg]^{1/p_{0}}\sup_{s\in(0,1)}\frac{(M_{\mathbb{T}^{2}}^{\#}f)^{*}(s)}{y_{\Theta}(\frac{1}{s})}\] \[\lesssim y_{\Theta}\bigg(\frac{1}{t}\bigg)\,\sup_{s\in(0,1)}\frac{(M_{\mathbb{T}^{2}}^{\#}f)^{*}(s)}{y_{\Theta}(\frac{1}{s})}.\]
Consequently,
\[\sup_{t\in(0,1)}\frac{[(M_{\mathbb{T}^{2}}^{\#}f)^{p_{0}}]^{**}(t)^{1/p_{0}}}{y_{\Theta}(\frac{1}{t})}\lesssim\sup_{t\in(0,1)}\frac{(M_{\mathbb{T}^{2}}^{\#}f)^{*}(t)}{y_{\Theta}(\frac{1}{t})}.\]
The converse estimate is a simple consequence of the fact that \((M_{\mathbb{T}^{2}}^{\#}f)^{*}\) is decreasing. This completes the proof of the last equivalence in (2.15), under the assumption that (2.16) holds true.
Next we turn to the proof of (2.16). For this purpose we need the following easily established fact:
\[y_{\Theta}(r)\approx\widetilde{y}_{\Theta}(r):=\inf_{p>2p_{0}}\{\Theta(p)r^{1 /p}\},\qquad\text{for}\qquad r>1. \tag{2.17}\]
Indeed, the estimate \(y_{\Theta}(r)\leq\widetilde{y}_{\Theta}(r)\) is obvious. Conversely, let \(p>p_{0}\), then \(2p>2p_{0}\), and using the doubling property of \(\Theta\) we see that, for \(r>1\),
\[\Theta(p)r^{\frac{1}{p}}\approx\Theta(2p)r^{\frac{1}{p}}\geq\Theta(2p)r^{ \frac{1}{2p}}\geq\widetilde{y}_{\Theta}(r).\]
Therefore, taking the infimum over all \(p>p_{0}\), we obtain \(y_{\Theta}(r)\gtrsim\widetilde{y}_{\Theta}(r)\), completing the proof of (2.17).
Let \(\varepsilon\in(\frac{1}{2p_{0}},\frac{1}{p_{0}})\). Observe that
\[r\in(1,\infty)\mapsto r^{-\varepsilon}\,\widetilde{y}_{\Theta}(r)=\inf_{p>2p _{0}}\{\Theta(p)r^{-\varepsilon+1/p}\}\quad\text{is a decreasing function.} \tag{2.18}\]
Let \(t\in(0,1)\); then, by (2.17) and (2.18), we have
\[\int_{0}^{t}y_{\Theta}\bigg{(}\frac{1}{s}\bigg{)}^{p_{0}}\,ds \approx\int_{0}^{t}\widetilde{y}_{\Theta}\bigg{(}\frac{1}{s}\bigg{)} ^{p_{0}}\,ds=\int_{0}^{t}s^{-\varepsilon p_{0}}\bigg{[}\bigg{(}\frac{1}{s} \bigg{)}^{-\varepsilon}\widetilde{y}_{\Theta}\bigg{(}\frac{1}{s}\bigg{)}\bigg{]} ^{p_{0}}\,ds\] \[\approx t^{1-\varepsilon p_{0}}\bigg{[}\bigg{(}\frac{1}{t}\bigg{)} ^{-\varepsilon}\widetilde{y}_{\Theta}\bigg{(}\frac{1}{t}\bigg{)}\bigg{]}^{p_{ 0}}\approx ty_{\Theta}\bigg{(}\frac{1}{t}\bigg{)}^{p_{0}},\]
which concludes the proof of (2.16); consequently, the theorem is proved.
### Proof of Theorem 1.2
We shall assume that all function spaces are defined on \(\Omega=\mathbb{R}^{2}\) (cf. Remark 2.5 below for the modifications needed to deal with \(\Omega=\mathbb{T}^{2}\)).
Let \(p_{0}\in(1,\infty)\). Recall that CZOs act boundedly on \(L^{p_{0}}\) and BMO (cf. [30]); in particular, for \(\mathcal{K}\) given by (1.2), we have
\[\mathcal{K}:L^{p_{0}}\to L^{p_{0}}\qquad\text{and}\qquad\mathcal{K}:\text{BMO }\to\text{BMO}.\]
By interpolation we find that, with norm independent of \(\theta,\)
\[\mathcal{K}:(L^{p_{0}},\text{BMO})^{\bullet}_{\theta,p(\theta)}\to(L^{p_{0}},\text{BMO})^{\bullet}_{\theta,p(\theta)},\]
where \(p(\theta)=p_{0}/(1-\theta)\). This implies (cf. (1.2))
\[\|\nabla v\|_{(L^{p_{0}},\text{BMO})^{\bullet}_{\theta,p(\theta)}}\lesssim\| \omega\|_{(L^{p_{0}},\text{BMO})^{\bullet}_{\theta,p(\theta)}} \tag{2.19}\]
with constant independent of \(\theta.\)
Next we show that
\[\|f-f_{\infty}\|_{(L^{p_{0}},L^{\infty})^{\bullet}_{\theta,p(\theta)}}\lesssim (1-\theta)^{-1}\|f\|_{(L^{p_{0}},\text{BMO})^{\bullet}_{\theta,p(\theta)}}, \tag{2.20}\]
where \(f_{\infty}:=\lim_{|Q|\to\infty}f_{Q}\) and \(f_{Q}\) denotes the _integral average_ of \(f\) related to the cube \(Q,\) i.e., \(f_{Q}:=\frac{1}{|Q|}\int_{Q}f\).
We start by reformulating the following quantitative version of the John-Nirenberg embedding [21], which asserts that BMO is locally embedded into \(e^{L}\) (cf. [5, Proposition 8.10, p. 398])
\[(f-f_{\infty})^{**}(t)\lesssim\int_{t}^{\infty}f^{\#*}(s)\,\frac{ds}{s}. \tag{2.21}\]
Here \(f^{\#}\) is the _sharp maximal function of Fefferman-Stein_22[16]. Indeed, for our purposes, it is convenient to rewrite (2.21) in terms of \(M^{\#}f\) rather than \(f^{\#}.\) The connection between these maximal functions is given by (cf. [20, Lemma 3.4])
Footnote 22: Recall that \(f^{\#}(x)=\sup_{Q\ni x}\frac{1}{|Q|}\int_{Q}|f-f_{Q}|.\)
\[f^{\#*}(t)\approx(M^{\#}f)^{**}(t). \tag{2.22}\]
Therefore, (2.21) can be expressed as
\[(f-f_{\infty})^{**}(t)\lesssim\int_{t}^{\infty}(M^{\#}f)^{**}(s)\,\frac{ds}{s}. \tag{2.23}\]
Applying \(L^{p}\)-norms on both sides of (2.23) and estimating the right-hand side using the pair of Hardy inequalities in [31, Appendix A.4, page 272], we arrive at
\[\bigg{\{}\int_{0}^{\infty}[(f-f_{\infty})^{**}(t)]^{p}\,dt\bigg{\}}^ {1/p} \lesssim\bigg{\{}\int_{0}^{\infty}\bigg{[}\int_{t}^{\infty}(M^{\#} f)^{**}(s)\,\frac{ds}{s}\bigg{]}^{p}\,dt\bigg{\}}^{1/p}\] \[\leq p\,\bigg{\{}\int_{0}^{\infty}[(M^{\#}f)^{**}(t)]^{p}\,dt \bigg{\}}^{1/p}\] \[=p\,\bigg{\{}\int_{0}^{\infty}\bigg{[}\frac{1}{t}\int_{0}^{t}(M^{ \#}f)^{*}(s)\,ds\bigg{]}^{p}\,dt\bigg{\}}^{1/p}\] \[\leq\frac{p^{2}}{p-1}\,\|f\|_{(L^{p})^{\#}}.\]
Consequently, we have the following variant of the Fefferman-Stein inequality (cf. [5])
\[\|f-f_{\infty}\|_{L^{p}}=\bigg{\{}\int_{0}^{\infty}[(f-f_{\infty})^{*}(t)]^{p }\,dt\bigg{\}}^{1/p}\lesssim\frac{p^{2}}{p-1}\,\|f\|_{(L^{p})^{\#}}.\]
In particular, since \(p_{0}>1\), and \(\frac{1}{p(\theta)}=\frac{1-\theta}{p_{0}}\),
\[\|f-f_{\infty}\|_{L^{p(\theta)}}\lesssim\frac{(1-\theta)^{-1}}{p_{0}-1+\theta }\,\|f\|_{(L^{p(\theta)})^{\#}}\lesssim(1-\theta)^{-1}\|f\|_{(L^{p(\theta)})^{ \#}}.\]
In view of (1.23) and (2.1), the previous estimate can be rewritten as
\[\|f-f_{\infty}\|_{(L^{p_{0}},L^{\infty})^{\bullet}_{\theta,p(\theta)}} \lesssim(1-\theta)^{-1}\|f\|_{(L^{p_{0}},\mathrm{BMO})^{\bullet}_{\theta,p( \theta)}},\]
proving that (2.20) holds.
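For the reader's convenience, we note that the dependence on \(\theta\) of the constant above is elementary arithmetic (a routine check, using only \(p(\theta)=p_{0}/(1-\theta)\)):

\[\frac{p(\theta)^{2}}{p(\theta)-1}=\frac{p_{0}^{2}}{(1-\theta)^{2}}\cdot\frac{1-\theta}{p_{0}-1+\theta}=\frac{p_{0}^{2}}{(1-\theta)(p_{0}-1+\theta)}\lesssim(1-\theta)^{-1},\]

since \(p_{0}-1+\theta\geq p_{0}-1>0\) for fixed \(p_{0}>1\).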
Applying (2.20) to \(f=\nabla v\) (and noting that \((\nabla v)_{\infty}=0\)), combined with (2.19), yields
\[\|\nabla v\|_{(L^{p_{0}},L^{\infty})^{\bullet}_{\theta,p(\theta)}}\lesssim(1- \theta)^{-1}\|\omega\|_{(L^{p_{0}},\mathrm{BMO})^{\bullet}_{\theta,p(\theta)}}\]
uniformly with respect to \(\theta\in(0,1)\). Multiplying both sides of the above estimate by the factor \((1-\theta)\Theta\big{(}\frac{p_{0}}{1-\theta}\big{)}^{-1}\) and taking the supremum over all \(\theta\in(0,1)\), we find
\[\|\nabla v\|_{\Delta_{\theta\in(0,1)}\left\{\frac{(L^{p_{0}},L^{\infty})^{\bullet}_{\theta,p(\theta)}}{\Theta_{1}(\frac{p_{0}}{1-\theta})}\right\}}\lesssim\|\omega\|_{\Delta_{\theta\in(0,1)}\left\{\frac{(L^{p_{0}},\mathrm{BMO})^{\bullet}_{\theta,p(\theta)}}{\Theta_{1}(\frac{p_{0}}{1-\theta})}\right\}}\approx\|\omega\|_{Y^{\#\Theta}_{p_{0}}}, \tag{2.24}\]
where \(\Theta_{1}\) was introduced in (1.6) and we have also used (2.2) in the last step.
Now using (1.28), we rewrite the left-hand side of (2.24) as
\[\|\nabla v\|_{\Delta_{\theta\in(0,1)}\left\{\frac{(L^{p_{0}},L^{\infty})^{\bullet}_{\theta,p(\theta)}}{\Theta_{1}(\frac{p_{0}}{1-\theta})}\right\}}\approx\|v\|_{\Delta_{\theta\in(0,1)}\left\{\frac{(\dot{W}^{1}_{p_{0}},\dot{W}^{1}_{\infty})^{\bullet}_{\theta,p(\theta)}}{\Theta_{1}(\frac{p_{0}}{1-\theta})}\right\}}.\]
Furthermore, the Sobolev embedding theorem \(\dot{W}^{1}_{p_{0}}\hookrightarrow L^{\infty}\) (recall that \(p_{0}>2\)), and the reiteration property of \(\Delta\)-extrapolation (cf. Appendix A.2, (A.3)) yield
\[\Delta_{\theta\in(0,1)}\bigg{\{}\frac{(\dot{W}^{1}_{p_{0}},\dot{W }^{1}_{\infty})^{\bullet}_{\theta,p(\theta)}}{\Theta_{1}(\frac{p_{0}}{1- \theta})}\bigg{\}} \hookrightarrow\Delta_{\theta\in(0,1)}\bigg{\{}\frac{(L^{\infty},\dot {W}^{1}_{\infty})^{\bullet}_{\theta,p(\theta)}}{\Theta_{1}(\frac{p_{0}}{1- \theta})}\bigg{\}}\] \[=\Delta_{\theta\in(0,1)}\bigg{\{}\frac{(L^{\infty},\dot{W}^{1}_{ \infty})^{\bullet}_{\theta,\infty}}{\Theta_{1}(\frac{p_{0}}{1-\theta})}\bigg{\}}.\]
Consequently,
\[\|v\|_{\Delta_{\theta\in(0,1)}\left\{\frac{(L^{\infty},\dot{W}^{1}_{\infty})^{\bullet}_{\theta,\infty}}{\Theta_{1}(\frac{p_{0}}{1-\theta})}\right\}}\lesssim\|v\|_{\Delta_{\theta\in(0,1)}\left\{\frac{(\dot{W}^{1}_{p_{0}},\dot{W}^{1}_{\infty})^{\bullet}_{\theta,p(\theta)}}{\Theta_{1}(\frac{p_{0}}{1-\theta})}\right\}}. \tag{2.25}\]
Updating (2.24) via (2.25) we arrive at
\[\|v\|_{\Delta_{\theta\in(0,1)}\left\{\frac{(L^{\infty},\dot{W}^{1}_{\infty})^{\bullet}_{\theta,\infty}}{\Theta_{1}(\frac{p_{0}}{1-\theta})}\right\}}\lesssim\|\omega\|_{Y^{\#\Theta}_{p_{0}}}. \tag{2.26}\]
Next we compute the norm of the extrapolation space on the left-hand side of (2.26) and show that
\[\|v\|_{\Delta_{\theta\in(0,1)}\left\{\frac{(L^{\infty},\dot{W}^{1}_{\infty})^{\bullet}_{\theta,\infty}}{\Theta_{1}(\frac{p_{0}}{1-\theta})}\right\}}\approx\sup_{x,y}\,\frac{|v(x)-v(y)|}{\inf_{t>|x-y|}ty_{\Theta_{1}}(\frac{1}{t})}. \tag{2.27}\]
The argument was already outlined in Section 1.4. For the sake of completeness, next we give full details. Indeed, in light of (1.5), we have (cf. (A.4) and (A.5))
\[\|v\|_{\Delta_{\theta\in(0,1)}\left\{\frac{(L^{\infty},\dot{W}^{1}_{\infty})^{\bullet}_{\theta,\infty}}{\Theta_{1}(\frac{p_{0}}{1-\theta})}\right\}}\approx\sup_{t\in(0,\infty)}\frac{K(t,v;L^{\infty},\dot{W}^{1}_{\infty})}{ty_{\Theta_{1}}(\frac{1}{t})}. \tag{2.28}\]
Applying the characterization of the \(K\)-functional for \((L^{\infty},\dot{W}^{1}_{\infty})\) (cf. (1.30)) we find
\[\|v\|_{\Delta_{\theta\in(0,1)}\left\{\frac{(L^{\infty},\dot{W}^{1}_{\infty})^{\bullet}_{\theta,\infty}}{\Theta_{1}(\frac{p_{0}}{1-\theta})}\right\}}\approx\sup_{t\in(0,\infty)}\frac{\sup_{|x-y|<t}|v(x)-v(y)|}{ty_{\Theta_{1}}(\frac{1}{t})}=\sup_{x,y}|v(x)-v(y)|\sup_{t>|x-y|}\frac{1}{ty_{\Theta_{1}}(\frac{1}{t})}.\]
Hence, the desired result (2.27) follows.
Putting together (2.26) and (2.27), we obtain
\[\sup_{x,y\in\Omega}\,\frac{|v(x)-v(y)|}{\inf_{t>|x-y|}ty_{\Theta_{1}}(\frac{1} {t})}\lesssim\|\omega\|_{Y^{\#\Theta}_{p_{0}}}.\]
In particular,
\[|v(x)-v(y)|\lesssim|x-y|\,y_{\Theta_{1}}\bigg{(}\frac{1}{|x-y|}\bigg{)}\| \omega\|_{Y^{\#\Theta}_{p_{0}}},\]
thus completing the proof of Theorem 1.2.
_Remark 2.5_.: The above proof can be easily adapted to deal with \(\Omega=\mathbb{T}^{2}\). In particular, the periodic counterpart of (2.21) is given by (cf. [5, Corollary 7.4, p. 379]), for \(t\in(0,\frac{1}{6})\),
\[(f-f_{\mathbb{T}^{2}})^{**}(t)\lesssim\int_{t}^{1}f^{\#*}(s)\,\frac{ds}{s},\]
where \(f_{\mathbb{T}^{2}}\) denotes the integral mean of \(f\). Accordingly, the fact that \((\nabla v)_{\infty}=0\) in the above proof is replaced by \((\nabla v)_{\mathbb{T}^{2}}=0\).
### Examples of vorticities in \(Y_{p_{0}}^{\#\Theta}\) that are not in \(Y_{p_{0}}^{\Theta}\)
To simplify the exposition, throughout this section we assume that \(\Omega=\mathbb{T}^{2}\). Recall that by construction, given a growth function \(\Theta\), \(Y_{p_{0}}^{\#\Theta}\) is a bigger space than \(Y_{p_{0}}^{\Theta}\) (cf. (1.34)). Furthermore, in the special case \(\Theta(p)\approx 1\), we have \(Y_{p_{0}}^{\Theta}=L^{\infty}\subsetneq\operatorname{BMO}=Y_{p_{0}}^{\#\Theta}\). It is of interest to understand better the relationship between these spaces. In this section, we provide a method to construct explicit examples of functions \(\omega\in Y_{p_{0}}^{\#\Theta}\backslash Y_{p_{0}}^{\Theta}\), for a variety of growths.
**Example 3** (The case \(\Theta(p)\approx p^{\alpha}\), \(\alpha>0\)).: Let
\[\omega(x)=|\log|x||^{\alpha+1}.\]
We will show that
\[\omega\in Y^{\#\Theta},\qquad\omega\not\in Y^{\Theta}. \tag{2.29}\]
Indeed, basic computations lead to
\[\omega^{*}(t)\approx(-\log t)^{\alpha+1} \tag{2.30}\]
and (recall that \(\Omega=\mathbb{T}^{2}\))
\[|\nabla\omega|^{*}(t)\approx t^{-1/2}(-\log t)^{\alpha}. \tag{2.31}\]
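For completeness, we indicate how (2.30) and (2.31) may be verified (a routine computation, assuming \(\omega\) is radial near the origin): for large \(\lambda\),

\[|\{x\in\mathbb{T}^{2}:\omega(x)>\lambda\}|\approx|\{x:|x|<e^{-\lambda^{1/(\alpha+1)}}\}|\approx e^{-2\lambda^{1/(\alpha+1)}},\]

and inverting this distribution function at level \(t\) gives \(\omega^{*}(t)\approx(\frac{1}{2}\log\frac{1}{t})^{\alpha+1}\approx(-\log t)^{\alpha+1}\) as \(t\to 0^{+}\), i.e., (2.30). Similarly, \(|\nabla\omega(x)|\approx|x|^{-1}|\log|x||^{\alpha}\), and the substitution \(|x|\approx t^{1/2}\) leads to (2.31).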
Next we are able to bound \(\omega^{\#*}\) via the following pointwise estimate obtained in [15, Theorem 2.6]:
\[\omega^{\#*}(t)\lesssim\sum_{l=0}^{1}\bigg{[}t^{-1/r_{0}}\bigg{(}\int_{0}^{t} (\xi^{1/r}\,|\nabla^{l}\omega|^{*}(\xi))^{r_{0}}\,\frac{d\xi}{\xi}\bigg{)}^{1 /r_{0}}+\sup_{t<\xi<1}\xi^{1/2}\,|\nabla^{l}\omega|^{*}(\xi)\bigg{]}, \tag{2.32}\]
where \(r_{0}>2\) and \(r=\frac{2r_{0}}{2+r_{0}}\). We treat the four terms appearing on the right-hand side of the previous estimate. Assume first that \(l=0\) (i.e., \(|\nabla^{0}\omega|^{*}=\omega^{*}\)). By (2.30), we have
\[\bigg{(}\int_{0}^{t}(\xi^{1/r}\,\omega^{*}(\xi))^{r_{0}}\,\frac{d\xi}{\xi} \bigg{)}^{1/r_{0}}\approx\bigg{(}\int_{0}^{t}[\xi^{1/r}(-\log\xi)^{\alpha+1}]^ {r_{0}}\,\frac{d\xi}{\xi}\bigg{)}^{1/r_{0}}\approx t^{1/r}(-\log t)^{\alpha+1}\]
and
\[\sup_{t<\xi<1}\xi^{1/2}\,\omega^{*}(\xi)\approx\sup_{t<\xi<1}\xi^{1/2}\,(- \log\xi)^{\alpha+1}\approx 1.\]
Putting these estimates together, we obtain
\[t^{-1/r_{0}}\bigg{(}\int_{0}^{t}(\xi^{1/r}\,\omega^{*}(\xi))^{r_ {0}}\,\frac{d\xi}{\xi}\bigg{)}^{1/r_{0}}+\sup_{t<\xi<1}\xi^{1/2}\,\omega^{*}(\xi)\] \[\approx t^{1/2}(-\log t)^{\alpha+1}+1\approx 1. \tag{2.33}\]
Next we deal with the term on the right-hand side of (2.32) that corresponds to \(l=1\). Using (2.31), we get the following estimates
\[\bigg{(}\int_{0}^{t}(\xi^{1/r}\,|\nabla\omega|^{*}(\xi))^{r_{0} }\,\frac{d\xi}{\xi}\bigg{)}^{1/r_{0}} \approx\bigg{(}\int_{0}^{t}(\xi^{1/r_{0}}(-\log\xi)^{\alpha})^{r_ {0}}\,\frac{d\xi}{\xi}\bigg{)}^{1/r_{0}}\] \[\approx t^{1/r_{0}}(-\log t)^{\alpha}\]
and (since \(\alpha>0\))
\[\sup_{t<\xi<1}\xi^{1/2}\,|\nabla\omega|^{*}(\xi)\approx\sup_{t<\xi<1}(-\log \xi)^{\alpha}=(-\log t)^{\alpha}.\]
Hence
\[t^{-1/r_{0}}\bigg{(}\int_{0}^{t}(\xi^{1/r}\,|\nabla\omega|^{*}(\xi))^{r_{0}}\, \frac{d\xi}{\xi}\bigg{)}^{1/r_{0}}+\sup_{t<\xi<1}\xi^{1/2}\,|\nabla\omega|^{*}( \xi)\approx(-\log t)^{\alpha}. \tag{2.34}\]
Inserting (2.33) and (2.34) into (2.32), we obtain the following upper estimate for \(\omega^{\#*}\),
\[\omega^{\#*}(t)\lesssim(-\log t)^{\alpha},\]
or equivalently (cf. (2.22))
\[(M^{\#}\omega)^{**}(t)\lesssim(-\log t)^{\alpha}. \tag{2.35}\]
Since we are working on \(\Omega=\mathbb{T}^{2}\), without loss of generality we may assume that \(p_{0}=1\). Applying Theorem 2.3 and Example 2 (with \(\Theta(p)\approx p^{\alpha}\)) and using (2.30) and (2.35), we compute
\[\|\omega\|_{Y^{\Theta}}\approx\sup_{t\in(0,e^{-1})}\frac{\omega^{**}(t)}{ \Theta(-\log t)}\approx\sup_{t\in(0,e^{-1})}\frac{(-\log t)^{\alpha+1}}{(- \log t)^{\alpha}}=\infty\]
and
\[\|\omega\|_{Y^{\#\Theta}}\approx\sup_{t\in(0,e^{-1})}\frac{(M^{\#}\omega)^{** }(t)}{\Theta(-\log t)}\lesssim\sup_{t\in(0,e^{-1})}\frac{(-\log t)^{\alpha}}{ (-\log t)^{\alpha}}=1.\]
This concludes the proof of (2.29).
**Example 4** (The case \(\Theta(p)\approx\log p\)).: This example is motivated by the fact that \(\log(1+|\log|x||)\) is a prototype of a function in \(Y^{\Theta}\), while \(|\log|x||\) is a prototype of a function in BMO. We consider their product, namely,
\[\omega(x)=(1+|\log|x||)\log(1+|\log|x||),\]
and show that
\[\omega\in Y^{\#\Theta},\qquad\omega\not\in Y^{\Theta}. \tag{2.36}\]
We follow closely the method of Example 3. Specifically, by elementary manipulations we find
\[\omega^{*}(t)\approx(1-\log t)\log(1-\log t) \tag{2.37}\]
and
\[|\nabla\omega|^{*}(t)\approx t^{-1/2}\,(1+\log(1-\log t)).\]
Recall that \(r=\frac{2r_{0}}{2+r_{0}}\), where \(r_{0}>2\). Therefore
\[t^{-1/r_{0}}\bigg{(}\int_{0}^{t}(\xi^{1/r}\,\omega^{*}(\xi))^{r_{0}}\,\frac{ d\xi}{\xi}\bigg{)}^{1/r_{0}}+\sup_{t<\xi<1}\xi^{1/2}\,\omega^{*}(\xi)\approx t^{1/2} (1-\log t)\log(1-\log t)+1\approx 1\]
and
\[t^{-1/r_{0}}\bigg{(}\int_{0}^{t}(\xi^{1/r}\,|\nabla\omega|^{*}(\xi))^{r_{0}}\, \frac{d\xi}{\xi}\bigg{)}^{1/r_{0}}+\sup_{t<\xi<1}\xi^{1/2}\,|\nabla\omega|^{*} (\xi)\approx\log(1-\log t).\]
Inserting these two estimates into (2.32), we obtain
\[\omega^{\#*}(t)\lesssim\log(1-\log t). \tag{2.38}\]
Invoking Theorem 2.3 and Example 2 (with \(\Theta(p)\approx\log p\)) together with (2.37) and (2.38), we get
\[\|\omega\|_{Y^{\Theta}}\approx\sup_{t\in(0,e^{-1})}(-\log t)=\infty\]
and
\[\|\omega\|_{Y^{\#\Theta}}\lesssim\sup_{t\in(0,e^{-1})}\frac{\log(-\log t)}{\log(- \log t)}=1.\]
Hence \(\omega\) fulfils (2.36).
_Remark 2.6_.: It is possible to extend the methodology applied in Examples 3 and 4 to deal with more general growths \(\Theta\) of logarithmic type. Further details are left to the reader.
## 3. The spaces \(\dot{B}^{\beta}_{\Pi}\)
Let \(\mathcal{S}(\mathbb{R}^{d})\) denote the Schwartz space and \(\mathcal{S}^{\prime}(\mathbb{R}^{d})\) the space of tempered distributions. We consider the space \(\dot{\mathcal{S}}(\mathbb{R}^{d})\) formed by \(\varphi\in\mathcal{S}(\mathbb{R}^{d})\) with \((D^{\alpha}\widehat{\varphi})(0)=0\) for any multi-index \(\alpha\in\mathbb{N}^{d}_{0}\), where \(\mathbb{N}_{0}=\mathbb{N}\cup\{0\}\); this space carries the natural Fréchet topology inherited from \(\mathcal{S}(\mathbb{R}^{d})\). Let \(\dot{\mathcal{S}}^{\prime}(\mathbb{R}^{d})\) be its dual space, which can be identified with \(\mathcal{S}^{\prime}(\mathbb{R}^{d})\) modulo polynomials.
Let \(\varphi\in C^{\infty}_{c}(\mathbb{R}^{d})\) be a radial function with
\[\text{supp }\varphi\subset\bigg{\{}\xi:\frac{3}{4}<|\xi|<\frac{7}{4}\bigg{\}},\]
\[\varphi(\xi)=1\quad\text{for}\quad\frac{7}{8}<|\xi|<\frac{9}{8},\]
\[\sum_{j\in\mathbb{Z}}\varphi(2^{-j}\xi)=1\quad\text{for all}\quad\xi\neq 0.\]
Then the homogeneous Littlewood–Paley blocks are defined by

\[\widehat{\dot{\Delta}_{j}f}(\xi):=\varphi(2^{-j}\xi)\widehat{f}(\xi),\qquad j\in\mathbb{Z},\]
and
\[\Delta_{j}:=\dot{\Delta}_{j}\quad\text{if}\quad j>0,\qquad\Delta_{0}:=\text{ Id}-\sum_{j>0}\Delta_{j}.\]
**Definition 2**.: Let \(\beta\in\mathbb{R}\), and let \(\Pi\) be a growth function. Then \(\dot{B}^{\beta}_{\Pi}(\mathbb{R}^{d})\) denotes the (homogeneous) _Vishik space_ formed by all \(f\in\dot{\mathcal{S}}^{\prime}(\mathbb{R}^{d})\) such that
\[\|f\|_{\dot{B}^{\beta}_{\Pi}(\mathbb{R}^{d})}:=\sup_{N\geq 0}\,\frac{1}{\Pi(N )}\,\sum_{j=-\infty}^{N}2^{j\beta}\,\|\dot{\Delta}_{j}f\|_{L^{\infty}( \mathbb{R}^{d})}<\infty.\]
The inhomogeneous counterpart, \(B^{\beta}_{\Pi}(\mathbb{R}^{d})\), is the set of all \(f\in\mathcal{S}^{\prime}(\mathbb{R}^{d})\) such that
\[\|f\|_{B^{\beta}_{\Pi}(\mathbb{R}^{d})}:=\sup_{N\geq 0}\,\frac{1}{\Pi(N)}\, \sum_{j=0}^{N}2^{j\beta}\,\|\Delta_{j}f\|_{L^{\infty}(\mathbb{R}^{d})}<\infty.\]
_Remark 3.1_.:
1. Standard properties of multipliers can be used to show that Vishik spaces do not depend (up to equivalence of norms) on the chosen generator \(\varphi\).
2. In the special case \(\beta=0\), \(B^{\beta}_{\Pi}(\mathbb{R}^{d})\) coincides with the classical space \(B_{\Pi}(\mathbb{R}^{d})\) (cf. (1.8)).
3. Assume \(\Pi(N)\approx 1\). Then \(\dot{B}^{\beta}_{\Pi}(\mathbb{R}^{d})=\dot{B}^{\beta}_{\infty,1}(\mathbb{R}^{d})\), the classical Besov space (cf. [6]) defined by \[\|f\|_{\dot{B}^{\beta}_{\infty,1}(\mathbb{R}^{d})}:=\sum_{j\in\mathbb{Z}}2^{j\beta}\,\|\dot{\Delta}_{j}f\|_{L^{\infty}(\mathbb{R}^{d})}. \tag{3.1}\]
Analogously, \(B^{\beta}_{\Pi}(\mathbb{R}^{d})=B^{\beta}_{\infty,1}(\mathbb{R}^{d})\) with
\[\|f\|_{B^{\beta}_{\infty,1}(\mathbb{R}^{d})}:=\sum_{j\in\mathbb{N}_{0}}2^{j\beta }\,\|\Delta_{j}f\|_{L^{\infty}(\mathbb{R}^{d})}.\]
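As a simple illustration of Definition 2 (a hypothetical one-block example, not part of the original text): if \(\dot{\Delta}_{j}f=0\) for every \(j\neq j_{0}\), with \(j_{0}\geq 0\) and \(\Pi\) non-decreasing, then the inner sum vanishes for \(N<j_{0}\) and is constant for \(N\geq j_{0}\), whence

\[\|f\|_{\dot{B}^{\beta}_{\Pi}(\mathbb{R}^{d})}=\sup_{N\geq j_{0}}\frac{2^{j_{0}\beta}\|\dot{\Delta}_{j_{0}}f\|_{L^{\infty}(\mathbb{R}^{d})}}{\Pi(N)}=\frac{2^{j_{0}\beta}\|\dot{\Delta}_{j_{0}}f\|_{L^{\infty}(\mathbb{R}^{d})}}{\Pi(j_{0})}.\]

In other words, the growth \(\Pi\) discounts high frequencies relative to the classical \(\dot{B}^{\beta}_{\infty,1}\)-norm.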
### Characterizations
The goal of this section is to provide characterizations of the Vishik spaces as extrapolation spaces. We show analogs of the results obtained earlier for the \(Y^{\Theta}_{p_{0}}\) and \(Y^{\#\Theta}_{p_{0}}\) spaces (cf. Section 2.1).
We need to impose some natural restrictions on the growth functions used in this section.
**Definition 3**.: Let \(\kappa>0.\) We shall denote by \(\mathcal{P}_{\kappa}\), the set of all growth functions \(\Pi\) satisfying the following conditions:
1. \(\Pi:[0,\infty)\to(0,\infty)\) is non-decreasing,
2. \(\Pi\) is doubling,
3. \(e^{1/p}\,\Pi(p)\) is quasi-decreasing,
4. \(\sum_{j=N}^{\infty}2^{-j\kappa}\Pi(j)\lesssim 2^{-N\kappa}\Pi(N)\) for every \(N\geq 0\).
Clearly, \(\Pi(p)=(p+1)^{\alpha}(\log(p+e))^{\alpha_{1}}(\log_{2}(p+e))^{\alpha_{2}} \cdots(\log_{m}(p+e))^{\alpha_{m}}\), where \(\alpha,\alpha_{i}\geq 0\), are examples of growth functions in \(\mathcal{P}_{\kappa}\).
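For instance, condition (iv) for the pure power \(\Pi(j)=(j+1)^{\alpha}\) follows from the elementary bound \(\frac{N+1+i}{N+1}\leq 1+i\) for \(N\geq 0\) (a routine verification; the logarithmic factors are handled in the same way):

\[\sum_{j=N}^{\infty}2^{-j\kappa}(j+1)^{\alpha}=2^{-N\kappa}\sum_{i=0}^{\infty}2^{-i\kappa}(N+1+i)^{\alpha}\leq 2^{-N\kappa}(N+1)^{\alpha}\sum_{i=0}^{\infty}2^{-i\kappa}(1+i)^{\alpha}\lesssim 2^{-N\kappa}(N+1)^{\alpha}.\]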
In dealing with interpolation of Besov spaces, we make use of the fact that the computation of \(K\)-functionals for the Besov pair \((\dot{B}^{\beta-\kappa}_{\infty,1}(\mathbb{R}^{d}),\dot{B}^{\beta}_{\infty,1}(\mathbb{R}^{d}))\) can be reduced, via the method of retracts (cf. Appendix A.1 for further details), to the computation of \(K\)-functionals for the vector-valued sequence spaces \((\ell^{\beta-\kappa}_{1}(L^{\infty}(\mathbb{R}^{d})),\ell^{\beta}_{1}(L^{\infty}(\mathbb{R}^{d})))\) (cf. (A.1)).
Using the retract technique, we prove Theorem A.1 below, which in particular implies
\[(\dot{B}^{\beta-\kappa}_{\infty,1}(\mathbb{R}^{d}),\dot{B}^{\beta}_{\infty,1} (\mathbb{R}^{d}))^{\bullet}_{\theta,1}=\dot{B}^{\alpha}_{\infty,1}(\mathbb{R} ^{d}) \tag{3.2}\]
with \(\alpha=(1-\theta)(\beta-\kappa)+\theta\beta\). Here, the equivalence constant is independent of \(\theta\).
**Theorem 3.2**.: _Let \(\Pi\in\mathcal{P}_{\kappa}\) and \(\beta\in\mathbb{R}\). Then_
\[\begin{split}\|f\|_{\dot{B}^{\beta}_{\Pi}(\mathbb{R}^{d})\cap \dot{B}^{\beta-\kappa}_{\infty,1}(\mathbb{R}^{d})}&\approx\sup_{t \in(0,\infty)}\frac{K(t,f;\dot{B}^{\beta-\kappa}_{\infty,1}(\mathbb{R}^{d}), \dot{B}^{\beta}_{\infty,1}(\mathbb{R}^{d}))}{ty_{\Pi}(\frac{1}{t})}\\ &\approx\|f\|_{\Delta_{\theta\in(0,1)}\Big{\{}\frac{(\dot{B}^{ \beta-\kappa}_{\infty,1}(\mathbb{R}^{d}),\dot{B}^{\beta}_{\infty,1}(\mathbb{R} ^{d}))^{\bullet}_{\theta,\frac{1}{1-\theta}}}{\Pi(\frac{1}{1-\theta})}\Big{\}} }\\ &\approx\sup_{\alpha\in(\beta-\kappa,\beta)}\frac{\|f\|_{\dot{B}^ {\alpha}_{\infty,1}(\mathbb{R}^{d})}}{\Pi(\frac{1}{\beta-\alpha})}.\end{split} \tag{3.3}\]
_Here \(y_{\Pi}\) is given by (1.5) (with \(p_{0}=1\))._
Proof.: According to (A.3) and (A.4) (with \(p_{0}=1\)),
\[\begin{split}\|f\|_{\Delta_{\theta\in(0,1)}\Big{\{}\frac{(\dot{B} ^{\beta-\kappa}_{\infty,1}(\mathbb{R}^{d}),\dot{B}^{\beta}_{\infty,1}(\mathbb{R }^{d}))^{\bullet}_{\theta,\frac{1}{1-\theta}}}{\Pi(\frac{1}{1-\theta})}\Big{\}} }&\approx\sup_{t\in(0,\infty)}\frac{K(t,f;\dot{B}^{\beta-\kappa}_{ \infty,1}(\mathbb{R}^{d}),\dot{B}^{\beta}_{\infty,1}(\mathbb{R}^{d}))}{ty_{\Pi }(\frac{1}{t})}.\end{split} \tag{3.4}\]
This shows the second equivalence in (3.3).
We now prove the third equivalence in (3.3). We do this using the reiteration property for the \(\Delta\)-extrapolation method given in (A.3) and (A.6) (with \(p=1\))
\[\Delta_{\theta\in(0,1)}\bigg{\{}\frac{(\dot{B}_{\infty,1}^{\beta-\kappa}(\mathbb{ R}^{d}),\dot{B}_{\infty,1}^{\beta}(\mathbb{R}^{d}))_{\theta,\frac{1}{1-\theta}}^{ \bullet}}{\Pi(\frac{1}{1-\theta})}\bigg{\}}=\Delta_{\theta\in(0,1)}\bigg{\{} \frac{(\dot{B}_{\infty,1}^{\beta-\kappa}(\mathbb{R}^{d}),\dot{B}_{\infty,1}^{ \beta}(\mathbb{R}^{d}))_{\theta,1}^{\bullet}}{\Pi(\frac{1}{1-\theta})}\bigg{\}}\]
combined with (3.2), and a change of variables, to obtain
\[\|f\|_{\Delta_{\theta\in(0,1)}\left\{\frac{(\dot{B}_{\infty,1}^{\beta-\kappa}(\mathbb{R}^{d}),\dot{B}_{\infty,1}^{\beta}(\mathbb{R}^{d}))_{\theta,\frac{1}{1-\theta}}^{\bullet}}{\Pi(\frac{1}{1-\theta})}\right\}}\approx\sup_{\alpha\in(\beta-\kappa,\beta)}\frac{\|f\|_{\dot{B}_{\infty,1}^{\alpha}(\mathbb{R}^{d})}}{\Pi(\frac{1}{\beta-\alpha})}.\]
Finally, we prove the first equivalence in (3.3). Note that in view of the properties of \(y_{\Pi}\), proved in Lemma 2, and the monotonicity properties of \(K\)-functionals, we have
\[\sup_{t\in(0,\infty)}\frac{K(t,f;\dot{B}_{\infty,1}^{\beta-\kappa }(\mathbb{R}^{d}),\dot{B}_{\infty,1}^{\beta}(\mathbb{R}^{d}))}{ty_{\Pi}(\frac{ 1}{t})}\approx\sup_{t\in(0,1/e)}\frac{K(t,f;\dot{B}_{\infty,1}^{\beta-\kappa} (\mathbb{R}^{d}),\dot{B}_{\infty,1}^{\beta}(\mathbb{R}^{d}))}{t\Pi(-\log t)}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\sup_{t \in(1/e,1)}\frac{K(t,f;\dot{B}_{\infty,1}^{\beta-\kappa}(\mathbb{R}^{d}),\dot {B}_{\infty,1}^{\beta}(\mathbb{R}^{d}))}{ty_{\Pi}(\frac{1}{t})}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\sup_{t\in(1, \infty)}K(t,f;\dot{B}_{\infty,1}^{\beta-\kappa}(\mathbb{R}^{d}),\dot{B}_{ \infty,1}^{\beta}(\mathbb{R}^{d})) \tag{3.5}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\approx I+II,\]
where
\[I:=\sup_{t\in(0,1/e)}\frac{K(t,f;\dot{B}_{\infty,1}^{\beta-\kappa}(\mathbb{R}^ {d}),\dot{B}_{\infty,1}^{\beta}(\mathbb{R}^{d}))}{t\Pi(-\log t)}\]
and
\[II:=\sup_{t\in(1,\infty)}K(t,f;\dot{B}_{\infty,1}^{\beta-\kappa}(\mathbb{R}^ {d}),\dot{B}_{\infty,1}^{\beta}(\mathbb{R}^{d})).\]
The desired equivalence in (3.3) will then follow if we show the following
\[I\approx\|f\|_{\dot{B}_{\Pi}^{\beta}(\mathbb{R}^{d})} \tag{3.6}\]
and
\[II\approx\|f\|_{\dot{B}_{\infty,1}^{\beta-\kappa}(\mathbb{R}^{d})}. \tag{3.7}\]
To prove these claims we revert to sequences of vector-valued functions using the method of retracts. Then we see that
\[I \approx\sup_{t\in(0,1/e)}\frac{1}{t\Pi(-\log t)}\,K(t,\{\dot{ \Delta}_{j}f\}_{j\in\mathbb{Z}};\ell_{1}^{\beta-\kappa}(L^{\infty}(\mathbb{R}^ {d})),\ell_{1}^{\beta}(L^{\infty}(\mathbb{R}^{d})))\] \[\approx\sup_{N\geq 0}\frac{1}{2^{-N\kappa}\,\Pi(N)}\,K(2^{-N \kappa},\{\dot{\Delta}_{j}f\}_{j\in\mathbb{Z}};\ell_{1}^{\beta-\kappa}(L^{ \infty}(\mathbb{R}^{d})),\ell_{1}^{\beta}(L^{\infty}(\mathbb{R}^{d}))),\]
where in the last step we have used the monotonicity properties of \(\Pi\) and \(K\)-functionals. Applying known characterizations for \(K\)-functionals (cf. (A.2)), we derive
\[I \approx\sup_{N\geq 0}\,\frac{1}{2^{-N\kappa}\,\Pi(N)}\,\sum_{j=- \infty}^{\infty}\min\{1,2^{(j-N)\kappa}\}\,2^{j(\beta-\kappa)}\|\dot{\Delta}_{j }f\|_{L^{\infty}(\mathbb{R}^{d})}\] \[\approx\sup_{N\geq 0}\,\frac{1}{\Pi(N)}\,\sum_{j=-\infty}^{N}2^{j \beta}\|\dot{\Delta}_{j}f\|_{L^{\infty}(\mathbb{R}^{d})}\] \[\qquad\quad+\sup_{N\geq 0}\,\frac{1}{2^{-N\kappa}\,\Pi(N)}\, \sum_{j=N}^{\infty}2^{j(\beta-\kappa)}\|\dot{\Delta}_{j}f\|_{L^{\infty}( \mathbb{R}^{d})}\] \[=:I_{1}+I_{2}.\]
We claim that
\[I_{2}\lesssim I_{1}, \tag{3.8}\]
from where it follows that (cf. Definition 2)
\[I\approx I_{1}=\|f\|_{\dot{B}^{\beta}_{\Pi}(\mathbb{R}^{d})},\]
i.e., (3.6) holds.
It remains to prove (3.8). Let \(N\geq 0\). We use the assumption (iv) in the definition of \(\mathcal{P}_{\kappa}\) (cf. Definition 3), to estimate
\[\sum_{j=N}^{\infty}2^{j(\beta-\kappa)}\|\dot{\Delta}_{j}f\|_{L^{ \infty}(\mathbb{R}^{d})} \leq\sup_{j\geq 0}\,\{\Pi(j)^{-1}2^{j\beta}\|\dot{\Delta}_{j}f\|_{L ^{\infty}(\mathbb{R}^{d})}\}\,\sum_{j=N}^{\infty}2^{-j\kappa}\Pi(j)\] \[\approx 2^{-N\kappa}\Pi(N)\,\sup_{j\geq 0}\,\{\Pi(j)^{-1}2^{j \beta}\|\dot{\Delta}_{j}f\|_{L^{\infty}(\mathbb{R}^{d})}\}\] \[\leq 2^{-N\kappa}\Pi(N)\,I_{1}.\]
Taking the supremum over all \(N\geq 0\), we arrive at the desired estimate (3.8).
We turn to the proof of (3.7). The estimate \(\lesssim\) follows trivially from the very definition of \(K\)-functional. The converse estimate can be obtained from (A.2). To be more precise, let \(N\in\mathbb{N}\), then
\[K(2^{N\kappa},\{f_{j}\}_{j\in\mathbb{Z}};\ell_{1}^{\beta-\kappa }(L^{\infty}(\mathbb{R}^{d})),\ell_{1}^{\beta}(L^{\infty}(\mathbb{R}^{d}))) \approx\sum_{j=-\infty}^{\infty}\min\{1,2^{(j+N)\kappa}\}\,2^{j (\beta-\kappa)}\|f_{j}\|_{L^{\infty}(\mathbb{R}^{d})}\] \[\geq\sum_{j=-N}^{\infty}2^{j(\beta-\kappa)}\|f_{j}\|_{L^{\infty} (\mathbb{R}^{d})}.\]
Hence
\[\sup_{N\in\mathbb{N}}\,K(2^{N\kappa},\{f_{j}\}_{j\in\mathbb{Z}};\ell_{1}^{ \beta-\kappa}(L^{\infty}(\mathbb{R}^{d})),\ell_{1}^{\beta}(L^{\infty}( \mathbb{R}^{d})))\gtrsim\sum_{j=-\infty}^{\infty}2^{j(\beta-\kappa)}\|f_{j}\|_ {L^{\infty}(\mathbb{R}^{d})}.\]
Using once again the retraction method, we obtain (cf. (3.1))
\[II\gtrsim\|f\|_{\dot{B}^{\beta-\kappa}_{\infty,1}(\mathbb{R}^{d})}.\]
This ends the proof of (3.7).
Finally, putting together (3.5), (3.6) and (3.7),
\[\sup_{t\in(0,\infty)}\frac{K(t,f;\dot{B}^{\beta-\kappa}_{\infty,1}(\mathbb{R}^{d}),\dot{B}^{\beta}_{\infty,1}(\mathbb{R}^{d}))}{ty_{\Pi}(\frac{1}{t})}\approx \|f\|_{\dot{B}^{\beta}_{\Pi}(\mathbb{R}^{d})\cap\dot{B}^{\beta-\kappa}_{\infty,1}(\mathbb{R}^{d})}.\]
This concludes the proof of the theorem.
_Remark 3.3_.: The inhomogeneous counterpart of Theorem 3.2 also holds. Note that, in this case, the outcome is independent of \(\kappa>0\) since
\[B^{\beta}_{\Pi}(\mathbb{R}^{d})\cap B^{\beta-\kappa}_{\infty,1}(\mathbb{R}^{d })=B^{\beta}_{\Pi}(\mathbb{R}^{d}).\]
Indeed, for any fixed \(N\in\mathbb{N}\), we have
\[\|f\|_{B^{\beta-\kappa}_{\infty,1}(\mathbb{R}^{d})} =\sum_{j=0}^{N}2^{j(\beta-\kappa)}\|\Delta_{j}f\|_{L^{\infty}( \mathbb{R}^{d})}+\sum_{j=N+1}^{\infty}2^{j(\beta-\kappa)}\|\Delta_{j}f\|_{L^{ \infty}(\mathbb{R}^{d})}\] \[\leq\sum_{j=0}^{N}2^{j\beta}\|\Delta_{j}f\|_{L^{\infty}(\mathbb{R }^{d})}+\sum_{j=N+1}^{\infty}2^{j(\beta-\kappa)}\|\Delta_{j}f\|_{L^{\infty}( \mathbb{R}^{d})}\] \[\leq\Pi(N)\|f\|_{B^{\beta}_{\Pi}(\mathbb{R}^{d})}+\bigg{(}\sum_{j =N+1}^{\infty}2^{-j\kappa}\Pi(j)\bigg{)}\,\|f\|_{B^{\beta}_{\Pi}(\mathbb{R}^{ d})}\] \[\lesssim\Pi(N)(1+2^{-N\kappa})\,\|f\|_{B^{\beta}_{\Pi}(\mathbb{R }^{d})},\]
where the last step follows from the property (iv) in the definition of \(\mathcal{P}_{\kappa}\) (cf. Definition 3).
One may also show that the characterizations provided by Theorem 3.2 in the inhomogeneous setting are independent of \(\kappa\) via
\[\sup_{\alpha\in(\beta-\kappa,\beta)}\frac{\|f\|_{B^{\alpha}_{\infty,1}( \mathbb{R}^{d})}}{\Pi(\frac{1}{\beta-\alpha})}\approx\sup_{\alpha\in(\beta- \kappa_{0},\beta)}\frac{\|f\|_{B^{\alpha}_{\infty,1}(\mathbb{R}^{d})}}{\Pi( \frac{1}{\beta-\alpha})}\]
for \(\kappa,\kappa_{0}>0\). The latter is an immediate consequence of the trivial embedding \(B^{\alpha}_{\infty,1}(\mathbb{R}^{d})\hookrightarrow B^{\alpha-\varepsilon}_{ \infty,1}(\mathbb{R}^{d})\) for any \(\varepsilon>0\).
A third explanation of the independence of \(\kappa\) in Theorem 3.2, now from an extrapolation point of view, may be found in (A.7) below.
### Uniqueness for a family of active scalar equations
Consider the class of active scalar equations on \(\mathbb{R}^{d}\) described in Section 1.9, namely,
\[\left\{\begin{array}{l}\omega_{t}+v\cdot\nabla\omega=0,\\ v=R(-\Delta)^{\frac{\beta-1}{2}}\omega,\end{array}\right. \tag{3.9}\]
where \(\beta\in\mathbb{R}\). The main goal of this section is to establish the uniqueness result for (3.9) in the class of Vishik spaces. To do this, we will follow the extrapolation approach outlined in Section 1.6 adequately adapted to (3.9) and Vishik spaces. The corresponding Step 1 was already carried out in Section 3.1. Next, we turn our attention to Step 2, i.e., estimates for the modulus of continuity of \(v\).
**Theorem 3.4**.: _Assume that the growth function \(\Pi\in\mathcal{P}_{1},\) and let \(\omega\in\dot{B}^{\beta}_{\Pi}(\mathbb{R}^{d})\cap\dot{B}^{\beta-1}_{\infty,1} (\mathbb{R}^{d})\). Then_
\[|v(x)-v(y)|\lesssim|x-y|\,y_{\Pi}\bigg{(}\frac{1}{|x-y|}\bigg{)}\,\|\omega\|_{ \dot{B}^{\beta}_{\Pi}(\mathbb{R}^{d})\cap\dot{B}^{\beta-1}_{\infty,1}(\mathbb{ R}^{d})},\]
_where \(y_{\Pi}\) is the Yudovich function associated to \(\Pi\) (cf. (1.5) with \(p_{0}=1\))._
Proof.: Recall the well-known fact that \(R\) (Riesz-type transforms) acts boundedly on the Besov space \(\dot{B}^{\beta}_{\infty,1}(\mathbb{R}^{d})\) for all \(\beta\in\mathbb{R}\) (see e.g. [17, Proposition 4.7]). By interpolation, it follows that
\[\|R\omega\|_{(\dot{B}^{\beta-1}_{\infty,1}(\mathbb{R}^{d}),\dot{B}^{\beta}_{ \infty,1}(\mathbb{R}^{d}))^{\bullet}_{\theta,\frac{1}{1-\theta}}}\lesssim\| \omega\|_{(\dot{B}^{\beta-1}_{\infty,1}(\mathbb{R}^{d}),\dot{B}^{\beta}_{ \infty,1}(\mathbb{R}^{d}))^{\bullet}_{\theta,\frac{1}{1-\theta}}}\]
uniformly with respect to \(\theta\in(0,1)\). Using the Biot-Savart law given in (3.9), we can rewrite the previous estimate as
\[\|(-\Delta)^{\frac{-\beta+1}{2}}v\|_{(\dot{B}^{\beta-1}_{\infty,1}(\mathbb{R}^ {d}),\dot{B}^{\beta}_{\infty,1}(\mathbb{R}^{d}))^{\bullet}_{\theta,\frac{1}{ 1-\theta}}}\lesssim\|\omega\|_{(\dot{B}^{\beta-1}_{\infty,1}(\mathbb{R}^{d}), \dot{B}^{\beta}_{\infty,1}(\mathbb{R}^{d}))^{\bullet}_{\theta,\frac{1}{1- \theta}}}. \tag{3.10}\]
Furthermore, as a consequence of basic multiplier assertions (cf. [6, Lemma 6.2.1]), the operator \((-\Delta)^{\frac{-\beta+1}{2}}\) acts as an isomorphism from \(\dot{B}^{0}_{\infty,1}(\mathbb{R}^{d})\) onto \(\dot{B}^{\beta-1}_{\infty,1}(\mathbb{R}^{d})\) and from \(\dot{B}^{1}_{\infty,1}(\mathbb{R}^{d})\) onto \(\dot{B}^{\beta}_{\infty,1}(\mathbb{R}^{d})\). Once again by interpolation, we derive
\[\|(-\Delta)^{\frac{-\beta+1}{2}}v\|_{(\dot{B}^{\beta-1}_{\infty,1}(\mathbb{R} ^{d}),\dot{B}^{\beta}_{\infty,1}(\mathbb{R}^{d}))^{\bullet}_{\theta,\frac{1}{ 1-\theta}}}\approx\|v\|_{(\dot{B}^{0}_{\infty,1}(\mathbb{R}^{d}),\dot{B}^{1} _{\infty,1}(\mathbb{R}^{d}))^{\bullet}_{\theta,\frac{1}{1-\theta}}}.\]
Therefore we can update (3.10) as follows
\[\|v\|_{(\dot{B}^{0}_{\infty,1}(\mathbb{R}^{d}),\dot{B}^{1}_{\infty,1}(\mathbb{ R}^{d}))^{\bullet}_{\theta,\frac{1}{1-\theta}}}\lesssim\|\omega\|_{(\dot{B}^{ \beta-1}_{\infty,1}(\mathbb{R}^{d}),\dot{B}^{\beta}_{\infty,1}(\mathbb{R}^{d }))^{\bullet}_{\theta,\frac{1}{1-\theta}}}. \tag{3.11}\]
It follows from the trivial embedding
\[\dot{B}^{0}_{\infty,1}(\mathbb{R}^{d})\hookrightarrow L^{\infty}(\mathbb{R}^{ d}) \tag{3.12}\]
and the classical Bernstein inequality for entire functions of exponential type (cf. [29, p. 116]) that
\[\|\nabla v\|_{L^{\infty}(\mathbb{R}^{d})} \lesssim\|\nabla v\|_{\dot{B}^{0}_{\infty,1}(\mathbb{R}^{d})}= \sum_{j=-\infty}^{\infty}\|\nabla\dot{\Delta}_{j}v\|_{L^{\infty}(\mathbb{R}^{ d})}\] \[\lesssim\sum_{j=-\infty}^{\infty}2^{j}\,\|\dot{\Delta}_{j}v\|_{L^ {\infty}(\mathbb{R}^{d})}=\|v\|_{\dot{B}^{1}_{\infty,1}(\mathbb{R}^{d})}.\]
In other words,
\[\dot{B}^{1}_{\infty,1}(\mathbb{R}^{d})\hookrightarrow\dot{W}^{1}_{\infty}( \mathbb{R}^{d}). \tag{3.13}\]
According to (3.12) and (3.13),
\[\|v\|_{(L^{\infty}(\mathbb{R}^{d}),\dot{W}^{1}_{\infty}(\mathbb{R}^{d}))^{ \bullet}_{\theta,\frac{1}{1-\theta}}}\lesssim\|v\|_{(\dot{B}^{0}_{\infty,1}( \mathbb{R}^{d}),\dot{B}^{1}_{\infty,1}(\mathbb{R}^{d}))^{\bullet}_{\theta, \frac{1}{1-\theta}}},\]
which implies (cf. (3.11))
\[\|v\|_{(L^{\infty}(\mathbb{R}^{d}),\dot{W}^{1}_{\infty}(\mathbb{R}^{d}))^{ \bullet}_{\theta,\frac{1}{1-\theta}}}\lesssim\|\omega\|_{(\dot{B}^{\beta-1}_{ \infty,1}(\mathbb{R}^{d}),\dot{B}^{\beta}_{\infty,1}(\mathbb{R}^{d}))^{ \bullet}_{\theta,\frac{1}{1-\theta}}}.\]
Multiplying this inequality by \(1/\Pi((1-\theta)^{-1})\) and taking the supremum over all \(\theta\in(0,1)\), we arrive at the extrapolation inequality
\[\|v\|_{\Delta_{\theta\in(0,1)}\left\{\frac{(L^{\infty}(\mathbb{R}^{d}),\dot{W}^{1}_{\infty}(\mathbb{R}^{d}))^{\bullet}_{\theta,\frac{1}{1-\theta}}}{\Pi(\frac{1}{1-\theta})}\right\}}\lesssim\|\omega\|_{\Delta_{\theta\in(0,1)}\left\{\frac{(\dot{B}^{\beta-1}_{\infty,1}(\mathbb{R}^{d}),\dot{B}^{\beta}_{\infty,1}(\mathbb{R}^{d}))^{\bullet}_{\theta,\frac{1}{1-\theta}}}{\Pi(\frac{1}{1-\theta})}\right\}}. \tag{3.14}\]
The right-hand side of (3.14) was computed in Theorem 3.2:
\[\Delta_{\theta\in(0,1)}\bigg{\{}\frac{(\dot{B}_{\infty,1}^{\beta-1}(\mathbb{R}^{d} ),\dot{B}_{\infty,1}^{\beta}(\mathbb{R}^{d}))_{\theta,\frac{1}{1-\theta}}^{ \bullet}}{\Pi(\frac{1}{1-\theta})}\bigg{\}}=\dot{B}_{\Pi}^{\beta}(\mathbb{R}^ {d})\cap\dot{B}_{\infty,1}^{\beta-1}(\mathbb{R}^{d}). \tag{3.15}\]
The left-hand side of (3.14), in turn, was already characterized in (2.27) (replace \(\Theta_{1}\) by \(\Pi\) and let \(p_{0}=1\)) as
\[\|v\|_{\Delta_{\theta\in(0,1)}\left\{\frac{(L^{\infty}(\mathbb{R}^{d}),\dot{W}_{\infty}^{1}(\mathbb{R}^{d}))_{\theta,\frac{1}{1-\theta}}^{\bullet}}{\Pi(\frac{1}{1-\theta})}\right\}}\approx\sup_{x,y\in\mathbb{R}^{d}}\frac{|v(x)-v(y)|}{\inf_{t>|x-y|}ty_{\Pi}(\frac{1}{t})}. \tag{3.16}\]
Inserting (3.15) and (3.16) into (3.14), we have
\[|v(x)-v(y)|\lesssim|x-y|y_{\Pi}\bigg{(}\frac{1}{|x-y|}\bigg{)}\,\|\omega\|_{ \dot{B}_{\Pi}^{\beta}(\mathbb{R}^{d})\cap\dot{B}_{\infty,1}^{\beta-1}( \mathbb{R}^{d})}.\]
To obtain a uniqueness result for transport equations (3.9) in Vishik spaces we impose an Osgood condition on the growth.
**Theorem 3.5**.: _Assume that the growth function \(\Pi\in\mathcal{P}_{1}\) satisfies the Osgood type condition_
\[\int_{1}^{\infty}\frac{dr}{ry_{\Pi}(r)}=\infty.\]
_Then a Lagrangian weak solution \(\omega\) of (3.9), such that_
\[\omega\in L^{\infty}([0,T];\dot{B}_{\Pi}^{\beta}(\mathbb{R}^{d})\cap\dot{B}_ {\infty,1}^{\beta-1}(\mathbb{R}^{d}))\]
_is uniquely determined by its initial value \(\omega_{0}\)._
Proof.: Apply the methodology developed in Section 1.5 together with Theorem 3.4.
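For orientation, we record a standard computation (not taken from the original text): if \(\Pi(p)\approx p\), then for large \(r\) the infimum defining \(y_{\Pi}(r)\approx\inf_{p}\{p\,r^{1/p}\}\) is attained near \(p=\log r\), so \(y_{\Pi}(r)\approx\log r\) and

\[\int_{e}^{\infty}\frac{dr}{r\,y_{\Pi}(r)}\approx\int_{e}^{\infty}\frac{dr}{r\log r}=\infty,\]

so the Osgood condition of Theorem 3.5 holds. By contrast, \(\Pi(p)\approx p^{\alpha}\) with \(\alpha>1\) gives \(y_{\Pi}(r)\approx(\log r)^{\alpha}\), for which the integral converges and Theorem 3.5 does not apply.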
### Proof of Theorems 1.3 and 1.4
Invoke Theorems 3.4 and 3.5 with \(d=2\) and \(\beta=0\) (cf. Remark 3.1(ii)).
### Comparison of Theorem 1.4 with the uniqueness result of Vishik
As already mentioned in Remark 1.5, both conditions (1.9) and (1.36) are equivalent for \(\Pi\in\mathcal{P}_{1}\). In this section, we investigate the relationships between the function spaces involved in (1.10) and (1.37), namely,
\[B_{\Pi}(\mathbb{R}^{2})\cap L^{p_{0}}(\mathbb{R}^{2}),\qquad p_{0}\in(1,2),\]
and
\[\dot{B}_{\Pi}(\mathbb{R}^{2})\cap\dot{B}_{\infty,1}^{-1}(\mathbb{R}^{2}),\]
respectively.
**Proposition 1**.: _Let \(p_{0}\in(1,d)\). Then_
\[B_{\Pi}(\mathbb{R}^{d})\cap L^{p_{0}}(\mathbb{R}^{d})\hookrightarrow\dot{B}_{ \Pi}(\mathbb{R}^{d})\cap\dot{B}_{\infty,1}^{-1}(\mathbb{R}^{d}).\]
Proof.: Assume \(f\in B_{\Pi}(\mathbb{R}^{d})\cap L^{p_{0}}(\mathbb{R}^{d})\). The norm of \(f\) in \(\dot{B}_{\Pi}(\mathbb{R}^{d})\) can be estimated as follows. For each \(N\in\mathbb{N}\), we can apply the classical Nikolskii's inequality for entire functions of exponential type (cf. [29, Theorem 3.3.5, p. 126]) and basic multiplier assertions in order to get
\[\sum_{j=-\infty}^{N}\|\dot{\Delta}_{j}f\|_{L^{\infty}(\mathbb{R}^ {d})} =\sum_{j=-\infty}^{0}\|\dot{\Delta}_{j}f\|_{L^{\infty}(\mathbb{R}^ {d})}+\sum_{j=1}^{N}\|\dot{\Delta}_{j}f\|_{L^{\infty}(\mathbb{R}^{d})}\] \[\lesssim\sum_{j=-\infty}^{0}2^{jd/p_{0}}\|\dot{\Delta}_{j}f\|_{L ^{p_{0}}(\mathbb{R}^{d})}+\Pi(N)\|f\|_{B_{\Pi}(\mathbb{R}^{d})}\] \[\lesssim\|f\|_{L^{p_{0}}(\mathbb{R}^{d})}+\Pi(N)\|f\|_{B_{\Pi}( \mathbb{R}^{d})}\] \[\lesssim\Pi(N)\|f\|_{B_{\Pi}(\mathbb{R}^{d})\cap L^{p_{0}}( \mathbb{R}^{d})}.\]
Hence
\[B_{\Pi}(\mathbb{R}^{d})\cap L^{p_{0}}(\mathbb{R}^{d})\hookrightarrow\dot{B}_ {\Pi}(\mathbb{R}^{d}) \tag{3.17}\]
for any \(p_{0}\in(1,\infty)\).
On the other hand, the \(\dot{B}_{\infty,1}^{-1}\)-norm of \(f\) can be split into
\[\|f\|_{\dot{B}_{\infty,1}^{-1}(\mathbb{R}^{d})}=\sum_{j=-\infty}^{\infty}2^{- j}\|\dot{\Delta}_{j}f\|_{L^{\infty}(\mathbb{R}^{d})}=I+II, \tag{3.18}\]
where
\[I:=\sum_{j=-\infty}^{0}2^{-j}\|\dot{\Delta}_{j}f\|_{L^{\infty}(\mathbb{R}^{d})} \tag{3.19}\]
and
\[II:=\sum_{j=1}^{\infty}2^{-j}\|\dot{\Delta}_{j}f\|_{L^{\infty}(\mathbb{R}^{d})}. \tag{3.20}\]
We can estimate \(I\) applying Nikolskii's inequality:
\[I\lesssim\sum_{j=-\infty}^{0}2^{-j(1-\frac{d}{p_{0}})}\|\dot{\Delta}_{j}f\|_{ L^{p_{0}}(\mathbb{R}^{d})}\lesssim\|f\|_{L^{p_{0}}(\mathbb{R}^{d})}\sum_{j=0}^{ \infty}2^{j(1-\frac{d}{p_{0}})}\lesssim\|f\|_{L^{p_{0}}(\mathbb{R}^{d})}, \tag{3.21}\]
where the last step follows from the assumption \(p_{0}<d\).
To estimate \(II\), we can argue as follows. The fact that \(f\in B_{\Pi}(\mathbb{R}^{d})\) implies that
\[\|\Delta_{j}f\|_{L^{\infty}(\mathbb{R}^{d})}\leq\Pi(j)\|f\|_{B_{\Pi}(\mathbb{ R}^{d})},\qquad j\in\mathbb{N}.\]
Therefore, taking into account that \(\Pi\in\mathcal{P}_{1}\) (cf. item (iv) in Definition 3), we obtain
\[II=\sum_{j=1}^{\infty}2^{-j}\|\Delta_{j}f\|_{L^{\infty}(\mathbb{R}^{d})}\leq \|f\|_{B_{\Pi}(\mathbb{R}^{d})}\sum_{j=1}^{\infty}2^{-j}\Pi(j)\lesssim\|f\|_{ B_{\Pi}(\mathbb{R}^{d})}. \tag{3.22}\]
Inserting (3.21) and (3.22) into (3.18), one achieves
\[B_{\Pi}(\mathbb{R}^{d})\cap L^{p_{0}}(\mathbb{R}^{d})\hookrightarrow\dot{B}_{ \infty,1}^{-1}(\mathbb{R}^{d}). \tag{3.23}\]
The combination of (3.17) and (3.23) yields the desired embedding
\[B_{\Pi}(\mathbb{R}^{d})\cap L^{p_{0}}(\mathbb{R}^{d})\hookrightarrow\dot{B}_ {\Pi}(\mathbb{R}^{d})\cap\dot{B}_{\infty,1}^{-1}(\mathbb{R}^{d}).\]
We close this section by showing that Theorem 1.4 with \(\Pi(p)\approx p\) (cf. (1.38)) also improves Vishik's classical uniqueness result formulated in terms of (1.11).
**Proposition 2**.: _Let \(p_{0}\in(1,d)\). Then_
\[\operatorname{bmo}(\mathbb{R}^{d})\cap L^{p_{0}}(\mathbb{R}^{d})\hookrightarrow \operatorname{BMO}(\mathbb{R}^{d})\cap\dot{B}^{-1}_{\infty,1}(\mathbb{R}^{d}). \tag{3.24}\]
Proof.: Assume \(f\in\operatorname{bmo}(\mathbb{R}^{d})\cap L^{p_{0}}(\mathbb{R}^{d})\). The Besov \(\dot{B}^{-1}_{\infty,1}\)-norm can be expressed as in (3.18)-(3.20), with \(I\) satisfying (3.21). Hence
\[\|f\|_{\dot{B}^{-1}_{\infty,1}(\mathbb{R}^{d})}=I+II\lesssim\|f\|_{L^{p_{0}}( \mathbb{R}^{d})}+II. \tag{3.25}\]
Next we estimate \(II\). The well-known embedding \(\operatorname{bmo}(\mathbb{R}^{d})\hookrightarrow B^{0}_{\infty,\infty}( \mathbb{R}^{d})\) (cf. (1.12)) implies
\[II=\sum_{j=1}^{\infty}2^{-j}\|\Delta_{j}f\|_{L^{\infty}(\mathbb{R}^{d})} \lesssim\|f\|_{B^{0}_{\infty,\infty}(\mathbb{R}^{d})}\lesssim\|f\|_{ \operatorname{bmo}(\mathbb{R}^{d})}. \tag{3.26}\]
According to (3.25) and (3.26),
\[\operatorname{bmo}(\mathbb{R}^{d})\cap L^{p_{0}}(\mathbb{R}^{d})\hookrightarrow \dot{B}^{-1}_{\infty,1}(\mathbb{R}^{d}).\]
This together with the trivial embedding \(\operatorname{bmo}(\mathbb{R}^{d})\hookrightarrow\operatorname{BMO}(\mathbb{R }^{d})\) implies the desired result (3.24).
## Appendix A Interpolation and Extrapolation: An Atlas
In order to help non-experts in extrapolation theory, in this appendix we give a summary of the results used in the paper, with documentation, commentary, and examples. We keep the notation and assumptions laid out in the previous sections. For the sake of convenience we also recall the location of some basic definitions.
### Interpolation of Besov spaces via retraction revisited
A common technique used in interpolation theory is to translate interpolation of function spaces into equivalent interpolation problems for sequence spaces ("the method of retracts"). In particular, in this paper the \(\ell^{\beta}_{1}(L^{\infty}(\mathbb{R}^{d}))\) spaces of sequences of vector-valued functions play an important role. We say that \(\{f_{j}\}_{j\in\mathbb{Z}}\in\ell^{\beta}_{1}(L^{\infty}(\mathbb{R}^{d}))\) if
(A.1) \[\|\{f_{j}\}_{j\in\mathbb{Z}}\|_{\ell^{\beta}_{1}(L^{\infty}(\mathbb{R}^{d}))}: =\sum_{j\in\mathbb{Z}}2^{j\beta}\|f_{j}\|_{L^{\infty}(\mathbb{R}^{d})}<\infty.\]
Their usefulness for us is that we can translate the interpolation of \(\dot{B}^{\beta}_{\infty,1}(\mathbb{R}^{d})\) spaces into interpolation of \(\ell^{\beta}_{1}(L^{\infty}(\mathbb{R}^{d})).\) This is effected by showing that the map
\[f\mapsto\{\dot{\Delta}_{j}f\}_{j\in\mathbb{Z}}\]
defines a retract from \(\dot{B}^{\beta}_{\infty,1}(\mathbb{R}^{d})\) onto \(\ell^{\beta}_{1}(L^{\infty}(\mathbb{R}^{d}))\) (cf. [6, Definition 6.4.1 and Theorem 6.4.3, pages 150-152]). In particular, \(K\)-functionals relative to vector-valued sequence spaces can be explicitly computed, cf. [6, p. 120].
**Example 5**.: We have
(A.2) \[K(t,\{f_{j}\}_{j\in\mathbb{Z}};\ell_{1}^{\beta-\kappa}(L^{\infty}(\mathbb{R}^{d})),\ell_{1}^{\beta}(L^{\infty}(\mathbb{R}^{d})))\approx\sum_{j=-\infty}^{\infty} \min\{1,2^{j\kappa}t\}\,2^{j(\beta-\kappa)}\,\|f_{j}\|_{L^{\infty}(\mathbb{R}^{ d})}.\]
In fact, more general statements are available in the literature. For example, (A.2) holds true with \(L^{\infty}(\mathbb{R}^{d})\) replaced by any Banach space \(X\). However, for our purposes, it is enough to restrict attention to (A.2).
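As a quick consistency check of (A.2) (a hypothetical one-block sequence): if \(f_{j}=0\) for \(j\neq j_{0}\), then placing \(f_{j_{0}}\) in the first space costs \(2^{j_{0}(\beta-\kappa)}\|f_{j_{0}}\|_{L^{\infty}(\mathbb{R}^{d})}\), while placing it in the second costs \(t\,2^{j_{0}\beta}\|f_{j_{0}}\|_{L^{\infty}(\mathbb{R}^{d})}=2^{j_{0}\kappa}t\,2^{j_{0}(\beta-\kappa)}\|f_{j_{0}}\|_{L^{\infty}(\mathbb{R}^{d})}\), whence

\[K(t,\{f_{j}\}_{j\in\mathbb{Z}};\ell_{1}^{\beta-\kappa}(L^{\infty}(\mathbb{R}^{d})),\ell_{1}^{\beta}(L^{\infty}(\mathbb{R}^{d})))\approx\min\{1,2^{j_{0}\kappa}t\}\,2^{j_{0}(\beta-\kappa)}\|f_{j_{0}}\|_{L^{\infty}(\mathbb{R}^{d})},\]

in agreement with (A.2).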
The next result is possibly a slight improvement of a known result (cf. [6, Section 6.4]), since we provide sharp constants.
**Theorem A.1**.: _Let \(-\infty<s_{0}<s_{1}<\infty,\,\theta\in(0,1)\), and \(s=(1-\theta)s_{0}+\theta s_{1}\). Then_
\[(\dot{B}^{s_{0}}_{\infty,1}(\mathbb{R}^{d}),\dot{B}^{s_{1}}_{\infty,1}( \mathbb{R}^{d}))_{\theta,1}^{\bullet}=\dot{B}^{s}_{\infty,1}(\mathbb{R}^{d})\]
_with underlying equivalence constants independent of \(\theta\)._
Proof.: We use the retraction method of interpolation, which implies that
\[\|f\|_{(\dot{B}^{s_{0}}_{\infty,1}(\mathbb{R}^{d}),\dot{B}^{s_{1}}_{\infty,1}( \mathbb{R}^{d}))_{\theta,1}}\approx\|\{\dot{\Delta}_{j}f\}_{j\in\mathbb{Z}}\| _{(\ell_{1}^{s_{0}}(L^{\infty}(\mathbb{R}^{d})),\ell_{1}^{s_{1}}(L^{\infty}( \mathbb{R}^{d})))_{\theta,1}}\]
with constants independent of \(\theta\). On the other hand, it follows from (A.2) that, for \(\{f_{j}\}_{j\in\mathbb{Z}}\in\ell_{1}^{s_{0}}(L^{\infty}(\mathbb{R}^{d}))+ \ell_{1}^{s_{1}}(L^{\infty}(\mathbb{R}^{d}))\),
\[\|\{f_{j}\}_{j\in\mathbb{Z}}\|_{(\ell_{1}^{s_{0}}(L^{\infty}( \mathbb{R}^{d})),\ell_{1}^{s_{1}}(L^{\infty}(\mathbb{R}^{d})))_{\theta,1}} \approx\sum_{\nu=-\infty}^{\infty}2^{-\theta\nu(s_{1}-s_{0})}K(2^{\nu(s_{1}-s_ {0})},\{f_{j}\}_{j\in\mathbb{Z}};\ell_{1}^{s_{0}}(L^{\infty}(\mathbb{R}^{d})),\ell_{1}^{s_{1}}(L^{\infty}(\mathbb{R}^{d})))\] \[\approx\sum_{\nu=-\infty}^{\infty}2^{-\theta\nu(s_{1}-s_{0})} \sum_{j=-\infty}^{\infty}\min\{2^{js_{0}},2^{js_{1}}2^{\nu(s_{1}-s_{0})}\}\,\| f_{j}\|_{L^{\infty}(\mathbb{R}^{d})}\] \[=\sum_{j=-\infty}^{\infty}2^{js_{0}}\|f_{j}\|_{L^{\infty}( \mathbb{R}^{d})}\bigg{(}2^{j(s_{1}-s_{0})}\sum_{\nu=-\infty}^{-j}2^{\nu(s_{1} -s_{0})(1-\theta)}+\sum_{\nu=-j}^{\infty}2^{-\theta\nu(s_{1}-s_{0})}\bigg{)}\] \[\approx(\theta(1-\theta))^{-1}\sum_{j=-\infty}^{\infty}2^{js}\|f _{j}\|_{L^{\infty}(\mathbb{R}^{d})}.\]
As a consequence,
\[\|f\|_{(\dot{B}^{s_{0}}_{\infty,1}(\mathbb{R}^{d}),\dot{B}^{s_{1 }}_{\infty,1}(\mathbb{R}^{d}))_{\theta,1}^{\bullet}}\approx(\theta(1-\theta))\| \{\dot{\Delta}_{j}f\}_{j\in\mathbb{Z}}\|_{(\ell_{1}^{s_{0}}(L^{\infty}(\mathbb{ R}^{d})),\ell_{1}^{s_{1}}(L^{\infty}(\mathbb{R}^{d})))_{\theta,1}}\] \[\approx\sum_{j=-\infty}^{\infty}2^{js}\|\dot{\Delta}_{j}f\|_{L^{ \infty}(\mathbb{R}^{d})}=\|f\|_{\dot{B}^{s}_{\infty,1}(\mathbb{R}^{d})}\,,\]
as we wished to show.
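For the reader's convenience, we also record the elementary evaluation of the two geometric sums appearing in the display above (with constants depending only on \(s_{1}-s_{0}\), using \(1-2^{-x}\approx\min\{1,x\}\) for \(x>0\)):

\[2^{j(s_{1}-s_{0})}\sum_{\nu=-\infty}^{-j}2^{\nu(s_{1}-s_{0})(1-\theta)}=\frac{2^{j\theta(s_{1}-s_{0})}}{1-2^{-(s_{1}-s_{0})(1-\theta)}}\approx\frac{2^{j\theta(s_{1}-s_{0})}}{1-\theta},\qquad\sum_{\nu=-j}^{\infty}2^{-\theta\nu(s_{1}-s_{0})}=\frac{2^{j\theta(s_{1}-s_{0})}}{1-2^{-\theta(s_{1}-s_{0})}}\approx\frac{2^{j\theta(s_{1}-s_{0})}}{\theta};\]

adding the two contributions produces the factor \((\theta(1-\theta))^{-1}\), while \(2^{js_{0}}\,2^{j\theta(s_{1}-s_{0})}=2^{js}\).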
_Remark A.2_.: The above proof can be easily adapted to deal with the general scale of the Besov spaces \(\dot{B}^{s}_{p,q}(\mathbb{R}^{d})\). Namely, if \(1\leq p,q\leq\infty,-\infty<s_{0}<s_{1}<\infty,\theta\in(0,1)\), and \(s=(1-\theta)s_{0}+\theta s_{1}\), then
\[(\dot{B}^{s_{0}}_{p,q}(\mathbb{R}^{d}),\dot{B}^{s_{1}}_{p,q}(\mathbb{R}^{d}))_{ \theta,q}^{\bullet}=\dot{B}^{s}_{p,q}(\mathbb{R}^{d})\]
with equivalence constants independent of \(\theta\).
### Results from Section 1.3
**Lemma A.3**.: _Let \(\theta\in(0,1)\), \(\theta\in(0,
#### A.2.1. Characterizations of \(\Delta\)-extrapolation spaces
A very useful result for the computation of \(\Delta\)-extrapolation spaces is the formula (1.20), which for convenience we reproduce here
(A.3) \[\Delta_{\theta\in(0,1)}\bigg{\{}\frac{(A_{0},A_{1})_{\theta,p(\theta)}^{\bullet}} {\Theta(\frac{1}{1-\theta})}\bigg{\}}=\Delta_{\theta\in(0,1)}\bigg{\{}\frac{(A _{0},A_{1})_{\theta,\infty}^{\bullet}}{\Theta(\frac{1}{1-\theta})}\bigg{\}}.\]
The result is used implicitly in [19] and a detailed proof can be found in [27, Theorem 21, page 44]. Further results and generalizations can be found in [23]. The import of (A.3) lies in the fact that the norm of the space on the right-hand side is a double supremum, which allows us to apply "Fubini" as follows
(A.4) \[\|f\|_{\Delta_{\theta\in(0,1)}\bigg{\{}\frac{(A_{0},A_{1})_{ \theta,\infty}^{\bullet}}{\Theta(\frac{1}{1-\theta})}\bigg{\}}} =\sup_{\theta\in(0,1)}\sup_{t>0}\frac{t^{-\theta}K(t,f;A_{0},A_{1 })}{\Theta(\frac{1}{1-\theta})}\] \[=\sup_{t>0}\frac{K(t,f;A_{0},A_{1})}{t}\sup_{\theta\in(0,1)} \frac{t^{1-\theta}}{\Theta(\frac{1}{1-\theta})}\] \[=\sup_{t>0}\frac{K(t,f;A_{0},A_{1})}{t^{1/p_{0}}\varphi_{\Theta} (\frac{1}{t})},\]
where
\[\varphi_{\Theta}(t)=\inf_{\theta\in(0,1)}\bigg{\{}\Theta\bigg{(}\frac{1}{1- \theta}\bigg{)}\,t^{\frac{1-\theta}{p_{0}}}\bigg{\}},\qquad p_{0}>0.\]
The \(\varphi_{\Theta}\) functions thus played a major role in [19]. Remarkably, as we pointed out in Section 1.3, although coming from completely different considerations they essentially coincide with the original Yudovich functions \(y_{\Theta}\) (cf. (1.5))
(A.5) \[\varphi_{\Theta}(t)\approx y_{\Theta}(t);\]
see also (1.25).
In this paper we make a strong use of both (A.3) and (A.4). In particular, to give an explicit characterization of the Yudovich spaces \(Y^{\Theta}\) (cf. (1.24) and (1.26)) and the sharp Yudovich spaces \(Y^{\#\Theta}\) (cf. Theorem 2.2, specially (2.11) and (2.12)), to establish various characterizations of the Vishik spaces \(B_{\Pi}^{\beta}\) (cf. Theorem 3.2, in particular, (3.4)), and to get estimates for the modulus of smoothness (cf. (2.28)).
#### A.2.2. Reiteration
The scale \(\{(A_{0},A_{1})_{\theta,p(\theta)}\}_{\theta\in(0,1)}\) is an example of _interpolation scale of exact order \(\theta\)_. We refer to [19, p. 7] for the precise definition. Another important example is given by \(\{(A_{0},A_{1})_{\theta,p}^{\bullet}\}_{\theta\in(0,1)}\) for a fixed \(p\in[1,\infty]\) (cf. (1.18)). It turns out that the extrapolation formula (A.3) is only a special case of a more general phenomenon based on interpolation scales of exact order \(\theta\). In particular, the formula is still valid when \(\{(A_{0},A_{1})_{\theta,p(\theta)}\}_{\theta\in(0,1)}\) is replaced by \(\{(A_{0},A_{1})_{\theta,p}^{\bullet}\}_{\theta\in(0,1)}\), namely,
(A.6) \[\Delta_{\theta\in(0,1)}\bigg{\{}\frac{(A_{0},A_{1})_{\theta,p}^{\bullet}}{ \Theta(\frac{1}{1-\theta})}\bigg{\}}=\Delta_{\theta\in(0,1)}\bigg{\{}\frac{(A _{0},A_{1})_{\theta,\infty}^{\bullet}}{\Theta(\frac{1}{1-\theta})}\bigg{\}}.\]
See [27, Theorem 21, page 44].
_The ordered case \(A_{1}\hookrightarrow A_{0}\)._ In this case we can achieve further simplifications for the computation of the \(\Delta\)-extrapolation spaces. In particular, we claim that
(A.7) \[\Delta_{\theta\in(0,1)}\biggl{\{}\frac{(A_{0},A_{1})_{\theta,p(\theta)}^{\bullet }}{\Theta(\frac{1}{1-\theta})}\biggr{\}}=\Delta_{\theta\in(\theta_{0},1)} \biggl{\{}\frac{(A_{0},A_{1})_{\theta,p(\theta)}^{\bullet}}{\Theta(\frac{1}{1- \theta})}\biggr{\}}\]
for every \(\theta_{0}\in(0,1)\). This result may be considered as the extrapolation version of the fact that the definitions of \(Y_{p_{0}}^{\Theta}(\Omega)\) and \(Y_{p_{0}}^{\#\Theta}(\Omega)\) with \(|\Omega|<\infty\) are independent of \(p_{0}\in[1,\infty)\). Indeed, recall that \(Y_{p_{0}}^{\Theta}(\Omega)\) and \(Y_{p_{0}}^{\#\Theta}(\Omega)\) are the \(\Delta\)-extrapolation spaces relative to the pairs \((L^{p_{0}}(\Omega),L^{\infty}(\Omega))\) and \((L^{p_{0}}(\Omega),\operatorname{BMO}(\Omega))\), respectively; cf. (1.23)-(1.24) and (2.1)-(2.2).
Proof of (A.7).: The non-trivial part of the statement is the embedding \(\hookleftarrow\). This can be derived as follows. We have
\[\|f\|_{\Delta_{\theta\in(0,1)}\left\{\frac{(A_{0},A_{1})_{\theta,p(\theta)}^{\bullet}}{\Theta(\frac{1}{1-\theta})}\right\}}\leq\|f\|_{\Delta_{\theta\in(0,\theta_{0})}\left\{\frac{(A_{0},A_{1})_{\theta,p(\theta)}^{\bullet}}{\Theta(\frac{1}{1-\theta})}\right\}}+\|f\|_{\Delta_{\theta\in(\theta_{0},1)}\left\{\frac{(A_{0},A_{1})_{\theta,p(\theta)}^{\bullet}}{\Theta(\frac{1}{1-\theta})}\right\}}.\]
To estimate the first term on the right-hand side of the previous inequality, we will make use of the fact that
(A.8) \[(A_{0},A_{1})_{\theta_{0},p(\theta_{0})}\hookrightarrow(A_{0},A_{1})_{\theta, p(\theta)}^{\bullet}\qquad\text{if}\qquad\theta<\theta_{0},\]
where the embedding constant is independent of \(\theta\). This result can be shown by using similar techniques as in [14, Lemma 4.5(ii)]. To make the presentation self-contained, we next provide full details. Using that \(A_{1}\hookrightarrow A_{0}\), it is plain to see that
\[K(t,f;A_{0},A_{1})\approx\|f\|_{A_{0}}\approx K(1,f;A_{0},A_{1}),\qquad\text{ for}\qquad t>1.\]
Then, by Hölder's inequality (noting that \(p(\theta)=\frac{1}{1-\theta}<\frac{1}{1-\theta_{0}}=p(\theta_{0})\)) and monotonicity properties of \(K\)-functionals,
\[\|f\|_{(A_{0},A_{1})_{\theta,p(\theta)}}^{p(\theta)} =\int_{0}^{1}[t^{-\theta}K(t,f;A_{0},A_{1})]^{p(\theta)}\,\frac{ dt}{t}+\int_{1}^{\infty}[t^{-\theta}K(t,f;A_{0},A_{1})]^{p(\theta)}\,\frac{dt}{t}\] \[\approx\int_{0}^{1}[t^{-\theta}K(t,f;A_{0},A_{1})]^{p(\theta)}\, \frac{dt}{t}+\frac{1}{\theta p(\theta)}\,\|f\|_{A_{0}}^{p(\theta)}\] \[\leq\bigg{(}\int_{0}^{1}[t^{-\theta_{0}}K(t,f;A_{0},A_{1})]^{p( \theta_{0})}\,\frac{dt}{t}\bigg{)}^{p(\theta)/p(\theta_{0})}+\frac{1}{\theta p (\theta)}\,\|f\|_{A_{0}}^{p(\theta)}\] \[\lesssim\frac{1}{\theta}\,\bigg{(}\int_{0}^{1}[t^{-\theta_{0}}K(t,f;A_{0},A_{1})]^{p(\theta_{0})}\,\frac{dt}{t}\bigg{)}^{p(\theta)/p(\theta_{0})}\] \[\leq\frac{1}{\theta}\,\|f\|_{(A_{0},A_{1})_{\theta_{0},p(\theta_{0 })}}^{p(\theta)},\]
where we have also used that \(\theta\in(0,\theta_{0})\) (and so \(p(\theta)\approx 1\)) in the penultimate estimate. The proof of (A.8) is finished.
It follows from (A.8) that
\[\sup_{\theta\in(0,\theta_{0})}\frac{\|f\|_{(A_{0},A_{1})_{\theta,p( \theta)}^{\bullet}}}{\Theta(\frac{1}{1-\theta})} \lesssim\|f\|_{(A_{0},A_{1})_{\theta_{0},p(\theta_{0})}}\sup_{ \theta\in(0,\theta_{0})}\frac{1}{\Theta(\frac{1}{1-\theta})}\] \[=\frac{\|f\|_{(A_{0},A_{1})_{\theta_{0},p(\theta_{0})}}}{\Theta(1)}\] \[\lesssim\sup_{\theta\in(\theta_{0},1)}\frac{\|f\|_{(A_{0},A_{1}) _{\theta,p(\theta)}^{\bullet}}}{\Theta(\frac{1}{1-\theta})}.\]
This completes the proof of (A.7).
### Computability of \(K\)-functionals
A central issue in interpolation theory is to find explicit expressions for \(K\)-functionals. Next we list some well-known examples of characterizations for \(K\)-functionals (see also Example 5) and we refer the interested reader to [5, 6] for further examples.
**Example 6**.: Let \((L^{p_{0}}(\Omega),L^{\infty}(\Omega)),\,p_{0}\in(0,\infty)\). Then (cf. [6, Theorem 5.2.1, page 109] and [5, Theorem 1.6, page 298])
(A.9) \[K(t,f;L^{p_{0}}(\Omega),L^{\infty}(\Omega))\approx\bigg{(}\int_{0}^{t^{p_{0}}} (f^{*}(\xi))^{p_{0}}\,d\xi\bigg{)}^{1/p_{0}}.\]
Equality holds in (A.9) if \(p_{0}=1\).
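As a toy illustration (not in the original): for \(f=\chi_{E}\) with \(|E|<\infty\) one has \(f^{*}=\chi_{[0,|E|)}\), so that for \(p_{0}=1\),

\[K(t,\chi_{E};L^{1}(\Omega),L^{\infty}(\Omega))=\int_{0}^{t}\chi_{[0,|E|)}(\xi)\,d\xi=\min\{t,|E|\},\]

which interpolates between \(t\,\|\chi_{E}\|_{L^{\infty}}\) for small \(t\) and \(\|\chi_{E}\|_{L^{1}}=|E|\) for large \(t\).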
**Example 7**.: Let \((L^{p_{0}}(\mathbb{R}^{d}),\mathrm{BMO}(\mathbb{R}^{d})),\,p_{0}\in(0,\infty)\). Then (cf. [20, Corollary 3.3])
(A.10) \[K(t,f;L^{p_{0}}(\mathbb{R}^{d}),\mathrm{BMO}(\mathbb{R}^{d}))\approx\bigg{(} \int_{0}^{t^{p_{0}}}[(M_{\mathbb{R}^{d}}^{\#}f)^{*}(\xi)]^{p_{0}}\,d\xi\bigg{)} ^{1/p_{0}},\]
where \(M_{\mathbb{R}^{d}}^{\#}\) is the maximal function given in (1.16). The local counterpart for function spaces defined on cubes also holds true. In order to suitably interpret \((L^{p_{0}}(\mathbb{R}^{d}),\mathrm{BMO}(\mathbb{R}^{d}))\) as an interpolation pair, it is necessary to factor out constant functions. Then \(L^{p_{0}}(\mathbb{R}^{d})\) can be identified with \(L^{p_{0}}(\mathbb{R}^{d})/\mathbb{C}\), where \(\|f\|_{L^{p_{0}}(\mathbb{R}^{d})/\mathbb{C}}=\inf_{c\in\mathbb{C}}\|f-c\|_{L^ {p_{0}}(\mathbb{R}^{d})}\).
**Example 8**.: The \(K\)-functional for the pair \((L^{\infty}(\mathbb{R}^{d}),\dot{W}^{1}_{\infty}(\mathbb{R}^{d}))\) plays an important role in our work (cf. (1.30)) and can be characterized as (cf. [5, (4.42), p. 341] and [22, Theorem 1])
(A.11) \[K(t,f;L^{\infty}(\mathbb{R}^{d}),\dot{W}^{1}_{\infty}(\mathbb{R}^{d}))\approx \sup_{|x-y|\leq t}|f(x)-f(y)|\,.\]
The corresponding formula for periodic functions is also true.
A natural assumption for \((A_{0},A_{1})\) is to be _Gagliardo closed_ in the sense that
\[\|f\|_{A_{0}}\approx\sup_{t>0}K(t,f;A_{0},A_{1})\qquad\text{and}\qquad\|f\|_ {A_{1}}\approx\sup_{t>0}\frac{K(t,f;A_{0},A_{1})}{t}.\]
See [5, p. 320]. This condition is easily verified for many classical pairs of spaces, in particular, for the pairs given in Examples 6-8.
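For example, for the pair of Example 6 this is a direct consequence of (A.9) (a routine check, using that \(f^{*}\) is non-increasing):

\[\sup_{t>0}K(t,f;L^{p_{0}},L^{\infty})\approx\bigg{(}\int_{0}^{\infty}(f^{*}(\xi))^{p_{0}}\,d\xi\bigg{)}^{1/p_{0}}=\|f\|_{L^{p_{0}}},\qquad\sup_{t>0}\frac{K(t,f;L^{p_{0}},L^{\infty})}{t}\approx\sup_{t>0}\bigg{(}\frac{1}{t^{p_{0}}}\int_{0}^{t^{p_{0}}}(f^{*}(\xi))^{p_{0}}\,d\xi\bigg{)}^{1/p_{0}}=f^{*}(0^{+})=\|f\|_{L^{\infty}}.\]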
**Proposition 3**.: _Suppose that \((A_{0},A_{1})\) is Gagliardo closed, and let \(\Theta(p)\approx 1\). Then_
\[\Delta_{\theta\in(0,1)}\bigg{\{}\frac{(A_{0},A_{1})_{\theta,p(\theta)}^{\bullet }}{\Theta(\frac{1}{1-\theta})}\bigg{\}}=A_{0}\cap A_{1}.\]
Proof.: Using (A.3) and the monotonicity properties of \(K\)-functionals (note that \(K(t,f;A_{0},A_{1})\) increases and \(\frac{K(t,f;A_{0},A_{1})}{t}\) decreases), we have
\[\|f\|_{\Delta_{\theta\in(0,1)}\left\{\frac{(A_{0},A_{1})^{\bullet} \theta,p(\theta)}{\Theta(\frac{1}{1-\theta})}\right\}} \approx\sup_{\theta\in(0,1)}\sup_{t>0}t^{-\theta}K(t,f;A_{0},A_{1})\] \[\approx\sup_{\theta\in(0,1)}\sup_{t>1}t^{-\theta}K(t,f;A_{0},A_{1 })+\sup_{\theta\in(0,1)}\sup_{t<1}t^{-\theta}K(t,f;A_{0},A_{1})\] \[=\sup_{t>1}K(t,f;A_{0},A_{1})\sup_{\theta\in(0,1)}t^{-\theta}+ \sup_{t<1}\frac{K(t,f;A_{0},A_{1})}{t}\sup_{\theta\in(0,1)}t^{1-\theta}\] \[=\sup_{t>1}K(t,f;A_{0},A_{1})+\sup_{t<1}\frac{K(t,f;A_{0},A_{1})} {t}\] \[\approx\left\|f\right\|_{A_{0}}+\left\|f\right\|_{A_{1}}.\]
The previous result can be used to readily justify the assertions made in (1.4) and (1.15) for \(Y_{p_{0}}^{\Theta}(\Omega)\) and \(Y_{p_{0}}^{\#\Theta}(\Omega)\) via the extrapolation formulae (1.24) and (2.2). For Vishik spaces, we have \(\dot{B}_{\Pi}^{\beta}(\mathbb{R}^{d})=\dot{B}_{\infty,1}^{\beta}(\mathbb{R}^{d})\) when \(\Pi(p)\approx 1\), and one could apply Proposition 3 to obtain the corresponding result that appears in (3.3).
|
2305.18919 | A sharp interface approach for wetting dynamics of coated droplets and
soft particles | The wetting dynamics of liquid particles, from coated droplets to soft
capsules, holds significant technological interest. Motivated by the need to
simulate liquid metal droplets with an oxidized surface layer, in this work we
introduce a computational scheme that allows us to simulate droplet dynamics with
general surface properties and model different levels of interface stiffness,
describing also cases that are intermediate between pure droplets and capsules.
Our approach is based on a combination of the immersed boundary (IB) and the
lattice Boltzmann (LB) methods. Here, we validate our approach against the
theoretical predictions in the context of shear flow and static wetting
properties and we show its effectiveness in accessing the wetting dynamics,
exploring the ability of the scheme to address a broad phenomenology. | Francesca Pelusi, Fabio Guglietta, Marcello Sega, Othmane Aouane, Jens Harting | 2023-05-30T10:14:52Z | http://arxiv.org/abs/2305.18919v3 | # A sharp interface approach for wetting dynamics of hydrophobic coated droplets and soft particles
###### Abstract
The wetting dynamics of liquid particles, from coated droplets to soft capsules, holds significant technological interest. Motivated by the need to simulate liquid metal droplets with an oxidized surface layer, in this work we introduce a computational scheme that allows us to simulate droplet dynamics with general surface properties and model different levels of interface stiffness, describing also cases that are intermediate between pure droplets and capsules. Our approach is based on a combination of the immersed boundary (IB) and the lattice Boltzmann (LB) methods. Here, we validate our approach against theoretical predictions in the context of shear flow and static wetting properties, and we show its effectiveness in accessing the wetting dynamics, exploring the ability of the scheme to address a broad phenomenology.
## I Introduction
The wetting of a solid surface by a liquid refers to the liquid's ability to maintain contact with the surface [1; 2; 3; 4]. The wettability of a solid substrate by a pure droplet is quantified by the droplet's equilibrium contact angle \(\theta_{eq}\), which, in turn, is determined by the balance between adhesive and cohesive forces of the three phases involved (solid, liquid, vapor). At the macroscopic scale, Young's equation [5] describes this balance as
\[\cos\theta_{eq}=\frac{\sigma_{sg}-\sigma_{sl}}{\sigma}, \tag{1}\]
where \(\sigma_{sl}\), \(\sigma_{sg}\) and \(\sigma\) are the solid-liquid, solid-gas and liquid-gas surface tensions, respectively. Eq. (1) also quantifies the degree of wettability, making the distinction between poor (\(\theta_{eq}>90^{\circ}\)) and good (\(\theta_{eq}<90^{\circ}\)) wetting regimes. Out of equilibrium, the additional complexities arising from time dependence and viscous dissipation make dynamic wetting critical to a wide range of phenomena, including droplet spreading, capillary rise, imbibition, and more complex situations like fluid displacement in porous media or multiphase flow in oil recovery [6; 7; 8; 9]. The recent development of new catalytic devices, for example, requires the use of liquid metals and metal alloys in the form of catalytic liquid droplets adsorbed on a porous solid support [10; 11]. However, several liquid metals such as gallium and gallium-based alloys oxidize when exposed to air, and an inherent oxide layer appears on top of the surface. This oxide layer acts as a solid-like "skin", encapsulating a liquid metal core [12; 13], and changes the wetting properties of the droplet [14; 15; 16; 17]. Another example concerns the so-called liquid marbles, realised by rolling a small liquid droplet in a poorly wetting powder. Because of the layer of powder grains at the liquid-air interface, the wetting of these droplets is inhibited [18; 19], as required in some recent technological and microfluidic applications [20; 21]. The lattice
Boltzmann (LB) method has been used for decades to address problems in wettability [22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33]. A typical strategy used to simulate droplets within the framework of LB includes introducing non-ideal interface force models, such as the Shan-Chen [34] and the Free-Energy ones [35]. However, these approaches model a diffuse interface. Simulating droplets with a complex rheology, specifically coated droplets, including for example liquid metal ones with an oxide layer or liquid marbles, requires the use of a constitutive law for the interface. In this case it is more convenient to use a method that reproduces the sharp-interface limit of hydrodynamics. In addition, pseudopotential or free-energy approaches do not easily allow modelling a behaviour that, as is typical for coated droplets, is intermediate between the case of a pure droplet and that of a capsule. For these reasons, here we have opted for combining the LB model with an immersed boundary (IB) method, which naturally preserves the hydrodynamic sharp-interface limit, to simulate the complex droplet's wetting dynamics (see Fig. 1).
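To make the IB ingredient more concrete for readers unfamiliar with it, the following is a minimal sketch (our own illustration in Python, not the authors' implementation; the function names, array layout, and periodic wrapping are assumptions) of the standard 4-point Peskin kernel and of how a single Lagrangian nodal force is spread to the Eulerian lattice:

```python
import numpy as np

def peskin_delta(r):
    """Standard 4-point Peskin kernel in one dimension (r in lattice units)."""
    r = abs(r)
    if r < 1.0:
        return (3.0 - 2.0 * r + np.sqrt(1.0 + 4.0 * r - 4.0 * r * r)) / 8.0
    if r < 2.0:
        return (5.0 - 2.0 * r - np.sqrt(-7.0 + 12.0 * r - 4.0 * r * r)) / 8.0
    return 0.0

def spread_force(node_pos, node_force, lattice_force):
    """Distribute one Lagrangian nodal force to the 4x4x4 nearest Eulerian sites.

    lattice_force has shape (Nx, Ny, Nz, 3); periodic boundaries are assumed.
    """
    base = np.floor(node_pos).astype(int) - 1  # lower corner of the 4^3 stencil
    shape = np.array(lattice_force.shape[:3])
    for i in range(4):
        for j in range(4):
            for k in range(4):
                site = base + np.array([i, j, k])
                # tensor-product weight of the three 1D kernels
                w = (peskin_delta(node_pos[0] - site[0])
                     * peskin_delta(node_pos[1] - site[1])
                     * peskin_delta(node_pos[2] - site[2]))
                lattice_force[tuple(site % shape)] += w * node_force
```

The same kernel, applied in reverse, interpolates the fluid velocity at the membrane nodes, which is what couples the mesh dynamics to the LB fluid.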
Our goal is to introduce a comprehensive numerical approach that allows modelling droplets with complex interfacial properties in a consistent way. Here, we model the interface as a 3D triangular mesh and employ a constitutive law based on the theory of Barthes-Biesel and Rallison [36] to explore the case of coated droplets. This approach allows us to describe in a continuous way the transition from pure-droplet to capsule-like models while minimising the number of involved parameters. We provide a validation of our IBLB numerical simulations against the theoretical prediction in the case of a simple shear flow experiment. Then, we perform wetting dynamics simulations, which show a good agreement with experimental observations in the case of a pure droplet, and we explore the range of accessible contact angles in terms of the involved parameters and the intensity of the interaction with the wall. With this approach we aim at providing a qualitative approximation of the mechanical behaviour of droplets with a complete and wide range of interfacial properties. Nevertheless, the model can be further refined to include additional interface properties, enhancing its accuracy and applicability, such as an extended model to mimic the oxidized-layer thickness when dealing with liquid metal droplets.
The paper is organised as follows: in Sec. II we describe the interface model introduced in the IBLB framework. Then, in Sec. III we summarise the main features of the IBLB model employed. The benchmark of a droplet in a simple
Figure 1: Sketch of the wetting dynamics of a generic particle with initial radius \(R\), initially placed in contact with a flat wall. The interface is resolved with a 3D triangular mesh. On each triangular face \(j\), force contributions \(\boldsymbol{\varphi}\) are computed and distributed to the vertices \(i\in j\), in order to account for (_i_) the interface elasticity/rigidity, (_ii_) the volume conservation, and (_iii_) the wall-particle interaction. We also report the corresponding parameters involved.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \hline \multicolumn{6}{|c|}{Pre-stressed particles} \\ \hline \(\alpha=(\alpha_{1},\alpha_{2},\alpha_{3})\) & Model & \(a\) & \(b\) & \(c\) & \(d\) \\ \hline \(\alpha=\bar{\alpha}(1,0,0)\) & Pure droplet & 0.78 & 0.42 & 0.0 & 1.0 \\ \(\alpha=\bar{\alpha}(1,0,1)\) & Softly coated droplet & 0.87 & 0.5 & 0.0 & 1.0 \\ \(\alpha=\bar{\alpha}(1,1,1)\) & Rigidly coated droplet & 0.91 & 0.55 & 0.0 & 1.0 \\ \hline \end{tabular}
\begin{tabular}{|l|l|l|l|l|l|} \hline \multicolumn{6}{|c|}{Non-pre-stressed particles} \\ \hline \(\alpha=(\alpha_{1},\alpha_{2},\alpha_{3})\) & Model & \(a\) & \(b\) & \(c\) & \(d\) \\ \hline \(\alpha=\bar{\alpha}(0,0,1)\) & Pure elastic capsule & 0.71 & 2.0 & 0.2 & 0.75 \\ \(\alpha=\bar{\alpha}(0,1,1)\) & Non-pre-stressed capsule & 0.7 & 1.6 & 0.2 & 0.75 \\ \hline \end{tabular}
\end{table}
Table 1: List of system models that can be explored by tuning the parameters \(\alpha_{1}\), \(\alpha_{2}\), and \(\alpha_{3}\) in the interface model for a generic particle reported in Eq. (2). The left and right tables refer to the pre-stressed and non-pre-stressed particle classes, respectively. For each model we display the corresponding fitting parameters \(a\), \(b\), \(c\) and \(d\) appearing in Eq. (25).
shear flow is shown in Sec. IV. In the context of the wetting dynamics, Sec. V.1 reports a model validation, while all facets of the wetting dynamics are analysed and discussed in Sec. V.2. The results are summarised in Sec. VI.
## II Interface model
In this section, we describe the theoretical model employed in this work to simulate a generic soft particle. Following Barthes-Biesel & Rallison [36], we consider a two-dimensional, isotropic and homogeneous elastic interface with no bending resistance. Its mechanical response is characterised by an interface strain energy \(w_{S}=w_{S}(I_{1},I_{2},\alpha_{1},\alpha_{2},\alpha_{3})\) which is written in terms of the principal strain invariants \(I_{1,2}\) and three parameters \(\alpha_{1,2,3}\) as [36]
\[w_{S}(I_{1},I_{2},\alpha_{1},\alpha_{2},\alpha_{3})=w_{S,0}+\frac{1}{2}(\alpha _{1}-\alpha_{3})\log(I_{2}+1)+\frac{1}{8}(\alpha_{1}+\alpha_{2})\log^{2}(I_{2} +1)+\alpha_{3}\left[\frac{1}{2}(I_{1}+2)-1\right], \tag{2}\]
where \(w_{S,0}\) is a reference value. \(I_{1}\) and \(I_{2}\) quantify the strain and dilation state of the membrane, respectively. The parameters \(\alpha_{1,2,3}\), instead, characterize the material properties: the pre-stress \(\alpha_{1}\) is an isotropic tension in the absence of an applied load; \(\alpha_{2}\) is the resistance against area dilatation and \(\alpha_{3}\) the resistance against shear deformation (i.e., the strain modulus). For the sake of simplicity, hereafter we will refer to these three parameters as \(\alpha=(\alpha_{1},\alpha_{2},\alpha_{3})\). Concerning their choice, we distinguish the two main classes of pre-stressed (\(\alpha_{1}>0\)) and non-pre-stressed (\(\alpha_{1}=0\)) particles. For the latter class, an appropriate combination of \(\alpha_{2}\) and \(\alpha_{3}\) leads to the well-known Skalak [37] and Neo-Hookean [38] models. By conveniently tuning these parameters, one can switch from a pure droplet (\(\alpha=(\sigma,0,0)\), where \(\sigma\) is the surface tension) to a pure elastic capsule (\(\alpha=(0,0,\alpha_{3}>0)\)) and describe intermediate and more complex situations. In particular, in this work we also consider the classes of particles with \(\alpha=(\alpha_{1}>0,0,\alpha_{3}>0)\) and \(\alpha=(\alpha_{1}>0,\alpha_{2}>0,\alpha_{3}>0)\). We call the first type of particle a "softly coated droplet" because it has the characteristic surface tension term of a droplet, but also a strain modulus because of \(\alpha_{3}>0\). Because of the presence of a dilatational term \(\alpha_{2}>0\), we call the second type a "rigidly coated droplet". Here we argue that the latter case could be used to describe the interfacial properties of particles like liquid metal droplets with an oxidized surface. In Table 1 we summarise the types of particles investigated in this work.
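As a reference for the reader, a minimal sketch of the strain energy of Eq. (2) and of the on/off parameter triads of Table 1 follows; the function and variable names are ours, not part of the original implementation.

```python
import numpy as np

def strain_energy(I1, I2, alpha, w_S0=0.0):
    """Interface strain energy density w_S of Eq. (2).

    I1, I2 : principal strain invariants
    alpha  : (alpha1, alpha2, alpha3) = pre-stress, dilatational, shear moduli
    """
    a1, a2, a3 = alpha
    log_term = np.log(I2 + 1.0)
    return (w_S0
            + 0.5 * (a1 - a3) * log_term
            + 0.125 * (a1 + a2) * log_term**2
            + a3 * (0.5 * (I1 + 2.0) - 1.0))

# the particle classes of Table 1 as on/off triads scaled by a common value
abar = 1e-4  # lbu, the value used in Sec. V
models = {
    "pure droplet":             abar * np.array([1, 0, 0]),
    "softly coated droplet":    abar * np.array([1, 0, 1]),
    "rigidly coated droplet":   abar * np.array([1, 1, 1]),
    "pure elastic capsule":     abar * np.array([0, 0, 1]),
    "non-pre-stressed capsule": abar * np.array([0, 1, 1]),
}
print({name: strain_energy(0.01, 0.02, a) for name, a in models.items()})
```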
In order to check that Eq. (2) leads to the correct particle dynamics, we consider the case of a shear flow experiment, where a particle with initial radius \(R\) and dynamic viscosity \(\mu\) is placed between two distant moving walls which generate a shear rate \(\dot{\gamma}\) (see top panel of Fig. 2). In this setup, the time-dependent motion of the particle shape can be decomposed into two contributions: a solid body rotation and a stretching. Notice that the interface may rotate via a tank-treading motion even after the particle has reached its steady state. This means that at this stage the interface deformation is constant in time at an Eulerian point \(\mathbf{x}\), but it is not constant when following a material point \(\mathbf{X}\) on the interface. Thus, following the concepts and notation of Ref. [36], the time-evolution of the dimensionless position \(\mathbf{x}\) (i.e., the position divided by the initial radius \(R\)), which is representative of the deformation field, reads
\[\mathbf{x}=\mathbf{X}+\beta\left[\mathbf{K}\cdot\mathbf{X}+\mathbf{XX}\cdot\left(\mathbf{J}-\mathbf{K} \right)\cdot\mathbf{X}\right], \tag{3}\]
where \(\beta\ll 1\) is the expansion coefficient around the initial spherical position (we truncate the equation at the leading order in \(\beta\)), while \(\mathbf{J}\) and \(\mathbf{K}\) are two symmetric and traceless second-rank tensors which depend only on time. It follows that the instantaneous external shape of the particle \(r\) can be computed in terms of the norm of Eq. (3), together with its normal \(\mathbf{n}\)[36],
\[r\equiv|\mathbf{x}|=1+\beta\mathbf{X}\cdot\mathbf{J}\cdot\mathbf{X}=1+\beta\frac {\mathbf{x}\cdot\mathbf{J}\cdot\mathbf{x}}{r^{2}}, \tag{4a}\] \[\mathbf{n}=\frac{\mathbf{x}}{r}+2\beta\left[\frac{\mathbf{x}\mathbf{x}\cdot\mathbf{J} \cdot\mathbf{x}}{r^{3}}-\frac{\mathbf{J}\cdot\mathbf{x}}{r}\right]. \tag{4b}\]
Since in Eqs. (4) only tensor \(\mathbf{J}\) appears, it means that \(\mathbf{J}\) describes the overall deformation (i.e., the stretching contribution), while \(\mathbf{K}\) describes the motion on the interface (i.e., the solid body rotation) [36]. We remark that \(\mathbf{J}\) is traceless because of the volume conservation constraint, whereas this property for \(\mathbf{K}\) can be checked _a posteriori_. In the limit of small deformations, the evolution equations for \(\mathbf{J}\) and \(\mathbf{K}\) are given by [36]
\[\begin{cases}\frac{\mathfrak{D}}{\mathfrak{D}t}\mathbf{K}=\frac{5\mathbf{E}}{2\lambda+3 }+\frac{\mathbf{L}}{2\lambda+3}+\frac{\mathbf{M}\left(6\lambda+4\right)}{\left(2 \lambda+3\right)\left(19\lambda+16\right)}\\ \frac{\mathfrak{D}}{\mathfrak{D}t}(\mathbf{J}-\mathbf{K})=\frac{2}{19\lambda+16}\mathbf{ M},\end{cases} \tag{5}\]
where
\[\frac{\mathfrak{D}\mathbf{A}}{\mathfrak{D}t}=\frac{d\mathbf{A}}{dt}-(\mathbf{\Omega}\cdot\mathbf{ A}-\mathbf{A}\cdot\mathbf{\Omega}) \tag{6}\]
is the Jaumann derivative [36] applied to a generic tensor \(\mathbf{A}\), which takes into account the rotation of the particle with the vorticity of the external fluid. \(\mathbf{E}\) and \(\mathbf{\Omega}\) are the symmetric and antisymmetric parts of the velocity gradient, respectively, \(\lambda\) is the viscosity ratio between the inside and outside fluids, and
\[\mathbf{L}=4(\alpha_{2}+\alpha_{3})\mathbf{J}-(6\alpha_{2}+10\alpha_{3}) \mathbf{K}, \tag{7a}\] \[\mathbf{M}=-4(\alpha_{1}+2\alpha_{2}+2\alpha_{3})\mathbf{J}+(12\alpha_{2} +16\alpha_{3})\mathbf{K}. \tag{7b}\]
By numerically integrating Eq. (5), it is possible to obtain information on the transient deformation dynamics since the tensor \(\mathbf{J}\) is directly related to the particle-deformation as [39]
\[D=\text{Ca}(J_{11}^{2}+J_{12}^{2})^{1/2}, \tag{8}\]
where Ca\(=\mu R\dot{\gamma}/\alpha\) is the capillary number. In the latter definition, one can consider \(\alpha=\alpha_{1}=\sigma\) for a pure droplet or \(\alpha=\alpha_{3}\) for a pure capsule. For the sake of simplicity, we fix the parameters \(\alpha_{1}\), \(\alpha_{2}\), and \(\alpha_{3}\) to be equal to the same value \(\bar{\alpha}\), and we will refer to this triad of values simply as \(\alpha=\bar{\alpha}(0/1,0/1,0/1)\), with the vector elements turned on (1) and off (0) according to the corresponding model (see Table 1).
## III Numerical implementation
The dynamics of the inner and outer fluid is simulated using a single-component lattice Boltzmann (LB) method in terms of the fluid particle populations \(f_{i}(\hat{\mathbf{x}},t)\). The latter represents the probability distribution function of finding a fluid particle in a discrete lattice (Eulerian) node \(\hat{\mathbf{x}}\) at a discrete time \(t\). The corresponding macroscopic behaviour is recovered in the long-wavelength limit, which allows the link with the Navier-Stokes equations. Indeed, the solutions of the Navier-Stokes equation for the total density and momentum are easily accessible from the populations as \(\rho(\hat{\mathbf{x}},t)=\sum_{i}f_{i}(\hat{\mathbf{x}},t)\) and \(\rho(\hat{\mathbf{x}},t)\mathbf{u}(\hat{\mathbf{x}},t)=\sum_{i}\mathbf{c}_{i}f_{i}( \hat{\mathbf{x}},t)\), respectively, with \(\mathbf{c}_{i}\) representing a set of 19 discrete velocities (\(i=0,\dots,18\)) living on a three-dimensional lattice (i.e., we employ a D3Q19 LB model). The dynamics of \(f_{i}\) is ruled by a continuous succession of propagation and collision steps, as highlighted by the discretized Boltzmann equation [40; 41]
\[f_{i}(\hat{\mathbf{x}}+\mathbf{c}_{i}\Delta t,t+\Delta t)-f_{i}(\hat{\mathbf{x}},t)=-\frac{\Delta t}{\tau}\left[f_{i}(\hat{\mathbf{x}},t)-f_{i}^{(eq)}(\hat{ \mathbf{x}},t)\right]+w_{i}\left(1-\frac{\Delta t}{2\tau}\right)\left(\frac{( \mathbf{c}_{i}-\mathbf{u})\cdot\mathbf{F}}{c_{s}^{2}}+\frac{(\mathbf{c}_{i}\cdot\mathbf{F} )(\mathbf{c}_{i}\cdot\mathbf{u})}{c_{s}^{4}}\right)\Delta t, \tag{9}\]
where \(\Delta t\) is the time step. The propagation of \(f_{i}\) on the lattice is described by the l.h.s. of Eq. (9) with the help of \(\mathbf{c}_{i}\), while the single-relaxation-time BGK approximation of the collision operator appears as the first term in the r.h.s. The latter has the aim of modelling the relaxation of \(f_{i}\) towards the equilibrium distribution \(f_{i}^{(eq)}(\hat{\mathbf{x}},t)\), represented as the local Maxwellian distribution
\[f_{i}^{(eq)}(\hat{\mathbf{x}},t)=w_{i}\rho\left[1+\frac{u_{k}c_{i,k}}{c_{s}^{2}}+\frac{u_{k}u_{j}(c_{i,k}c_{i,j}-c_{s}^{2}\delta_{kj})}{2c_{s}^{4}}\right]. \tag{10}\]
The relaxation process lasts for a relaxation time \(\tau\). In Eq. (10), \(f_{i}^{(eq)}\) is weighted by the lattice-dependent weights \(w_{i}\) and depends on the speed of sound \(c_{s}=\Delta\hat{\mathbf{x}}/(\sqrt{3}\Delta t)\), where \(\Delta\hat{\mathbf{x}}\) is the lattice spacing. The last term of Eq. (9) refers to the forcing implementation following the Guo scheme [42], where \(\mathbf{F}\) is the force acting on the fluid. Notice that, to guarantee second-order space-time accuracy, this forcing scheme modifies the fluid velocity as \(\rho(\hat{\mathbf{x}},t)\mathbf{u}(\hat{\mathbf{x}},t)=\sum_{i}\mathbf{c}_{i}f_{i}(\hat{\mathbf{x}},t)+\mathbf{F}\Delta t/2\). In our simulations, we keep both \(\Delta\hat{\mathbf{x}}\) and \(\Delta t\) fixed to unity. Furthermore, the fluid dynamic viscosity \(\mu\) in LB models is related to the relaxation time \(\tau\) as \(\mu=c_{s}^{2}\rho(\tau-1/2)\). Here, we keep the viscosity ratio \(\lambda\) fixed to unity, since the investigation of the role played by \(\lambda\) goes beyond the scope of this work.
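A minimal, self-contained sketch of the D3Q19 equilibrium of Eq. (10) is reported below, together with a check that its zeroth and first moments recover \(\rho\) and \(\rho\mathbf{u}\); the velocity set and weights are the standard D3Q19 ones, and the variable names are ours.

```python
import numpy as np

# standard D3Q19 velocity set: rest + 6 face + 12 edge directions
c = np.array([[0,0,0],
              [1,0,0],[-1,0,0],[0,1,0],[0,-1,0],[0,0,1],[0,0,-1],
              [1,1,0],[-1,-1,0],[1,-1,0],[-1,1,0],
              [1,0,1],[-1,0,-1],[1,0,-1],[-1,0,1],
              [0,1,1],[0,-1,-1],[0,1,-1],[0,-1,1]], dtype=float)
w = np.array([1/3] + [1/18]*6 + [1/36]*12)
cs2 = 1.0 / 3.0   # squared speed of sound for unit lattice spacing and time step

def f_eq(rho, u):
    """Second-order equilibrium populations of Eq. (10) at one lattice node."""
    cu = c @ u                      # c_i . u for every direction i
    u2 = u @ u
    return w * rho * (1.0 + cu / cs2 + cu**2 / (2 * cs2**2) - u2 / (2 * cs2))

# sanity check: moments of f_eq give back density and momentum
rho, u = 1.0, np.array([0.01, 0.0, -0.02])
feq = f_eq(rho, u)
assert np.isclose(feq.sum(), rho)
assert np.allclose(feq @ c, rho * u)
```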
Then, to simulate the interface of a coated droplet or soft particle immersed in the surrounding LB fluid, we model the spherical particle interface using a 3D triangular mesh generated from a recursive refinement of an icosahedron. Thus, the mesh resolution is defined in terms of the total number of triangular faces \(N_{f}\) (see Fig. 6 for a pictorial view of particles with different resolutions). To couple the soft particle dynamics with that of the surrounding fluid, we use the immersed boundary (IB) method, i.e., a fluid-mesh interaction method first developed by Peskin [43] and based on the distinction between interface (Lagrangian) nodes \(\mathbf{q}(t)\) and fluid (Eulerian) nodes \(\hat{\mathbf{x}}\). The resulting coupling consists of two operations, i.e., interpolation and spreading. The interpolation operation consists of the computation of the \(i\)-th interface-node velocity \(\dot{\mathbf{q}}_{i}(t)\) from the fluid velocity \(\mathbf{u}(\hat{\mathbf{x}},t)\) as [41]
Footnote 2: Note that Eq. (11) causes the velocity of the surface to be equal to the fluid velocity, ensuring thus the no-slip boundary condition at the interface [41; 44].
\[\dot{\mathbf{q}}_{i}(t)=\sum_{\hat{\mathbf{x}}}\mathbf{u}(\hat{\mathbf{x}},t) \delta_{D}(\hat{\mathbf{x}}-\mathbf{q}_{i}(t))\Delta\hat{\mathbf{x}}^{3}. \tag{11}\]
This operation allows us to update the node position \(\mathbf{q}_{i}(t)\) as:
\[\mathbf{q}_{i}(t+\Delta t)=\mathbf{q}_{i}(t)+\dot{\mathbf{q}}_{i}(t)\Delta t. \tag{12}\]
Then, the spreading operation interpolates the interface nodal forces onto the fluid nodes, which makes the fluid aware of the presence of the interface: at this step, the total force (volume-)density that the particle exerts on the fluid at the Eulerian node \(\hat{\mathbf{x}}\) is given by
\[\mathbf{F}(\hat{\mathbf{x}},t)=\sum_{i}\boldsymbol{\varphi}_{i}(t)\delta_{D}( \hat{\mathbf{x}}-\mathbf{q}_{i}(t)), \tag{13}\]
where \(\boldsymbol{\varphi}_{i}\) is the total force on the Lagrangian node \(i\) and the sum runs over all Lagrangian nodes. Both operations involve the so-called discrete delta function \(\delta_{D}\), which is used to approximate the Dirac delta function on our lattice and is defined as [41; 43; 44]
\[\delta_{D}(\hat{\mathbf{x}})=\frac{1}{\Delta\hat{x}^{3}}\phi_{4}(\hat{x})\phi_{4}(\hat{y})\phi_{4}(\hat{z}), \tag{14}\]
where \(\phi_{4}(\mathrm{r})\) is the "interpolation stencil" involving four Eulerian nodes along each coordinate axis [45] and defined as follows:
\[\phi_{4}(\hat{x})=\begin{cases}\frac{1}{8}\left(3-2|\hat{x}|+\sqrt{1+4|\hat{x}|-4\hat{x}^{2}}\right)&0\leq|\hat{x}|\leq\Delta\hat{x}\\ \frac{1}{8}\left(5-2|\hat{x}|-\sqrt{-7+12|\hat{x}|-4\hat{x}^{2}}\right)&\Delta\hat{x}\leq|\hat{x}|\leq 2\Delta\hat{x}\\ 0&2\Delta\hat{x}\leq|\hat{x}|\end{cases} \tag{15}\]
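A direct transcription of Eqs. (14)-(15) in code may help clarify how the discrete delta function is built; the sketch below (with unit lattice spacing and names of our choosing) also checks the partition-of-unity property of the 4-point kernel.

```python
import numpy as np

def phi4(x, dx=1.0):
    """Peskin's 4-point interpolation stencil of Eq. (15), lattice spacing dx."""
    r = np.abs(np.atleast_1d(x).astype(float)) / dx
    out = np.zeros_like(r)
    near = r <= 1.0
    far = (r > 1.0) & (r <= 2.0)
    out[near] = (3 - 2 * r[near] + np.sqrt(1 + 4 * r[near] - 4 * r[near] ** 2)) / 8
    out[far] = (5 - 2 * r[far] - np.sqrt(-7 + 12 * r[far] - 4 * r[far] ** 2)) / 8
    return out

def delta_D(d, dx=1.0):
    """Discrete delta function of Eq. (14) for a 3-component displacement d."""
    return phi4(d[0], dx) * phi4(d[1], dx) * phi4(d[2], dx) / dx ** 3

# the kernel weights over the four nearest Eulerian nodes sum to one
x0 = 0.3
nodes = np.arange(-1, 3)   # the four nodes surrounding x0
assert np.isclose(phi4(nodes - x0).sum(), 1.0)
```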
The resulting IBLB method has been widely used to simulate the dynamics of capsules [45; 46; 47; 48; 49] and red blood cells [50; 51; 52; 53; 54]. However, only a few works have employed this method for simulating droplet dynamics [55; 56; 57]. A detailed step-by-step description of the IBLB algorithm implementation can be found in Ref. [41].
In our implementation, the total nodal force \(\boldsymbol{\varphi}_{i}\), appearing in Eq. (13) and acting on the \(i\)-th node at position \(\mathbf{q}_{i}\) at time \(t\), is given by the sum of several contributions, i.e.,
\[\boldsymbol{\varphi}_{i}=\boldsymbol{\varphi}_{i}^{S}+\boldsymbol{\varphi}_{i} ^{V}+\boldsymbol{\varphi}_{i}^{\mathrm{w}}. \tag{16}\]
Each contribution plays a distinct role. First of all, \(\boldsymbol{\varphi}_{i}^{S}\) incorporates the information on the elastic properties of the interface. Thus, we compute this nodal force term as
\[\boldsymbol{\varphi}_{i}^{S}=-\frac{\partial}{\partial\{\mathbf{q}_{i}\}}w_{S }(\{\mathbf{q}_{i}\}), \tag{17}\]
where \(w_{S}\) is the generalised strain energy defined in Eq. (2). Eq. (17) is calculated using a first order finite element method as described in Ref. [45]. Then, because we are dealing with incompressible fluids, we need to consider a volume conservation constraint. With this aim, we follow Ref. [45] and we write the nodal volume force contribution \(\boldsymbol{\varphi}_{i}^{V}\) as [58]
\[\boldsymbol{\varphi}_{i}^{V}=-\frac{\partial}{\partial\{\mathbf{q}_{i}\}}w_{ V}(\{\mathbf{q}_{i}\}), \tag{18}\]
where \(w_{V}=k_{V}(V-V_{0})^{2}/2V_{0}\) is the volume energy. In this definition of the volume energy, \(k_{V}\) refers to the volume-force coefficient and it is kept fixed to 1. \(V=\sum_{j}V_{j}\) is the instantaneous total particle volume, with the index
\(j\) running over the number of faces \(N_{f}\), while \(V_{0}\) is the initial total particle volume. Further details on how to compute the nodal force contributions in Eqs. (17) and (18) can be found in Ref. [58]. The last contribution in Eq. (16) corresponds to the wall-particle interaction, the key element for wetting dynamics simulations. The IBLB approach used in this work involves only one single fluid component, and it is not possible to control the wall-fluid surface tensions \(\sigma_{sl}\) and \(\sigma_{sg}\) by introducing two different interactions as, for example, Huang and coworkers did in the case of multi-component pseudopotential LB models [24]. However, many implementations of fluid-wall interactions in the case of single-component LB models [59; 60; 61] use a pseudopotential-like fluid-wall interaction which does not control \(\sigma_{sl}\) and \(\sigma_{sg}\) separately, but only their overall effect. Despite this limitation, these models work well in describing the wetting dynamics of droplets. In this work, we follow the same approach and we introduce a Lennard-Jones interaction on behalf of the wall-particle interaction:
Footnote 3: Note that \(V\) is functionally dependent on the node positions \(\{\mathbf{q}_{i}\}\).
\[\boldsymbol{\varphi}_{i}^{\text{\tiny{W}}}=48\epsilon\left[\left(\frac{\xi} {d_{i}}\right)^{12}-\frac{1}{2}\left(\frac{\xi}{d_{i}}\right)^{6}\right]\frac {\mathbf{d}_{i}}{d_{i}^{2}}, \tag{19}\]
where \(\mathbf{d}_{i}\) is the shortest displacement vector between the centroid of the triangle to which node \(i\) belongs and the wall surface, and \(d_{i}=|\mathbf{d}_{i}|\). This means that the force is computed once for each triangle and then distributed to its vertices. The choice of a Lennard-Jones interaction potential results from the necessity of modelling both adhesive and repulsive forces between the droplet or particle and the surface, and follows a large body of work based on molecular dynamics simulations which successfully studied the behaviour of nanodroplets or ridges on chemically patterned substrates [62; 63; 64; 65; 66]. Notice that in this work we set \(\xi=0.5\Delta\hat{\mathbf{x}}\) to have an interface-wall interaction that decays to zero after one lattice spacing, thus respecting the microscopic range as much as possible, and we tune the interaction by changing only \(\epsilon\). The nodal force contributions, along with the corresponding parameters, are summarised in Fig. 1.
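For concreteness, a sketch of the wall force of Eq. (19) for a flat wall follows; the equal splitting of the triangle force among its three vertices is our assumption, since the text only states that the force is distributed to the vertices.

```python
import numpy as np

def wall_force_on_triangle(centroid, wall_z=0.0, eps=1e-4, xi=0.5):
    """Lennard-Jones wall-particle force of Eq. (19) for one triangular face.

    For a flat wall at z = wall_z the shortest displacement d_i is vertical.
    Returns the per-vertex contribution (force split equally over 3 vertices,
    an assumption of this sketch).
    """
    d_vec = np.array([0.0, 0.0, centroid[2] - wall_z])
    d = np.linalg.norm(d_vec)
    s6 = (xi / d) ** 6
    force = 48.0 * eps * (s6 * s6 - 0.5 * s6) * d_vec / d ** 2
    return force / 3.0

# below the LJ minimum at d = 2**(1/6) * xi the force is repulsive,
# above it the force is attractive (it points towards the wall):
print(wall_force_on_triangle(np.array([0.0, 0.0, 0.4])))   # repulsive
print(wall_force_on_triangle(np.array([0.0, 0.0, 0.8])))   # attractive
```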
In summary, the IBLB algorithm implemented in this work proceeds through the following steps [41] (a schematic code skeleton is given after the list):
1. Compute the nodal force \(\boldsymbol{\varphi}_{i}\) on each node \(i\) (Eq. (16));
2. Spread the nodal force to obtain the force acting on the fluid \(\mathbf{F}(\hat{\mathbf{x}},t)\) via Eq. (13);
3. Perform the LB integration step: compute equilibrium distributions (Eq. (10)), then apply the collision and perform the propagation. At this stage, \(\mathbf{F}(\hat{\mathbf{x}},t)\) enters in r.h.s. of Eq. (9);
4. Compute the fluid velocity \(\boldsymbol{u}(\hat{\mathbf{x}},t)\) from LB populations;
5. Interpolate the fluid velocity to compute the Lagrangian node velocity (Eq. (11));
6. Update the position of each node \(\mathbf{q}(t)\) via Eq. (12);
7. Iterate from step 1.
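The skeleton below is a minimal, runnable rendition of the seven steps above for a tiny, fully periodic D3Q19 box and a single Lagrangian node; the force models are left as zero-valued stubs and nearest-node spreading/interpolation replaces the \(\delta_{D}\) kernel, so it illustrates the structure of the loop rather than the physics (bounce-back walls and the Guo source term of Eq. (9) are omitted).

```python
import numpy as np

# D3Q19 lattice, unit lattice spacing and time step
c = np.array([[0,0,0],[1,0,0],[-1,0,0],[0,1,0],[0,-1,0],[0,0,1],[0,0,-1],
              [1,1,0],[-1,-1,0],[1,-1,0],[-1,1,0],[1,0,1],[-1,0,-1],
              [1,0,-1],[-1,0,1],[0,1,1],[0,-1,-1],[0,1,-1],[0,-1,1]])
w = np.array([1/3] + [1/18]*6 + [1/36]*12)
cs2, tau, N = 1/3, 1.0, 8

f = np.ones((19, N, N, N)) * w[:, None, None, None]   # fluid at rest, rho = 1
q = np.array([3.2, 4.1, 3.7])                         # one Lagrangian node

def nodal_force(q):                                   # step 1 (stub: no forces)
    return np.zeros(3)

for _ in range(10):
    phi = nodal_force(q)                              # step 1
    F = np.zeros((3, N, N, N))
    i, j, k = q.astype(int) % N
    F[:, i, j, k] += phi                              # step 2: spreading (nearest-node stub)
    # step 3: LB integration (equilibrium, BGK collision, propagation)
    rho = f.sum(axis=0)
    u = (np.tensordot(c, f, axes=(0, 0)) + 0.5 * F) / rho   # Guo half-force shift
    cu = np.tensordot(c, u, axes=(1, 0))
    u2 = (u * u).sum(axis=0)
    feq = w[:, None, None, None] * rho * (1 + cu/cs2 + cu**2/(2*cs2**2) - u2/(2*cs2))
    f = f - (f - feq) / tau                           # Guo source omitted: F = 0 here
    for a in range(19):
        f[a] = np.roll(f[a], shift=tuple(c[a]), axis=(0, 1, 2))
    u = np.tensordot(c, f, axes=(0, 0)) / f.sum(axis=0)     # step 4
    qdot = u[:, i, j, k]                              # step 5: interpolation (nearest-node stub)
    q = (q + qdot) % N                                # step 6 (dt = 1); then iterate
```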
All simulations have been performed in a periodic domain in the x- and y- directions, while two walls are placed along the (vertical) z-direction. A half-node bounce-back rule implements second-order no-slip boundary conditions at the walls [41]. Dimensional quantities are shown in lattice Boltzmann units (lbu).
Note that, although this method is not able to capture particle breakup and coalescence, it provides an easy way to model different systems by simply tuning the \(\alpha_{i}\) parameters, as detailed in Section II. The introduction of the wall-particle interaction (19) induces an accumulation of interface nodes on the wall-particle contact area. Such an aggregation is more prominent for the high values of \(\epsilon\) that are required to observe small contact angles, and it is responsible for a numerical instability in that regime. Re-meshing techniques may help mitigate this problem, but since we are interested in large contact angles, this accumulation does not affect the results presented in this paper.
## IV Benchmark: Shear Flow Dynamics
To showcase the versatility of the interface model proposed in Sec. II, we perform a double analysis by measuring the deformation of coated droplets and soft particles undergoing a shear flow. Indeed, on the one hand, we benchmark our model against what is known in the literature for the case of a pure droplet, while, on the other hand, we explore the different scenarios associated with each particle case listed in Table 1. In this setup, we run simulations for particles with an initial radius \(R=19\) lbu, placed in a channel with a wall-to-wall distance H=128 lbu. The system has the same size along the other two directions, x and y. In order to vary the capillary number Ca, we systematically
tune the values of \(\bar{\alpha}\), keeping the shear rate \(\dot{\gamma}\) fixed by the constraint of a low Reynolds number (Re=\(10^{-2}\)). Without loss of generality, we set the fluid density \(\rho=1\) lbu and the relaxation time \(\tau=1\) lbu, resulting in a particle dynamic viscosity \(\mu=1/6\) lbu. In Fig. 2(a) and (d), we report simulation data for the time-evolution of the deformation index defined as
\[D(t)=\frac{r_{1}(t)-r_{3}(t)}{r_{1}(t)+r_{3}(t)}, \tag{20}\]
where \(r_{1}\) and \(r_{3}\) are the main particle semi-axes in the shear plane (see the top of Fig. 2 for a sketch). The simulation time is normalised with \(\dot{\gamma}\). Results show a very good agreement between simulations and the time-evolution of the deformation \(D\) defined in Eq. (8), obtained from the analytical solutions of Eqs. (5) (dashed lines). In addition, the steady-state value of the deformation \(D\) can be analytically estimated as a function of the triad of \(\alpha\) as
\[D=\left[\frac{5\alpha_{1}\left(3\alpha_{2}+4\alpha_{3}\right)}{4\left(3\alpha _{1}\alpha_{2}+5\alpha_{1}\alpha_{3}+2\alpha_{2}\alpha_{3}+2\alpha_{3}^{2} \right)}\right]\text{Ca}, \tag{21}\]
Figure 2: Simulated experiment of a single particle under shear flow for the pre-stressed (panels (a)-(c)) and non-pre-stressed (panels (d)-(f)) particle models. In all panels, different symbols/colours refer to different models. Top panels: a sketch of the shear experiment and the final shape of the particle for Ca\(=0.3\). Panels (a) and (d): time evolution of the deformation index \(D(t)\) (Eq. (20)) as a function of time for capillary number Ca\(=0.5\). Time is shown normalised with the shear rate \(\dot{\gamma}\), and dashed lines refer to the analytical solutions of Eqs. (5). Panels (b) and (e): the steady-state value of the deformation \(D\) as a function of Ca. Dashed lines show the theoretical predictions: Eq. (22) (salmon line) for the pure droplet case and Eq. (21) (lines of the other colours) for all the other models. Panels (c) and (f): the steady-state value of the inclination angle \(\Theta\) as a function of Ca. To validate the model in the case of a pure droplet, we report black crosses from Ref. [67], and we draw dotted lines for Eq. (23).
where Ca=\(\mu R\dot{\gamma}/\alpha_{1}\). Since Eq. (21) has been computed with \(\alpha_{2},\ \alpha_{3}\neq 0\), it does not hold for a pure droplet, for which \(\alpha=\bar{\alpha}(1,0,0)\). In the latter case we have [36]
\[D=\frac{19\lambda+16}{16\lambda+16}\text{Ca}, \tag{22}\]
with \(\lambda=1\) in the present work. Fig. 2(b) and (e) confirm the agreement between the simulation data and Eqs. (21) and (22) in the limit of small deformations (i.e., small Ca), while the data diverge from the predictions for larger values of \(D\). Note that in Fig. 2(b) the theoretical predictions for the cases \(\alpha=\bar{\alpha}(1,0,1)\) and \(\alpha=\bar{\alpha}(1,1,1)\) are so close as to be indistinguishable.
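The near-coincidence of the two predictions is easy to verify by evaluating the slopes \(D/\)Ca of Eqs. (21) and (22) directly; a short check (with the on/off triads of Table 1 and \(\lambda=1\)) follows.

```python
def slope_eq21(a1, a2, a3):
    """Steady-state slope D / Ca from Eq. (21)."""
    return 5 * a1 * (3 * a2 + 4 * a3) / (4 * (3*a1*a2 + 5*a1*a3 + 2*a2*a3 + 2*a3**2))

lam = 1.0
print("pure droplet,   Eq. (22):", (19 * lam + 16) / (16 * lam + 16))  # 1.09375
print("softly coated,  Eq. (21):", slope_eq21(1, 0, 1))                # 5/7  ~ 0.714
print("rigidly coated, Eq. (21):", slope_eq21(1, 1, 1))                # 35/48 ~ 0.729
```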
In addition, to complete the picture of soft particle dynamics under shear flow, we report in Fig. 2(c) and (f) the inclination angle \(\Theta\) (see top panel) as a function of the capillary number Ca. In the limit of small deformations and the case of a pure droplet with \(\lambda=1\), these results are again in agreement with what is expected from simulations [67] (black crosses) and the theory of Chaffey and Brenner [68] (dotted black line) which reads
\[\Theta=\frac{\pi}{4}-\frac{(19\lambda+16)(2\lambda+3)}{80(\lambda+1)}\text{Ca}. \tag{23}\]
For non-pre-stressed particle models, we observe a stronger dependency on Ca, probably due to the higher rigidity.
To summarize, we find a good agreement for the time-evolution of the particle deformation \(D(t)\) and its steady-state value \(D\) between the analytical solution of the model equations (5) and our numerical model, for both coated droplets and soft particles. This holds in the limit of small deformation, which is the basic assumption behind the theory [36]. Furthermore, in the case of a pure droplet, both \(D\) and the inclination angle \(\Theta\) follow the analytical predictions. It is worth noting that this benchmark also contributes to the validation of our generalised interface model against the interface response to an external flow.
## V Wetting dynamics
### Model validation
We now analyse the wetting dynamics of coated droplets and soft particles simulated using the interface model discussed in Sec. II. In this kind of experiment, we consider a single particle with initial radius \(R\) placed close to a flat wall, i.e., its initial position is such that the z-coordinate of its centre-of-mass \(\text{Z}_{\text{CM}}\) is at a distance \(R\) from the wall, letting it feel the action of an attractive wall-interface interaction with intensity \(\epsilon\) (see Eq. (19) and Fig. 1 for a pictorial view). In our implementation, since we do not have direct control on \(\sigma_{sl}\), \(\sigma_{sg}\) appearing in Eq. (1), we regard \(\epsilon\) as playing the role of an effective solid surface tension, \(\epsilon\propto\sigma_{sl}-\sigma_{sg}\).
Figure 3: Spreading experiment for a pure liquid droplet, \(\alpha=(\alpha_{1},0,0)\), on a flat surface. Panel (a): radius of the contact area \(r\) as a function of time \(t\), where \(r\) and \(t\) are reported normalised to the initial radius \(R\) and the characteristic time \(t^{*}=(\rho R^{3}/\sigma)^{1/2}\), respectively. Different symbols/colours refer to different values of the equilibrium contact angle \(\theta_{eq}\). In all cases, we observe a scaling law \(r/R=C(t/t^{*})^{\delta}\). The solid line indicates the scaling with \(\delta=3/10\), while the dotted line refers to scaling \(\delta=3/20\). Panels (b) and (c) show the value of the dimensionless exponent \(\delta\) and the dimensionless prefactor \(C\), respectively, as a function of \(\theta_{eq}\).
Before entering into the details of the wetting dynamics of pre-stressed and non-pre-stressed particles, we validate our implementation by quantitatively investigating the spreading dynamics of a pure droplet, \(\alpha=\bar{\alpha}(1,0,0)\), comparing the time evolution of the radius \(r\) of the contact area with the literature. Indeed, it has been observed that this observable scales in time as
\[r=Ct^{\delta}, \tag{24}\]
where both the prefactor \(C\) and the exponent \(\delta\) can vary. When capillary forces drive the droplet spreading and inertial effects are negligible, Eq. (24) coincides with Tanner's law [69], predicting an exponent \(\delta=1/10\). Contrariwise, when capillary and inertial forces are balanced, it has been observed that the value of the exponent can vary with several factors, such as viscosity [70], surface tension [70], droplet initial shape [71] and wettability [72; 73; 74; 70]. In particular, a value of \(\delta=1/2\) has been observed in the case of very small contact angles. In Fig. 3(a) we report the time evolution of \(r\), normalised to the initial radius \(R\), at varying equilibrium contact angles \(\theta_{eq}\). A scaling law following Eq. (24) is observed, with the exponent \(\delta\) slightly decreasing at increasing \(\theta_{eq}\), in agreement with Ref. [72] (see Fig. 3(b)). This implies that in our simulations of wetting dynamics the inertia is not negligible and plays a role in resisting the deformation. Note that, as later highlighted in Fig. 4(a), our model can capture only cases of large contact angles (\(88^{\circ}\leq\theta_{eq}\leq 180^{\circ}\)); indeed, \(\delta\) approaches but does not reach a value close to \(1/2\), which is characteristic of small contact angles [72; 70; 73; 74]. This is comparable, for example, to the case of a liquid metal droplet, which has been observed to never assume values of \(\theta_{eq}<100^{\circ}\), even in the case of oxidization, suggesting that our model can be used to study this kind of system. Concerning the prefactor \(C\) (Fig. 3(c)), it decreases as \(\theta_{eq}\) increases, once again in agreement with Ref. [72]. Note that the "jumps" in \(r\) that are visible in Fig. 3(a) for \(\theta_{eq}=157^{\circ}\) originate from the numerical error in measuring very small variations of the contact area.
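In practice, the exponent \(\delta\) and prefactor \(C\) of Eq. (24) can be extracted with a least-squares fit in log-log space; the sketch below does this on synthetic (made-up) contact-radius data.

```python
import numpy as np

# synthetic spreading data r/R = C (t/t*)^delta with a little noise
rng = np.random.default_rng(0)
t = np.logspace(-1, 1, 40)                        # t / t*
r = 0.8 * t**0.28 * (1 + 0.02 * rng.standard_normal(t.size))

# Eq. (24) is linear in log-log space: log r = delta * log t + log C
delta, logC = np.polyfit(np.log(t), np.log(r), 1)
print(f"delta ~ {delta:.3f}, C ~ {np.exp(logC):.3f}")   # recovers ~0.28 and ~0.8
```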
Figure 4: Experiment of wetting dynamics. Panels (a) and (b): Equilibrium contact angle \(\theta_{eq}\) as a function of the wall-particle interaction intensity \(\epsilon\), normalised to \(\bar{\alpha}\). Panels (c) and (d): Corresponding values of \(\cos\theta_{eq}\). Left panels ((a) and (c)) refer to pre-stressed particle models, while right panels ((b) and (d)) refer to non-pre-stressed particle models. In all panels, different symbols/colours refer to different models, while dashed lines indicate fitting curves with Eq. (25) (values of fitting parameters are listed in Table 1). Data refer to simulations with a number of triangular faces equal to \(N_{f}=16820\).
### Results
With the aim of simulating the wetting dynamics of droplets with complex interface properties, we explore both pre-stressed (\(\alpha_{1}>0\)) and non-pre-stressed (\(\alpha_{1}=0\)) particles. For each system, characterized by \(\alpha=(\alpha_{1},\alpha_{2},\alpha_{3})=\bar{\alpha}(0/1,0/1,0/1)\), we apply the same strategy used for the benchmark, which we summarize again for the sake of clarity. First, we fix the value of \(\bar{\alpha}\) to \(10^{-4}\) lbu. This choice fixes both the surface tension for pre-stressed particles and the strain modulus for non-pre-stressed particles. Then we measure \(\theta_{eq}\) as a function of the wall-particle interaction energy \(\epsilon\). In Appendix A we also discuss a resolution test for the wetting dynamics.
In Fig. 4(a) and (b) we report the measured \(\theta_{eq}\) as a function of the ratio \(\epsilon/\bar{\alpha}\) for pre-stressed and non-pre-stressed particle models, respectively. In Fig. 4(c) and (d) we report for convenience the corresponding values of \(\cos\theta_{eq}\). Beyond the largest value of \(\epsilon/\bar{\alpha}\) reported in each plot, numerical instabilities appear in the contact area region; these set the limit of applicability of our approach in terms of the contact angles that can be modeled. Concerning pre-stressed particles, the pure droplet (circles) appears to be marginally more stable with respect to the choice of \(\epsilon\) than the other particles (softly coated particles, \(\alpha=\bar{\alpha}(1,0,1)\), upward triangles; rigidly coated particles, \(\alpha=\bar{\alpha}(1,1,1)\), pentagons), but can only reach a slightly higher contact angle (\(\theta_{eq}=88^{\circ}\) vs. \(\theta_{eq}=79^{\circ}\)). Notice that the cases \(\alpha=\bar{\alpha}(1,0,1)\) and \(\alpha=\bar{\alpha}(1,1,1)\) are very similar, meaning that when the system is very rigid, the dilatational contribution given by \(\alpha_{2}\) is not relevant for the equilibrium contact angle \(\theta_{eq}\). This result is in contrast with what we observed in the shear flow. Non-pre-stressed particle models (Fig. 4(b) and (d)) are stable for a more limited range of values of \(\epsilon/\bar{\alpha}\). Beyond roughly \(\epsilon/\bar{\alpha}=1.5\), the contact angle does not drop significantly anymore. Similarly to the pre-stressed case, \(\alpha_{2}\) does not seem to have any influence on \(\theta_{eq}\). The behaviour of \(\cos\theta_{eq}\) as a function of \(\epsilon/\bar{\alpha}\) follows very well the empirical law
\[\cos\theta_{eq}\simeq a\tan^{-1}\left[b(\epsilon/\bar{\alpha}-c)\right]-d \tag{25}\]
where \(a\), \(b\), \(c\) and \(d\) are fitting parameters depending on the type of particle (see Table 1 and the dashed lines in Fig. 4(c) and (d)). Eq. (25) differs from Eq. (1) because, as mentioned above, the model we present in this work lacks direct control of the wall surface tensions and rather drives the mechanical interaction between the particle and the solid surface. Obviously, the fitting constants represent (unknown) functions of the parameters \(\alpha_{1}\), \(\alpha_{2}\), \(\alpha_{3}\) and \(\epsilon\). By increasing separately by a factor of ten each of the components of \(\alpha\), as reported in Fig. 5(a), we can understand that the leading-order behavior is dictated by \(\alpha_{1}\), while \(\alpha_{2}\) and \(\alpha_{3}\) provide relatively minor changes in \(\theta_{eq}\). In addition, from Fig. 5(b) one can see that \(\alpha_{1}\) must enter in Eq. (25) through the ratio with \(\epsilon\), because the data for the increased \(\alpha_{1}\) (downward triangles) collapse onto the original case (pentagons) once plotted as a function of \(\epsilon/\alpha_{1}\). The remaining parameters \(\alpha_{2}\) and \(\alpha_{3}\), instead, do not appear to provide a similar scaling, thus meaning that these two parameters may functionally enter in the other fitting parameters of Eq. (25).
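A fit of the form (25) is a standard nonlinear least-squares problem; the sketch below generates synthetic data with the pure-droplet parameters of Table 1 and recovers them with scipy's curve_fit (the data are made up for illustration).

```python
import numpy as np
from scipy.optimize import curve_fit

def cos_theta_eq(x, a, b, c, d):
    """Empirical law of Eq. (25); x stands for epsilon / abar."""
    return a * np.arctan(b * (x - c)) - d

# synthetic data built from the pure-droplet row of Table 1 plus noise
rng = np.random.default_rng(1)
x = np.linspace(0.2, 16.0, 25)
y = cos_theta_eq(x, 0.78, 0.42, 0.0, 1.0) + 0.01 * rng.standard_normal(x.size)

popt, _ = curve_fit(cos_theta_eq, x, y, p0=[1.0, 0.5, 0.0, 1.0])
print("fitted (a, b, c, d):", np.round(popt, 2))   # ~ (0.78, 0.42, 0.0, 1.0)
```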
## VI Conclusions
In this work we introduced a novel numerical framework to accurately characterize coated droplets and soft particles. This approach is based on the theory of Barthes-Biesel and Rallison [36] and enables us to capture the unique behavior
Figure 5: Equilibrium contact angle \(\theta_{eq}\) as a function of the wall-particle interaction intensity \(\epsilon\), normalised to \(\bar{\alpha}\) (panel (a)) and \(\max(\alpha_{i})\) with \(i=1,2,3\) (panel (b)). All data refer to the case with all components of \(\alpha=(\alpha_{1},\alpha_{2},\alpha_{3})\) turned on, but different symbols/colours refer to a different “extreme” cases, where one of the three parameters is increased by an order of magnitude. The number of triangular faces \(N_{f}\) is the same as for the data in Fig. 4.
that is intermediate between that of a pure droplet and that of a capsule. With this generalised constitutive law we are able to capture the special properties of a wide spectrum of coated droplets, for example, liquid metal droplets surrounded by an oxide layer. In the present approach, the interface strain energy is written in terms of three parameters that play the role of material properties, i.e., the pre-stress (\(\alpha_{1}\)), the resistance against area dilatation (\(\alpha_{2}\)), and the resistance against shear deformation (\(\alpha_{3}\)). With the choice of these three parameters, we explore different types of coated droplets, from pure liquid droplets to soft particles. We validate our methodology against theoretical predictions and recent experiments in both shear flow and wetting experiments, and we explore the limits of the model in terms of \(\alpha_{1,2,3}\). We plan to enrich this description by including new contributions in the presented model, for example, to mimic the thickness of the oxide layer in the case of liquid metal droplets. The latter could also be useful to mimic the dynamics of other complex droplets, such as liquid marbles.
## Author Declarations
The authors have no conflicts to disclose.
## Acknowledgments
This work has received financial support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 431791331 - SFB 1452 "Catalysis at liquid interfaces" and research unit FOR2688 "Instabilities, Bifurcations and Migration in Pulsatile Flows" (Project-ID 417989464). This work was supported by the Italian Ministry of University and Research (MUR) under the FARE programme, project "Smart-HEART". The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for funding this project by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS [75] at Jülich Supercomputing Centre (JSC).
## Data Availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Figure 6: Resolution test for the wetting dynamics experiment for a pure droplet with \(\alpha=\bar{\alpha}(1,0,0)\) (panels (a) and (b)) and for a mixed system with \(\alpha=\bar{\alpha}(1,1,1)\) (panels (c) and (d)). We compare the time evolution of the z-coordinate of the centre-of-mass \(\mathrm{Z_{CM}}\) for different resolutions, given in terms of the number of mesh triangular faces \(N_{f}\). Time \(t\) is shown in simulation units.
## Appendix A Resolution test for wetting dynamics
The results shown in Figs. 4 and 5 required a test to choose the best resolution in terms of accuracy and computational effort. In Fig. 6 we show the time evolution of the z-coordinate of the centre-of-mass Z\({}_{\rm CM}\), normalised by the initial radius \(R\), for a pure droplet, \(\alpha=\bar{\alpha}(1,0,0)\) (Fig. 6(a) and (b)), and a rigidly coated droplet, \(\alpha=\bar{\alpha}(1,1,1)\) (Fig. 6(c) and (d)). We report two values of \(\epsilon/\bar{\alpha}\), i.e., 0.05 (Fig. 6(a) and (c)) and 7 (Fig. 6(b) and (d)), resulting in a large and a small equilibrium contact angle for both systems. As long as the contact angle is very large, all resolutions are equivalent. However, moving towards \(\theta_{eq}\sim 80^{\circ}\), a large value of \(N_{f}\) is required for more precise contact angle measurements in the case of a pure droplet. The latter statement follows from the way we compute \(\theta_{eq}\), i.e., by fitting the droplet shape with a circumference cut by a chord (i.e., the wall). To perform the fitting procedure, we take a slice of the particle mesh involving a number of nodes which is roughly \(\frac{4}{\sqrt[3]{3}}\sqrt{\pi N_{f}}\). Furthermore, simulations with \(N_{f}=42320\) show the same dynamics as \(N_{f}=16820\) but require a higher computational cost; this led to the choice, for the data reported in Figs. 4 and 5, of running simulations with \(N_{f}=16820\).
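The contact-angle measurement described above can be sketched as a circle fit to the interface points of a cross-sectional slice, with the wall acting as the chord; below is a minimal version using an algebraic (Kåsa) least-squares fit, where \(\cos\theta_{eq}=-z_{c}/R_{c}\) follows from spherical-cap geometry (the data are synthetic and the implementation details are our own, not the paper's).

```python
import numpy as np

def contact_angle(x, z):
    """Fit a circle to slice points (x, z), wall at z = 0, and return the
    contact angle in degrees via cos(theta) = -z_c / R_c."""
    A = np.column_stack([2 * x, 2 * z, np.ones_like(x)])
    sol, *_ = np.linalg.lstsq(A, x**2 + z**2, rcond=None)
    xc, zc, const = sol
    Rc = np.sqrt(const + xc**2 + zc**2)
    return np.degrees(np.arccos(np.clip(-zc / Rc, -1.0, 1.0)))

# synthetic cap slice for theta = 120 degrees (circle centre above the wall)
Rc, theta = 1.0, np.radians(120)
zc = -Rc * np.cos(theta)                                # = +0.5
psi = np.linspace(-np.pi / 6, np.pi + np.pi / 6, 80)    # arc with z >= 0
print(contact_angle(Rc * np.cos(psi), zc + Rc * np.sin(psi)))   # ~ 120
```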
2303.02417 | Twists by Dirichlet characters and polynomial Euler products of L-functions | We prove that suitable properties of the twists by Dirichlet characters of an L-function of degree 2 imply that its Euler product is of polynomial type. | J. Kaczorowski, A. Perelli | 2023-03-04T13:39:31Z | http://arxiv.org/abs/2303.02417v1 |
# Twists by Dirichlet Characters and Polynomial Euler Products of \(L\)-Functions
J. KACZOROWSKI and A. PERELLI
**Abstract.** We prove that suitable properties of the twists by Dirichlet characters of an \(L\)-function of degree 2 imply that its Euler product is of polynomial type.
**Mathematics Subject Classification (2010):** 11M41
**Keywords:** Twists by Dirichlet characters; Euler products; Selberg class
## 1. Introduction
The properties of the twists by Dirichlet characters and the shape of the local Euler factors of \(L\)-functions are two seemingly unrelated subjects. The main goal of the present paper is to show that in fact these two themes are closely related. We focus our attention on \(L\)-functions from the Selberg class \(\mathcal{S}\) or the extended Selberg class \(\mathcal{S}^{\sharp}\) as they provide a very convenient framework for this study. We collect basic facts and notation related to \(\mathcal{S}\) and \(\mathcal{S}^{\sharp}\) in Section 2. Here we recall that the twist of a function
\[F(s)=\sum_{n=1}^{\infty}\frac{a(n)}{n^{s}} \tag{1.1}\]
from \(\mathcal{S}^{\sharp}\) by a Dirichlet character \(\chi\) (mod \(q\)) is defined as
\[F^{\chi}(s)=\sum_{n=1}^{\infty}\frac{a(n)\chi(n)}{n^{s}}.\]
This operation is fundamental in the theory of automorphic \(L\)-functions. If \(F\) belongs to the Selberg class, it is expected that \(F^{\chi}\) also belongs to the same class, at least when the conductors of \(F\) and of the primitive character \(\chi\) are coprime. This is true in many special cases, particularly for automorphic \(L\)-functions, but the general problem is wide open.
The second theme is the study of the admissible shape of local Euler factors of \(F\) in (1.1). Recall that the local factor of \(F\) at a prime \(p\) is defined as
\[F_{p}(s)=\sum_{k=0}^{\infty}\frac{a(p^{k})}{p^{ks}}. \tag{1.2}\]
The Euler product axiom in the definition of the Selberg class tells us that
\[F(s)=\prod_{p}F_{p}(s)\quad\text{and}\quad\log F_{p}(s)=\sum_{k=1}^{\infty} \frac{b(p^{k})}{p^{ks}}, \tag{1.3}\]
where
\[b(p^{k})\ll p^{\vartheta k} \tag{1.4}\]
for a certain \(\vartheta<1/2\). It is expected that every \(F\in\mathcal{S}\) has a polynomial Euler product, i.e. of type
\[F_{p}(s)=\prod_{j=1}^{\partial_{p}}\left(1-\frac{\alpha_{j,p}}{p^{s}}\right)^ {-1}\]
for all primes \(p\). Again, this is true in many special cases, in particular for automorphic \(L\)-functions, but in general the problem is wide open.
In this paper we show, in the case of functions of degree \(2\), that suitable properties of the twists by Dirichlet characters imply that the local factors are of polynomial type. In order to state our results in a synthetic way we adopt the following terminology.
Given a prime \(p\), we say that \(F\in{\mathcal{S}}^{\sharp}\)_splits at \(p\)_ if for \(\sigma>1\)
\[F(s)=F_{p}(s)\sum_{p\nmid n}\frac{a(n)}{n^{s}}\]
and \(F_{p}(s)\) satisfies (1.3) and (1.4). Note that if \(F\) splits at \(p\) then \(a(1)=1\) and \(a(n)=a(p^{\ell})a(k)\) whenever \(n=p^{\ell}k\) with \(p\nmid k\). In particular, an \(L\)-function from the Selberg class splits at all primes \(p\). Let \(F\in{\mathcal{S}}^{\sharp}\), \(q_{F}\) be its conductor, assume that \(q_{F}\in{\mathbb{N}}\) and let \(p\) be a prime not dividing \(q_{F}\). We denote by \(m_{q_{F}}(p)\) the order of \(p\) (mod \(q_{F}\)), i.e. the least positive integer \(m\) such that \(p^{m}\equiv 1\) (mod \(q_{F}\)). We say that \(F\in{\mathcal{S}}^{\sharp}\) is _weakly twist-regular_ at \(p\) if for every primitive Dirichlet character \(\chi\) (mod \(p^{f}\)) with \(1\leq f\leq m_{q_{F}}(p)\), the twist \(F^{\chi}\) belongs to \({\mathcal{S}}^{\sharp}\) and has the same degree as \(F\). Moreover, we say that \(F\in{\mathcal{S}}^{\sharp}\) is _twist-regular_ at \(p\) if it is weakly twist-regular at \(p\) and for \(f=1\) the conductor \(q_{\chi}\) of \(F^{\chi}\) satisfies \(q_{\chi}=q_{F}p^{d_{F}}\), \(d_{F}\) being the degree of \(F\).
**Theorem 1.**Let \(F\in{\mathcal{S}}^{\sharp}\) be of degree \(2\) and conductor \(q_{F}\in{\mathbb{N}}\). Then there exists a constant \(B_{F}>0\) such that if \(F\) splits and is weakly twist-regular at a prime \(p>B_{F}\), then its local factor \(F_{p}(s)\) is a rational function of \(p^{-s}\) and its numerator has degree \(\leq m_{q_{F}}(p)-1\). In particular, if \(p\equiv 1\) (mod \(q_{F}\)) then \(F_{p}(s)^{-1}\) is a polynomial in \(p^{-s}\). Moreover, if \(F\in{\mathcal{S}}\) then the condition "\(p>B_{F}\)" can be replaced by "\(p\nmid q_{F}\)".
**Theorem 2.**Let \(F\in{\mathcal{S}}^{\sharp}\) be of degree \(2\) and conductor \(q_{F}\in{\mathbb{N}}\). Then there exists a constant \(B_{F}>0\) such that if \(F\) splits and is weakly twist-regular at two distinct primes \(p,q>B_{F}\), \(p\equiv q\) (mod \(q_{F}\)), then \(F_{p}(s)^{-1}\) and \(F_{q}(s)^{-1}\) are polynomials in \(p^{-s}\) and \(q^{-s}\), respectively. Moreover, if \(F\in{\mathcal{S}}\) then the condition "\(p,q>B_{F}\)" can be replaced by "\(p,q\nmid q_{F}\)".
An interesting feature of \(L\)-functions, depending on the so-called multiplicity one property, is that the local factors at the primes not dividing the conductor determine all the others, i.e. those at the ramified primes. The next theorem makes this phenomenon more precise for prime conductors \(q_{F}\). Indeed, we show that the local factor at \(q_{F}\) is of polynomial type as well. We expect that a similar result holds for general \(q_{F}\), but it seems that the proof of this fact would require a non-trivial extension of the methods in the present paper. We hope to address this problem in a future paper.
**Theorem 3.**Let \(F\in{\mathcal{S}}\) be of degree \(2\) and its conductor \(q_{F}\) be a prime. If \(F\) is weakly twist-regular at all primes \(\neq q_{F}\), then \(F\) has a polynomial Euler product.
Our final result gives information on the degree of the polynomials \(F_{p}(s)^{-1}\).
**Theorem 4.**Let \(F\in{\mathcal{S}}\) be of degree \(2\) and conductor \(q_{F}\in{\mathbb{N}}\). If \(F\) is twist-regular at all primes \(p\nmid q_{F}\), then there exists a constant \(B_{F}>0\) such that for all primes \(p\geq B_{F}\) we have
\[F_{p}(s)=\left(1-\frac{\alpha_{p}}{p^{s}}\right)^{-1}\left(1-\frac{\beta_{p}} {p^{s}}\right)^{-1}\]
with certain \(|\alpha_{p}|,|\beta_{p}|\leq 1\). Moreover, for the primes \(p<B_{F}\), \(p\nmid q_{F}\), we have that \(F_{p}(s)^{-1}\) is a polynomial in \(p^{-s}\).
We finally remark that the results in this paper are a step toward a very general form of Weil's converse theorem for \(L\)-functions of degree \(2\). We shall address this problem in forthcoming papers.
**Acknowledgements**. This research was partially supported by the Istituto Nazionale di Alta Matematica, by the MIUR grant PRIN-2017 "Geometric, algebraic and analytic methods in arithmetic" and by grant 2021/41/BST1/00241 "Analytic methods in number theory" from the National Science Centre, Poland.
## 2. Notation
Throughout the paper we write \(s=\sigma+it\), \(e(x)=e^{2\pi ix}\) and \(\overline{f}(s)\) for \(\overline{f(\overline{s})}\). The extended Selberg class \(\mathcal{S}^{\sharp}\) consists of non identically vanishing Dirichlet series (1.1) absolutely convergent for \(\sigma>1\), such that \((s-1)^{m}F(s)\) is entire of finite order for some integer \(m\geq 0\), and satisfying a functional equation of type
\[F(s)\gamma(s)=\omega\overline{\gamma}(1-s)\overline{F}(1-s),\]
where \(|\omega|=1\) and the \(\gamma\)-factor
\[\gamma(s)=Q^{s}\prod_{j=1}^{r}\Gamma(\lambda_{j}s+\mu_{j})\]
has \(Q>0\), \(r\geq 0\), \(\lambda_{j}>0\) and \(\Re(\mu_{j})\geq 0\). Note that the conjugate function \(\overline{F}\) has conjugated coefficients \(\overline{a(n)}\). The Selberg class \(\mathcal{S}\) is the subclass of \(\mathcal{S}^{\sharp}\) of the functions with an Euler product as in (1.2),(1.3) and (1.4), and satisfying the Ramanujan conjecture \(a(n)\ll n^{\varepsilon}\). Note that the local factors \(F_{p}(s)\) in (1.2) satisfy
\[F_{p}(s)\neq 0\quad\text{for }\sigma>\vartheta. \tag{2.1}\]
We refer to our survey papers [2],[3],[5],[6],[7],[8] for further definitions, examples and the basic theory of the Selberg class.
Degree \(d_{F}\), conductor \(q_{F}\), root number \(\omega_{F}\) and \(\xi\)-invariant \(\xi_{F}\) of \(F\in\mathcal{S}^{\sharp}\) are defined by
\[d_{F}=2\sum_{j=1}^{r}\lambda_{j},\qquad q_{F}=(2\pi)^{d_{F}}Q^{2} \prod_{j=1}^{r}\lambda_{j}^{2\lambda_{j}},\] \[\omega_{F}=\omega\prod_{j=1}^{r}\lambda_{j}^{-2i\Im(\mu_{j})}, \qquad\xi_{F}=2\sum_{j=1}^{r}(\mu_{j}-1/2)=\eta_{F}+id_{F}\theta_{F}\]
with \(\eta_{F},\theta_{F}\in\mathbb{R}\). We also write
\[\omega_{F}^{*}=\omega_{F}e^{-i\frac{\pi}{2}(\eta_{F}+1)}\big{(}\frac{q_{F}}{( 2\pi)^{2}}\big{)}^{i\frac{\theta_{F}}{2}}\quad\text{and}\quad\tau_{F}=\max_{j =1,\dots,r}\big{|}\frac{\Im(\mu_{j})}{\lambda_{j}}\big{|},\]
while \(m_{F}\) denotes the order of the pole of \(F\) at \(s=1\).
Finally, the linear twist of \(F\in\mathcal{S}^{\sharp}\) is defined as
\[F(s,\alpha)=\sum_{n=1}^{\infty}\frac{a(n)}{n^{s}}e(-n\alpha)\]
with \(\alpha\in\mathbb{R}\).
## 3. Lemmas
Given a Dirichlet character \(\chi\) we denote, as usual, by \(\chi^{*}\) the primitive character inducing \(\chi\) and by \(\chi_{0}\) the principal character. Also, we recall that \(\mu(n)\) and \(\varphi(n)\) denote the Mobius and Euler functions, respectively.
**Lemma 1**.: _Let \(p\) be a prime number. For every integer \(r\geq 1\) there exist coefficients \(c(\chi,p^{r})\), where \(\chi\) runs over the Dirichlet characters \((\mathrm{mod}\ p^{r}),\) such that for \((n,p^{r})=1\) we have_
\[e(-n/p^{r})=\sum_{\chi\,(\mathrm{mod}\,p^{r})}c(\chi,p^{r})\chi(n).\]
_Moreover_
\[c(\chi_{0},p^{r})=\begin{cases}\frac{1}{1-p}&\text{if }r=1\\ 0&\text{if }r>1.\end{cases}\]
Proof.: The existence of the coefficients \(c(\chi,p^{r})\) follows by elementary harmonic analysis on the group \(\mathbb{Z}_{p^{r}}^{*}\) of the reduced residues \((\mathrm{mod}\ p^{r})\). Moreover, by orthogonality we have that
\[c(\chi_{0},p^{r})=\frac{1}{\varphi(p^{r})}\sum_{n\in\mathbb{Z}_{p^{r}}^{*}}e(- n/p^{r})=\frac{\mu(p^{r})}{\varphi(p^{r})},\]
and the lemma follows.
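The value of \(c(\chi_{0},p^{r})\) can also be checked numerically, since by orthogonality it is the average of \(e(-n/p^{r})\) over the reduced residues; a small verification sketch follows.

```python
import cmath

def c_chi0(p, r):
    """Average of e(-n/p^r) over the reduced residues (mod p^r),
    which equals mu(p^r) / phi(p^r) by the computation above."""
    q = p ** r
    residues = [n for n in range(1, q) if n % p != 0]
    s = sum(cmath.exp(-2j * cmath.pi * n / q) for n in residues)
    return s / len(residues)

print(c_chi0(5, 1))   # ~ -0.25 = 1 / (1 - 5)
print(c_chi0(5, 2))   # ~ 0, since mu(25) = 0
```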
Given a function \(F\in\mathcal{S}^{\sharp}\), a prime \(p\) and a positive integer \(m\) we define the polynomial \(W_{m,p}(X)\) as
\[W_{m,p}(X)=\sum_{\ell=0}^{m-2}a(p^{\ell})X^{\ell}+\frac{p}{p-1}a(p^{m-1})X^{m- 1}.\]
Note that if \(m=1\) the sum is empty and hence equals \(0\). Note also that \(W_{m,p}(X)\) is not identically vanishing since \(a(1)=1\), as \(F\) splits at \(p\).
**Lemma 2**.: _Let \(p\) be a prime number, \(m\) be a positive integer and suppose that \(F\in\mathcal{S}^{\sharp}\) splits at \(p\). Then for \(\sigma>1\) we have_
\[F(s,1/p^{m})=\sum_{\ell=0}^{m-1}\frac{a(p^{\ell})}{p^{\ell s}}\sum_{\begin{subarray} {c}\chi\,(\mathrm{mod}\,p^{m-\ell})\\ \chi\neq\chi_{0}\end{subarray}}c(\chi,p^{m-\ell})F^{\chi^{*}}(s)+\left(1-W_{m, p}(p^{-s})F_{p}(s)^{-1}\right)F(s).\]
Proof.: Writing \(n=p^{\ell}k\) with \(p\nmid k\), since \(F\) splits at \(p\) for \(\sigma>1\) we have
\[\sum_{n=1}^{\infty}\frac{a(n)}{n^{s}}e(-n/p^{m}) =\sum_{\ell=1}^{m-1}\sum_{p^{\ell}\|n}\frac{a(n)}{n^{s}}e(-n/p^{m} )+\sum_{p^{m}|n}\frac{a(n)}{n^{s}}+\sum_{p\nmid n}\frac{a(n)}{n^{s}}e(-n/p^{m})\] \[=\sum_{\ell=0}^{m-1}\frac{a(p^{\ell})}{p^{\ell s}}\sum_{p\nmid k} \frac{a(k)}{k^{s}}e(-k/p^{m-\ell})+F(s)-\sum_{\ell=0}^{m-1}\frac{a(p^{\ell})}{ p^{\ell s}}\sum_{p\nmid k}\frac{a(k)}{k^{s}}.\]
Now we apply Lemma 1 to obtain that
\[\sum_{p\nmid k}\frac{a(k)}{k^{s}}e(-k/p^{m-\ell}) =\sum_{\chi\,(\mathrm{mod}\,p^{m-\ell})}c(\chi,p^{m-\ell})F^{\chi }(s)\] \[=\sum_{\begin{subarray}{c}\chi\,(\mathrm{mod}\,p^{m-\ell})\\ \chi\neq\chi_{0}\end{subarray}}c(\chi,p^{m-\ell})F^{\chi^{*}}(s)+c(\chi_{0},p^ {m-\ell})F^{\chi_{0}}(s).\]
Moreover, since \(F^{\chi_{0}}(s)=\sum_{p\nmid n}a(n)n^{-s}=F_{p}(s)^{-1}F(s)\), from Lemma 1 we also have that
\[c(\chi_{0},p^{m-\ell})F^{\chi_{0}}(s)=\begin{cases}\frac{1}{1-p}F_{p}(s)^{-1}F (s)&\text{if }\ell=m-1\\ 0&\text{if }\ell<m-1,\end{cases}\]
and the lemma follows by a simple computation.
**Lemma 3.**_Let \(F\in{\mathcal{S}}^{\sharp}\) with \(d=2\), and let \(a<b\) be fixed. Then_
\[F(s)\ll\left(\frac{q_{F}}{(2\pi e)^{2}}\right)^{|\sigma|}|\sigma|^{2|\sigma|+1}\]
_uniformly for \(a\leq t\leq b\) and \(\sigma\leq-1\). Moreover, if \([a,b]\cap[-\tau_{F},\tau_{F}]=\emptyset\) we also have_
\[F(s)\gg\left(\frac{q_{F}}{(2\pi e)^{2}}\right)^{|\sigma|}|\sigma|^{2|\sigma|+1}\]
_uniformly for \(a\leq t\leq b\) and \(\sigma\leq-1\)._
_Proof._ This is a slightly refined version of Lemma 2.1 in [4]. We follow the proof of that lemma, using the notation there and recalling that here we have \(d=2\). Then we observe that Stirling's formula actually gives the more precise expression
\[\log|G(s)|=2\sigma\log\sigma+(\log\beta-2)\sigma+\log\sigma+O(1),\]
and the lemma follows. \(\square\)
**Lemma 4.**_Let \(F\in{\mathcal{S}}^{\sharp}\) with \(d_{F}=2\), and let \(\alpha>0\). Then for every integer \(K>0\) there exist polynomials \(Q_{0}(s),...,Q_{K}(s)\), with \(Q_{0}(s)\equiv 1\), such that_
\[F(s,\alpha)=-i\omega_{F}^{*}(\sqrt{q_{F}}\alpha)^{2s-1+i\theta_{F}}\sum_{\nu= 0}^{K}\big{(}\frac{iq_{F}\alpha}{2\pi}\big{)}^{\nu}Q_{\nu}(s)\overline{F} \big{(}s+\nu+2i\theta_{F},-\frac{1}{q_{F}\alpha}\big{)}+H_{K}(s,\alpha). \tag{3.1}\]
_Here \(H_{K}(s,\alpha)\) is holomorphic for \(-K+\frac{1}{2}<\sigma<2\) and \(|s|<2K\), and satisfies_
\[H_{K}(s,\alpha)\ll(AK)^{K} \tag{3.2}\]
_with a certain constant \(A=A(F,\alpha)>0\). Moreover, \(\deg Q_{\nu}=2\nu\) and_
\[Q_{\nu}(s)\ll\frac{(A(|s|+1))^{2\nu}}{\nu!}\qquad\qquad\mbox{ for }1\leq\nu\leq\min(|s|,K) \tag{3.3}\] \[Q_{\nu}(s)\ll(AK)^{K}\qquad\qquad\mbox{ for }|s|\leq 2K,\,\nu\leq K. \tag{3.4}\]
_Proof._ This is Theorem 2.1 of [4]. Note that the different value of the shift in (3.1) with respect to Theorem 2.1, namely \(\nu+2i\theta_{F}\) in place of \(\nu+i\theta_{F}\), is due to the slightly different definition of \(\theta_{F}\) used in this paper. \(\square\)
**Lemma 5.**_Let \(a<b\) be fixed and let \({\mathcal{Q}}_{\nu}(s),{\mathcal{F}}(s)\) be real-valued continuous functions defined in the horizontal strip \(a\leq t\leq b\) satisfying the following conditions:_
\[{\mathcal{F}}(s)\ll 1\qquad\qquad\mbox{for }\sigma\geq 1, \tag{3.5}\] \[{\mathcal{F}}(s)\ll B^{|\sigma|}|\sigma|^{2|\sigma|+1}\qquad \qquad\mbox{ for }\sigma\leq 1,\] (3.6) \[{\mathcal{Q}}_{\nu}(s)\ll\frac{(C(|s|+1))^{2\nu}}{\nu!}\qquad \qquad\mbox{ for }\ 0\leq\nu\leq\min(|s|,|\sigma|+2),\] (3.7) \[{\mathcal{Q}}_{\nu}(s)\ll(C(|\sigma|+2))^{|\sigma|+2}\qquad \qquad\mbox{ for }\nu\leq|\sigma|+2\,\ |s|\leq 2[|\sigma|]+2, \tag{3.8}\]
_where \(B,C>0\). Then for \(\sigma\leq-1\) and \(a\leq t\leq b\) we have_
\[\sum_{0\leq\nu\leq|\sigma|+2}{\mathcal{Q}}_{\nu}(s){\mathcal{F}}(s+\nu)\ll B ^{|\sigma|}|\sigma|^{2|\sigma|+1} \tag{3.9}\]
_with the implied constant depending on \({\mathcal{F}},a,b,B,C\) and implied constants in (3.5)-(3.8)._
_Proof._ In the proof we use the synthetic expression "suitably bounded" to mean "bounded by a constant depending at most on \({\mathcal{F}},a,b,B,C\) and implied constants in (3.5)-(3.8)". We first note that we may assume without loss of generality that
\[|\sigma|\geq\max(|a|,|b|)+1. \tag{3.10}\]
Indeed, otherwise both \(s\) and \(s+\nu\) with \(0\leq\nu\leq|\sigma|+2\) stay in a compact domain and hence the functions \(\mathcal{Q}_{\nu}(s),\mathcal{F}(s+\nu)\) are suitably bounded. Thus the sum on the left-hand side of (3.9), which we denote by \(\Phi(s)\), is suitably bounded as well and the assertion follows in this case.
Assuming (3.10), we first consider the terms in (3.9) with
\[|\sigma|-\max(|a|,|b|)-1\leq\nu\leq|\sigma|+2. \tag{3.11}\]
For such \(\nu\) we have that \(-\max(|a|,|b|)-1\leq-|\sigma|+\nu\leq 2\); thus, as before, \(s+\nu\) stays in a compact domain and hence \(\mathcal{F}(s+\nu)\) is suitably bounded. Moreover, recalling (3.10) we certainly have
\[|s|\leq|\sigma|+|t|\leq 2[|\sigma|]+2,\]
so in view of this and of (3.11) we can apply (3.8) to estimate \(\mathcal{Q}_{\nu}(s)\). Hence the terms in \(\Phi(s)\) corresponding to \(\nu\) in the range (3.11) contribute at most
\[\ll(C(|\sigma|+2))^{|\sigma|+2}\ll B^{|\sigma|}|\sigma|^{2|\sigma|+1},\]
and our assertion holds in this case as well.
Finally suppose that
\[0\leq\nu\leq|\sigma|-\max(|a|,|b|)-1. \tag{3.12}\]
Recalling that \(\sigma\) is negative, for such \(\nu\) we have \(\sigma+\nu\leq-1\) and hence we can apply (3.6) to get
\[\mathcal{F}(s+\nu)\ll|\sigma|B^{|\sigma|-\nu}(|\sigma|-\nu)^{2(|\sigma|-\nu)}.\]
Moreover, for such \(\nu\) we also have \(\nu\leq\min(|s|,|\sigma|+2)\) so we can apply (3.7) to obtain, thanks to (3.10), that
\[\mathcal{Q}_{\nu}(s)\ll\frac{(C(|s|+1))^{2\nu}}{\nu!}\leq\frac{(2C|\sigma|)^{ 2\nu}}{\nu!}.\]
Hence the terms corresponding to \(\nu\) in the range (3.12) contribute at most
\[\ll|\sigma|\sum_{0\leq\nu\leq|\sigma|}\frac{(2C|\sigma|)^{2\nu}}{ \nu!}B^{|\sigma|-\nu}(|\sigma|-\nu)^{2(|\sigma|-\nu)}\] \[\ll|\sigma|B^{|\sigma|}\max_{0\leq\nu\leq|\sigma|}\left(|\sigma|^ {2\nu}(|\sigma|-\nu)^{2(|\sigma|-\nu)}\right)\sum_{\nu=0}^{\infty}\frac{1}{ \nu!}\left(\frac{4C^{2}}{B}\right)^{\nu}\] \[\ll B^{|\sigma|}|\sigma|^{2|\sigma|+1},\]
and the lemma follows.
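Here the inner maximum in the middle line is estimated by the elementary bound \[|\sigma|^{2\nu}(|\sigma|-\nu)^{2(|\sigma|-\nu)}\leq|\sigma|^{2\nu}\,|\sigma|^{2(|\sigma|-\nu)}=|\sigma|^{2|\sigma|}\qquad\qquad 0\leq\nu\leq|\sigma|,\] valid since \(0\leq|\sigma|-\nu\leq|\sigma|\) and \(|\sigma|\geq 1\) by (3.10).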
**Lemma 6.**_Let \(F\in{\mathcal{S}}^{\sharp}\) with \(d=2\). Then the linear twist \(F(s,1/q_{F})\) has meromorphic continuation to \(\mathbb{C}\) with poles at most at the points_
\[1-\nu-2i\theta_{F}\qquad\qquad\nu\in\mathbb{Z},\nu\geq 0\]
_and order \(\leq m_{F}\). Moreover, for every \(a<b\) such that_
\[(a+\theta_{F})(b+\theta_{F})>0 \tag{3.13}\]
_we have, uniformly for \(\sigma\leq-1\) and \(a\leq t\leq b\), that_
\[F(s,1/q_{F})\ll_{F,a,b}\left(\frac{q_{F}}{2\pi e}\right)^{2|\sigma|}|\sigma|^ {2|\sigma|+1}. \tag{3.14}\]
_Proof_. We start with Lemma 4 with the choice \(\alpha=1/q_{F}\). Since
\[\overline{F}\big{(}s+\nu+2i\theta_{F},-\frac{1}{q_{F}\alpha}\big{)}=\overline {F}(s+\nu+2i\theta_{F}), \tag{3.15}\]
the terms on the right hand side of (3.1) are holomorphic for \(s\neq 1-\nu-2i\theta_{F}\), with \(\nu\geq 0\) and \(s\) in the range stated after (3.1). Moreover, the potential poles are induced by \(F(s)\) and hence of order \(\leq m_{F}\). The first assertion follows since \(K\) is arbitrarily large.
To prove the bound (3.14) we use Lemma 4 with \(K=[|\sigma|]+2\). Recalling (3.15), from (3.1) and (3.2) we have that
\[F(s,1/q_{F})\ll q_{F}^{|\sigma|}\sum_{0\leq\nu\leq|\sigma|+2}(2\pi)^{-\nu}|Q_{ \nu}(s)||\overline{F}(s+\nu+2i\theta_{F})|+O\big{(}(A(|\sigma|+2))^{|\sigma|+2 }\big{)}. \tag{3.16}\]
Next we apply Lemma 5 with
\[\mathcal{Q}_{\nu}(s)=(2\pi)^{-\nu}|Q_{\nu}(s)|\quad\text{and}\quad\mathcal{F}(s)=|\overline{F}(s+2i\theta_{F})|.\]
From (3.3) and (3.4) we see that (3.7) and (3.8) hold with \(C=A\), and from Lemma 3 we see that (3.6) holds with \(B=q_{F}/(2\pi e)^{2}\). Finally, (3.5) is satisfied as well, thanks to (3.13) and the description of the possible singularities of \(F(s,1/q_{F})\). Now it is clear that (3.14) follows from (3.9) and (3.16), and the proof is complete.
For \(F\) in the Selberg class \(\mathcal{S}\) we denote by \(N_{F}(\sigma,T)\) the number of non-trivial zeros \(\beta+i\gamma\) of \(F\) in the rectangle \(\beta>\sigma\), \(|\gamma|\leq T\).
**Lemma 7**.: _Let \(F\in\mathcal{S}\) with \(d=2\). Then for every \(\varepsilon>0\) and any fixed \(\sigma>1/2\) we have_
\[N_{F}(\sigma,T)\ll T^{3/2-\sigma+\varepsilon}.\]
Proof.: See p.474-475 of [4].
## 4. Proof of Theorem 1
Let \(p\) be as in Theorem 1. In particular we may assume that \(p\nmid q_{F}\), and let \(m=m_{q_{F}}(p)\). According to Lemma 2 we have
\[W_{m,p}(p^{-s})F_{p}(s)^{-1}F(s)=\sum_{\ell=0}^{m-1}\frac{a(p^{\ell})}{p^{\ell s }}\sum_{\begin{subarray}{c}\chi\,(\text{mod}\,p^{m-\ell})\\ \chi\neq\chi_{0}\end{subarray}}c(\chi,p^{m-\ell})F^{\chi^{*}}(s)+F(s)-F(s,1/p ^{m}), \tag{4.1}\]
where all terms on the right-hand side are meromorphic on \(\mathbb{C}\) except possibly the last one. Since \(p^{m}\equiv 1\ (\text{mod}\ q_{F})\) we have \(F(s,p^{m}/q_{F})=F(s,1/q_{F})\), hence from Lemma 4 with \(\alpha=1/p^{m}\) we obtain
\[F(s,1/p^{m})= -i\omega_{F}^{*}\left(\frac{\sqrt{q_{F}}}{p^{m}}\right)^{2s-1+i \theta_{F}}\sum_{\nu=0}^{K}\left(\frac{iq_{F}}{2\pi p^{m}}\right)^{\nu}Q_{ \nu}(s)\overline{F}(s+\nu+2i\theta_{F},-1/q_{F}) \tag{4.2}\] \[+H_{K}(s,1/p^{m}).\]
By Lemma 6 this gives meromorphic continuation of \(F(s,1/p^{m})\) to \(\mathbb{C}\), and hence the same is true for the term on the left hand side of (4.1). Therefore, the function
\[W(s):=W_{m,p}(p^{-s})F_{p}(s)^{-1} \tag{4.3}\]
is meromorphic on \(\mathbb{C}\) and, thanks to (2.1), its possible poles lie on the half-plane \(\sigma\leq\vartheta\). Recalling (4.1),(4.2) and the structure of the singularities of \(F(s,1/q_{F})\) described in Lemma 6, the possible poles of \(W(s)\) outside a certain horizontal strip of bounded height are induced by the zeros of \(F(s)\). But \(F_{p}(s)\) is \((2\pi i/\log p)\)-periodic, hence one such pole generates \(\gg T\) poles in the strip \(\sigma\leq\vartheta\), \(|t|\leq T\). If \(F\in\mathcal{S}\) this contradicts the density estimate in Lemma 7, thus \(W(s)\) is entire. If \(F\in\mathcal{S}^{\sharp}\) we may apply an idea from Gierszewski [1] and choose \(B_{F}>0\) so large that the whole horizontal strip \(2\pi/\log B_{F}\leq t\leq 4\pi/\log B_{F}\) is free from non-trivial zeros of \(F\), thus deducing that \(W(s)\) is entire if \(p>B_{F}\).
To conclude the proof it suffices to show that \(W(s)\) is a polynomial in \(p^{-s}\), since \(W_{m,p}(p^{-s})\) is a polynomial in \(p^{-s}\) of degree \(\leq m-1\). To this end, since \(W(s)\) is \((2\pi i/\log p)\)-periodic, we start by estimating \(W(s)F(s)\) for \(a\leq t\leq b\), where \(a>\max(|\theta_{F}|,\tau_{F})\) and \(b=a+2\pi/\log p\). From (4.1) we obtain
\[W(s)F(s)\ll C_{1}^{|\sigma|}\max_{0\leq\ell\leq m-1}\max_{\begin{subarray}{c} \chi\,(\operatorname{mod}\,p^{m-\ell})\\ \chi\neq\chi_{0}\end{subarray}}|F^{\chi^{*}}(s)|+|F(s)|+|F(s,1/p^{m})| \tag{4.4}\]
with a certain constant \(C_{1}>0\). Since \(F\) is weakly twist-regular at \(p\) we have that \(F^{\chi^{*}}\in\mathcal{S}^{\sharp}\) and has degree \(2\), hence by Lemma 3 the sum of the first two terms in (4.4) is
\[\ll C_{2}^{|\sigma|}|\sigma|^{2|\sigma|+1}\]
with \(C_{2}>0\). Moreover, from (4.2) and Lemma 4 with \(K=[|\sigma|]+2\) we see that the last term is
\[\ll\left(\frac{p^{m}}{\sqrt{q_{F}}}\right)^{2|\sigma|}\sum_{\nu=0}^{K}\left( \frac{q_{F}}{2\pi p^{m}}\right)^{\nu}|Q_{\nu}(s)||F(\overline{s}+\nu-2i\theta _{F},1/q_{F})|+O\left((A|\sigma|)^{|\sigma|}\right).\]
We estimate this sum using Lemma 5 with
\[\mathcal{Q}_{\nu}(s)=\left(\frac{q_{F}}{2\pi p^{m}}\right)^{\nu}|Q_{\nu}(s)| \quad\text{and}\quad\mathcal{F}(s)=|F(\overline{s}-2i\theta_{F},1/q_{F})|.\]
As in the proof of Lemma 6, but using Lemma 6 itself in place of Lemma 3, we easily check that the assumptions in Lemma 5 are satisfied, hence concluding that
\[F(s,1/p^{m})\ \ll C_{3}^{|\sigma|}|\sigma|^{2|\sigma|+1}\]
with some \(C_{3}>0\). Thus
\[W(s)F(s)\ll C_{4}^{|\sigma|}|\sigma|^{2|\sigma|+1} \tag{4.5}\]
with \(C_{4}=\max(C_{2},C_{3})\). On the other hand, we estimate \(F(s)\) from below using Lemma 3. Recalling (4.5) and that \(a>\tau_{F}\) we obtain
\[W(s)\ll C_{5}^{|\sigma|}\]
with a certain \(C_{5}=C_{5}(F,p)>0\), uniformly for \(-\infty<\sigma<\infty\) and \(a\leq t\leq b\). As in the proof of Theorem 1.1 of [4] (see p.448), this bound implies that \(W(s)\) is a polynomial in \(p^{-s}\), as required.
Finally, we write
\[F_{p}(s)=\frac{N_{p}(p^{-s})}{D_{p}(p^{-s})}\]
for certain coprime normalized polynomials \(N_{p},D_{p}\in\mathbb{C}[X]\). Since \(W(s)\) is entire, from (4.3) we see that \(N_{p}\) divides \(W_{m,p}\), thus has degree \(\leq m-1\). In particular, \(N_{p}\equiv 1\) if \(p\equiv 1\) (mod \(q_{F}\)), and Theorem 1 follows.
## 5. Proof of Theorem 2
From Theorem 1 we know that \(F_{p}(s)^{-1}\) is a rational function of \(p^{-s}\); suppose that this rational function is not a polynomial. Hence the singularities of \(F_{p}(s)^{-1}\) lie on a finite number of vertical lines and, in view of (2.1), there exists \(\sigma_{p}\leq\vartheta<1/2\) such that \(F_{p}(s)^{-1}\) is holomorphic for \(\sigma>\sigma_{p}\) and has infinitely many poles on the line \(\sigma=\sigma_{p}\). More precisely, the poles of \(F_{p}(s)^{-1}\) lie on finitely many arithmetic progressions with difference \(2\pi i/\log p\) and, thanks to Lemma 7, every
such arithmetic progression contains infinitely many poles of \(F_{p}(s)^{-1}F(s)\). Moreover, applying Lemma 2 with \(m=1\) we obtain that
\[\frac{p}{p-1}F_{p}(s)^{-1}F(s)=\sum_{\begin{subarray}{c}\chi\,(\mathrm{mod}\,p) \\ \chi\neq\chi_{0}\end{subarray}}c(\chi,p)F^{\chi}(s)+F(s)-F(s,1/p), \tag{5.1}\]
hence \(F(s,1/p)\) is meromorphic over \(\mathbb{C}\), has the same singularities as \(F_{p}(s)^{-1}F(s)\) on the line \(\sigma=\sigma_{p}\), and has at most a pole at \(s=1\) for \(\sigma>\sigma_{p}\). It is also clear that analogous statements hold with the prime \(q\) in place of \(p\), assuming that \(F_{q}(s)\) is not of polynomial type, and without loss of generality we may assume that \(\sigma_{q}\leq\sigma_{p}\). If \(F_{q}(s)\) is of polynomial type we set \(\sigma_{q}=-\infty\).
We denote by \(h(s)\) a generic function which is meromorphic with finitely many poles for \(\sigma>\sigma_{p}-1\). From Lemma 4 with \(\alpha=p/q_{F}\) and arbitrarily large \(K\), and observing that the terms with \(\nu\) sufficiently large are holomorphic for \(\sigma>\sigma_{p}-1\), we have
\[F(s,p/q_{F})=-i\omega_{F}^{*}(p/\sqrt{q_{F}})^{2s-1+i\theta_{F}}\sum_{\nu=0}^{ K_{p}}\left(\frac{ip}{2\pi}\right)^{\nu}Q_{\nu}(s)\overline{F}(s+\nu+2i\theta_{F}, -1/p)+h(s)\]
with a suitable integer \(K_{p}\). Hence, recalling that \(Q_{0}(s)\equiv 1\), from the above properties of \(F(s,1/p)\) we deduce that
\[F(s,p/q_{F})=-i\omega_{F}^{*}(p/\sqrt{q_{F}})^{2s-1+i\theta_{F}}\overline{F}( s+2i\theta_{F},-1/p)+h(s). \tag{5.2}\]
Again, an analogous formula holds with the prime \(q\) in place of \(p\). Moreover, by a double conjugation from (5.2) we obtain that
\[F(s,1/p)=-i\omega_{F}^{*}(\sqrt{q_{F}}/p)^{2s-1+i\theta_{F}}\overline{F( \overline{s}-2i\theta_{F},p/q_{F})}+h(s), \tag{5.3}\]
and similarly with \(q\) in place of \(p\).
But \(p\equiv q\ (\mathrm{mod}\ q_{F})\), therefore
\[F(\overline{s}-2i\theta_{F},p/q_{F})=F(\overline{s}-2i\theta_{F},q/q_{F})\]
and hence from (5.3) we obtain that
\[F(s,1/p)=(q/p)^{2s-1+i\theta_{F}}F(s,1/q)+h(s).\]
This shows that \(\sigma_{q}=\sigma_{p}\) and the singularities on \(\sigma=\sigma_{p}\) of \(F(s,1/p)\) and \(F(s,1/q)\) coincide, apart from a finite number of them. If \(F\in\mathcal{S}\), in view of Lemma 7 this is possible only when the differences of the involved arithmetic progressions are the same, i.e. when \(p=q\), a contradiction. If \(F\in\mathcal{S}^{\sharp}\) we argue as in the proof of Theorem 1, using the idea in Gierszewski [1]. Thus \(F_{p}(s)^{-1}\) has no singularities and hence it is a polynomial in \(p^{-s}\). By symmetry of arguments the same is true for \(F_{q}(s)^{-1}\), and the result follows.
## 6. Proof of Theorem 3
Clearly, \(F\) splits at every prime since it belongs to \(\mathcal{S}\). Moreover, denoting by \(p\) the prime number \(q_{F}\), given a prime \(p_{1}\neq p\) there obviously exists a prime \(p_{2}\neq p,p_{1}\) such that \(p_{1}\equiv p_{2}\pmod{p}\). Hence, thanks to Theorem 2, we only have to show that \(F_{p}(s)^{-1}\) is a polynomial in \(p^{-s}\).
By orthogonality, for \(\sigma>1\) we have
\[\begin{split}\sum_{a=1}^{p-1}F(s,a/p)&=(p-1)\sum_{p\mid n}\frac{a(n)}{n^{s}}-\sum_{p\nmid n}\frac{a(n)}{n^{s}}\\ &=(p-1)\big{(}F(s)-F_{p}(s)^{-1}F(s)\big{)}-F_{p}(s)^{-1}F(s)\\ &=\big{(}p-1-pF_{p}(s)^{-1}\big{)}\,F(s).\end{split} \tag{6.1}\]
For every \(1\leq a<p\) we fix a prime \(p_{a}\equiv a\ (\text{mod}\ p)\), so that \(F(s,a/p)=F(s,p_{a}/p)\). Thus by Lemma 4 with \(\alpha=p_{a}/p\) and \(K\) arbitrarily large we obtain
\[F(s,a/p)=-i\omega_{F}^{*}\left(\frac{p_{a}}{\sqrt{p}}\right)^{2s-1+i\theta_{F} }\sum_{\nu=0}^{K}\left(\frac{ip_{a}}{2\pi}\right)^{\nu}Q_{\nu}(s)\overline{F}( s+\nu+2i\theta_{F},-1/p_{a})+H_{K}(s,p_{a}/p). \tag{6.2}\]
But thanks to Lemma 1 we have
\[\overline{F}(s,-1/p_{a})=\frac{1}{1-p_{a}}\overline{F}_{p_{a}}(s)^{-1} \overline{F}(s)+\sum_{\begin{subarray}{c}\chi\,(\text{mod}\,p_{a})\\ \chi\neq\chi_{0}\end{subarray}}\overline{c(\chi,p_{a})\overline{F}^{\chi}}(s). \tag{6.3}\]
From the hypothesis of Theorem 3 we have that the twists \(\overline{F^{\chi}}(s)\) belong to \(\mathcal{S}^{\sharp}\) and have degree \(2\); moreover, by Theorem 2, \(\overline{F}_{p_{a}}(s)^{-1}\) is a polynomial in \(p_{a}^{-s}\). Therefore (6.2) and (6.3) give the meromorphic continuation of \(F(s,a/p)\) to the whole complex plane, with possible poles only at the points \(1-\nu-2i\theta_{F}\), \(\nu\geq 0\).
In addition we have
\[\overline{F}_{p_{a}}(s)^{-1}\ll p_{a}^{C|\sigma|} \tag{6.4}\]
for a certain \(C>0\) and, according to Lemma 3,
\[\overline{F^{\chi}}(s)\ll A^{|\sigma|}|\sigma|^{2|\sigma|+1} \tag{6.5}\]
for a certain \(A>0\), uniformly for \(\sigma\leq-1\) and \(b\leq t\leq c\), with arbitrary fixed \(b,c\) such that \(|\theta_{F}|<b<c\). Thus, applying Lemma 5 with
\[\mathcal{Q}_{\nu}(s)=\left(\frac{p_{a}}{2\pi}\right)^{\nu}|Q_{\nu}(s)|\qquad \text{and}\qquad\mathcal{F}(s)=|\overline{F}(s+2i\theta_{F},-1/p_{a})|,\]
we obtain that
\[F(s,a/p)\ll B^{|\sigma|}|\sigma|^{2|\sigma|+1} \tag{6.6}\]
for a certain \(B>0\), uniformly for \(\sigma\leq-1\) and \(b\leq t\leq c\), with arbitrary fixed \(|\theta_{F}|<b<c\). Now, rewriting (6.1) as
\[F_{p}(s)^{-1}=1-\frac{1}{p}-\frac{1}{pF(s)}\sum_{a=1}^{p-1}F(s,a/p), \tag{6.7}\]
we obtain the meromorphic continuation of \(F_{p}(s)^{-1}\) to the whole complex plane. Finally, using the periodicity and density arguments as in the proof of Theorem 1 we conclude that \(F_{p}(s)^{-1}\) is an entire function. Moreover, by (6.6), (6.7) and Lemma 3 we have that
\[F_{p}(s)^{-1}\ll D^{|\sigma|}\]
for certain \(D>0\). Thus, again as in the proof of Theorem 1, \(F_{p}(s)^{-1}\) is a polynomial in \(p^{-s}\) and the result follows.
## 7. Proof of Theorem 4
Similarly as in the proof of Theorem 3, for every \((a,q_{F})=1\) we fix a prime \(p_{a}\equiv a\ (\text{mod}\ q_{F})\). From (5.1) with \(p=p_{a}\) we obtain
\[F(s,1/p_{a})\ll|F(s)|+|F_{p_{a}}(s)^{-1}F(s)|+\sum_{\begin{subarray}{c}\chi\,(\text{mod}\,p_{a})\\ \chi\neq\chi_{0}\end{subarray}}|c(\chi,p_{a})F^{\chi}(s)|.\]
As in the proof of Theorem 3 we have that (6.4) and (6.5) hold in the present case as well, the second bound with \(F^{\chi}\) in place of \(\overline{F^{\chi}}\) and \(b>\max_{\chi\,(\text{mod}\,p_{a})}|\theta_{F^{\chi}}|\). Thus, thanks to Lemma 3, for such \(s\) we have
\[F(s,1/p_{a})\ll C_{1}^{|\sigma|}|\sigma|^{2|\sigma|+1}\]
with a certain \(C_{1}>0\). Therefore, applying Lemma 4 with \(\alpha=p_{a}/q_{F}\) and then Lemma 5, we obtain
\[F(s,p_{a}/q_{F})\ll C_{2}^{|\sigma|}\sum_{0\leq\nu\leq|\sigma|+2}|Q_{\nu}(s)||F(\overline{s}+\nu-2i\theta_{F},1/p_{a})|+(A|\sigma|)^{|\sigma|}\ll C_{3}^{|\sigma|}|\sigma|^{2|\sigma|+1} \tag{7.1}\]
with certain \(C_{2},C_{3}>0\), uniformly for \(\sigma\leq-1\) and \(b<t<c\).
Let now \(p\) be a sufficiently large prime, say \(p\geq B_{F}\) with a certain \(B_{F}>q_{F}\). Then there exists \((a,q_{F})=1\) such that \(p\equiv p_{a}\) (mod \(q_{F}\)). We shall estimate \(|F_{p}(s)^{-1}|\) from above in the horizontal half-strip \(\sigma\leq-1\), \(b<t<c\), where \(c>b+2\pi/\log p\) and \(b\) is sufficiently large. Using Lemma 4 with \(\alpha=1/p\), the fact that \(F(s,p/q_{F})=F(s,p_{a}/q_{F})\), Lemma 5 and (7.1) we obtain
\[F(s,1/p) \ll\left(\frac{p}{\sqrt{q_{F}}}\right)^{2|\sigma|+1}\sum_{0\leq \nu\leq|\sigma|+2}\left(\frac{q_{F}}{2\pi p}\right)^{\nu}|Q_{\nu}(s)F( \overline{s}+\nu-2i\theta_{F},p_{a}/q_{F})|+(A|\sigma|)^{|\sigma|+1} \tag{7.2}\] \[\ll(C_{4}p)^{2|\sigma|+1}|\sigma|^{2|\sigma|+1}.\]
Next, from Lemma 2 with \(m=1\) and then Lemma 3 and (7.2) we get, since the conductor \(q_{F^{\chi}}\) of \(F^{\chi}\) equals \(q_{F}p^{2}\) thanks to the hypotheses of Theorem 4, that
\[F_{p}(s)^{-1} \ll\frac{1}{|F(s)|}\sum_{\begin{subarray}{c}\chi\,(\text{mod}\,p )\\ \chi\neq\chi_{0}\end{subarray}}|c(\chi,p)F^{\chi}(s)|+\frac{1}{|F(s)|}|F(s,1/p)|+1\] \[\ll(C_{5}p)^{2|\sigma|+1}\ll p^{(5/2)|\sigma|}.\]
But we already know from Theorem 2 that \(F_{p}(s)^{-1}=P_{p}(p^{-s})\) for a certain polynomial \(P_{p}\in\mathbb{C}[z]\), hence the last inequality shows that its degree is at most \(2\). Since the constant term of \(P_{p}\) is \(1\), we can write
\[F_{p}(s)=\left(1-\frac{\alpha_{p}}{p^{s}}\right)^{-1}\left(1-\frac{\beta_{p}} {p^{s}}\right)^{-1}\]
for certain complex numbers \(\alpha_{p}\) and \(\beta_{p}\). Recalling that the Dirichlet coefficients of \(F\) satisfy Ramanujan's conjecture we must have \(|\alpha_{p}|,|\beta_{p}|\leq 1\) (see p.448-449 of [4]), and the result follows.
|
2303.05378 | Greener yet Powerful: Taming Large Code Generation Models with
Quantization | ML-powered code generation aims to assist developers to write code in a more
productive manner, by intelligently generating code blocks based on natural
language prompts. Recently, large pretrained deep learning models have
substantially pushed the boundary of code generation and achieved impressive
performance. Despite their great power, the huge number of model parameters
poses a significant threat to adapting them in a regular software development
environment, where a developer might use a standard laptop or mid-size server
to develop her code. Such large models incur significant resource usage (in
terms of memory, latency, and dollars) as well as carbon footprint.
Model compression is a promising approach to address these challenges.
Several techniques are proposed to compress large pretrained models typically
used for vision or textual data. Out of many available compression techniques,
we identified that quantization is mostly applicable for code generation task
as it does not require significant retraining cost. As quantization represents
model parameters with lower-bit integer (e.g., int8), the model size and
runtime latency would both benefit from such int representation. We extensively
study the impact of quantized model on code generation tasks across different
dimension: (i) resource usage and carbon footprint, (ii) accuracy, and (iii)
robustness. To this end, through systematic experiments we find a recipe of
quantization technique that could run even a $6$B model in a regular laptop
without significant accuracy or robustness degradation. We further found the
recipe is readily applicable to code summarization task as well. | Xiaokai Wei, Sujan Gonugondla, Wasi Ahmad, Shiqi Wang, Baishakhi Ray, Haifeng Qian, Xiaopeng Li, Varun Kumar, Zijian Wang, Yuchen Tian, Qing Sun, Ben Athiwaratkun, Mingyue Shang, Murali Krishna Ramanathan, Parminder Bhatia, Bing Xiang | 2023-03-09T16:25:51Z | http://arxiv.org/abs/2303.05378v1 | # Greener yet Powerful: Taming Large Code Generation Models with Quantization
###### Abstract
ML-powered code generation aims to assist developers in writing code in a more productive manner by intelligently generating code blocks based on natural language prompts. Recently, large pretrained deep learning models have substantially pushed the boundary of code generation and achieved impressive performance. Despite their great power, the huge number of model parameters poses a significant challenge to adopting them in a regular software development environment, where a developer might use a standard laptop or mid-size server to develop her code. Such large models incur significant resource usage (in terms of memory, latency, and dollars) as well as carbon footprint.
Model compression is a promising approach to address these challenges. Several techniques have been proposed to compress large pretrained models typically used for vision or textual data. Out of the many available compression techniques, we identified that quantization is the most applicable to the code generation task, as it does not require significant retraining cost. As quantization represents model parameters with lower-bit integers (e.g., int8), both the model size and the runtime latency benefit from such an int representation. We extensively study the impact of quantized models on code generation tasks across different dimensions: (i) resource usage and carbon footprint, (ii) accuracy, and (iii) robustness. To this end, through systematic experiments we find a quantization recipe that can run even a 6B model on a regular laptop without significant accuracy or robustness degradation. We further find that the recipe is readily applicable to the code summarization task as well.
## I Introduction
In recent years, ML-powered code generation tools, like Codex [1], GitHub Copilot [2], Amazon CodeWhisperer1, have gained significant traction. These services aim to generate a computer program in response to a human-written specification (commonly called _prompt_), as shown in Figure 1. Such tools bring promise to significantly automate the software development process and thus, improve developers' productivity.
Footnote 1: [https://aws.amazon.com/codewhisperer/](https://aws.amazon.com/codewhisperer/)
The backbone of ML-powered code generation tools is the transformer-based large pretrained language model (PLM) [3, 4, 5]. Code generation greatly benefits from the rapid development of PLMs, as they have recently exhibited superior performance in multiple code-related tasks, including code generation, code summarization, and type inference [3, 4, 5, 6, 7, 8, 9]. Despite this great success, there are multiple challenges and downsides associated with applying gigantic code generation models (2B-16B parameters) in a regular development environment.
* **Hosting.** The huge number of model parameters poses a significant challenge. For example, one of the largest open-source models, CodeGen [5], contains up to \(16B\) parameters. Hosting this model in a regular development environment becomes almost impossible, as it requires 72 GB of memory; a typical developer laptop rarely comes with that much (a decent MAC laptop usually has 16 GB or 32 GB of RAM). Even when using paid servers like EC2, such models become extremely expensive: around $100+ per 1k queries. Furthermore, model sizes will continue to grow, bringing accordingly more stringent hosting requirements and costs.
* **Latency and user experience.** A state-of-the-art code generation model typically consists of \(20\sim 50\) transformer layers and \(2B\sim 16B\) parameters. Model inference/serving on a single-GPU machine might incur a latency of several seconds. Such a delay in response would cause a negative user experience, especially for interactive code development.
* **Carbon footprint.** Recently, researchers [10, 11] have started to pay more attention to examining PLMs from the perspective of responsible and green AI. The training and inference of large PLMs typically involve a considerable amount of CO2 emission. For example, the CO2 emission of training the GPT-3 model (175B parameters) amounts to three times that of a whole jet flight from San Francisco to New York [11].
To address these challenges, Machine Learning researchers started investigating different model compression techniques [12]. A key challenge, however, is to still preserve the power of the gigantic models while significantly reducing the computational cost by compressing them. Addressing this challenge would be crucial to democratizing the power of AI. In this paper, we empirically investigate whether such model compression techniques can be effective for code generation models.
Our target user is a regular developer using a laptop with a good configuration (e.g., a laptop with CPU only/limited GPUs, or with access to a moderate-sized server). She uses a state-of-the-art code generation model but does not have the resources to retrain a huge PLM from scratch. In such a scenario, we identify the following desirable properties that a practically useful model compression strategy needs to satisfy:
* **Minimal compression cost**: converting a pretrained model to a more efficient version typically involves certain processing/training costs. If the compression technique requires significant (re)training of the large model over substantial amounts of data, it could result in undesirable environmental impacts (large power consumption and carbon footprint) and the cost would be prohibitively high for an average user to afford. High processing costs would contradict the purpose of greener AI and democratizing AI.
* **Substantial reduction in hosting cost:** as state-of-the-art models are already gigantic (e.g., 2B to 16B parameters) and are expected to continue growing in sizes, minor reductions in compressed size or runtime latency would not be practically useful. Ideally, one would expect a properly designed model compression method to bring at least 50% improvement in these key hosting metrics (e.g., size/latency).
* **Preservation of generation power**: it is highly desirable that the compressed model still has similar generation power as the original model. Model compression at the cost of significantly degenerated predictions would make the compressed model much less appealing to employ.
* **Minimal adverse side effect**: in addition to preserving generation accuracy, we also expect the model not to degrade in other important aspects of generation, such as robustness.
Most model compression techniques developed by the ML community, such as distillation [13, 14], pruning [15, 16], and quantization-aware training [17, 18, 19], are often associated with large training costs. Training or finetuning large transformer models requires access to training data and large compute resources. This is often not an option for the many users who typically use a model pretrained by others on a large training corpus.
Out of the many model compression options, we are able to identify a compression recipe with negligible processing cost and preserved accuracy using a specific subcategory of quantization methods, i.e., Post-Training Quantization (PTQ). Quantization is a compression technique where the weights and activations of an ML model are converted to and computed with integer data types such as int8, instead of the commonly used floating-point data types such as fp32. As data is represented with fewer bits (e.g., 8 or 4), the model is much smaller in size. Also, most hardware types (either CPU or GPU) perform integer operations (e.g., multiplication) at a much faster speed, so the quantized model is also likely to enjoy reduced computational cost and latency. Properly designed PTQ methods require no or only a relatively small amount of code data for post-training processing, and experimental results show that the proposed approach is highly effective on multiple tasks. This means one can get all the compression benefits (e.g., latency/memory/storage/carbon emission) with negligible cost while retaining the generation power of the full-precision model.
Our contribution can be summarized as follows:
* We recognize the importance of model compression in the context of code generation and identify the adequacy of post-training quantization for this purpose. To the best of our knowledge, this is the first attempt at compressing a state-of-the-art code generation model. Impact-wise, the quantized model with 16B parameters can run on a personal laptop with only CPUs and generate a 20-token-long prediction within 25 seconds (as opposed to 70 seconds for the corresponding full-precision model).
* We perform an extensive empirical study on multiple code generation models and their quantized variations on both NL-to-code and code-to-NL tasks. We observe comparable accuracy across multiple model types and parameter sizes with the proposed quantization techniques. Even for the extremely large CodeGen-\(16B\), we can preserve comparable accuracy with quantization. Besides, we experiment with different ablation settings to provide guidelines for properly employing quantization.
* We present an in-depth empirical analysis on the layers, activations, and weights of the state-of-the-art code generation models to gain deeper insights on the effect of quantization in them. This helps us understand why certain quantization methods perform better than others.
* Beyond accuracy, we also investigate the impact of quantization on model robustness, which is often overlooked by the existing code generation literature. We show that the proposed quantization recipe has no adverse impact on model robustness.

Fig. 1: **Sample prompt, code, and test cases taken from MBPP dataset [3]. Given the NL prompt, a code generation model aims to generate the corresponding code. The associated test cases run the generated code to check functional correctness.**
## II Background & Related Work
### _Code Generation with Transformer-based Models_
Recently, applying transformer-based Pretrained Language Models (PLMs) to the source code generation task has drawn considerable attention and set an overwhelmingly strong state-of-the-art in this field [3, 4, 5, 6, 8, 9]. The goal is to generate complete programs or code fragments given natural language or partial code as prompts. To achieve this goal, large language models are trained on humongous code corpora, typically curated from open-source code archives like GitHub, Stack Overflow, etc.
The PLMs typically use a decoder-only (e.g., GPT [20]) or encoder-decoder architecture (e.g., BART [21]/T5 [22]). For code generation tasks, decoder-only models (e.g., CodeGen [5] and InCoder [9]) take some pre-encoded code representation and learn to decode, i.e., synthesize the next token sequences. Typically, these models use causal language modeling, i.e., generate tokens conditioned on the previous token sequences. Thus, decoder-only models are a natural fit for code completion tasks, where the previous code context is given and the model is expected to generate the next tokens. In contrast, encoder-decoder based code generation models like PLBART [23] and CodeT5 [24] are typically trained to reconstruct an original code sequence that has been corrupted by an arbitrary noise function. Therefore, such models do not naturally fit code completion tasks but are found effective when finetuned for code generation or summarization tasks.
### _Model Compression_
The large transformer models use billions of parameters and may require trillions of operations to generate code. Model compression tackles the high cost of large models to enable their wider and easier adoption. Model compression is a class of techniques designed to reduce model size (i.e., the bytes required to represent the model) and improve generation latency while keeping the degradation in accuracy (i.e., the ability to generate useful and correct code) to a minimum. Some representative techniques include:
1. _Knowledge distillation_. A small student model is trained on the outputs of a larger teacher model that we want to compress [13, 14].
2. _Pruning_. It constitutes a class of techniques that make the weight matrices sparse to reduce the number of parameters as many of the matrix entries will now be zeros [15, 16, 25, 26].
3. _Quantization_. This technique uses fewer bits to represent the weights of parameterized functions [17, 27].
### _Quantization for model compression_
Here we describe the process of quantizing a tensor and discuss different model-quantization techniques.
#### Ii-C1 Quantization operation
Quantization refers to the conversion of full-precision (floating-point) tensors to tensors with integer values. An example of the quantization operation is depicted in Figure 3. Given a matrix \(W\), a basic quantizer \(Q(\cdot)\) uses scale and rounding operations to get the quantized version of the matrix:
\[Q(W)=\frac{W_{q}}{s_{W}},\text{ where }s_{W}=\frac{2^{B-1}}{\alpha_{W}}\text{ and }W_{q}=round(s_{W}W)\]
Here, \(\alpha_{W}\) is the quantization range, \(B\) is the bitwidth (8 in the case of int8), \(W_{q}\) is the quantized integer matrix, \(s_{W}\) is the quantization scale, and \(Q(W)\) is the quantized approximation of the matrix \(W\).
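For concreteness, the following is a minimal NumPy sketch of this per-tensor quantizer; the function name and the test matrix are ours, purely for illustration:

```python
import numpy as np

def quantize(W, B=8):
    """Simulated symmetric per-tensor quantization Q(W), mirroring the formula above."""
    alpha = np.max(np.abs(W))   # quantization range alpha_W
    s = (2 ** (B - 1)) / alpha  # scale factor s_W = 2^(B-1) / alpha_W
    W_q = np.rint(s * W)        # integer tensor W_q = round(s_W * W)
    return W_q / s              # dequantized approximation Q(W) = W_q / s_W

W = np.random.randn(4, 4).astype(np.float32)
print(np.max(np.abs(W - quantize(W))))  # elementwise error is at most 1/(2 s_W)
```

Note that this simulates quantization in floating point; an actual int8 deployment stores \(W_{q}\) as integers and applies the scale during computation.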
**Quantization Noise.** We assess the quality of quantization by estimating the relative quantization noise \(q_{a}\), defined as [28]:
\[q_{a}=\frac{||A-Q(A)||_{2}}{||A||_{2}}\approx\frac{\Delta_{A}^{2}}{12||A||_{2} }\approx\frac{1}{12s_{A}^{2}||A||_{2}} \tag{1}\]
where \(||x||_{2}\) is the \(L_{2}\)-norm of the vector \(x\), and \(\Delta_{A}=1/s_{A}\) is the quantization step size. The quantization noise increases with \(\Delta_{A}\) (or decreases with \(s_{A}\)), as the approximation of the full-precision parameters becomes coarser.
Fig. 3: **Toy example for quantizing the typical floating-point weight matrix (a) into int8 matrix using (b) per-tensor v.s. (c) per-column quantization.**
Fig. 2: **Transformer structure and multi-head attention cell. The feed-forward layer and all linear layers inside multi-head attention are colored in green. We quantize all these linear layers in the network.**
**Quantization Range and Scale Factor.** The quantization range \(\alpha_{W}\) is the value that will be mapped to the largest representable integer (127 in the case of int8). Typically we set \(\alpha_{W}=\max(\text{abs}(W))\), consequently setting the scale factor \(s_{W}=2^{B-1}/\max(\text{abs}(W))\). However, a large outlier in \(W\) will increase \(\alpha_{W}\) and therefore increase the quantization noise. To avoid that, one may clip the data by choosing \(\alpha_{W}<\max(\text{abs}(W))\) (see Figure 4), where the matrix elements are clipped to \([-\alpha_{W},\alpha_{W}]\) before the quantization operation; i.e., matrix elements \(>\alpha_{W}\) are set to \(\alpha_{W}\) and those \(<-\alpha_{W}\) are set to \(-\alpha_{W}\).
#### Iv-B2 Quantization techniques
Model quantization techniques can be classified based on the following:
**Methods to obtain the quantized network.** These can be broadly classified into:
* _Quantization Aware Training (QAT):_ QAT requires training the model from scratch with additional simulated quantization operations during the training process to ensure the learned parameter values are quantization-friendly. This is expensive due to the potentially huge cost of training, but it leads to models that potentially have higher accuracy than a PTQ model.
* _Post Training Quantization (PTQ)_: PTQ derives a quantized (e.g., int8) network from an existing full-precision network without any training or finetuning with additional data. Since the model is not originally trained to perform inference with quantized parameters and activation, models quantized by PTQ tend to be more susceptible to quantization noise. However, the low costs associated with PTQ make it a very popular choice for obtaining quantized models.
**Methods to choose the activation scale.** Unlike weights, the values of the activations change with each example, so their quantization ranges must be chosen accordingly. The options for choosing the quantization scale parameters for activations can be classified into (see Figure 5):
* _Dynamic quantization:_ Here we determine the clip range (\(\alpha\)) and scale parameter (\(s\)) on the fly for activations, in order to minimize quantization noise where possible. One could typically use the maximum (absolute) value of the activation tensors as the clip range for each input. However, determining the clip range dynamically would incur an additional scanning cost to find the max value.
* _Static quantization:_ Here we use the same pre-determined scale through so-called **calibration** on samples by minimizing certain loss (e.g., MSE/Entropy) between original activation and quantized activations. Static quantization might be susceptible to higher quantization noise though it would lower computational cost during inference.
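To make the static option concrete, the following is a minimal sketch of MSE-based calibration via a grid search over candidate clip ranges; the helper name and the grid-search strategy are our own illustrative choices, not a library API:

```python
import numpy as np

def calibrate_clip_range(activation_batches, B=8, num_candidates=100):
    """Pick a static clip range alpha minimizing the MSE between activations
    and their (simulated) B-bit quantized version."""
    A = np.concatenate([a.ravel() for a in activation_batches])
    a_max = np.max(np.abs(A))
    best_alpha, best_mse = a_max, np.inf
    for alpha in np.linspace(a_max / num_candidates, a_max, num_candidates):
        s = (2 ** (B - 1)) / alpha                        # candidate scale
        A_q = np.rint(np.clip(A, -alpha, alpha) * s) / s  # clip, quantize, dequantize
        mse = np.mean((A - A_q) ** 2)
        if mse < best_mse:
            best_alpha, best_mse = alpha, mse
    return best_alpha  # fixed at inference time: s_A = 2^(B-1) / best_alpha
```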
**Quantization Granularity.** As we discussed in the Section II-C1, choosing a large clip range (accordingly, small scales \(s_{A}\)) due to outliers can lead to a large quantization step which adds to quantization noise. To avoid outliers, column-wise quantization scales can be used where the scales are selected based on the max value of each column instead of the entire matrix. Broadly, we can classify quantization techniques based on the granularity of the quantization scales into 1) _per-tensor scales_ where the entire tensor uses a single scale \(s_{A}\), and 2) _per-column/per-row scales_ where each column uses a different scale.
Figure 3 illustrates the differences between the two scaling options. Here, we are choosing scale values based on the maximum absolute value of the quantized block. Choosing per-column scales avoids tensor-wide outliers and allows for finer quantization steps than per-tensor scales.
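The difference between the two granularities amounts to how the scale is computed; a small illustrative sketch (ours) with an injected outlier:

```python
import numpy as np

W = np.random.randn(64, 64)
W[0, 0] = 50.0  # a single outlier

# Per-tensor: one scale for the whole matrix; the outlier coarsens every entry.
s_tensor = (2 ** 7) / np.max(np.abs(W))
err_tensor = np.linalg.norm(W - np.rint(W * s_tensor) / s_tensor)

# Per-column: one scale per column; the outlier only affects its own column.
s_col = (2 ** 7) / np.max(np.abs(W), axis=0, keepdims=True)
err_col = np.linalg.norm(W - np.rint(W * s_col) / s_col)

print(err_tensor > err_col)  # per-column noise is typically much smaller
```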
In the rest of the paper, we will primarily use PTQ as it has minimal post/re-training cost. We examine the accuracy of the models with dynamic and static quantization and discuss the impact of choosing per-tensor and per-column scales. We will use int8 precision for quantization, as it is widely supported across all major CPUs and GPUs in use today.

Fig. 4: **Illustration of the quantization operation, showing the quantization step, clipping, scaling, and mapping.**

Fig. 5: **Toy example on quantizing activations with dynamic quantization v.s. static quantization.**
## III Methodology
The goal of this work is to provide an empirical and conceptual analysis of quantization techniques, originally developed as a core ML technique, in the context of large code generation models. To do that, we analyze the characteristics of the models using different dimensions of quantization techniques, as discussed in Section II-C2. This section discusses our study methodology in detail.
### _Quantized Model Preparation_
#### Iii-A1 Schemes for Quantization
For quantization techniques, we investigate both schemes of quantization (dynamic and static) described in previous sections and prepare the quantized models as follows.
* **Dynamic quantization:** For implementation, we use the native PyTorch Quantization API2 and convert all the weight matrices in the Feed-Forward Network (FFN) and self-attention layers to int8 (a minimal usage sketch follows this list). As explained in the previous section, the min/max bound of each layer's activation is determined dynamically depending on the input during inference. The processing time needed for this scheme is minimal, typically \(<1\) minute for \(2B\) models and \(<4\) minutes for \(6B\) models. Footnote 2: [https://pytorch.org/docs/stable/quantization.html](https://pytorch.org/docs/stable/quantization.html)
* **Static quantization:** Static quantization needs to determine the clipping range for activations before inference, and such ranges are typically obtained from calibration by minimizing the quantization noise. We perform the activation-bound calibration with a tiny fraction (5k samples) of the CodeSearchNet (Python) training set. In preliminary experiments, we find the _MSE (Mean Squared Error)_ loss to be most effective, so we calibrate by minimizing the MSE between the quantized activations and the full-precision ones.
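As a concrete illustration of the dynamic scheme, the following is a minimal usage sketch built on the PyTorch API referenced above; the Hugging Face checkpoint name, the prompt, and the generation settings are illustrative assumptions, and any causal-LM checkpoint would do:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "Salesforce/codegen-350M-mono"  # illustrative checkpoint
model = AutoModelForCausalLM.from_pretrained(name).eval()

# Convert every nn.Linear (FFN and attention projections) to int8 weights;
# activation scales are determined on the fly at inference time.
qmodel = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

tok = AutoTokenizer.from_pretrained(name)
ids = tok("def fibonacci(n):", return_tensors="pt").input_ids
out = qmodel.generate(ids, max_new_tokens=20)
print(tok.decode(out[0]))
```

This runs on a CPU-only machine, and the dynamically quantized model behaves as a drop-in replacement for the full-precision one.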
### _Study Subjects_
**Studied Models.** We leverage state-of-the-art and representative code generation models with open-sourced checkpoints to study the efficacy of different calibration techniques. We aim to cover models with different sizes and backbone architectures. In particular, we focus on CodeGen [5], as it has open-sourced models of different sizes {350M, 2B, 6B, 16B} and with different language support (mono- vs. multi-language generation). Additionally, we also include InCoder [9] to further confirm the patterns we observe with CodeGen models. We also study two more models, CodeT5 [24] and PLBART [23], for the code summarization task. The statistics of these models are summarized in Table I.
**Studied Tasks.** In this paper, our main focus is the code generation task (NL-to-code). Further, to stress-test the effectiveness of quantization on other generative tasks, we study the code summarization task for model accuracy evaluation (RQ4). Thus, we study the following two tasks:
* **NL-to-code generation**: Here we evaluate the models' code generation ability. A user gives a natural language prompt as input. These are loosely defined specifications. The model is expected to generate the corresponding code fragments. The generated code is tested by running the test cases. Figure 1 shows an example.
* **Code-to-NL generation**: We further evaluate a generative model's capability on code summarization task, where given the function signature and body, the model generates an NL description of the function.
**Studied Dataset:** We use HumanEval [6] and MBPP [3] for evaluating the functional correctness of generated programs. The MBPP dataset [3] contains 974 short Python functions with their textual descriptions and test cases to evaluate correctness (see Figure 1). HumanEval [6] is a similar dataset released by OpenAI, which is widely used in evaluating code generation tasks. It contains 164 handwritten Python programs, associated with their natural language descriptions and test cases.
**Evaluation Metrics:** Generative models in the NLP domain traditionally use some form of textual matching (exact or fuzzy match) between the generated text and the ground truth, and often report BLEU scores. Such textual similarity is problematic for evaluating code generation, as the same functionality can be implemented in many ways. To overcome this, recent papers on the code generation task [1, 29, 30] recommend evaluating functional correctness by running the generated code against test cases. Here we follow a similar evaluation criterion.
Each sample in our studied datasets is equipped with multiple test cases, as shown in Figure 1. The generated code needs to pass _all_ provided tests to be considered as "_pass_". Following [1, 29], we report pass@k to estimate the model's ability to generate code that will "_pass_". Pass@k measures the fraction of examples for which at least one of the \(k\) solutions generated by the model passes. However, since the model is probabilistic, we expect pass@\(k\) to have a high variance. To address this, a standard practice is to generate \(n>k\) solutions and estimate the statistical mean of pass@\(k\) from these \(n\) samples, i.e., estimate the fraction of times we "_pass_" if we randomly pick \(k\) samples from the \(n\). In this paper, we use pass@1 and pass@5 as evaluation metrics, estimated by generating 10 samples per problem in the dataset. The reported accuracy (pass@\(k\)) is averaged over all samples generated for all programs in each dataset.
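For reference, the pass@\(k\) estimates can be computed with the standard unbiased estimator from [1]; a short sketch, where \(n\) is the number of generated samples and \(c\) the number that pass all tests:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: 1 - C(n-c, k) / C(n, k), computed stably."""
    if n - c < k:
        return 1.0  # every size-k subset contains at least one passing sample
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

print(pass_at_k(10, 3, 5))  # e.g., 3 of 10 samples pass -> pass@5 ~ 0.917
```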
To evaluate the code summarization models, we use smoothed BLEU score [31] following prior works [23, 24].
## IV Results
We evaluate the effect of quantization across three dimensions: greener, accuracy, and robustness for code generation tasks. To evaluate generalizability, we further
evaluate quantization techniques for code summarization tasks, as code summarization is a popular code-related generative task where a different modality, i.e., text, is generated. In particular, we aim to answer the following four research questions:
* **RQ1**. How effective are quantization techniques for greener code generation models?
* **RQ2**. Can quantized models maintain the prediction power of the corresponding full-precision models?
* **RQ3**. How robust are quantized models compared to the corresponding full-precision models?
* **RQ4**. Are quantization techniques effective for other code related generative tasks such as code summarization?
### _Quantization for Greener Code Generation (RQ1)_
_Motivation._ The heart of this paper lies in this RQ, i.e., whether a quantized model can be substantially greener than its full-precision counterpart. By green, we mean less resource usage and a smaller carbon footprint. Our use case is to facilitate a regular development environment that can benefit from such large models. Thus, a full-precision model can be pretrained with large resources (even at industry scale). However, a developer will be using the model in an environment that is either CPU-only or contains a small number of GPUs. To this end, this RQ evaluates the model's resource usage and carbon footprint at inference time.
_Experimental Setup._ We aim to answer RQ1 by investigating quantization from a model hosting perspective, with GPU or CPU as the underlying hardware. We consider both on-cloud and on-device settings, as both can be important use cases for code generation models. The environments used for the experiments are the following:
* **On cloud**: We use an AWS p3dn.24xlarge instance3, which has both CPUs and GPUs available with NVMe-based SSD storage. Footnote 3: More details on the hardware specification can be found at [https://aws.amazon.com/ec2/instance-types/p3/](https://aws.amazon.com/ec2/instance-types/p3/)
* **On device**: We use a typical developer's laptop, a MacBook Pro which runs macOS Monterey (version 12.5), with 32 GB memory and M1 processor.
_Metrics._ We report inference latency and model storage size as the primary metrics for model hosting. Based on the latency results and the specification of the underlying hardware, we also estimate (assuming sequential prediction) the potential cost4 (in US$) and carbon emission5 (in \(gCO_{2}eq\)) to evaluate the impact in terms of green AI.
Footnote 4: Based on AWS EC2 instance pricing estimates.
Most of the computational cost comes from the multiplication of various matrices. As the PyTorch framework does not support GPU kernel-based end-to-end inference, we measure the potential latency impact through matrix multiplication as a proxy to showcase the efficacy of quantization. We report the latency results based on the Nvidia CUTLASS kernel in Table III; we observe a 40% latency reduction across different sizes using int8 matrix multiplication.
**Result 1:**_The quantized models require much lower latency, memory, and storage than the corresponding full-precision models, and have a remarkably smaller carbon footprint. Thus, it is possible to fit even a 6B-parameter model on a regular laptop._
### _Accuracy Evaluation for Code Generation Task (RQ2)_
_Motivation._ Although greener, a quantized model will only be useful if it maintains the accuracy of the original full-precision model. In this RQ, we evaluate the functional correctness of full-precision code generation models and their different quantized variants.
_Experimental Setup._ We evaluate the code generation tasks using CodeGen and InCoder quantized models with static and dynamic activation quantization. We tested the models with per-column and per-tensor scales when quantizing the weights as well. We report both pass@1 and pass@5 accuracies.
_Observations._ Table IV summarizes the results. We see accuracy gains for the InCoder-6.7B model across all the quantization settings, while InCoder-1.3B shows an average accuracy drop of 0.84% on HumanEval and 2.47% on MBPP. CodeGen models show \(<2\%\) average degradation on the pass@1 metric on HumanEval and MBPP with both dynamic and static quantization. However, we observe a 3%-4% and 2% average accuracy drop on the pass@5 metric with dynamic quantization and static (per-tensor) quantization, respectively. With static (per-column) quantization the average pass@5 accuracy drop is \(<2\%\) for CodeGen models.
Overall, dynamic (per-tensor) quantization tends to outperform static (per-tensor) quantization by a small margin and static (per-column) quantization outperforms static (per-tensor) quantization. This is because:
\begin{table}
\begin{tabular}{c|c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Full-precision**} & \multicolumn{3}{c|}{**Dynamic Quant.**} & \multicolumn{3}{c}{**Static Quant.**} \\ \cline{3-10} & & & \multicolumn{2}{c|}{**(per-tensor)**} & \multicolumn{2}{c|}{**(per-tensor)**} & \multicolumn{2}{c|}{**(per-tensor)**} & \multicolumn{2}{c}{**(per-column)**} \\ \cline{3-10} & & **pass@1** & **pass@5** & **pass@1** & **pass@5** & **pass@1** & **pass@5** & **pass@1** & **pass@5** \\ \hline \multirow{4}{*}{HumanEval} & Incoder-1.3B & 7.13 & 8.98 & 5.55 (**-1.58**) & 8.33 (**-0.65**) & 5.85 (**-1.28**) & 7.99 (**-0.99**) & 6.71 (**-0.42**) & 8.86 (**-0.12**) \\ & Incoder-6.7B & 8.11 & 9.70 & 8.23 (**+0.12**) & 10.52 (**+0.82**) & 8.41 (**+0.30**) & 10.46 (**+0.76**) & 9.27 (**+1.16**) & 11.38 (**+1.68**) \\ & Codegera-350M & 11.71 & 16.21 & 11.77 (**+0.06**) & 14.70 (**-1.51**) & 10.79 (**-0.92**) & 14.90 (**+1.31**) & 11.83 (**+0.12**) & 16.66 (**+0.45**) \\ & Codegera-2B & 20.91 & 27.75 & 18.48 (**-2.43**) & 26.56 (**-1.18**) & 17.87 (**-3.04**) & 26.13 (**-1.62**) & 22.50 (**+1.59**) & 29.59 (**+1.84**) \\ & Codegera-6B & 24.02 & 36.82 & 26.71 (**+1.69**) & 34.27 (**-2.55**) & 25.37 (**+1.35**) & 34.02 (**-2.80**) & 25.73 (**+1.71**) & 33.74 (**-3.08**) \\ \hline \multirow{4}{*}{MBPP} & Incoder-1.3B & 5.92 & 10.27 & 4.11 (**-1.82**) & 7.87 (**-2.40**) & 3.68 (**-2.25**) & 7.06 (**-3.21**) & 3.82 (**-2.10**) & 7.22 (**-3.05**) \\ & Incoder-6.7B & 7.53 & 11.55 (**-0.23**) & 11.79 (**+0.24**) & 7.86 (**-0.34**) & 12.30 (**+0.75**) & 7.80 (**+0.28**) & 12.37 (**+0.82**) \\ & Codegera-350M & 16.99 & 25.39 & 15.32 (**-1.67**) & 23.35 (**-2.04**) & 15.32 (**-1.67**) & 23.85 (**-1.54**) & 15.87 (**-1.12**) & 24.28 (**-1.12**) \\ & Codegera-2B & 31.57 & 41.97 & 28.10 (**-3.47**) & 38.24 (**-3.73**) & 27.38 (**-4.19**) & 39.04 (**-2.93**) & 30.59 (**-0.98**) & 40.93 (**-1.04**) \\ & Codegera-6B & 34.00 & 51.97 & 34.49 (**-0.49**) & 45.42 (**-6.55**) & 34.74 (**+0.49**) & 45.74 (**-6.23**) & 37.35 (**+3.35**) & 48.90 (**-3.07**) \\ \hline \hline \end{tabular}
\end{table} TABLE IV: **Pass@k (%) accuracy on HumanEval and MBPP. Performance gains are in blue and drops in red.**
Fig. 6: **Statistics impacting quantized model accuracy.**
* **Weight Quantization.** Weight distributions have a high variance within a kernel, accounting for outliers that result in large quantization noise. This is particularly an issue with the increasing matrix sizes in larger models. Figure 6(a) shows how the quantization noise increases with model size with per-tensor scales, but not with per-column scales. This reduced quantization noise with per-column scales explains why the static (per-column) setting outperforms the static (per-tensor) one.
* **Activation Quantization.** The primary challenge in activation quantization is choosing the quantization scales. With static quantization, we have to pick pre-determined scales based on validation data, and this pre-determined scale is picked conservatively. On the other hand, dynamic quantization allows us to adjust the scales for every input example and for every token, thereby making it attractive for reducing quantization noise. Dynamic quantization is useful when we observe high variance in the max values across different inputs/tokens. For example, Figure 6(b) shows the max value of the activation across different layers in CodeGen-350M.
* **Error Accumulation.** Quantization noise accumulates with depth, making deeper models more challenging to quantize. Figure 6(c) shows the relative quantization noise against model depth for the CodeGen-6B model, with the quantization error growing with depth. We observe that a) per-column quantization results in a smaller accumulated error with depth and b) the error tends to reduce in the last few (\(\sim 4\)) layers of the model. The latter could be due to the inherent robustness of the model.
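To make the two scale granularities concrete, here is a minimal NumPy sketch (our illustration, not the authors' implementation) that symmetrically quantizes a toy weight matrix to int8 with one per-tensor scale versus one scale per column, and compares the resulting relative quantization noise:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy weight matrix with a few large outliers, mimicking the heavy tails
# that make a single per-tensor scale lossy on larger models.
W = rng.normal(0.0, 0.02, size=(512, 512))
W[rng.integers(0, 512, 20), rng.integers(0, 512, 20)] = 0.5

def fake_quantize(W, scale):
    """Symmetric int8 quantization followed by dequantization."""
    return np.clip(np.round(W / scale), -127, 127) * scale

s_tensor = np.abs(W).max() / 127                       # one scale for the whole tensor
s_column = np.abs(W).max(axis=0, keepdims=True) / 127  # one scale per column

for name, s in [("per-tensor", s_tensor), ("per-column", s_column)]:
    noise = np.linalg.norm(W - fake_quantize(W, s)) / np.linalg.norm(W)
    print(f"{name}: relative quantization noise = {noise:.4f}")
```

With the outliers confined to a few columns, the per-column scales stay small elsewhere, which is exactly the effect described in the first bullet above.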
#### V-B1 Ablation Study
To better understand the impact of different design choices on the model, as discussed in Section III-A, we further investigated pass@1 scores for different model variations on HumanEval.
**Size of calibration set.** Here, we study how the size of calibration data affects the performance of quantized models. Figure 7 shows that the execution accuracy (on both 2B and 350M models) is typically stable across different sizes of calibration data. When using only 500 samples for calibration, the quantized model can already learn a reasonable clipping range (\(\alpha\)) and achieve comparable accuracy as full-precision baselines. Such calibration cost (e.g., takes a few minutes on a single CPU/GPU) is almost negligible compared to other model compression options, such as distillation, which typically requires iterating over the whole training corpus and takes weeks to finish.
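The MSE-based calibration of the clipping range \(\alpha\) can be illustrated with a short sketch; the grid search and parameter names below are our own simplification, not the paper's exact procedure:

```python
import numpy as np

def calibrate_clipping_range(x, n_bits=8, n_grid=100):
    """Pick the clipping range alpha that minimizes quantization MSE on calibration data x."""
    qmax = 2 ** (n_bits - 1) - 1
    best_alpha, best_mse = None, np.inf
    for alpha in np.linspace(np.abs(x).max() / n_grid, np.abs(x).max(), n_grid):
        scale = alpha / qmax
        xq = np.clip(np.round(x / scale), -qmax - 1, qmax) * scale
        mse = np.mean((x - xq) ** 2)
        if mse < best_mse:
            best_alpha, best_mse = alpha, mse
    return best_alpha

rng = np.random.default_rng(1)
calib = rng.standard_normal(500)   # a stand-in for a small (500-sample) calibration set
print("chosen clipping range:", calibrate_clipping_range(calib))
```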
**Impact of precision.** We experimented with using 4-bit precision instead of the 8 bits used in the rest of the paper. The results for different precision settings with the CodeGen-2B model on HumanEval are summarized in Table V. We use the static (per-column) quantization setting for these experiments.
With 8-bit weights and activations (W8A8), we can match the accuracy of the full-precision model on HumanEval. However, this accuracy drops by \(\approx 4\%\) when weights are quantized with 4 bits while activations remain quantized with 8 bits (W4A8). We find that the model does not generate any meaningful outputs when activations are quantized with 4 bits while the weights remain quantized with 8 bits (W8A4), indicating that the model is more sensitive to activation quantization than to weight quantization.
#### V-B2 Quantizing Extremely Large Code Generation Models:
So far we have seen that appropriately designed quantization techniques can preserve accuracy for models of medium to large size (up to 6B parameters). We now conduct a study at an extreme scale with CodeGen-16B, one of the largest publicly available code generation models.
From Table VI, one can observe that both dynamic and static (per-column) quantization achieve competitive results compared to the original model. For example, the dynamically quantized model (model size: 17 GB) achieves similar pass@5 and slightly lower pass@1 compared to the significantly larger FP32 model (75 GB).
### _Robustness Evaluation (RQ3)_
_Motivation._ It is well known that Deep Learning models are sensitive to input perturbations [32, 33, 34, 35, 36]; i.e., a well-trained model performs significantly worse when evaluated
\begin{table}
\begin{tabular}{l|r r} \hline \hline
**pass@** & \multicolumn{1}{c}{**1**} & \multicolumn{1}{c}{**5**} \\ \hline Full precision & 20.91\% & 27.75\% \\ W8A8 & 22.50\% & 29.59\% \\ W4A8 & 18.54\% & 24.83\% \\ W8A4 & 0.61\% & 1.39\% \\ \hline \hline \end{tabular}
\end{table} TABLE V: **Execution accuracy of CodeGen-2B model at different activation and weight precision settings on HumanEval. Here WxAy indicates x-bit weights and y-bit activations.**
Fig. 7: **Execution accuracy on HumanEval with Codegen-2B and Codegen-350M (per-column static) when they are calibrated (_MSE_ loss) on different amounts of data (from 500 to 5k). Dotted lines denote the pass@1 of corresponding full-precision models.**
against meaningfully perturbed inputs. Thus, it is important to estimate the _robustness_ of a model by evaluating it against such perturbations. In particular, quantization should not adversely impact the robustness of a model, i.e., the original full-precision model's robustness should not decrease drastically after quantization.
_Experimental Setup._ To evaluate the effect of quantization on a model's robustness, we evaluate both the original and the quantized models on the HumanEval [6] and MBPP [3] datasets with perturbed inputs. In the NLP domain, researchers have proposed different semantics-preserving input perturbations, e.g., mutating words with their synonyms [37, 38, 39] or character-level mutations [40, 41]. We adapt similar techniques to our context. In particular, we perturb the text in each prompt with three different types of perturbations (see Table VIII):
1. _Character-level perturbations_ by changing randomly selected characters to upper case;
2. _Word-level perturbations_ by substituting randomly selected words with synonyms from WordNet [42];
3. _Sentence-level perturbations_ by paraphrasing the whole text with back translation [43, 44]. Specifically, this transforms the English docstring into German and then translates it back to English.
For these three types of perturbations, we use the default settings and implementations from NL-Augmenter [45], a standard text perturbation benchmark. These perturbations are designed such that the original semantics of the natural language remains unaltered [46, 47, 48]. We then measure the average pass@1 with greedy sampling for each model on the three perturbed datasets along with the unperturbed ones, to avoid randomness and better observe the robustness trends.
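As a minimal illustration of the first perturbation type (our sketch; the actual experiments use the NL-Augmenter implementations):

```python
import random

def char_level_perturb(text: str, rate: float = 0.1, seed: int = 0) -> str:
    """Character-level perturbation: upper-case randomly selected characters."""
    rng = random.Random(seed)
    return "".join(c.upper() if rng.random() < rate else c for c in text)

print(char_level_perturb("write a function to check whether the triangle is equilateral or not."))
```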
To measure the robustness of a model, we compute the change in pass@1 results between perturbed and unperturbed inputs. For each type of perturbation, we compute the percentage change across all the inputs in a dataset, as: \(\%\Delta=\frac{\text{pass@1}_{\text{unperturbed}}-\text{pass@1}_{\text{perturbed}}}{\text{pass@1}_{\text{unperturbed}}}\).
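Concretely, the metric can be computed as in the following sketch (with hypothetical pass@1 values):

```python
def pct_drop(pass1_unperturbed: float, pass1_perturbed: float) -> float:
    """Percentage drop in pass@1 under perturbation; negative values mean improvement."""
    return 100.0 * (pass1_unperturbed - pass1_perturbed) / pass1_unperturbed

print(f"{pct_drop(30.0, 26.0):.2f}%")  # hypothetical values, prints 13.33%
```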
Table VII reports the results. The lower the value of \(\%\Delta\), the better the robustness of the model. A negative drop means the model performs better with perturbed inputs.
_Observations._ The results show that, overall, all the quantization methods, including per-tensor dynamic, per-tensor static, and per-column static, have robustness comparable to the corresponding full-precision model. In certain cases, in fact, quantized models perform better (as shown in red). On average across all model types and perturbations, the full precision, per-tensor dynamic, per-tensor static, and per-column static quantized models exhibit pass@1 drops of 13.27%, 15.92%, 12.91%, and 13.33%, respectively, on the MBPP and HumanEval datasets. Models quantized with static per-column quantization have slightly better robustness overall compared to those quantized with dynamic/static per-tensor quantization.
We further compute the per-sample difference in pass@1 results between a quantized and the corresponding full-precision model using the Wilcoxon-Mann-Whitney test [49]; this also confirms that the difference between the two models is statistically insignificant.
**Result 3:**_Quantization does not have any negative impact on a model's robustness: a quantized model reacts to perturbed inputs very similarly to the corresponding full-precision model._
### _Accuracy for Code Summarization Task (RQ4)_
_Motivation._ Here we check whether the quantization techniques studied so far are also applicable to other code-related tasks. In particular, we chose code summarization, as it reverses the modality studied so far (generating NL from code). _Experimental Setup._ Here, we use the _finetuned_ PLBART and CodeT5 models released by the authors for the code summarization task (in Python). Since CodeGen is not designed to generate summaries given a code snippet, we do not use it in this evaluation. In our early experiments,
\begin{table}
\begin{tabular}{l|l|c} \hline \hline
**pass@** & **1** & **5** \\ \hline Full-precision & 29.39\% & 39.02\% \\ Dynamic Quantization & 27.68\% & 39.63\% \\ Static (per column) Quantization & 26.40\% & 34.78\% \\ \hline \hline \end{tabular}
\end{table} TABLE VI: **Execution accuracy (pass@1 and pass@5) of Codegen-16B on HumanEval.**
\begin{table}
\begin{tabular}{l|l|c c|c c c c} \hline \hline & \multicolumn{3}{c|}{**HumanEval**} & \multicolumn{3}{c}{**MBPP**} \\ & \multicolumn{1}{c}{**Ch**} & \multicolumn{1}{c}{**W**} & \multicolumn{1}{c}{**S**} & \multicolumn{1}{c}{**Ch**} & \multicolumn{1}{c}{**W**} & \multicolumn{1}{c}{**S**} \\ \hline \multicolumn{8}{c}{**Encoder**} \\ \hline \multirow{4}{*}{**1.3B**} & FP* & **0.00** & 18.18 & -9.09 & 30.00 & 35.00 & 8.33 \\ & D (T) & 11.11 & 11.11 & 11.11 & **10.81** & 24.32 & 13.51 \\ & S (C) & **0.00** & 18.18 & 0.00 & 40.00 & 30.00 & **7.50** \\ & S (T) & 10.00 & **10.00** & **-10.00** & 31.58 & **23.68** & 15.79 \\ \hline \multirow{4}{*}{**6.7B**} & FP & **-7.69** & 30.77 & 7.69 & 24.68 & 25.97 & 10.39 \\ & D (T) & 7.69 & 7.69 & 7.69 & 18.42 & 26.32 & 15.79 \\ & S (C) & 0.00 & **7.14** & 14.29 & **9.59** & **19.18** & **-4.11** \\ & S (T) & -7.14 & 14.29 & **-7.14** & 25.97 & 24.68 & 7.79 \\ \hline \multicolumn{8}{c}{**Codegen**} \\ \hline \multirow{4}{*}{**350M**} & FP & **10.53** & **10.53** & 15.79 & 13.56 & 19.21 & 6.78 \\ & D (T) & 15.79 & 15.79 & **5.26** & 17.72 & 13.92 & 7.59 \\ & S (C) & 22.73 & 18.18 & 13.64 & 14.91 & **12.42** & **3.11** \\ & S (T) & 33.33 & 23.81 & 14.29 & **13.16** & 14.47 & 5.26 \\ \hline \multirow{4}{*}{**2B**} & FP & **12.82** & **15.38** & 20.51 & 7.99 & **9.27** & 6.39 \\ & D (T) & 29.73 & 32.43 & 27.03 & 6.79 & 11.79 & **-1.07** \\ & S (C) & 13.16 & 23.68 & 18.42 & 10.03 & 15.53 & 7.12 \\ & S (T) & 15.75 & 27.27 & **12.12** & **7.72** & 9.56 & 2.21 \\ \hline \multirow{4}{*}{**6B**} & FP & 17.78 & 24.44 & 28.89 & **-0.85** & **4.55** & 0.28 \\ & D (T) & 30.00 & 40.00 & 34.00 & 6.34 & 12.97 & 6.05 \\ \cline{1-1} & S (C) & 20.93 & **20.93** & **16.28** & 6.96 & 8.36 & **-0.84** \\ \cline{1-1} & S (T) & **15.56** & 28.89 & 20.00 & 6.10 & 9.01 & 2.62 \\ \hline \hline \end{tabular}
* FP=Full-precision; D (T)=Dynamic (per-tensor); S(C)=Static (per-column); S(T)=Static (per-tensor)
\end{table} TABLE VII: **The percentage of the pass@1 _drop_ on the datasets with character-level (Ch), word-level (W), and sentence-level (S) perturbations of prompt compared to the unperturbed ones. We highlight the least drops for each setting.**
we evaluated InCoder full-precision models on this task based on the authors' released code, but obtained very poor performance; therefore, we do not pursue this model.
**Observations:** The results are presented in Table IX. We observe almost no drop in BLEU score for CodeT5 models with both Dynamic and Static quantization. In comparison, while PLBART with Dynamic quantization matches the full-precision performance, we observe a performance drop with Static quantization. To understand this degradation, we perform a qualitative comparison between the two settings; a few examples are provided in Table X. Overall, we observe that PLBART with static quantization generates shorter summaries, which lowers the BLEU score. However, the generated summaries are semantically comparable to the full-precision versions.
## VI Threats to Validity
This paper presents an in-depth empirical evaluation of a specific type of model compression technique on the code generation task. The main threats to the validity of our conclusions are external, relating to the generalization of our findings to both other types of compression techniques and other ML-powered code-related tasks.
First, as discussed in Section II, quantization-based compression techniques are the most suitable for our use case, as a typical developer may not have the resources to retrain the model from scratch as required by other compression methods.
Second, we mostly focus on generative tasks, and thus study code generation (NL-to-code) in detail. To evaluate the generalizability of our findings, we also investigate the effect of quantization on code summarization (RQ4).
Finally, we have other threats, including studying each of these tasks on only two models and two datasets. However, these are state-of-the-art open-source models and datasets widely studied in the literature, and we further studied different sizes of these models. We also evaluated on perturbed data (RQ3), which gives us confidence in the stability of our results. Besides, all the other quantization-related parameters used in the experiments are empirically evaluated. We also report the most stringent measurement (pass@1) to reduce any measurement bias.
## VII Conclusion
Code generation models based on large PLMs have set the new state of the art in generating functionally correct code from natural language descriptions. However, the sizes of these models can be prohibitively large (e.g., billions of parameters), which poses problems for green AI and responsible AI. Therefore, developing approaches that improve model efficiency while preserving their powerful generation capability is of great practical importance. In this paper, we address this problem by developing a quantization-based recipe for such models. We demonstrate the efficacy of the proposed methods in terms of greenness, accuracy, and robustness. As future work, we would like to investigate the efficacy of quantization for more code intelligence applications, such as code search, code editing, and code translation.
\begin{table}
\begin{tabular}{p{14.2pt}|p{142.3pt}|p{142.3pt}} \hline \hline
**Full-precision** & **Static (per-tensor)** \\ \hline Copy an entire table to a temporary file & \begin{tabular}{p{142.3pt}} dump the contents of a table to a temporary file \\ \end{tabular} \\ \hline Recursively make all intermediate directories and subdirectories. & \begin{tabular}{p{142.3pt}} helper function to make intermediate directories and subdirectories. \\ \end{tabular} \\ \hline Downloads a video by its id and title. & download by vid \\ \hline Generate RST API documentation for a module. &
\begin{tabular}{p{142.3pt}} Generate the documentation for the given module. \\ \end{tabular} \\ \hline \hline \end{tabular}
\end{table} TABLE X: **Qualitative comparisons of summaries by PLBART in full-precision and static quantization.**
\begin{table}
\begin{tabular}{p{142.3pt}|p{142.3pt}|p{142.3pt}|p{142.3pt}} \hline \hline \multicolumn{1}{c|}{**Examples**} & \multicolumn{3}{c}{**Passing All Tests**} \\ \cline{3-5} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{**Full-precision**} & **Dynamic (per-tensor)** \\ \hline \multirow{3}{*}{**S1**} & Unperturbed & Write a python function to determine whether all the numbers are different from each other are not. & ✓ & ✓ \\ & Character-level & Write a python function to determine whether all the numbers are unlike from each other are not. & ✓ & ✓ \\ & Word-level & Write a python function to determine whether all the numbers are unlike from each other are not. & ✓ & ✓ \\ & Sentence-level & Write a Python function to see if all numbers differ from each other. & ✓ & ✓ \\ \hline \multirow{3}{*}{**S2**} & Unperturbed & Write a function to extract the index minimum value record from the given tuples. & ✓ & ✓ \\ & Character-level & Write a function to extract the index minimum value record from the given tuples. & ✓ & ✓ \\ & Word-level & Write a function to extract the index minimal value record from the given tuples. & ✓ & ✓ \\ & Sentence-level & Write a function to extract the index minimum dataset from the given tuples. & ✓ & ✓ \\ \hline \multirow{3}{*}{**S3**} & Unperturbed & Write a function to print check if the triangle is equilateral or not. & ✓ & ✓ \\ & Character-level & Write a function to print check if the triangle is equilateral or not. & ✓ & ✓ \\ & Word-level & Write a function to print check if the triangle equal equalier or not. & ✗ & ✓ \\ & Sentence-level & Write a function to check whether the triangle is equilateral or not. & ✗ & ✓ \\ \hline \hline \end{tabular}
\end{table} TABLE VIII: **Example impact of word-level, character-level, sentence-level perturbations on full-precision and per-tensor dynamic quantized models.** The perturbed region is underlined.
\begin{table}
\end{table} TABLE IX: **Evaluation results (smoothed BLEU score) for code summarization.** |
2305.01290 | A Construction of Arbitrarily Large Type-II $Z$ Complementary Code Set | For a type-I $(K,M,Z,N)$-ZCCS, it follows $K \leq M \left\lfloor
\frac{N}{Z}\right\rfloor$. In this paper, we propose a construction of type-II
$(p^{k+r},p^k,p^{n+r}-p^r+1,p^{n+r})$-$Z$ complementary code set (ZCCS) using
an extended Boolean function, its properties of Hamiltonian paths and the
concept of isolated vertices, where $p\ge 2$. However, the proposed type-II
ZCCS provides $K = M(N-Z+1)$ codes, whereas for type-I $(K,M,N,Z)$-ZCCS, it is
$K \leq M \left\lfloor \frac{N}{Z}\right\rfloor$. Therefore, the proposed
type-II ZCCS provides a larger number of codes compared to type-I ZCCS.
Further, as a special case of the proposed construction, $(p^k,p^k,p^n)$-CCC
can be generated, for any integral value of $p\ge2$ and $k\le n$. | Rajen Kumar, Prashant Kumar Srivastava, Sudhan Majhi | 2023-05-02T09:46:42Z | http://arxiv.org/abs/2305.01290v3 | # A Direct Construction of Type-II \(Z\) Complementary Code Set with Arbitrarily Large Codes
###### Abstract
In this paper, we propose a construction of type-II \(\mathbf{Z}\)-complementary code set (ZCCS), using a multi-variable function with Hamiltonian paths and disjoint vertices. For a type-I \(\mathbf{(K,M,Z,N)}\)-ZCCS, \(\mathbf{K}\) is bounded by \(\mathbf{K\leq M\left|\frac{N}{Z}\right|}\). However, the proposed type-II ZCCS provides \(\mathbf{K=M(N-Z+1)}\). The proposed type-II ZCCS provides a larger number of codes compared to that of type-I ZCCS. Further, the proposed construction can generate the Kernel of complete complementary code (CCC) as \(\mathbf{(p,p,p)}\)-CCC, for any integral value of \(\mathbf{p\geq 2}\).
**Keywords:**\(\mathbf{Z}\)-complementary code set (ZCCS), complete complementary code (CCC), generalized Boolean function (GBF), aperiodic correlation, zero correlation zone
*Corresponding author(s). E-mail(s): [email protected];
Contributing authors: [email protected]; [email protected];
## 1 Introduction
A collection of matrices is described as a complete complementary code (CCC) set if the aperiodic auto-correlation sum (AACS) of each matrix/code is zero for all non-zero time shifts and the aperiodic cross-correlation sum (ACCS) between any two
matrices is zero for all time shifts [1]. CCC has played a significant role in asynchronous multi-carrier code division multiple access (MC-CDMA) for interference-free communication [2]. Rathinakumar and Chaturvedi proposed \((2^{k},2^{k},2^{k+m})\)-CCC using a generalized Boolean function (GBF), associating it with a graph containing a Hamiltonian path of \(m\) vertices after deleting \(k\) vertices from the graph [3]. To support more users in the MC-CDMA system, a zero correlation zone (ZCZ) has been introduced into the codes, forming the \(Z\)-complementary code set (ZCCS) [4]. ZCCSs are capable of supporting interference-free MC-CDMA in a quasi-synchronous environment without the need for power control [2]. Depending on whether the ZCZ lies near the zero time shift or the end time shift, the set is referred to as a type-I ZCCS or a type-II ZCCS, respectively. A \((K,M,Z,N)\)-ZCCS is a collection of \(K\) matrices; each matrix has \(M\) constituent sequences, each of length \(N\), and the ZCZ width is \(Z\). For a type-I \((K,M,Z,N)\)-ZCCS, \(K\) is bounded by \(K\leq M\left\lfloor\frac{N}{Z}\right\rfloor\)[4]. In other words, if the ZCZ width is \(1/n\) of the complete time zone for some natural number \(n\), the number of users can be increased to \(n\) times the number of constituent sequences. In [5, 6, 7, 8, 9, 10], constructions of type-I ZCCS have been proposed following the above bound. Recently, type-II ZCCS has been proposed to improve the number of codes in the MC-CDMA system in [11]. However, the construction in [11] is based on CCC and is indirect. To the best of the authors' knowledge, there is no direct construction of type-II ZCCS in the existing literature.
In this paper, we propose a direct construction of type-II \((p^{r+k},p^{k},p^{n+r}-p^{r}+1,p^{n+r})\)-ZCCS, where \(p\geq 2\), \(0<k<n\) and \(r\geq 0\); i.e., we obtain \(p^{r}\) times more codes than the flock size by sacrificing \(p^{r}-1\) shifts of ZCZ width. Therefore, a large number of codes can be generated compared to type-I ZCCS with the same number of constituent sequences in a code. To construct the type-II ZCCS, a multi-variable function (MVF) is used, which is associated with a graph consisting of a collection of \(k\) Hamiltonian paths and \(r\) isolated vertices. The proposed method can generate the Kernel of CCC as \((p,p,p)\)-CCC for any integral value of \(p\geq 2\), whereas in [12] the Kernel can be obtained only for prime values of \(p\). As the proposed type-II ZCCS has larger ZCZ width and more codes compared to type-I ZCCS, it is suitable for the asynchronous environment.
The rest of this paper is organized as follows. The notations, fundamental definitions, and the relationship between a graph and an MVF are given in Section 2. In Section 3, we describe the construction of type-II ZCCS and provide an example of the proposed construction. Section 4 concludes the paper.
## 2 Preliminaries
In this section, we mention a few important definitions and symbols that will be used throughout the paper.
**Definition 1**.: _Let \(\mathbf{a}=(a_{0},a_{1},\ldots,a_{N-1})\) and \(\mathbf{a}^{\prime}=(a^{\prime}_{0},a^{\prime}_{1}\),...,\(a^{\prime}_{N-1})\) be two complex-valued sequences of length \(N\). The ACCS between \(\mathbf{a}\) and \(\mathbf{a}^{\prime}\) is defined as_
\[\mathcal{C}_{\mathbf{a},\mathbf{a}^{\prime}}(\tau)=\left\{\begin{array}{ll} \sum_{i=0}^{N-1-\tau}a_{i+\tau}a^{{}^{\prime}*}_{i},&0\leq\tau<N,\\ \sum_{i=0}^{N+\tau-1}a_{i}a^{{}^{\prime}*}_{i-\tau},&-N<\tau<0,\\ 0,&\mbox{otherwise,}\end{array}\right.\]
_where \(a_{i}^{\prime*}\) is complex conjugate of \(a_{i}^{\prime}\). When \(\mathbf{a}=\mathbf{a}^{\prime}\), \(\mathcal{C}_{\mathbf{a},\mathbf{a}^{\prime}}(\tau)\) is called AACS of \(\mathbf{a}\) and is denoted as \(\mathcal{C}_{\mathbf{a}}(\tau)\)._
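As a quick computational companion to Definition 1, the following Python sketch (ours) evaluates the ACCS and checks the defining cancellation property on a classic Golay complementary pair:

```python
import numpy as np

def accs(a, b, tau):
    """Aperiodic cross-correlation C_{a,b}(tau) per Definition 1."""
    a, b = np.asarray(a, dtype=complex), np.asarray(b, dtype=complex)
    N = len(a)
    if 0 <= tau < N:
        return np.sum(a[tau:] * np.conj(b[: N - tau]))
    if -N < tau < 0:
        return np.sum(a[: N + tau] * np.conj(b[-tau:]))
    return 0

a = np.array([1, 1, 1, -1])   # one half of a Golay complementary pair
b = np.array([1, 1, -1, 1])   # its complementary mate
# The AACSs of a complementary pair cancel at every nonzero shift:
print([accs(a, a, t) + accs(b, b, t) for t in range(-3, 4)])
```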
Let \(C_{i}\) be a code defined as
\[C_{i}=\left[\begin{array}{cccc}\mathbf{a}_{0}^{i}&\mathbf{a}_{1}^{i}&\ldots &\mathbf{a}_{M-1}^{i}\end{array}\right]_{M\times N}^{T}.\]
**Definition 2**.: _Let \(C_{s}\) and \(C_{t}\) be any two codes, then ACCS between \(C_{s}\) and \(C_{t}\) is defined by_
\[\mathcal{C}\left(C_{s},C_{t}\right)(\tau)=\sum_{\nu=0}^{M-1}\mathcal{C}\left( \mathbf{a}_{\nu}^{s},\mathbf{a}_{\nu}^{t}\right)(\tau). \tag{1}\]
_When \(C_{s}=C_{t}\), \(\mathcal{C}(C_{s},C_{t})(\tau)\) is called AACS of \(C_{s}\) and is denoted as \(\mathcal{C}(C_{s})(\tau)\)._
**Definition 3**.: _Let \(\mathbf{C}=\{C_{0},C_{1},\ldots,C_{K-1}\}\) be a set of \(K\) matrices (codes), each of having order \(M\times N\). \(\mathbf{C}\) is called a type-I \((K,M,Z,N)\)-ZCCS if it satisfies the following properties_
\[\mathcal{C}\left(C_{k_{1}},C_{k_{2}}\right)(\tau)=\left\{\begin{array}{ll} NM,&\tau=0,k_{1}=k_{2},\\ 0,&\tau=0,k_{1}\neq k_{2},\\ 0,&1\leq|\tau|<Z.\end{array}\right. \tag{2}\]
**Definition 4**.: _Let \(\mathbf{C}=\{C_{0},C_{1},\ldots,C_{K-1}\}\) be a set of \(K\) matrices (codes), each of having order \(M\times N\). \(\mathbf{C}\) is called a type-II \((K,M,Z,N)\)-ZCCS if it satisfies the following properties_
\[\mathcal{C}\left(C_{k_{1}},C_{k_{2}}\right)(\tau)=\left\{\begin{array}{ll} NM,&\tau=0,k_{1}=k_{2},\\ 0,&\tau=0,k_{1}\neq k_{2},\\ 0,&N-Z<|\tau|<N.\end{array}\right. \tag{3}\]
When \(K=M\) and \(Z=N\), a type-II ZCCS is called a \((K,K,N)\)-CCC.
### Multi-variable Functions and Corresponding Sequences
Let \(\mathbf{x}=(x_{0},x_{1},\ldots,x_{n-1})\) be such that \(\mathbf{x}\in\mathbb{Z}_{p}^{n}\), where \(\mathbb{Z}_{p}=\{0,1,\ldots,p-1\}\), for \(p\geq 2\). We say \(f\) is an MVF if \(f:\mathbb{Z}_{p}^{n}\rightarrow\mathbb{Z}_{q}\). We define \(\chi(f)=\left(\zeta_{q}^{f(0)},\zeta_{q}^{f(1)},\ldots,\zeta_{q}^{f(p^{n}-1)}\right)\), where \(\zeta_{q}=\exp(2\pi\iota/q)\) and \(f(I)=f(I_{0},I_{1},\ldots,I_{n-1})\)[13], such that
\[I=\sum_{i=0}^{n-1}p^{n-i-1}I_{i}. \tag{4}\]
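For concreteness, the following sketch (ours) realizes \(\chi(f)\) for a toy MVF, using the base-\(p\) indexing of (4); the specific function in the demo is hypothetical:

```python
import numpy as np

def digits(I, p, n):
    """Base-p digits (I_0, ..., I_{n-1}) of I, most significant first, per (4)."""
    return [(I // p ** (n - i - 1)) % p for i in range(n)]

def chi(f, p, q, n):
    """Complex sequence chi(f) = (zeta_q^{f(0)}, ..., zeta_q^{f(p^n - 1)})."""
    zeta = np.exp(2j * np.pi / q)
    return np.array([zeta ** f(*digits(I, p, n)) for I in range(p ** n)])

# Hypothetical MVF f(x0, x1) = x0*x1 + x1 over Z_2, mapped into Z_2:
seq = chi(lambda x0, x1: (x0 * x1 + x1) % 2, p=2, q=2, n=2)
print(seq.real.astype(int))   # a +/-1 sequence of length 4
```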
### Relation Between a Graph and a Multi-variable Function
Let \(Q:\mathbb{Z}_{p}^{n}\rightarrow\mathbb{Z}_{q}\) be an MVF defined by
\[Q(x_{0},x_{1},\ldots,x_{n-1})=\sum_{0\leq i<j<n}h_{ij}x_{i}x_{j}, \tag{5}\]
where \(h_{ij}\in\mathbb{Z}_{q}\), so that \(Q\) is a quadratic form in the \(n\) variables \(x_{0},x_{1},\ldots,x_{n-1}\) over \(\mathbb{Z}_{p}\). Let \(\mathcal{G}(Q)\) be the labelled graph on \(n\) vertices associated with \(Q\) as given in (5). Label the vertices
by \(x_{0},x_{1},\ldots,x_{n-1}\) and, if \(h_{ij}\neq 0\), join vertices \(x_{i}\) and \(x_{j}\) by an edge with label \(h_{ij}\). A graph \(\mathcal{G}(Q)\) of the type defined above is a path if \(n\geq 1\) and \(\mathcal{G}(Q)\) has exactly \(n-1\) edges, all labelled by \(\frac{q}{p}\), forming a Hamiltonian path in \(\mathcal{G}(Q)\). For \(n\geq 1\), a path on \(n\) vertices corresponds to a quadratic form of the type
\[Q(x_{0},x_{1},\ldots,x_{n-1})=\frac{q}{p}\sum_{i=1}^{n-1}x_{\pi(i-1)}x_{\pi(i)}, \tag{6}\]
where \(\pi\) is a permutation of \(\{0,1,\ldots,n-1\}\); this \(\pi\) depends on the Hamiltonian path. Now, we define an MVF over the Hamiltonian path \(\mathcal{G}(Q)\) by \(H:\mathbb{Z}_{p}^{n}\rightarrow\mathbb{Z}_{q}\) as
\[H(x_{0},x_{1},\ldots,x_{n-1})=Q+\sum_{i=0}^{n-1}\gamma_{i}x_{i}+\theta, \tag{7}\]
where \(\gamma_{i},\theta\in\mathbb{Z}_{p}\).
**Definition 5.**_Let \(f_{1}(x_{1,0},x_{1,1},\ldots,x_{1,n_{1}-1}):\mathbb{Z}_{p}^{n_{1}}\rightarrow\mathbb{Z}_{q}\) and \(f_{2}(x_{2,0},x_{2,1},\ldots,x_{2,n_{2}-1}):\mathbb{Z}_{p}^{n_{2}}\rightarrow\mathbb{Z}_{q}\) be two MVFs. We define a new MVF \(f(x_{0},x_{1},\ldots,x_{n_{1}+n_{2}-1}):\mathbb{Z}_{p}^{n_{1}+n_{2}}\rightarrow\mathbb{Z}_{q}\) such that \((x_{0},x_{1},\ldots,x_{n_{1}+n_{2}-1})=(x_{1,0},x_{1,1},\ldots,x_{1,n_{1}-1},x_{2,0},x_{2,1},\ldots,x_{2,n_{2}-1})\), represented by \(f=f_{1}\oplus f_{2}\)._
Let \(\mathcal{G}(Q_{0}),\mathcal{G}(Q_{1}),\ldots,\mathcal{G}(Q_{k-1})\) be \(k\) Hamiltonian paths with \(n_{0},n_{1},\ldots,n_{k-1}\) vertices, respectively, and let \(x_{\pi_{i}(0)}\) and \(x_{\pi_{i}(n_{i}-1)}\) be the end vertices of \(\mathcal{G}(Q_{i})\). We define an MVF of \(n(=n_{0}+n_{1}+\cdots+n_{k-1})\) variables as follows
\[f_{\beta}^{\alpha}=\sum_{i=0}^{k-1}\left(H(\mathbf{x}_{i})+\frac{q}{p}\left( \alpha_{i}x_{\pi_{i}(0)}+\beta_{i}x_{\pi_{i}(n_{i}-1)}\right)\right), \tag{8}\]
where \(\mathbf{x}_{i}\) denotes the vertices of \(\mathcal{G}(Q_{i})\), \(H\) is as defined in (7), \(\alpha=\sum_{i=0}^{k-1}p^{k-i-1}\alpha_{i}\) and \(\beta=\sum_{i=0}^{k-1}p^{k-i-1}\beta_{i}\); i.e., \(\boldsymbol{\alpha}=(\alpha_{0},\alpha_{1},\ldots,\alpha_{k-1})\) and \(\boldsymbol{\beta}=(\beta_{0},\beta_{1},\ldots,\beta_{k-1})\) are the vector representations of \(\alpha\) and \(\beta\) in base \(p\), respectively.
### PMEPR
Let \(\mathbf{a}=(a_{0},a_{1},\ldots,a_{M-1})\) be a complex-valued sequence of length \(M\). For a multi-carrier system with \(M\) subcarriers, the time-domain multi-carrier signal can be written as
\[s_{\mathbf{a}}(t)=\sum_{j=0}^{M-1}a_{j}e^{2\pi\sqrt{-1}jt}, \tag{9}\]
where \(0\leq t<1\), the carrier spacing has been normalized to \(1\), and \(\mathbf{a}\) is spread over the \(M\) subcarriers. The instantaneous envelope power of the signal is the real-valued function \(P_{\mathbf{a}}(t)=|s_{\mathbf{a}}(t)|^{2}\). A ZCCS-based MC-CDMA system is given in [14]. From [15], the
PMEPR is bounded by the AACS value of \(\mathbf{a}\) as follows,
\[PMEPR(\mathbf{a})\leq\sum_{\tau=-M+1}^{M-1}\mathcal{C}(\mathbf{a})(\tau). \tag{10}\]
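As a numerical illustration (our sketch, not from the paper), the PMEPR of the multi-carrier signal (9) can be evaluated on a time grid; for a sequence belonging to a Golay complementary pair, the AACS cancellation caps the PMEPR at the set size \(2\):

```python
import numpy as np

def pmepr(a, grid=4096):
    """PMEPR of the multicarrier signal s_a(t) in (9), evaluated on a time grid."""
    a = np.asarray(a, dtype=complex)
    M = len(a)
    t = np.linspace(0.0, 1.0, grid, endpoint=False)
    s = np.exp(2j * np.pi * np.outer(t, np.arange(M))) @ a   # s_a(t) for all grid points
    return np.max(np.abs(s) ** 2) / np.sum(np.abs(a) ** 2)  # peak over mean envelope power

a = np.array([1, 1, 1, -1])  # one half of a Golay complementary pair
print(f"PMEPR = {pmepr(a):.3f} <= 2")  # membership in a 2-sequence GCS caps PMEPR at 2
```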
## 3 Proposed Construction
In this section, we first propose two lemmas, which are extensions of two well-known results; these lemmas support the proofs of the proposed theorems. **Lemma 1** extends the sum of roots of unity, and **Lemma 2** extends the orthogonality of Walsh functions from primes to any integer greater than two. **Lemma 1**.: _Let \(q(\geq 2)\in\mathbb{N}\) and let \(k\in\mathbb{Z}\) be such that \(k\neq rq\) for any \(r\in\mathbb{Z}\). Let \(\zeta_{q}=\exp(2\pi\iota/q)\); then_
\[\sum_{i=0}^{q-1}\zeta_{q}^{k\cdot i}=0. \tag{11}\]
Proof.: Let \(k_{1}=k\ (\mathrm{mod}\ q)\); since \(k\neq rq\), it follows that \(k_{1}\neq 0\). Now \(k_{1}\in\mathbb{Z}_{q}\). To complete the proof, we consider two cases.
_Case 1._\(k_{1}\) is relatively prime to \(q\). Then
\[\begin{split}\sum_{i=0}^{q-1}\zeta_{q}^{k\cdot i}& =\sum_{i=0}^{q-1}\zeta_{q}^{k_{1}\cdot i}\\ &=\sum_{i=0}^{q-1}\zeta_{q}^{i}=0,\ \ \text{sum of $q$-th root of unity.}\end{split} \tag{12}\]
_Case 2._ When \(k_{1}\) is not relatively prime to \(q\), let \(d\in\mathbb{Z}_{q}\) be such that \(k_{1}=k_{2}d\) and \(q=q_{1}d\), with \(k_{2}\) relatively prime to \(q_{1}\). Now,
\[\begin{split}\sum_{i=0}^{q-1}\zeta_{q}^{k\cdot i}&=\sum_{i=0}^{q-1}\zeta_{q}^{k_{1}\cdot i}\\ &=\sum_{i=0}^{q-1}\zeta_{q}^{k_{2}d\cdot i}\\ &=\sum_{i=0}^{q-1}\zeta_{q_{1}}^{k_{2}\cdot i}\\ &=d\sum_{i=0}^{q_{1}-1}\zeta_{q_{1}}^{k_{2}\cdot i}\\ &=d\sum_{i=0}^{q_{1}-1}\zeta_{q_{1}}^{i}=0,\ \ \text{sum of $q_{1}$-th roots of unity.}\end{split} \tag{13}\]
Combining the above two cases completes the proof.
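A three-line numerical check of **Lemma 1** (our illustration) for a composite modulus:

```python
import numpy as np

q = 6
zeta = np.exp(2j * np.pi / q)
for k in (1, 2, 3, 4, 9):   # none of these is a multiple of q = 6
    print(k, round(abs(sum(zeta ** (k * i) for i in range(q))), 12))  # all print 0.0
```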
**Lemma 2**.: _Let \(\mathcal{G}(\Omega)\) be a graph of \(r\) isolated vertices; then the MVF \(\Omega^{\delta}:\mathbb{Z}_{p}^{r}\rightarrow\mathbb{Z}_{q}\) is defined by_
\[\Omega^{\delta}(\mathbf{x})=\frac{q}{p}\boldsymbol{\delta}\cdot\mathbf{x}, \tag{14}\]
_where \(\boldsymbol{\delta}\) is the vector representation of \(0\leq\delta<p^{r}\) with base \(p\), and \(q\) is an integral multiple of \(p\). \(\Omega^{\delta}(\mathbf{x})\) is known as a Walsh function [16]. Therefore, the dot product between the sequences \(\chi(\Omega^{\delta^{1}}(\mathbf{x}))\) and \(\chi(\Omega^{\delta^{2}}(\mathbf{x}))\) is zero for \(\delta^{1}\neq\delta^{2}\)._
Proof.: Let \(\delta^{1}\neq\delta^{2}\); therefore, \(\boldsymbol{\delta}^{1}\) differs from \(\boldsymbol{\delta}^{2}\) in at least one element. To complete the proof, we consider two cases as follows:
_Case 1._ Let \(\boldsymbol{\delta}^{1}=(\delta_{0},\delta_{1},\ldots,\delta_{u}^{1},\ldots, \delta_{r-1})\) and \(\boldsymbol{\delta}^{2}=(\delta_{0},\delta_{1},\ldots,\delta_{u}^{2},\ldots, \delta_{r-1})\), differing only at position \(u\). Then
\[\Omega^{\delta^{1}}(\mathbf{x})-\Omega^{\delta^{2}}(\mathbf{x})=\frac{q}{p}( \delta_{u}^{1}-\delta_{u}^{2})x_{u}. \tag{15}\]
For an integer \(I\), let its \(p\)-ary vector representation be \(\mathbf{I}=(I_{0},I_{1},\ldots,I_{r-1})\). Then
\[\Omega^{\delta^{1}}(\mathbf{I})-\Omega^{\delta^{2}}(\mathbf{I})=\frac{q}{p}( \delta_{u}^{1}-\delta_{u}^{2})I_{u}. \tag{16}\]
Now, we use this in dot product of \(\chi(\Omega^{\delta^{1}}(\mathbf{x}))\) and \(\chi(\Omega^{\delta^{2}}(\mathbf{x}))\).
\[\begin{split}\chi(\Omega^{\delta^{1}}(\mathbf{x}))\cdot\chi( \Omega^{\delta^{2}}(\mathbf{x}))=&\sum_{I=0}^{p^{r}-1}\zeta_{q}^{ \frac{q}{p}(\delta_{u}^{1}-\delta_{u}^{2})I_{u}}\\ =& p^{r-1}\sum_{I_{u}=0}^{p-1}\zeta_{q}^{\frac{q}{p} (\delta_{u}^{1}-\delta_{u}^{2})I_{u}}\\ =& p^{r-1}\sum_{I_{u}=0}^{p-1}\zeta_{p}^{(\delta_{u} ^{1}-\delta_{u}^{2})I_{u}}\\ =& 0,\ \ \text{from }\mathbf{Lemma 1}.\end{split} \tag{17}\]
_Case 2._ Let \(\boldsymbol{\delta}^{1}\) differ from \(\boldsymbol{\delta}^{2}\) in more than one place. Let \(0\leq j_{0}<j_{1}<\ldots<j_{k-1}\leq r-1\), for \(1<k\leq r\), be such that \(\boldsymbol{\delta}^{1}\) differs from \(\boldsymbol{\delta}^{2}\) exactly at positions \(j_{0},j_{1},\ldots,j_{k-1}\). Now
\[\Omega^{\delta^{1}}(\mathbf{x})-\Omega^{\delta^{2}}(\mathbf{x})=\frac{q}{p} \sum_{u=0}^{k-1}(\delta_{j_{u}}^{1}-\delta_{j_{u}}^{2})x_{j_{u}}. \tag{18}\]
Now, we use this in the dot product of \(\chi(\Omega^{\delta^{1}}(\mathbf{x}))\) and \(\chi(\Omega^{\delta^{2}}(\mathbf{x}))\):
\[\begin{split}\chi(\Omega^{\delta^{1}}(\mathbf{x}))\cdot\chi(\Omega^{\delta^{2}}(\mathbf{x}))&=\sum_{I=0}^{p^{r}-1}\zeta_{q}^{\sum_{u=0}^{k-1}\frac{q}{p}(\delta_{j_{u}}^{1}-\delta_{j_{u}}^{2})I_{j_{u}}}\\ &=\sum_{I=0}^{p^{r}-1}\prod_{u=0}^{k-1}\zeta_{p}^{(\delta_{j_{u}}^{1}-\delta_{j_{u}}^{2})I_{j_{u}}}\\ &=p^{r-k}\prod_{u=0}^{k-1}\sum_{I_{j_{u}}=0}^{p-1}\zeta_{p}^{(\delta_{j_{u}}^{1}-\delta_{j_{u}}^{2})I_{j_{u}}}\\ &=0,\ \ \text{from {\bf Lemma 1}}.\end{split} \tag{19}\]
From the above two cases, the proof is complete.
To construct the type-II ZCCS, we use two collections of graphs: one consisting of Hamiltonian paths and the other of isolated vertices.
**Theorem 1**.: _Let \(p\geq 2\), let \(\mathcal{G}(\Omega)\) be a graph of \(r\) isolated vertices, and let \(\mathcal{G}(Q)\) be a Hamiltonian path with \(n\) vertices whose end vertices are \(x_{\pi(0)}\) and \(x_{\pi(n-1)}\); since there is a single Hamiltonian path, we use \(\pi\) instead of \(\pi_{0}\). Let \(f_{\beta}^{\alpha}\) and \(\Omega^{\delta}\) be defined as in (8) and (14), respectively. Now, we define an MVF as_
\[F_{\beta}^{\alpha p^{r}+\delta}=f_{\beta}^{\alpha}\oplus\Omega^{\delta}, \tag{20}\]
_for \(0\leq\alpha,\beta<p\) and \(0\leq\delta<p^{r}\). Now, we define the ordered set_
\[C_{\alpha p^{r}+\delta}=\left[\chi(F_{0}^{\alpha p^{r}+\delta})\ \chi(F_{1}^{\alpha p^{r}+\delta})\ \cdots\ \chi(F_{p-1}^{\alpha p^{r}+\delta})\right]^{T}, \tag{21}\]
_and_
\[\mathbf{C}=\left\{C_{0},C_{1},\ldots,C_{p^{1+r}-1}\right\}. \tag{22}\]
_Then \(\mathbf{C}\) is a type-II \(\left(p^{1+r},p,p^{n+r}-p^{r}+1,p^{n+r}\right)\)-ZCCS._
Proof.: Since \(0\leq\alpha,\beta\leq p-1\), we have \(\boldsymbol{\alpha}=(\alpha_{0})\) and \(\boldsymbol{\beta}=(\beta_{0})\), i.e., \(\alpha=\alpha_{0}\) and \(\beta=\beta_{0}\). So we use \(\alpha\), \(\beta\), and \(\pi\) instead of \(\alpha_{0}\), \(\beta_{0}\), and \(\pi_{0}\) in the proof. Now, let \(f_{\beta}^{\alpha}:\mathbb{Z}_{p}^{n}\rightarrow\mathbb{Z}_{q}\) be the MVF given by
\[f_{\beta}^{\alpha}(x_{0},x_{1},\ldots,x_{n-1})= Q(x_{0},x_{1},\ldots,x_{n-1})+\sum_{i=0}^{n-1}\gamma_{i}x_{i} \tag{23}\] \[+\frac{q}{p}\left(\alpha x_{\pi(0)}+\beta x_{\pi(n-1)}\right). \tag{24}\]
\[F_{\beta}^{\alpha p^{r}+\delta}(x_{0},x_{1},\ldots,x_{n-1},x_{n},x_{n+1}, \ldots,x_{n+r-1})=f_{\beta}^{\alpha}\oplus\Omega^{\delta} \tag{25}\] \[=f_{\beta}^{\alpha}(x_{0},x_{1},\ldots,x_{n-1})+\Omega^{\delta}(x _{n},x_{n+1},\ldots,x_{n+r-1}). \tag{26}\]
Consider two codes \(C_{s}\) and \(C_{t}\) such that
\[C_{s}=\left[\chi(F_{0}^{s})\ \chi(F_{1}^{s})\ \cdots\ \chi(F_{p-1}^{s})\right]^{T}, \tag{27}\]
and
\[C_{t}=\left[\chi(F_{0}^{t})\ \chi(F_{1}^{t})\ \cdots\ \chi(F_{p-1}^{t})\right]^{T}. \tag{28}\]
Consider \(I\) and \(J\) are two integers such that \(I=J+\tau\) for some integral value of \(\tau\). Let \(\mathbf{I}\) and \(\mathbf{J}\) be the vector representation of \(I\) and \(J\) with base \(p\), respectively.
\[\begin{split} F_{\beta}^{s}(\mathbf{I})-F_{\beta}^{t}(\mathbf{J})=& (H(I_{0},I_{1},\ldots,I_{n-1})-H(J_{0},J_{1},\ldots,J_{n-1}))\\ &+\Omega^{\delta^{s}}(I_{n},I_{n+1},\ldots,I_{n+r-1})-\Omega^{ \delta^{t}}(J_{n},J_{n+1},\ldots,J_{n+r-1})\\ &+\frac{q}{p}\beta(I_{\pi(n-1)}-J_{\pi(n-1)})+\frac{q}{p}\alpha( I_{\pi(0)}-J_{\pi(0)}),\end{split} \tag{29}\]
where \(\delta^{s}\) and \(\delta^{t}\) are the values of \(\delta\) in \(s=\alpha p^{r}+\delta^{s}\) and \(t=\alpha p^{r}+\delta^{t}\), respectively (when \(s=t\), \(\delta^{s}=\delta^{t}\) and \(\alpha\) is common to both terms in (29)). We denote \(H(I_{0},I_{1},\ldots,I_{n-1})-H(J_{0},J_{1},\ldots,J_{n-1})\) by \(H^{IJ}\), \(\Omega^{\delta^{s}}(I_{n},I_{n+1},\ldots,I_{n+r-1})-\Omega^{\delta^{t}}(J_{n},J_{n+1},\ldots,J_{n+r-1})\) by \(\Omega^{IJ}_{st}\), \((I_{\pi(n-1)}-J_{\pi(n-1)})\) by \(D^{IJ}_{n-1}\), and \((I_{\pi(0)}-J_{\pi(0)})\) by \(D^{IJ}_{0}\). Substituting these in (29), we have
\[F_{\beta}^{s}(\mathbf{I})-F_{\beta}^{t}(\mathbf{J})=H^{IJ}+\Omega^{IJ}_{st}+ \frac{q}{p}\alpha D^{IJ}_{0}+\frac{q}{p}\beta D^{IJ}_{n-1}. \tag{30}\]
When \(|\tau|\geq p^{r}\), at least one of the first \(n\) components differs between the base-\(p\) representations of \(\mathbf{I}\) and \(\mathbf{J}\). We consider two cases to complete our proof.
_Case 1._\(I_{\pi(n-1)}\neq J_{\pi(n-1)}\),
\[\zeta_{q}^{F_{\beta}^{s}(\mathbf{I})-F_{\beta}^{t}(\mathbf{J})}=\zeta_{q}^{H^{ IJ}+\Omega^{IJ}_{st}+\frac{q}{p}\alpha D^{IJ}_{0}}\zeta_{q}^{D^{IJ}_{n-1} \frac{q}{p}\beta}. \tag{31}\]
Thus
\[\begin{split}\sum_{\beta=0}^{p-1}\left(\zeta_{q}^{F_{\beta}^{s}( \mathbf{I})-F_{\beta}^{t}(\mathbf{J})}\right)&=\zeta_{q}^{H^{IJ}+ \Omega^{IJ}_{st}+\frac{q}{p}\alpha D^{IJ}_{0}}\sum_{\beta=0}^{p-1}\zeta_{q}^{D ^{IJ}_{n-1}\frac{q}{p}\beta}\\ &=\zeta_{q}^{H^{IJ}+\Omega^{IJ}_{st}+\frac{q}{p}\alpha D^{IJ}_{0}} \sum_{\beta=0}^{p-1}\zeta_{p}^{D^{IJ}_{n-1}\beta}\\ &=0,\ \text{from \bf Lemma 1}.\end{split} \tag{32}\]
_Case 2._ If \(I_{\pi(n-1)}=J_{\pi(n-1)}\), then there exists a largest integer \(0\leq u\leq n-2\) such that \(I_{\pi(u)}\neq J_{\pi(u)}\). Let us consider the integer \(I^{k}\), whose base-\(p\) vector representation \(\mathbf{I}^{k}\) differs from \(\mathbf{I}\) only at the position \(\pi(u+1)\), i.e.,
\[\mathbf{I}^{k}=(I_{0},I_{1},\ldots,[(I_{\pi(u+1)}-k)]\ (\mathrm{mod}\ p),\ldots,I_{n},\ldots,I_{n+r-1}), \tag{33}\]
where \(1\leq k\leq p-1\). Similarly, \(\mathbf{J}^{k}\) can be defined by the relation \(I^{k}=J^{k}+\tau\). It can be observed that an invertible map from the pair \((\mathbf{I},\mathbf{J})\) to the pair \((\mathbf{I}^{k},\mathbf{J}^{k})\) can be made. Thus, both the pair \((\mathbf{I},\mathbf{J})\) and the pairs \((\mathbf{I}^{k},\mathbf{J}^{k})\), for \(k=1,2,\ldots,p-1\), contribute to \(\mathcal{C}(C_{s},C_{t})(\tau)\).
\[F^{s}_{\beta}(\mathbf{I}^{k})-F^{s}_{\beta}(\mathbf{I})=-k\left(\frac{q}{p} \left(I_{\pi(u)}+I_{\pi(u+2)}\right)+\gamma_{\pi(u+1)}\right). \tag{34}\]
If \(u=n-2\), we just remove the term \(I_{\pi(u+2)}\). Similarly, we can write
\[F^{t}_{\beta}(\mathbf{J}^{k})-F^{t}_{\beta}(\mathbf{J})=-k\left(\frac{q}{p} \left(J_{\pi(u)}+J_{\pi(u+2)}\right)+\gamma_{\pi(u+1)}\right), \tag{35}\]
and
\[F^{s}_{\beta}(\mathbf{I})-F^{t}_{\beta}(\mathbf{J})-F^{s}_{\beta}(\mathbf{I}^{k})+F^{t}_{\beta}(\mathbf{J}^{k})=\frac{qk}{p}(I_{\pi(u)}-J_{\pi(u)})=\frac{qk}{p}D^{IJ}_{u}. \tag{36}\]
Thus,
\[\zeta_{q}^{F^{s}_{\beta}(\mathbf{I})-F^{t}_{\beta}(\mathbf{J})-(F^{s}_{\beta}(\mathbf{I}^{k})-F^{t}_{\beta}(\mathbf{J}^{k}))}=\zeta_{p}^{kD^{IJ}_{u}}, \tag{37}\]
and hence \(\zeta_{q}^{F^{s}_{\beta}(\mathbf{I}^{k})-F^{t}_{\beta}(\mathbf{J}^{k})}=\zeta_{q}^{F^{s}_{\beta}(\mathbf{I})-F^{t}_{\beta}(\mathbf{J})}\,\zeta_{p}^{-kD^{IJ}_{u}}\). Summing the contributions of the pair \((\mathbf{I},\mathbf{J})\) together with the pairs \((\mathbf{I}^{k},\mathbf{J}^{k})\), \(k=1,2,\ldots,p-1\), we have
\[\begin{split}\zeta_{q}^{F^{s}_{\beta}(\mathbf{I})-F^{t}_{\beta}(\mathbf{J})}+\sum_{k=1}^{p-1}\zeta_{q}^{F^{s}_{\beta}(\mathbf{I}^{k})-F^{t}_{\beta}(\mathbf{J}^{k})}&=\zeta_{q}^{F^{s}_{\beta}(\mathbf{I})-F^{t}_{\beta}(\mathbf{J})}\sum_{k=0}^{p-1}\zeta_{p}^{-kD^{IJ}_{u}}\\ &=0,\ \ \text{from {\bf Lemma 1}}.\end{split} \tag{38}\]
Combining the above two cases, we can conclude that
\[\mathcal{C}(C_{s},C_{t})(\tau)=0\ \mathrm{for}\ |\tau|\geq p^{r}. \tag{39}\]
Now, we have to show that \(\mathcal{C}(C_{s},C_{t})(0)=0\) for \(s\neq t\). To prove this, we again consider two cases as follows.
_Case 1._ Let \(s=\alpha p^{r}+\delta^{s}\) and \(t=\alpha p^{r}+\delta^{t}\) with \(\delta^{s}\neq\delta^{t}\). Then the ACCS between \(C_{s}\) and \(C_{t}\) at zero shift can be written as
\[\begin{split}\mathcal{C}(C_{s},C_{t})(0)&=\sum_{I=0}^{p^{n+r}-1}\sum_{\beta=0}^{p-1}\zeta_{q}^{F_{\beta}^{s}(\mathbf{I})-F_{\beta}^{t}(\mathbf{I})}\\ &=\sum_{I=0}^{p^{n+r}-1}\sum_{\beta=0}^{p-1}\zeta_{q}^{\Omega^{\delta^{s}}(\mathbf{I})-\Omega^{\delta^{t}}(\mathbf{I})}\\ &=p^{n}\sum_{I=0}^{p^{r}-1}\sum_{\beta=0}^{p-1}\zeta_{q}^{\Omega^{\delta^{s}}(\mathbf{I})-\Omega^{\delta^{t}}(\mathbf{I})}\\ &=p^{n}\sum_{\beta=0}^{p-1}\chi(\Omega^{\delta^{s}})\cdot\chi(\Omega^{\delta^{t}})\\ &=0.\end{split} \tag{40}\]
_Case 2._ Let \(s=\alpha^{s}p^{r}+\delta\) and \(t=\alpha^{t}p^{r}+\delta\). Then
\[F_{\beta}^{s}(\mathbf{I})-F_{\beta}^{t}(\mathbf{I})=\frac{q}{p}(\alpha^{s}- \alpha^{t})I_{\pi(0)}. \tag{41}\]
Thus
\[\zeta_{q}^{F_{\beta}^{s}(\mathbf{I})-F_{\beta}^{t}(\mathbf{I})}=\zeta_{q}^{ \frac{q}{p}(\alpha^{s}-\alpha^{t})I_{\pi(0)}}, \tag{42}\]
and
\[\begin{split}\sum_{I=0}^{p^{n+r}-1}\zeta_{q}^{F_{\beta}^{s}(\mathbf{I})-F_{\beta}^{t}(\mathbf{I})}&=\sum_{I=0}^{p^{n+r}-1}\zeta_{p}^{(\alpha^{s}-\alpha^{t})I_{\pi(0)}}\\ &=p^{n+r-1}\sum_{I_{\pi(0)}=0}^{p-1}\zeta_{p}^{(\alpha^{s}-\alpha^{t})I_{\pi(0)}}\\ &=0,\ \ \text{from {\bf Lemma 1}}.\end{split} \tag{43}\]
Thus
\[\sum_{\beta=0}^{p-1}\sum_{I=0}^{p^{n+r}-1}\zeta_{q}^{F_{\beta}^{s}(\mathbf{I} )-F_{\beta}^{t}(\mathbf{I})}=0. \tag{44}\]
From the above cases, we can conclude that
\[\mathcal{C}(C_{s},C_{t})(0)=0. \tag{45}\]
Equations (39) and (45) together conclude **Theorem** 1.
Now, we provide a construction of type-II \((p^{k+r},p^{k},p^{n+r}-p^{r}+1,p^{n+r})\)-ZCCS.
**Theorem 2**.: _Let \(\mathcal{G}(\Omega)\) be a graph of \(r\) isolated vertices and let \(\mathcal{G}(Q_{0}),\mathcal{G}(Q_{1}),\ldots,\mathcal{G}(Q_{k-1})\) be \(k\) Hamiltonian paths with \(n_{0},n_{1},\ldots,n_{k-1}\) vertices, respectively, such that \(x_{\pi_{i}(0)}\) and \(x_{\pi_{i}(n_{i}-1)}\) are the end vertices of \(\mathcal{G}(Q_{i})\). Let \(f_{\beta}^{\alpha}\) and \(\Omega^{\delta}\) be as defined in (8) and (14), respectively. Now we define an MVF as_
\[F_{\beta}^{\alpha p^{r}+\delta}=f_{\beta}^{\alpha}\oplus\Omega^{\delta}, \tag{46}\]
_for \(0\leq\alpha,\beta<p^{k}\) and \(0\leq\delta<p^{r}\). Now, we define the ordered set_
\[C_{\alpha p^{r}+\delta}=\left[\chi(F_{0}^{\alpha p^{r}+\delta})\ \chi(F_{1}^{ \alpha p^{r}+\delta})\ \cdots\ \chi(F_{p^{k}-1}^{\alpha p^{r}+\delta})\right]^{T}, \tag{47}\]
_and_
\[\mathbf{C}=\left\{C_{0},C_{1},\ldots,C_{p^{k+r}-1}\right\}. \tag{48}\]
_Then \(\mathbf{C}\) is a type-II \(\left(p^{k+r},p^{k},p^{n+r}-p^{r}+1,p^{n+r}\right)\)-ZCCS._
Proof.: To prove \(\mathcal{C}(C_{s},C_{t})(\tau)=0\) for \(|\tau|\geq p^{r}\), we first prove it for \(p^{r}\leq|\tau|\leq p^{r+n_{k-1}}-1\), which follows the same proof as **Theorem** 1; after that, the same process can be followed for the intervals \(p^{r+\sum_{j=1}^{i}n_{k-j}}\leq|\tau|\leq p^{r+\sum_{j=1}^{i+1}n_{k-j}}-1\), for \(i\in\{1,2,\ldots,k-1\}\).
We found that some codes are completely uncorrelated, i.e., their ACCS is zero for all time shifts; such codes can be obtained by following _Remark 1_:
**Remark 1**.: _In the proposed type-II ZCCS, for a fixed \(\delta\) and \(\alpha^{s}\neq\alpha^{t}\), we have \(\mathcal{C}(C_{s},C_{t})(\tau)=0\) for every integral value of \(\tau\), where \(s=\alpha^{s}p^{r}+\delta\) and \(t=\alpha^{t}p^{r}+\delta\)._
**Remark 2**.: _For **Theorem** 1 and **Theorem** 2, when the number of isolated vertices is zero, we get \((p,p,p^{n})\)-CCC and \((p^{k},p^{k},p^{n})\)-CCC, respectively._
**Theorem** 1 of [12] appears as a special case of _Remark 2_. _Remark 2_ can also generate the Kernel of CCC \((p,p,p)\) for any integral value of \(p\geq 2\), whereas [12] can generate the Kernel of CCC \((p,p,p)\) only for prime \(p\). In **Theorem** 2, the paths have been taken so that the first \(n_{0}\) vertices construct the \(1^{\text{st}}\) path, the next \(n_{1}\) vertices construct the \(2^{\text{nd}}\) path, and so on. Since aperiodic correlation values are unchanged under cyclic shifts of the elements of a sequence, the restriction of the vertices of each path to the same order is necessary. We provide one more remark on the graphs for which **Theorem** 2 is valid.
**Remark 3**.: _Let \(\mathcal{G}(Q)\) be a collection of Hamiltonian paths with \(n_{0},n_{1},\ldots,n_{k-1}\) vertices. Then **Theorem** 2 remains true without the restriction that each path is made of consecutive vertices only._
In **Theorem** 2, the upper bound on the column-sequence PMEPR is \(p^{r}\), which can be reduced by adding a suitable function; [7] explains and proves how such a function can be found. We know that if a sequence is part of a GCS with \(p\) sequences, its PMEPR is upper bounded by \(p\). Every column of a code obtained from **Theorem** 2 can be represented by a linear function of \((\beta_{0},\beta_{1},\ldots,\beta_{k-1})\), and if we add some function of \((\beta_{0},\beta_{1},\ldots,\beta_{k-1})\) to (46), it acts as multiplication by a constant on each sequence, which does not affect the AACS/ACCS properties of the codes. Therefore, we propose a remark that ensures a low column PMEPR.
**Remark 4**.: _In **Theorem 2**, replace (46) by_
\[F_{\beta}^{\alpha p^{r}+\delta}=f_{\beta}^{\alpha}\oplus\Omega^{\delta}+\sum_{ \nu=1}^{k-1}\beta_{\nu-1}\beta_{\nu}, \tag{49}\]
_for \(0\leq\alpha,\beta<p^{k}\) and \(0\leq\delta<p^{r}\), where \((\beta_{0},\beta_{1},\ldots,\beta_{k-1})\) is the \(p\)-ary vector representation of \(\beta\). Then the obtained type-II ZCCS has column-sequence PMEPR at most \(p\)._
**Remark 5**.: _For \(p=2\) and \(q=2\), we get a binary type-II \((2^{k+r},2^{k},2^{n+r}-2^{r}+1,2^{n+r})\)-ZCCS. A direct construction of binary type-II ZCCS has not been reported in the existing literature._
**Example 1**.: _Let \(\mathcal{G}(Q_{0})\) and \(\mathcal{G}(Q_{1})\) be two Hamiltonian paths each of having two vertices denoted by \(x_{0},x_{1}\) and \(x_{2},x_{3}\), respectively and \(\mathcal{G}(\Omega)\) be an isolated vertex denoted by \(x_{4}\), as shown in Fig. 1._
_Now, we define \(F_{\beta}^{2\alpha+\delta}:\mathbb{Z}_{2}^{5}\to\mathbb{Z}_{2}\) as follows:_
\[F_{\beta}^{2\alpha+\delta}=x_{0}x_{1}+x_{2}x_{3}+\alpha_{0}x_{0}+\alpha_{1}x_{ 2}+\beta_{0}x_{1}+\beta_{1}x_{3}+\delta_{0}x_{4}+\beta_{0}\beta_{1}. \tag{50}\]
_Using (50), in **Theorem 2**, we get_
\[C_{2\alpha+\delta}=\left[\chi(F_{0}^{2\alpha+\delta})\ \chi(F_{1}^{2\alpha+ \delta})\ \chi(F_{2}^{2\alpha+\delta})\ \chi(F_{3}^{2\alpha+\delta})\right]^{T}, \tag{51}\]
_for \(0\leq\alpha,\beta<4\) and \(0\leq\delta<2\). Then \(\mathbf{C}=\{C_{0},C_{1},\ldots,C_{7}\}\), given below (\(+\) represents \(1\) and \(-\) represents \(-1\)), is a binary type-II \((8,4,31,32)\)-ZCCS. Using (10), we get a column PMEPR upper bound of \(2\) for the obtained codes._
\[C_{0}= \begin{bmatrix}+++++--++++++++--++++++++----++\\++++++++--++++++++--++++++++--\\ ++++--++++++++--++++--++++--\\ ----++----++--++++++++--++++--+\\ ++++++++--++++++++--+++++--\\ ++++++++----++++++++--+\\ \end{bmatrix},\] \[C_{1}= \begin{bmatrix}++++++++++++++++++++--++++++++--\\ ++++++++--++++----++++--\\ ++++--++++--++++--+\\ ++++--++++++++--+++--++++--+++\\ \end{bmatrix},\]
Figure 1: Graph of two Hamiltonian paths and one isolated vertex.
\[C_{2}=\begin{bmatrix}++--+++++--+++++--+++++--+++--++- \\ ++--++++--++--++--++--+++- \\ ++----++--++--++--++--+--+++ \\ --++++--++++++--+++++ \end{bmatrix},\] \[C_{3}=\begin{bmatrix}++--+++--++--++--++--+++--+++ \\ ++--++++--++--++--+--+--+--+--+--+--+--+--+--+\\ ++----++--++--++--+--+--+--+--+--+--+--+--+--+--+--+---+-+--+-+-+--+-+-+-+-+-+-&\\ +---++--+--+--+--+--+--+--+--+--+--+--+-+-+-+-+-+-+-+-+-+&\\ -++--+--+--+--+--+--+--+--+--+--+-+-+-+-+-+-+-+-+\end{bmatrix},\] \[C_{5}=\begin{bmatrix}+-+-+--+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ \\ +-+-+--+--+--+--+--+--+--+-+-+-+-+-+-+-&\\ -+-+--+--+--+--+--+--+--+--+-+-+-+-+-+-&\\ -+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-&\\ -++-+-+--+--+--+--+--+-+-+-+-+-+-+-+-+-+-+-&\\ -++-+-+--+--+--+--+--+--+-+-+-+-+-+-+-+-+-+-+\end{bmatrix},\] \[C_{7}=\begin{bmatrix}+--++-+--++--++--++--++--+--+--++--++- +-+--+-+-+-+-+-+-+-+-+-+-+&\\ +--+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+&\\ -++--+--+--+--+--+--+--+--+--+--+--+-+-+-+-+-+-+-+-+-+\end{bmatrix}.\]
For a type-I ZCCS with 4 sequences of length 32, the number of codes can be 8 only if the ZCZ width is at most 16, due to the bound \(K\leq M\left\lfloor\frac{N}{Z}\right\rfloor\). However, _Example 1_ provides 8 codes with ZCZ width 31.
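Both the ZCZ width and the code count of _Example 1_ can be checked mechanically. The following sketch (ours, not from the paper) rebuilds the eight codes from (50)-(51) and tests the type-II conditions of Definition 4, namely vanishing ACCS for \(|\tau|\geq p^{r}=2\) and vanishing cross-ACCS at \(\tau=0\):

```python
import numpy as np
from itertools import product

def bits(v, n):  # base-2 digits, most significant first, per (4)
    return [(v >> (n - 1 - i)) & 1 for i in range(n)]

def F(a0, a1, b0, b1, d0, x):  # the function of (50)
    x0, x1, x2, x3, x4 = x
    return (x0*x1 + x2*x3 + a0*x0 + a1*x2 + b0*x1 + b1*x3 + d0*x4 + b0*b1) % 2

def code(alpha, delta):  # C_{2*alpha+delta}: a 4 x 32 matrix with +/-1 entries
    a0, a1 = bits(alpha, 2)
    return np.array([[(-1) ** F(a0, a1, *bits(beta, 2), delta, bits(I, 5))
                      for I in range(32)] for beta in range(4)])

def accs(C, D, tau):  # ACCS of two codes, per (1)
    return sum(np.sum(a[tau:] * b[:32 - tau]) if tau >= 0 else
               np.sum(a[:32 + tau] * b[-tau:]) for a, b in zip(C, D))

codes = [code(al, de) for al, de in product(range(4), range(2))]
ok = all(accs(C, D, t) == 0
         for i, C in enumerate(codes) for j, D in enumerate(codes)
         for t in range(-31, 32) if abs(t) >= 2 or (t == 0 and i != j))
print("type-II (8,4,31,32)-ZCCS conditions hold:", ok)
```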
In Table 1, we provide some available parameters for type-I ZCCS and type-II ZCCS, where it can be observed that the type-II ZCCS provides a larger number of codes compared to type-I ZCCS for a specific ZCZ width.
## 4 Conclusion
In this work, we connected an MVF to a graph composed of isolated vertices and Hamiltonian paths in order to construct type-II ZCCS. A type-II \((p^{k+r},p^{k},p^{r+n}-p^{r}+1,p^{r+n})\)-ZCCS is obtained for non-negative integers \(p,k,r\), and \(n\) such that \(p\geq 2\), \(r\geq 0\), and \(0<k<n\). Furthermore, the proposed construction can generate the Kernel of CCC as \((p,p,p)\)-CCC for any integral value of \(p\). The proposed construction achieves the code bound \(K=M(N-Z+1)\), given in [13], for type-II \((K,M,Z,N)\)-ZCCS. |
2310.05240 | Limitations of Stochastic Selection with Pairwise Independent Priors | Motivated by the growing interest in correlation-robust stochastic
optimization, we investigate stochastic selection problems beyond independence.
Specifically, we consider the instructive case of pairwise-independent priors
and matroid constraints. We obtain essentially-optimal bounds for contention
resolution and prophet inequalities. The impetus for our work comes from the
recent work of Caragiannis et al., who derived a constant-approximation for the
single-choice prophet inequality with pairwise-independent priors.
For general matroids, our results are tight and largely negative. For both
contention resolution and prophet inequalities, our impossibility results hold
for the full linear matroid over a finite field. We explicitly construct
pairwise-independent distributions which rule out an omega(1/Rank)-balanced
offline CRS and an omega(1/log Rank)-competitive prophet inequality against the
(usual) oblivious adversary. For both results, we employ a generic approach for
constructing pairwise-independent random vectors -- one which unifies and
generalizes existing pairwise-independence constructions from the literature on
universal hash functions and pseudorandomness. Specifically, our approach is
based on our observation that random linear maps turn linear independence into
stochastic independence.
We then examine the class of matroids which satisfy the so-called partition
property -- these include most common matroids encountered in optimization. We
obtain positive results for both online contention resolution and prophet
inequalities with pairwise-independent priors on such matroids, approximately
matching the corresponding guarantees for fully independent priors. These
algorithmic results hold against the almighty adversary for both problems. | Shaddin Dughmi, Yusuf Hakan Kalayci, Neel Patel | 2023-10-08T17:18:01Z | http://arxiv.org/abs/2310.05240v3 | # Limitations of Stochastic Selection Problems with Pairwise Independent Priors
###### Abstract
Motivated by the growing interest in correlation-robust stochastic optimization, we investigate stochastic selection problems beyond independence. Specifically, we consider the instructive case of pairwise-independent priors and matroid constraints. We obtain essentially-optimal bounds for offline contention resolution and prophet inequalities against the almighty online adversary. The impetus for our work comes from the recent work of [18], who derived a constant-approximation for the single-choice prophet inequality with pairwise-independent priors.
For general matroids, our results are tight and largely negative. For both contention resolution and prophet inequalities, our impossibility results hold for the full linear matroid over a finite field. We explicitly construct pairwise-independent distributions which rule out an \(\omega\left(\frac{1}{\mathbf{Rank}}\right)\)-balanced offline CRS and an \(\omega\left(\frac{1}{\log\mathbf{Rank}}\right)\)-competitive prophet inequality. For both results, we employ a generic approach for constructing pairwise-independent random vectors -- one which unifies and generalizes existing pairwise-independence constructions from the literature on universal hash functions and pseudorandomness. Specifically, our approach is based on our observation that random linear maps turn linear independence into stochastic independence.
We then examine the class of matroids which satisfy the so-called partition property -- these include most common matroids encountered in optimization. We obtain positive results for both contention resolution and prophet inequalities with pairwise-independent priors on such matroids, approximately matching the corresponding guarantees for fully independent priors.
###### Contents
* 1 Introduction
* 2 Preliminaries
* 2.1 Notations
* 2.2 Matroid Theory
* 2.3 Contention Resolution Schemes
* 2.4 Prophet Inequality
* 3 Overview of Technical Results
* 3.1 A Recipe for \(k\)-wise Independent Vector Families
* 3.2 Optimal Pairwise Independent CRS for Matroids
* 3.3 Optimal Pairwise Independent Matroid Prophet Inequality
* 3.4 Stochastic Selection with Partition Property
* 4 Building Block: Random Linear Maps
* 5 Limits of Pairwise Independent Contention Resolution
* 6 Limits of Prophet Inequality with Pairwise Independent Priors
* 6.1 Construction of Pairwise Independent Weight Distribution
* 6.2 Upper Bounding the Approximation Ratio
* 6.2.1 Lower bound for Prophet's Value
* 6.2.2 Upper Bound on the Perfomance of any Algorithm
* 7 Pairwise Independent Matroid Prophet Inequality
* 8 Partition Property and Stochastic Selection Problems
* 8.1 Partition Property and Pairwise Independent Matroid Prophet Inequalities
* 8.2 Partition Property and Pairwise Independent CRS for Matroids
* 9 Open Questions
* A Missing Proofs from Section 5
* B Missing Proofs from Section 6
Introduction
Combinatorial optimization subject to uncertainty has gained substantial interest in recent years, largely motivated by its applications in computational economics ([31, 20]). In many of these tasks, the underlying uncertainty or stochasticity arises from either the random availability of elements of a set system or a stochastic weight assignment over these elements. Two pivotal stochastic selection problems, _contention resolution_ (e.g., [48, 3, 28, 43, 40]) and _generalized prophet inequalities_ (e.g., [37, 30, 26]), fit into this paradigm. These problems appear either directly, or indirectly as subroutines, throughout the fields of algorithms and combinatorial optimization with a wide range of applications including mathematical programming [48], combinatorial sparsification [25], online rounding schemes [46], stochastic probing [28, 2, 13], oblivious posted pricing mechanisms [43, 28, 27], and algorithmic delegation [36, 14, 15].
A rich literature examines the design of algorithms for these problems when the input is a product distribution or negatively correlated. However, our understanding is relatively limited when the input distribution exhibits correlations, particularly positive correlations, which are often present in many intended applications. For instance, consider the scenario of sequential posted pricing where a seller with a single item encounters \(n\) prospective buyers in sequence, each possessing a valuation for the product. The seller, with the goal of maximizing profit, offers a fixed, non-negotiable price to each buyer, who then decides to buy the item if the price is less than or equal to their valuation. Yet, in today's hyper-connected world, it is unrealistic to presume buyers remain unaffected by or ignorant of each other's valuations. In fact, notable studies [42, 7] demonstrate this phenomenon by showing that the aggregate online reviews from a large group of buyers play a critical role in shaping customer behavior.
A deeper understanding of the interplay between correlation and optimal selection, and an expansion of the algorithmic toolkit thereof, could enable progress in everything from approximation algorithms, online decision-making, and decision-making under uncertainty to mechanism design and beyond. Consequently, recent research efforts have also aimed to expand this toolkit to enable correlation-robust stochastic optimization [9, 16, 17, 29, 33]. It is either known (e.g. [32, 44]), or easy to show, that not much can be achieved in the presence of arbitrary positive correlation. Even under assumptions like the _linear correlation model_[12], in the worst case there are no positive algorithmic results for prophet inequalities with non-sparse dependencies, even for the rank one matroid [33]1.
Footnote 1: In the linear correlation model, weights \(X=(X_{1},\ldots,X_{n})\) are sampled as \(X=AY\), where \(A\) is a fixed matrix and \(Y=(Y_{1},\ldots,Y_{n})\) is a vector of independent random variables. [33] shows a constant factor prophet inequality when either all rows or all columns of \(A\) have a constant number of non-zero entries. However, in general, prophet inequalities with linear correlations have a lower bound of \(\Omega(n)\) due to [33].
Recent work by [18] attempts to rigorously understand stochastic selection problems with a broad family of distributions that permit positive correlations. They examine prophet inequalities for the rank one matroid when the stochastic inputs are _pairwise independent_, that is, the values of any two elements of the set system are independent of each other. Pairwise independence is a practical relaxation of full independence, and it has been identified as a key technical component in constructing pseudo-random generators, derandomization, and reusing randomness due to its "almost independent" behavior (for more details, see the surveys [38, 47]). For further insight into the motivation for studying stochastic optimization problems under pairwise independent distributions, [18] provides a more extensive discussion. A key contribution of [18], and the initial motivation for this work, is their _constant approximate_ prophet inequality for the rank one matroid when the input distribution is pairwise independent. This naturally leads us to the following question:
**Question 1.1**.: _Do constant approximate prophet inequalities or contention resolution schemes exist for a broader class of set-systems, such as matroids, when the input distribution is pairwise independent? More generally, is it possible to characterize a class of set-systems that admit constant approximate prophet inequalities or contention resolution schemes?_
We resolve the above question for matroids. We demonstrate the limitations for both the matroid prophet inequalities and matroid contention resolution schemes when the stochastic inputs are pairwise independent. In contrast to the strong algorithmic results with mutually independent inputs [37, 21], we rule out constant-approximate matroid prophet inequalities and offline contention resolution schemes. Subsequently, we complement these results with optimal algorithms that match the performance upper bounds. The following informal results summarize our main contribution.
1. There is no \(\omega\left(\frac{1}{\log\mathbf{Rank}}\right)\)-competitive matroid prophet inequality with pairwise independent distributions.
2. There is no \(\omega\left(\frac{1}{\mathbf{Rank}}\right)\)-balanced contention resolution scheme for pairwise independent distributions.
For both results, we carefully construct a pairwise independent distribution for the linear matroid \(\mathbb{F}_{q}^{d}\) for some large \(d\in\mathbb{Z}_{+}\) and desired prime \(q\). Our approach to constructing pairwise independent distributions is founded on the observation that uniformly random linear maps between vector spaces convert linear independence in the domain space to stochastic independence in the range space. To put it formally, when a group of \(n\) vectors, which are \(k\)-wise linearly independent, is embedded in a vector space via a uniformly random linear map, the embedded \(n\) vectors exhibit \(k\)-wise stochastic independence and each assumes a uniform marginal distribution over the space. Special instances of this observation have previously been employed to define \(k\)-wise independent hash functions [19, 49] and \(k\)-wise independent random bits [5, 35, 6, 39]. For a comprehensive overview of prior work on the construction of pairwise independent distributions, we refer interested readers to the survey by [38] and Chapter 3 of [47]. Our method can be viewed as a simple unification of existing constructions of scalar-valued random variables2.
Footnote 2: Despite the simplicity of the construction, we have not been able to identify another construction with this level of generality. The concepts presented here permeate existing work on constructing \(k\)-wise independent random variables.
Later, we examine the class of matroids that satisfy the partition property -- these include the most common matroids encountered in optimization. Informally, this property holds if a matroid can be approximated by a (random) partition matroid. We demonstrate that, when a matroid fulfills the partition property, we can reduce the problem to one defined over rank one matroids. Leveraging the prophet inequality of [18] and our contention resolution scheme for the rank one matroid, we obtain constant factor prophet inequalities and contention resolution schemes for pairwise independent distributions for such matroids.
Finally, our results enhance the current understanding of the differentiation between two matroid families: those which admit the partition property and those that do not, as discussed in recent works [1, 10]. In the context of the matroid secretary problem, existing algorithms for a certain class of matroids--graphic [8], co-graphic [45], and laminar [34]--are contingent upon a constant partition property of these matroids. A survey by Dinitz [22] posed the open question of whether every matroid satisfies a constant partition property, yet a recent work [1] has disproven this by demonstrating that binary matroids do not satisfy an \(\alpha\) partition property for \(\alpha\leq O(d^{1/4})\) (\(d\) is the rank of the matroid). Thus, [1] and another independent study [10] have both negated the possibility of any constant approximate partition-based algorithm for the matroid secretary problem. In this work, we both strengthen and deepen this distinction in two ways. First, we establish that linear matroids do not admit an \(\alpha\)-partition property for any \(\alpha\leq O(d)\), and their special subclass, binary matroids, do not for any \(\alpha\leq O(d/\log d)\). Second, we introduce an algorithm-independent structural distinction between these two classes, precluding the existence of constant approximate prophet inequality and contention resolution algorithms for matroids lacking the constant partition property under pairwise independent distributions.
## 2 Preliminaries
### Notations
Throughout, we use bold, lowercase letters to signify vectors, with the \(\ell^{\text{th}}\) component denoted by \(\mathbf{v}(\ell)\). Sets are denoted by capital letters, while collections of sets are denoted by bold capitals. We write \([n]\) for the set of positive integers up to \(n\). For a function \(\phi:A\to B\) and any \(E\subseteq A\), we define a multiset \(\phi[E]:=\{\phi(a):a\in E\}\). For any finite set of elements \(E\), we denote the set of all subsets of \(E\) as \(2^{E}\). We denote a _downward-closed_ set-system over a set of elements \(E\) as \(\mathcal{M}=(E,\mathcal{I})\), where \(\mathcal{I}\) is a downward-closed subset of \(2^{E}\).
We let \(\Delta(2^{E})\) denote the set of all possible distributions over \(2^{E}\). For any \(\mathbf{x}\in[0,1]^{|E|}\), we let \(\Delta_{\text{pw}}(2^{E})(\mathbf{x})\subseteq\Delta(2^{E})\) denote the family of pairwise independent distributions over subsets of \(E\) with marginals \(\mathbf{x}\), i.e. for every \(\mathcal{D}(\mathbf{x})\in\Delta_{\text{pw}}(\mathbf{x})\), \(\mathbf{Pr}_{Q\sim\mathcal{D}(\mathbf{x})}[e\in Q]=x_{e}\) and the events \(\{e\in Q\}_{e\in E}\) are pairwise independent.
Similarly, we denote the set of possible non-negative weight/value distributions over the set of elements \(E\) that (randomly) assign a non-negative weight to each element in \(E\) as \(\Delta(\mathbb{R}_{\geq 0}^{E})\). We let \(\Delta_{\text{pw}}(\mathbb{R}_{\geq 0}^{E})\subseteq\Delta(\mathbb{R}_{\geq 0}^{E})\) be the class of pairwise independent weight distributions over the elements \(E\) such that for any \(\mathcal{D}\in\Delta_{\text{pw}}\), any pair of elements \(e,f\in E\), and values \(x,y\in\mathbb{R}_{\geq 0}\), \(\mathbf{Pr}_{w\sim\mathcal{D}}[w(e)=x\wedge w(f)=y]=\mathbf{Pr}_{w\sim \mathcal{D}}[w(e)=x]\cdot\mathbf{Pr}_{w\sim\mathcal{D}}[w(f)=y]\). Throughout the paper, we use the terms "weight" and "value" interchangeably.
\(\mathbb{F}_{q}\) denotes a finite field of size \(q\) and \(\mathbb{F}_{q}^{d}\) represents a vector space of dimension \(d\) over \(\mathbb{F}_{q}\), where \(q\) is a prime. We define \(\mathbb{F}_{q}^{d}\times[m]\) as a vector space \(\mathbb{F}_{q}^{d}\) with \(m\) copies of each vector where each copy is labeled with an integer from \([m]\), i.e. \(\left\{\mathbf{v}^{i}:i\in[m]\text{ for all }\mathbf{v}\in\mathbb{F}_{q}^{d}\right\}\) with standard operations on \(\mathbb{F}_{q}^{d}\)3. We use capital letters to symbolize matrices over these finite fields, and their rank is denoted by \(\mathbf{Rank}(\cdot)\). A matrix \(R\in\mathbb{F}_{q}^{r\times c}\) is a full row-rank matrix if \(\mathbf{Rank}(R)=r\), i.e. its rows constitute a set of linearly independent vectors.
Footnote 3: We assume that all vectors are column vectors
### Matroid Theory
We use standard definitions from matroid theory; for details see [41, 50]. A _matroid_\(\mathcal{M}=(E,\mathcal{I})\) is a set-system with elements \(E\) and a family of independent sets \(\mathcal{I}\subseteq 2^{E}\) satisfying the three _matroid axioms_. A _weighted matroid_ incorporates a matroid \(\mathcal{M}=(E,\mathcal{I})\) with weights \(w\in\mathbb{R}^{E}\) for its elements. By duplicating or making parallel labeled copies of each element of a matroid \(\mathcal{M}=(E,\mathcal{I})\) "\(m\)" times, we construct a larger matroid \(\mathcal{M}^{m}=(E^{m},\mathcal{I}^{m})\), where \(E^{m}=E\times[m]\) and \(T\subseteq E^{m}\) is independent, \(T\in\mathcal{I}^{m}\), if \(\{e:e^{i}\in T\}\in\mathcal{I}\) and \(|T\cap\{e^{i}:i\in[m]\}|\leq 1\) for all \(e\in E\).
The rank function of a matroid \(\mathcal{M}=(E,\mathcal{I})\) is denoted by \(\textbf{Rank}^{\mathcal{M}}\), where \(\textbf{Rank}^{\mathcal{M}}(S)=\max\{|T|:T\subseteq S,T\in\mathcal{I}\}\). The weighted version of the rank, \(\textbf{Rank}^{\mathcal{M}}_{w}\), is defined for weighted matroids \((\mathcal{M},w)\) as \(\textbf{Rank}^{\mathcal{M}}_{w}(S)=\max\{w(T):T\subseteq S,T\in\mathcal{I}\}\). The span function of a matroid \(\mathcal{M}\) is denoted by \(\textbf{Span}^{\mathcal{M}}(S)\), where \(\textbf{Span}^{\mathcal{M}}(S)=\{e\in E:\textbf{Rank}^{\mathcal{M}}(S\cup\{e \})=\textbf{Rank}^{\mathcal{M}}(S)\}\). We slightly abuse notation and use \(\textbf{Rank}(\mathcal{M})=\textbf{Rank}^{\mathcal{M}}(E)\) for the rank of the matroid. We may omit the superscript \(\mathcal{M}\) whenever it is clear from the context.
A _linear matroid_\(\mathcal{M}=(E,\mathcal{I})\) consists of a vector space \(E\) and independent sets \(\mathcal{I}\) that form linearly independent sets of vectors in \(E\). A _binary matroid_ is a linear matroid defined on the vector space \(\mathbb{F}_{2}^{d}\) for \(d\in\mathbb{N}\). The linear matroid with \(E=\mathbb{F}_{q}^{d}\) is referred to as the full linear matroid over \(\mathbb{F}_{q}^{d}\). We refer to the matroid with \(E=\mathbb{F}_{2}^{d}\) as the full binary matroid.
### Contention Resolution Schemes
A contention resolution scheme (CRS) is a rounding technique introduced by Chekuri et al. [48] in the context of (offline) submodular function maximization. More formally, given a downward-closed set-system \((E,\mathcal{I})\) over ground set \(E\), let \(P_{\mathcal{I}}\) be a _convex relaxation_ of the constraints \(\mathcal{I}\subseteq 2^{E}\), let \(\mathbf{x}\in P_{\mathcal{I}}\), and let the set of _active_ elements be \(Q\sim\mathcal{D}\) such that \(\mathcal{D}\in\Delta(2^{E})\) and \(\textbf{Pr}_{Q\sim\mathcal{D}}[e\in Q]=x_{e}\) for all \(e\in E\). The goal of a CRS is to round the set \(Q\sim\mathcal{D}\) (which is not necessarily feasible in \(\mathcal{I}\)) to a feasible set \(I\in\mathcal{I}\) such that \(I\subseteq Q\). CRSes are judged by their balance ratio: we say that a contention resolution scheme is \(c\)-balanced if for all \(e\in E\), \(\textbf{Pr}[e\in I\mid e\in Q]\geq c\). Many natural classes of combinatorial constraints including matroids, matchings, and knapsack are known to admit \(\Omega(1)\)-balanced contention resolution schemes when the events \(\{e\in Q\}_{e\in E}\) are independent across elements.
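To make the balance ratio concrete, the following minimal Python sketch (our own illustration; the marginals, the independent-activation model, and the trivial scheme are all assumptions of the toy, not constructions from this paper) estimates the balance of the simplest scheme for the rank one matroid, which keeps one uniformly random active element.

```
import random

n, trials = 5, 200_000
x = [0.2] * n                      # marginals with sum(x) <= 1 (ex-ante feasible for rank one)

def sample_active():
    # Toy model: independent activation with the given marginals.
    return [e for e in range(n) if random.random() < x[e]]

def crs(Q):
    # Simplest scheme for the rank-one matroid: keep one uniformly random active element.
    return {random.choice(Q)} if Q else set()

kept, active = [0] * n, [0] * n
for _ in range(trials):
    Q = sample_active()
    I = crs(Q)
    for e in Q:
        active[e] += 1
    for e in I:
        kept[e] += 1

# Empirical balance ratio: min over elements of Pr[e kept | e active].
print(min(k / a for k, a in zip(kept, active)))
```

In this toy the activations are mutually independent; substituting a pairwise independent sampler for `sample_active` yields the regime studied in this paper, where Section 8 proves a \(\frac{1}{4}\)-balanced scheme for the one-uniform matroid.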
For given \(\mathbf{x}\in\mathcal{P}_{\mathcal{I}}\), in this work, we focus on CRS for matroids when \(\mathcal{D}\in\Delta_{\text{pw}}(2^{E})(\mathbf{x})\). In the following definition, we define CRS for matroids with pairwise independent distribution which is a special case of a more general definition that appeared in Dughmi [23].
**Definition 2.1**.: _A pairwise independent CRS\(\pi\) for a matroid \(\mathcal{M}=(E,\mathcal{I})\) is an algorithm that takes as input a point \(\mathbf{x}\in P_{\mathcal{I}}\), distribution \(\mathcal{D}(\mathbf{x})\in\Delta_{\text{pw}}(2^{E})(\mathbf{x})\) and a set of active elements \(Q\sim\mathcal{D}(\mathbf{x})\) sampled from \(\mathcal{D}(\mathbf{x})\) and outputs a feasible subset \(\pi_{\mathbf{x}}(Q)\in\mathcal{I}\). In addition, we say CRS \(\pi\) is \(c\)-balanced (a.k.a, \(\pi\) has balance ratio \(c\)) if for every \(\mathbf{x}\in P_{\mathcal{I}}\) and any distribution \(\mathcal{D}\in\Delta_{\text{pw}}(2^{E})(\mathbf{x})\),_
\[\underset{Q\sim\mathcal{D}(\mathbf{x})}{\mathbf{Pr}}[e\in\pi_{\mathbf{x}}(Q)] \geq c\cdot\underset{Q\sim\mathcal{D}(\mathbf{x})}{\mathbf{Pr}}[e\in Q].\]
Finally, we state the following theorem from [23] that characterizes the set of distributions which admits balanced contention resolution schemes.
**Theorem 2.2** (Theorem 3.6 from [23]).: _Given a matroid \(\mathcal{M}=(E,\mathcal{I})\), let \(\mathcal{D}\) be a distribution supported on \(2^{E}\). The following are equivalent for every \(\alpha\leq 1\),_
1. \(\mathcal{D}\) _admits an_ \(\alpha\)_-balanced contention resolution map._
2. _For every weight vector_ \(w\in\mathbb{R}_{\geq 0}^{E}\)_, the following holds:_ \(\mathbb{E}_{Q\sim\mathcal{D}}[\textbf{Rank}_{w}(Q)]\geq\alpha\cdot\mathbb{E}_{ Q\sim\mathcal{D}}[w(Q)]\)__
### Prophet Inequality
Given a downward-closed set-system \(\mathcal{M}=(E,\mathcal{I})\), a _prophet inequality problem_ consists of weights \(w\in\mathbb{R}_{\geq 0}^{E}\) sampled from a known distribution \(\mathcal{D}\) and a permutation \(\lambda\) of \(E\) potentially chosen by an adversary. We take the perspective of the _gambler_, who knows \(\mathcal{M}\) and the distribution \(\mathcal{D}\) of the random variables \(\{w(e)\}_{e\in E}\). The gambler starts with an empty set \(S\) of accepted elements and then observes each element in \(E\) in an order \(\lambda\) chosen by an adversary. For the purposes of this paper, the gambler plays against the _almighty adversary_ who knows the realizations of all the elements in advance; however, in contrast to [28], the adversary does not know the coin flips of the gambler's strategy. When the element \(e\in E\) arrives, the gambler learns the realization of \(w(e)\) and has to decide online whether to accept element \(e\) or not based on \((e,w(e))\) and the previously accepted elements \(S\). However, they can only accept \(e\) if \(S\cup\{e\}\) is feasible in \(\mathcal{M}\). The gambler seeks to maximize their utility \(\mathbb{E}_{\mathcal{D}}\left[\sum_{e\in S}w(e)\right]\), and in particular to compete with a _prophet_ who plays the same game and knows the realizations of all random variables in advance. If the gambler has a strategy guaranteeing an \(\alpha\) fraction of the prophet's expected utility in expectation, we say that we have an \(\alpha\)-approximate (or competitive) prophet inequality for a matroid \(\mathcal{M}\) and the value distribution \(\mathcal{D}\). In this work, we focus on the matroid prophet inequality problem with the class of pairwise independent value distributions, i.e. \(\mathcal{D}\in\Delta_{\text{pw}}(\mathbb{R}_{\geq 0}^{E})\).
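The following toy Python simulation (our own; the i.i.d. \(\mathrm{Exp}(1)\) values and the fixed threshold are illustrative assumptions, and mutual independence is assumed rather than the pairwise independence studied here) illustrates the benchmark: the prophet collects \(\mathbb{E}[\max_{e}w(e)]\), while a single-threshold gambler accepts the first value above a threshold.

```
import random

n, trials, T = 10, 100_000, 2.0    # T: illustrative fixed acceptance threshold

gambler = prophet = 0.0
for _ in range(trials):
    w = [random.expovariate(1.0) for _ in range(n)]   # illustrative i.i.d. values
    prophet += max(w)                                  # prophet: max in hindsight
    # Gambler: scan in arrival order, accept the first value above the threshold.
    gambler += next((v for v in w if v >= T), 0.0)

print("gambler/prophet ~", gambler / prophet)
```

Under pairwise independent priors, Section 6 shows that no such constant guarantee is possible for general matroids.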
## 3 Overview of Technical Results
In this section, we briefly present an overview of our techniques and results. Our results are a mixed bag: first, we identify limitations of algorithmic results for prophet inequalities and offline contention resolution schemes for matroids when inputs exhibit limited independence, particularly, pairwise independence. Our results essentially demonstrate the "complexity" of stochastic selection problems beyond independent priors. Next, we obtain constant factor prophet inequalities and offline contention resolution schemes for matroids that adhere to the so-called partition property, which includes the most common matroids encountered in combinatorial optimization--examples include graphic, co-graphic, transversal matroids, etc.
### A Recipe for \(k\)-wise Independent Vector Families
Our exposition starts with an exploration of a methodology to generate pairwise independent distributions (or in a broader sense, \(k\)-wise independent distributions) over a vector space over a finite field. Consider \(\mathbf{r}_{1},\mathbf{r}_{2}\), two (stochastically independent) random vectors in \(\mathbb{F}_{q}^{d}\), selected uniformly at random, with each entry of \(\mathbf{r}_{1}\) and \(\mathbf{r}_{2}\) independently drawn from \(\operatorname{Unif}\left\{0,\ldots,q-1\right\}\). We then define vectors \(\mathbf{x}_{1}=\mathbf{r}_{1},\mathbf{x}_{2}=\mathbf{r}_{2},\mathbf{x}_{3}= \mathbf{r}_{1}+\mathbf{r}_{2}\). It is notable that these vectors are pairwise independent, in other words, \(\mathbf{Pr}[\mathbf{x}_{i}=\mathbf{u}\wedge\mathbf{x}_{j}=\mathbf{v}]= \mathbf{Pr}[\mathbf{x}_{i}=\mathbf{u}]\cdot\mathbf{Pr}[\mathbf{x}_{j}=\mathbf{ v}]\) for any arbitrary \(\mathbf{u},\mathbf{v}\in\mathbb{F}_{q}^{d}\), despite their lack of mutual independence. This follows from the fact that as long as any two distinct random vectors \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) (i.e. \(i\neq j\)) are not defined by the same linear combination of the random vectors \(\mathbf{r}_{1},\mathbf{r}_{2}\), their embedding into a higher-dimensional vector space through a _uniformly random linear map_ preserves their stochastic independence. Here, in essence, we perform a random transformation on the vector space \(\mathbb{F}_{q}^{2}\), and subsequently embed them randomly into the higher-dimensional space \(\mathbb{F}_{q}^{d}\).
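This discussion is easy to check numerically. A minimal Python sketch (our own, with small illustrative parameters \(q=3\), \(d=2\)) estimates joint and marginal frequencies for \(\mathbf{x}_{1}=\mathbf{r}_{1}\), \(\mathbf{x}_{2}=\mathbf{r}_{2}\), \(\mathbf{x}_{3}=\mathbf{r}_{1}+\mathbf{r}_{2}\) and confirms that every pair factorizes, even though \(\mathbf{x}_{3}\) is a deterministic function of the other two.

```
import itertools
import random
from collections import Counter

q, d, trials = 3, 2, 200_000        # small field/dimension so frequencies are reliable

def rand_vec():
    return tuple(random.randrange(q) for _ in range(d))

def add(u, v):
    return tuple((a + b) % q for a, b in zip(u, v))

pairs = list(itertools.combinations(range(3), 2))
joint = {p: Counter() for p in pairs}
marg = [Counter() for _ in range(3)]
for _ in range(trials):
    r1, r2 = rand_vec(), rand_vec()
    x = [r1, r2, add(r1, r2)]       # pairwise independent, not mutually independent
    for i in range(3):
        marg[i][x[i]] += 1
    for i, j in pairs:
        joint[(i, j)][(x[i], x[j])] += 1

domain = list(itertools.product(range(q), repeat=d))
for (i, j), cnt in joint.items():
    gap = max(abs(cnt[(u, v)] / trials - (marg[i][u] / trials) * (marg[j][v] / trials))
              for u in domain for v in domain)
    print(f"pair ({i},{j}): max |joint - product| = {gap:.4f}")   # ~0 up to sampling noise
```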
We utilize _random linear maps_ to generate pairwise independent (and more generally \(k\)-wise independent) random vectors. We consider a set of deterministic vectors \(\sigma_{1},\ldots,\sigma_{T}\subseteq\mathbb{F}_{q}^{m}\), where any pair of vectors are linearly independent (not multiples of each other). Let \(R\in\mathbb{F}_{q}^{d\times m}\) be a uniformly random matrix, and \(\Phi_{R}:\mathbb{F}_{q}^{m}\to\mathbb{F}_{q}^{d}\) be a linear function such that \(\Phi_{R}(\sigma)=R\cdot\sigma\). Our first observation (in Lemma 4.1) is that the events \(\{\Phi_{R}(\sigma_{i})=\mathbf{u}\}\) and \(\{\Phi_{R}(\sigma_{j})=\mathbf{v}\}\) for any arbitrary \(i\neq j\in[T]\) and \(\mathbf{u},\mathbf{v}\in\mathbb{F}_{q}^{d}\) are independent. Additionally, for any non-zero \(\sigma\in\mathbb{F}_{q}^{m}\) and \(\mathbf{u}\in\mathbb{F}_{q}^{d}\), we find that \(\mathbf{Pr}[\Phi_{R}(\sigma)=\mathbf{u}]=\frac{1}{|\mathbb{F}_{q}^{d}|}=\frac{1}{q^{d}}\), meaning that each \(\Phi_{R}(\sigma)\) is uniformly distributed over \(\mathbb{F}_{q}^{d}\) for any non-zero vector \(\sigma\in\mathbb{F}_{q}^{m}\).
In a nutshell, given the set of pairwise linearly independent vectors \(\Sigma=\{\sigma_{1},\ldots,\sigma_{T}\}\) for \(T>m\), the random vectors \(\Phi_{R}[\Sigma]=\{\Phi_{R}(\sigma):\sigma\in\Sigma\}\) are pairwise independent and uniformly distributed over the space. Notice that the range of the linear map \(\Phi_{R}\) lies in the column space of \(R\). Therefore, \(\mathbf{Rank}(\Phi_{R}[\Sigma])\leq m\), and so \(\Phi_{R}[\Sigma]\) generates \(T\) many pairwise independent vectors in \(\mathbb{F}_{q}^{d}\) with \(\mathbf{Rank}(\Phi_{R}[\Sigma])\leq m<T\). Hence, \(\Phi_{R}[\Sigma]\) is contained in a subspace with a small rank while maintaining pairwise stochastic independence.
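A short sketch of this construction (our own illustration with arbitrary small parameters; `rank_fq` is a plain Gaussian elimination over \(\mathbb{F}_{q}\) for prime \(q\)): a pairwise linearly independent family \(\Sigma\subseteq\mathbb{F}_{q}^{2}\) is pushed through a uniformly random \(R\), and the output lands in a subspace of rank at most \(m\).

```
import random

def phi(R, sigma, q):
    # Phi_R(sigma) = R * sigma over F_q.
    return tuple(sum(row[j] * sigma[j] for j in range(len(sigma))) % q for row in R)

def rank_fq(vectors, q):
    # Rank over F_q (q prime) via Gaussian elimination.
    rows, r = [list(v) for v in vectors], 0
    for col in range(len(rows[0])):
        if r == len(rows):
            break
        piv = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        inv = pow(rows[r][col], q - 2, q)            # multiplicative inverse in F_q
        rows[r] = [x * inv % q for x in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                f = rows[i][col]
                rows[i] = [(a - f * b) % q for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

q, m, d = 11, 2, 8
Sigma = [(1, 0), (0, 1), (1, 1), (1, 2), (1, 3)]     # pairwise linearly independent in F_q^m
R = [[random.randrange(q) for _ in range(m)] for _ in range(d)]   # uniform R in F_q^{d x m}
X = [phi(R, s, q) for s in Sigma]
print(len(X), "vectors in F_q^d with rank", rank_fq(X, q), "<= m =", m)
```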
### Optimal Pairwise Independent CRS for Matroids
The first application of random linear maps hinges on their ability to control the rank of the set of output vectors. As a first step, we construct an approximately pairwise independent distribution for a linear matroid \(\mathcal{M}\) with rank \(d\) defined on \(\mathbb{F}_{q}^{d}\) that does not admit a \(\frac{c}{d}\)-balanced CRS for some constant \(c\). We construct a distribution \(\mathcal{D}\in\Delta_{\text{pw}}(2^{E})\) in two steps: first, we choose \(\Theta(d)\) many vectors \(\Sigma\) from \(\mathbb{F}_{q}^{c}\) that are pairwise linearly independent, where \(c\) is a universal constant (indeed \(c=2\)). By suitably choosing \(q\) to be sufficiently large, we can ensure the existence of a set of pairwise linearly independent vectors \(\Sigma\subseteq\mathbb{F}_{q}^{c}\) with \(|\Sigma|=d\). Second, we sample a uniformly random matrix \(R\in\mathbb{F}_{q}^{d\times c}\) and define \(Q=\Phi_{R}[\Sigma]\) as the collection of vectors generated by the random linear map \(\Phi_{R}\). When \(d\gg c\), with high probability, \(R\) has full column-rank, making \(\mathbf{Pr}[\mathbf{v}\in Q]\approx\frac{d}{q^{d}}\) for any \(\mathbf{v}\in\mathbb{F}_{q}^{d}\). It is noteworthy that the marginal probability vector of the above distribution is close to \(\frac{d}{q^{d}}\cdot\mathbf{1}\in\mathcal{P}_{\mathcal{M}}\).
Before we analyze the pairwise independence of the events \(\{v\in Q\}_{v\in\mathbb{F}_{q}^{d}}\), we informally show that the full linear matroid over \(\mathbb{F}_{q}^{d}\) does not admit \(\frac{c}{d}\)-balanced CRS for \(\mathcal{D}\). We observe that by construction, \(\mathbf{Rank}_{\mathcal{M}}(Q)\leq c\) with probability one.
Therefore, we get
\[\frac{\mathbb{E}_{Q\sim\mathcal{D}}[\mathbf{Rank}(Q)]}{\mathbb{E}_{Q\sim \mathcal{D}}[|Q|]}\lesssim\frac{c}{d}=O\left(\frac{1}{d}\right).\]
Combining the above inequality with Theorem 2.2, we conclude that the full linear matroid over \(\mathbb{F}_{q}^{d}\) does not admit an \(\omega(1/d)\)-balanced CRS for \(\mathcal{D}\).
To address the technical challenges of the above approach, let \(Q=\Phi_{R}[\Sigma]:=\{\mathbf{q}_{1},\ldots,\mathbf{q}_{d}\}\). We first observe that due to Lemma 4.1, for any two vectors \(\mathbf{v},\mathbf{u}\in\mathbb{F}_{q}^{d}\) and \(i\neq j\in[d]\), the events \(\{\mathbf{q}_{i}=\mathbf{v}\}\) and \(\{\mathbf{q}_{j}=\mathbf{u}\}\) are independent of each other. However, the events \(\{\mathbf{v}\in Q\}\) and \(\{\mathbf{u}\in Q\}\) are not independent: once we condition on the event \(\{\mathbf{v}\in Q\}\), \(\mathbf{u}\) is less likely to be in the set \(Q\), as the size of \(Q\) is fixed with \(|Q|=d\). At first glance, when \(d\) is large, a fixed size of \(Q\) seems to be a small technical hurdle. However, when the random matrix \(R\) is not full rank, then with non-zero probability, the event \(\{\mathbf{q}_{i}=\mathbf{q}_{j}\}\) can occur for any \(i\neq j\). This makes the analysis of the pairwise independence property of the events \(\{\mathbf{v}\in Q\}_{\mathbf{v}\in\mathbb{F}_{q}^{d}}\) notoriously difficult even when \(d\) is large. On the brighter side, the random matrix \(R\) has full column rank with high probability, which allows us to construct a distribution \(\mathcal{D}\) over a slightly modified full \(\mathbb{F}_{q}^{d}\) matroid.
We overcome the aforementioned technical challenges and construct a linear matroid \(\mathcal{M}\) and a distribution over subsets of its elements that do not admit an \(\omega(1/\mathbf{Rank}(\mathcal{M}))\)-balanced CRS in Section 5. The actual construction is technically involved; hence, here we present a brief overview. We first construct \(d\) duplicates of each vector in the full linear matroid over \(\mathbb{F}_{q}^{d}\) and formulate a matroid \(\mathcal{M}^{d}=(\mathbb{F}_{q}^{d}\times[d],\mathcal{I}^{d})\) (see Section 2.2 for the definition). By making copies, whenever the events \(\{\mathbf{q}_{i}=\mathbf{v}\}\) and \(\{\mathbf{q}_{j}=\mathbf{v}\}\) occur, we can make two distinct copies of \(\mathbf{v}\) active, i.e., we let two copies \(\mathbf{v},\mathbf{v}^{\prime}\in Q\). Subsequently, we mix this distribution with a positively correlated distribution where all elements are active simultaneously to ensure that the events \(\{\mathbf{v}^{j}\in Q\}_{\mathbf{v}^{j}\in\mathbb{F}_{q}^{d}\times[d]}\) are pairwise independent. Our technique here can be viewed as a generalization of Alon et al. [4], which constructs a \(k\)-wise independent distribution over \(\{0,1\}^{n}\) from an almost \(k\)-wise independent distribution over \(\{0,1\}^{n}\) with small total variation distance. The key difference is that their techniques heavily rely on the fact that each marginal of the joint distribution is almost \(\mathrm{Ber}(1/2)\), which is not the case in our construction.
We complement this result with the existence of a \(\frac{1}{d}\)-balanced CRS for matroids over the class of pairwise independent distributions \(\{\Delta_{\mathrm{pw}}(\mathbf{x}):\mathbf{x}\in P_{\mathcal{I}}\}\). We observe that for any \(\mathbf{x}\in P_{\mathcal{I}}\), weight assignment \(\mathbf{w}\geq 0\), and \(\mathcal{D}(\mathbf{x})\in\Delta_{\mathrm{pw}}(\mathbf{x})\) we have
\[\operatorname*{\mathbb{E}}_{Q\sim\mathcal{D}(\mathbf{x})}[\mathbf{Rank}_{ \mathbf{w}}(Q)]\geq\operatorname*{\mathbb{E}}\left[\max_{e\in Q}\mathbf{w}(e) \right]\geq\frac{1}{d}\cdot\sum_{e\in E}\mathbf{w}(e)\cdot\mathbf{x}(e)=\frac {1}{d}\cdot\operatorname*{\mathbb{E}}_{Q\sim\mathcal{D}(\mathbf{x})}[\mathbf{w}(Q)].\]
Hence, Theorem 2.2 yields the existence of a \(\frac{1}{d}\)-balanced offline CRS for a matroid \(\mathcal{M}\) with distribution \(\mathcal{D}(\mathbf{x})\).
### Optimal Pairwise Independent Matroid Prophet Inequality
Let us first revisit the framework we use to fabricate bad examples for contention resolution schemes. Initially, we assemble a set of vectors, denoted by \(\Sigma\), which lays the groundwork for constructing a "bad" distribution \(\mathcal{D}\in\Delta_{\mathrm{pw}}(2^{E})\). We design \(\Sigma\) to ensure two conditions: (i) the vectors within \(\Sigma\) are pairwise linearly independent, and (ii) \(\Sigma\) presents a "bottleneck" to the primary problem at hand. Following that, we use the random linear map \(\Phi_{R}[\Sigma]\) to yield a set of random vectors (which are elements of the matroid at hand) that are pairwise independent and retain the "bottleneck" characteristics.
We adhere to this same blueprint for crafting a bad example for the Prophet Inequality problem. In what follows, we consider a full binary matroid \(\mathbb{F}_{2}^{2d}\) and construct a pairwise independent weight distribution \(\mathcal{D}\in\Delta_{\mathrm{pw}}(\mathbb{R}_{\geq 0}^{E})\) over elements of \(\mathbb{F}_{2}^{2d}\). Formally, we create \(\kappa=\Theta(\log d)\) many distinct weights \(\{1,2,2^{2},\ldots,2^{\kappa-1}\}\) and (randomly) construct \(\tau=\Theta(d)\) many vectors from \(\mathbb{F}_{2}^{2d}\) such that each vector is assigned one of the \(\kappa\) many possible weights and the rest of them are assigned a weight of zero. We refer to the vectors with weight \(2^{\ell}\) as vectors at "level-\(\ell\)"; interchangeably, a vector at level \(\ell\in[\kappa]\) is assigned a weight of \(2^{\ell}\). In constructing the weight assignments over the vectors from \(\mathbb{F}_{2}^{2d}\), we start by assembling a collection of "raw" vectors \(\Sigma=\bigcup_{\ell=1}^{\kappa}\Sigma_{\ell}\) from the vector space \(\mathbb{F}_{2}^{d}\). Subsequently, we build "actual" vectors in level \(\ell\) by applying a random linear map \(\Phi_{R}:\mathbb{F}_{2}^{d}\rightarrow\mathbb{F}_{2}^{2d}\) to the "raw" vectors \(\Sigma_{\ell}\), i.e. \(\Phi_{R}[\Sigma_{\ell}]\).
In the construction of each level, we ensure that the rank of the vectors at level \(\ell\) is half the rank of the vectors at level \(\ell-1\). In our construction, the span of vectors at level one has a rank of \(\Theta(d)\). Therefore, the span of vectors at level \(\ell\) has a rank of \(\Theta\left(\frac{d}{2^{\ell}}\right)\). Moreover, the vectors from level \(\ell\) are assigned a weight of \(2^{\ell}\). As a result, we observe that the weighted rank of vectors at any level is \(\Theta(d)\). The goal here is to construct these levels such that vectors from a given level fall within the span of vectors allocated at the lower levels.
For simplicity, let us assume that the vectors at each level form an independent set in the full matroid \(\mathbb{F}_{2}^{2d}\). The essential "bottleneck" we strive to maintain is that vectors with smaller weights, or vectors at lower levels, span all vectors at higher levels, with the "contribution" of each level to the prophet's reward being roughly the same. Consequently, any constant approximate algorithm needs to select a constant fraction of vectors at each level.
To mislead any algorithm/gambler, we intend to ensure that when a vector from level \(\ell\) is chosen without knowledge of the set of vectors from levels \(\ell^{\prime}>\ell\), each vector from level \(\ell^{\prime}\) is spanned with a constant probability; this is the key objective of our construction. So, any algorithm oblivious to the vectors in the higher levels that selects a vector at level \(\ell\) ends up spanning each vector at a higher level with a constant probability. As a result, the algorithm will only be able to select vectors from \(O(1)\) many levels before it spans all the vectors at a higher level with a probability of at least \(1-o(1)\), leading to the \(O(1/\kappa)\) upper bound on the competitive ratio.
It is not very difficult to construct such weight assignments with an arbitrarily correlated distribution. Therefore, the key challenge here is to maintain the "bottleneck" of the problem while preserving pairwise independence. We overcome this challenge by encoding the underlying bottleneck into \(\Sigma=\bigcup_{\ell=1}^{\kappa}\Sigma_{\ell}\). To do that, we choose "raw" basis vectors \(B_{\ell}\) from the canonical basis of \(\mathbb{F}_{2}^{d}\) in a way that \(B_{1}\supseteq B_{2}\supseteq\cdots\supseteq B_{\kappa}\). This selection forms a nested system of subspaces \(\mathbf{Span}(\Phi_{R}[B_{\ell}])\) for \(\ell\in[\kappa]\) within the embedded vector space \(\mathbb{F}_{2}^{2d}\). Next, we create "raw" vectors \(\Sigma_{\ell}\) for each level as linear combinations of vectors in \(B_{\ell}\). Therefore, \(\mathbf{Span}(\Sigma_{\ell})\subseteq\mathbf{Span}(B_{\ell})\) and \(\mathbf{Span}(\Phi_{R}[\Sigma_{\ell}])\subseteq\mathbf{Span}(\Phi_{R}[B_{\ell}])\). However, in the construction of \(\Sigma_{\ell}\), there is a possible pitfall that results in \(\Sigma_{\ell}\cap\mathbf{Span}(B_{\ell+1})=\emptyset\). This condition might inadvertently assist the gambler in extracting a significant fraction of the reward in higher levels, even without knowledge of upcoming levels. To evade this trap and to ensure that \(\Sigma:=\bigcup_{\ell=1}^{\kappa}\Sigma_{\ell}\) serves as a tool for crafting a pairwise independent weight assignment yielding a high prophet reward, we choose \(\Sigma\) to meet certain criteria:
1. \(\mathbf{Pr}[\sigma\in\mathbf{Span}(B_{\ell+1})\mid B_{\ell},\Sigma_{\ell}]= \frac{1}{2}\) for any vector \(\sigma\in\Sigma_{\ell}\)
2. \(\mathbf{Rank}(\Sigma_{\ell})=|\Sigma_{\ell}|=|B_{\ell}|/2\), i.e. \(\Sigma_{\ell}\) consists of linearly independent vectors of size \(|B_{\ell}|/2\),
3. \(\Sigma:=\bigcup_{\ell=1}^{\kappa}\Sigma_{\ell}\) are pairwise linearly independent vectors.
For now, let's assume the existence of \(\Sigma_{\ell}\) and \(B_{\ell}\) satisfying the above properties. The technical details for the construction of such \(\Sigma_{\ell}\) and \(B_{\ell}\) are described in Procedure 2 (Section 6).
Consider \(X=\Phi_{R}[\Sigma]\) to be a subset of \(\mathbb{F}_{2}^{2d}\) representing the collection of vectors upon application of the random linear map. Property 3 implies that the elements of \(X\) are pairwise independent vectors. As a natural next step, we would attempt to define a weight assignment \(w(\mathbf{v})=2^{\ell}\) if \(\mathbf{v}\in\Phi_{R}[\Sigma_{\ell}]\). However, there are two key challenges that prevent such distributions from being pairwise independent: first, each level contains a fixed number of vectors; hence, once we condition on \(w(\mathbf{v})=2^{\ell}\), the probability that any remaining vector receives the weight \(2^{\ell}\) decreases.
The second, and more challenging, technical obstacle is that this form of weight assignment cannot be considered _valid_, as \(\mathbf{v}\in\mathbb{F}_{2}^{2d}\) might be part of both \(\Phi_{R}[\Sigma_{\ell}]\) and \(\Phi_{R}[\Sigma_{\ell^{\prime}}]\) for distinct \(\ell,\ell^{\prime}\in[\kappa]\), and this happens with a non-zero probability. Despite this, such an event has a small probability of occurrence since the random matrix \(R\in\mathbb{F}_{2}^{2d\times d}\) becomes full column-rank with high probability. As \(X\) is a set of linear combinations of columns of \(R\), when \(R\) is a full column-rank matrix, each vector can appear in at most one of the levels. Thus, the weight assignment is "approximately" valid, which allows us to construct a valid weight assignment that preserves the required bottleneck for any prophet inequality algorithm with high probability. The actual construction is more involved and deferred to Section 6.
For the sake of exposition, let us momentarily treat the above weight assignment distribution \(\mathcal{D}\) as valid and pairwise independent. We define an arrival order of elements (vectors) to the gambler as \(\Phi_{R}[\Sigma_{1}],\Phi_{R}[\Sigma_{2}],\ldots,\Phi_{R}[\Sigma_{\kappa}]\), where elements (vectors) within each group are arranged arbitrarily. This ordering necessitates an adversary who knows all weight realizations beforehand. Assuming provisionally that \(w\) is a valid weight assignment, we observe that Properties 1 and 2 ensure that
\[\mathbb{E}[\mathbf{Rank}_{w}(\Phi_{R}[\Sigma_{\ell}\cap\mathbf{Span}(B_{\ell} \setminus B_{\ell+1})])]=\mathbb{E}[\mathbf{Rank}_{w}(\Phi_{R}[\Sigma_{\ell} \cap\mathbf{Span}(B_{\ell})])]=\frac{|B_{\ell}|}{4}.\]
Therefore, a prophet who knows all the weights in advance can select the set of items \(S^{*}:=\bigcup_{\ell=1}^{\kappa}\Phi_{R}[\Sigma_{\ell}\cap\mathbf{Span}(B_{ \ell}\setminus B_{\ell+1})]\) and guarantee that
\[\mathbb{E}[\mathbf{Rank}_{w}(S^{*})] =\sum_{\ell=1}^{\kappa}\mathbb{E}[\mathbf{Rank}_{w}(\Phi_{R}[ \Sigma_{\ell}\cap\mathbf{Span}(B_{\ell}\setminus B_{\ell+1})])]\] \[=\sum_{\ell=1}^{\kappa}2^{\ell}\cdot\frac{|B_{\ell}|}{4}=\sum_{ \ell=1}^{\kappa}2^{\ell}\cdot\frac{d}{2^{\ell-1}\cdot 4}\] \[=\Omega(\kappa\cdot d).\]
On the other hand, when the gambler selects \(\rho_{\ell}\) many vectors at level \(\ell\), we can leverage concentration inequalities (assuming \(d=2^{O(\kappa)}\)) and Property 1 to demonstrate that, with high probability, roughly \(\rho_{\ell}\cdot\frac{1}{2^{\ell^{\prime}-\ell}}\) of these vectors will reside in \(\mathbf{Span}(B_{\ell^{\prime}})\) for any \(\ell^{\prime}\geq\ell\). Additionally, notice that \(|B_{\ell^{\prime}}|=|B_{\ell}|\cdot\frac{1}{2^{\ell^{\prime}-\ell}}\). Roughly speaking, \(\rho_{\ell}\) many selected vectors at level \(\ell\) span a \(\rho_{\ell}/|B_{\ell}|\) fraction of the rank of level \(\ell^{\prime}\). As a result, one can show that for any gambler/algorithm, the sum of fractions of vectors selected from each level is upper-bounded by a universal constant with high probability. Given that the maximum reward obtainable from each level is upper-bounded by \(\approx 2^{\ell}\cdot\frac{d}{2^{\ell-1}}=2d\), a gambler can achieve an expected reward of \(O(d)\). Thus, we have
\[\frac{\mathbb{E}[\text{reward of gambler}]}{\mathbb{E}[\text{reward of prophet}]}=\frac{O(d)}{\Omega(\kappa\cdot d)}=O\left(\frac{1}{\kappa}\right)=O \left(\frac{1}{\log d}\right).\]
Lastly, we readdress the techniques used to tackle the challenges in creating pairwise independent weight assignments, utilizing a method similar to the CRS construction outlined in the previous section, though more involved. First, we generate \(\tau\) many copies of each vector in \(\mathbb{F}_{2}^{2d}\) and obtain a matroid \(\mathcal{M}^{\tau}=(\mathbb{F}_{2}^{2d}\times[\tau],\mathcal{I}^{\tau})\). Next, whenever \(\mathbf{v}\in\mathbb{F}_{2}^{2d}\) appears at different levels, different copies of \(\mathbf{v}\) are assigned the different weights. This way, the resulting distribution becomes "valid". Later, we mix this distribution with two others: one in which all weights are assigned uniformly at random, and another where we randomly select a level \(\ell\) and assign all weights to be \(2^{\ell}\). Technical details of the weight assignment distribution are presented in Section 6.
We later introduce an algorithm for the pairwise independent matroid prophet inequality that matches the upper bound of \(O\left(1/\log\mathbf{Rank}\right)\) up to a constant factor. Our algorithm's primary strategy is to group the matroid elements into \(O(\log\mathbf{Rank})\) buckets based on their weights. This ensures that within the same bucket, the weight of any two elements is within a factor of 2 of each other. Following this, we employ a greedy algorithm on the group of elements yielding the highest expected reward. Given that there are \(O(\log\mathbf{Rank})\) total groups, our algorithm achieves an approximation factor of \(\Omega(1/\log\mathbf{Rank})\). However, one must note that the elements can have weights in \(\mathbb{R}_{\geq 0}\), suggesting the possibility of a group where the weights of elements might vary significantly. We address this challenge in our analysis by leveraging local-lemma type results for pairwise independent sets of events, as proposed in [18], showing that this particular bucket with high probability contains at most one element.
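A minimal sketch of the bucketing strategy just described, under several simplifying assumptions: a matroid independence oracle `is_independent`, a weight scale `w_max`, and per-bucket expected scores `bucket_scores` pre-computed from the known distribution \(\mathcal{D}\) (all identifiers are ours, not the paper's); the handling of the special lowest bucket via the local-lemma argument of [18] is omitted.

```
import math

def bucket_of(w, w_max, num_buckets):
    # Weight class b collects weights in (w_max / 2^(b+1), w_max / 2^b];
    # weights of zero, or below the smallest class, are ignored in this sketch.
    if w <= 0 or w > w_max:
        return None
    b = int(math.log2(w_max / w))
    return b if b < num_buckets else None

def bucketed_greedy(arrivals, is_independent, rank, w_max, bucket_scores):
    """Commit to the bucket with the best (pre-computed) expected score, then
    greedily accept arriving elements whose weight falls in that bucket."""
    num_buckets = max(1, math.ceil(math.log2(rank)) + 1)
    target = max(range(num_buckets), key=lambda b: bucket_scores[b])
    S = []
    for e, w in arrivals:                            # online arrival order
        if bucket_of(w, w_max, num_buckets) == target and is_independent(S + [e]):
            S.append(e)
    return S
```

Roughly speaking, since the chosen bucket's weights differ by at most a factor of 2, greedy recovers a constant fraction of that bucket's optimum, and losing a factor over the \(O(\log\mathbf{Rank})\) choices of bucket reproduces the guarantee described above.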
### Stochastic Selection with Partition Property
Finally, we show that when the input matroid \(\mathcal{M}\) admits the \(\alpha\)-_partition property_ (formally defined in Section 8), which allows the matroid to be approximated by a (random) _simple partition matroid_, we can transform the pairwise independent Contention Resolution and Prophet Inequality problems into parallel instances of the same problem with rank one matroid constraints. To do this, we leverage the \(\frac{1}{3}\)-approximate pairwise independent prophet inequality for the rank one matroid from [18]. For the construction of a reduction algorithm for Contention Resolution, we first prove a \(\frac{1}{4}\)-balanced pairwise independent CRS for the one-uniform matroid. Following this, by exploiting a result from convex geometry, we transform the input distribution of CRS for a general problem into valid input distributions (specifically, ex-ante feasible ones) for simple partition matroids, thereby reducing the CRS problem to instances on partition matroids.
## 4 Building Block: Random Linear Maps
In this section, we devise a generic tool to produce random vectors displaying limited independence, relying solely on a uniformly random map between two finite vector spaces. For two vector spaces \(\mathbb{F}_{q}^{m}\) and \(\mathbb{F}_{q}^{d}\) over the same finite field \(\mathbb{F}_{q}\), a random linear map assigns basis vectors of \(\mathbb{F}_{q}^{m}\) randomly to vectors in \(\mathbb{F}_{q}^{d}\). We demonstrate that this random map transforms linear independence in the domain vector space \(\mathbb{F}_{q}^{m}\) into stochastic independence in the range vector space \(\mathbb{F}_{q}^{d}\).
To elaborate, consider \(R\in\mathbb{F}_{q}^{d\times m}\) to be a uniformly random matrix, where each entry \(r_{i,j}\) is sampled independently according to the uniform distribution over the finite set of elements \(\{0,1,\ldots,q-1\}\). We define a random linear map \(\Phi_{R}:\mathbb{F}_{q}^{m}\to\mathbb{F}_{q}^{d}\) as \(\Phi_{R}(\sigma)=R\cdot\sigma\). One can also verify that when \(R\) is sampled uniformly, \(\Phi_{R}\) has a uniform distribution over all possible linear functions from \(\mathbb{F}_{q}^{m}\) to \(\mathbb{F}_{q}^{d}\). Let \(\Sigma\subseteq\mathbb{F}_{q}^{m}\) be a collection of \(k\)-wise linearly independent vectors, implying that any subset of size \(k\) from \(\Sigma\) consists of linearly independent vectors, which we refer to as "raw" vectors. We then observe that by applying the random linear map \(\Phi_{R}\) to these "raw" vectors \(\Sigma\), we obtain a collection of \(k\)-wise stochastically independent vectors. The following lemma formalizes this discussion.
**Lemma 4.1**.: _Let \(\Sigma\subseteq\mathbb{F}_{q}^{m}\) be a collection of vectors \(\sigma_{1},\ldots,\sigma_{n}\in\mathbb{F}_{q}^{m}\) which are \(k\)-wise linearly independent, \(R\in\mathbb{F}_{q}^{d\times m}\) be a uniformly random matrix with entries \(r_{i,j}\sim\mathrm{Unif}\left\{0,1,\ldots,q-1\right\}\), and \(\mathbf{X}=\Phi_{R}[\Sigma]\) with vectors \(\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\). Then for any subset \(S\subseteq[n]\) of size at most \(k\) and vectors \(\mathbf{v}_{i}\in\mathbb{F}_{q}^{d}\) for \(i\in S\), the events \(\{\mathbf{x}_{i}=\mathbf{v}_{i}\}_{i\in S}\) are mutually independent. Moreover, for any \(i\in[n]\) and \(\mathbf{v}\in\mathbb{F}_{q}^{d}\), \(\mathbf{Pr}[\mathbf{x}_{i}=\mathbf{v}]=\frac{1}{q^{d}}\)._
Proof.: Let \(S^{\prime}\subseteq S\) be an arbitrary subset and \(\mathbf{v}_{i}\in\mathbb{F}_{q}^{d}\) be an arbitrary vector for all \(i\in S^{\prime}\). We note that \(|S^{\prime}|\leq k\). We start by observing that
\[\mathbf{Pr}\left[\bigwedge_{i\in S^{\prime}}\{\mathbf{x}_{i}=\mathbf{ v}_{i}\}\right] =\mathbf{Pr}\left[\bigwedge_{i\in S^{\prime}}\left\{\Phi_{R}(\sigma_{i})=\mathbf{v}_{i}\right\}\right]\] \[=\mathbf{Pr}\left[\bigwedge_{i\in S^{\prime}}\bigwedge_{\ell=1}^ {d}\left\{\sum_{j=1}^{m}r_{\ell,j}\cdot\sigma_{i}(j)=\mathbf{v}_{i}(\ell) \right\}\right]\] \[=\mathbf{Pr}\left[\bigwedge_{\ell=1}^{d}\bigwedge_{i\in S^{ \prime}}\left\{\sum_{j=1}^{m}r_{\ell,j}\cdot\sigma_{i}(j)=\mathbf{v}_{i}(\ell) \right\}\right]\] \[=\prod_{\ell=1}^{d}\mathbf{Pr}\left[\bigwedge_{i\in S^{\prime}} \left\{\sum_{j=1}^{m}r_{\ell,j}\cdot\sigma_{i}(j)=\mathbf{v}_{i}(\ell)\right\} \right]\qquad\text{ (disjoint sets of RVs)}\]
For a fixed \(\ell\in[d]\), consider the system of equations with variables \(\theta_{j}\) for all \(j\in[m]\)
\[\sum_{j=1}^{m}\theta_{j}\cdot\sigma_{i}(j)=\mathbf{v}_{i}(\ell)\qquad\forall i \in S^{\prime}.\]
Notice that as \(|S^{\prime}|\leq k\), the subset \(\Sigma_{S^{\prime}}\subseteq\mathbb{F}_{q}^{m}\) with vectors \(\{\sigma_{i}\mid i\in S^{\prime}\}\) consists of linearly independent vectors by the definition of \(\Sigma\). Therefore, the set of variables that satisfies the system of equations forms an affine subspace of dimension \(m-|S^{\prime}|\), which we call \(\Theta\). Since any affine subspace of \(\mathbb{F}_{q}^{m}\) of dimension \(t\) contains \(q^{t}\) many points, \(|\Theta|=q^{m-|S^{\prime}|}\). Thus, whenever the vector \((r_{\ell,1},r_{\ell,2},\ldots,r_{\ell,m})\) is in \(\Theta\), we have \(\sum_{j=1}^{m}r_{\ell,j}\cdot\sigma_{i}(j)=\mathbf{v}_{i}(\ell)\) for all \(i\in S^{\prime}\). So, we can compute the probability as follows.
\[\mathbf{Pr}\left[\bigwedge_{i\in S^{\prime}}\{\mathbf{x}_{i}=\mathbf{v}_{i}\} \right]=\prod_{\ell=1}^{d}\mathbf{Pr}\left[\bigwedge_{i\in S^{\prime}}\left\{ \sum_{j=1}^{m}r_{\ell,j}\cdot\sigma_{i}(j)=\mathbf{v}_{i}(\ell)\right\}\right] =\left(\frac{q^{m-|S^{\prime}|}}{q^{m}}\right)^{d}=q^{-d\cdot|S^{\prime}|}.\]
On the other hand, the product of probabilities can be computed as
\[\prod_{i\in S^{\prime}}\mathbf{Pr}[\mathbf{x}_{i}=\mathbf{v}_{i}]=\prod_{i\in S ^{\prime}}\mathbf{Pr}\left[\bigwedge_{\ell=1}^{d}\left\{\sum_{j=1}^{m}r_{\ell,j}\cdot\sigma_{i}(j)=\mathbf{v}_{i}(\ell)\right\}\right]=\prod_{i\in S^{\prime} }q^{-d}=q^{-d\cdot|S^{\prime}|}.\]
The second equality follows from the fact that \(\sigma_{i}\) is a non-zero vector and a non-zero linear combination of uniformly random vectors is also a uniformly random vector over \(\mathbb{F}_{q}^{d}\). The latter claim can be proved by showing that the uniform distribution is preserved under scaling and translation. Thus, we conclude that
\[\mathbf{Pr}\left[\bigwedge_{i\in S^{\prime}}\{\mathbf{x}_{i}=\mathbf{v}_{i}\} \right]=\prod_{i\in S^{\prime}}\mathbf{Pr}[\mathbf{x}_{i}=\mathbf{v}_{i}].\]
Hence the events \(\{\mathbf{x}_{i}=\mathbf{v}_{i}\}_{i\in S}\) are mutually independent and \(\Phi_{R}[\Sigma]\) forms \(k\)-wise independent vectors.
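For tiny parameters, the lemma can even be verified exhaustively. The following sketch (our own sanity check) enumerates all \(q^{d\cdot m}=16\) matrices \(R\in\mathbb{F}_{2}^{2\times 2}\), applies \(\Phi_{R}\) to the pairwise linearly independent vectors \(\sigma_{1}=(1,0)\), \(\sigma_{2}=(0,1)\), and confirms the exact factorization of the joint distribution together with the uniform \(1/q^{d}\) marginals.

```
from itertools import product
from fractions import Fraction

q, d, m = 2, 2, 2
sigmas = [(1, 0), (0, 1)]                     # pairwise linearly independent in F_2^2

def apply_map(R, s):
    # Phi_R(s) = R * s over F_q.
    return tuple(sum(R[i][j] * s[j] for j in range(m)) % q for i in range(d))

vectors = list(product(range(q), repeat=d))
matrices = list(product(range(q), repeat=d * m))   # all q^(d*m) matrices R
joint, marg = {}, [dict() for _ in sigmas]
for flat in matrices:
    R = [flat[i * m:(i + 1) * m] for i in range(d)]
    xs = tuple(apply_map(R, s) for s in sigmas)
    for i, x in enumerate(xs):
        marg[i][x] = marg[i].get(x, 0) + 1
    joint[xs] = joint.get(xs, 0) + 1

N = len(matrices)
# Exact check: Pr[x1=u, x2=v] == Pr[x1=u] * Pr[x2=v] for all u, v, and each
# marginal is uniform (= 1/q^d), exactly as Lemma 4.1 asserts.
ok = all(
    Fraction(joint.get((u, v), 0), N) == Fraction(marg[0][u], N) * Fraction(marg[1][v], N)
    and Fraction(marg[0][u], N) == Fraction(1, q ** d)
    for u in vectors for v in vectors
)
print(ok)   # True
```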
Next, we present a well-known fact about random matrices defined over finite fields. To keep the paper self-contained, we provide a simple proof.
**Lemma 4.2**.: _Let \(R\in\mathbb{F}_{q}^{d\times m}\) be a uniformly random matrix, i.e., \(R\) be a matrix where each entry is \(\operatorname{Unif}(\{0,1,\ldots q-1\})\). Then, for any \(m<d\), we have \(\operatorname{\mathbf{Pr}}[\textbf{Rank}(R)=m]\geq 1-\frac{1}{q^{d-m}}\)._
Proof.: The proof of this lemma proceeds via induction on \(m\). For the base case \(m=1\), the matrix \(R\) is a single uniformly random column, which is the zero vector with probability \(q^{-d}\); hence \(\operatorname{\mathbf{Pr}}[\textbf{Rank}(R)=1]=1-q^{-d}\geq 1-q^{-(d-1)}\), which validates the claim. For the induction step, consider \(m>1\), and let \(R\in\mathbb{F}_{q}^{d\times m}\) be a matrix generated uniformly at random with columns denoted by \(\mathbf{r}_{1},\ldots,\mathbf{r}_{m}\). Then, we have the following:
\[\operatorname{\mathbf{Pr}}[\textbf{Rank}(R)=m]\] \[=\operatorname{\mathbf{Pr}}[\textbf{Rank}(\mathbf{r}_{1},\ldots \mathbf{r}_{m-1})=m-1]\cdot\operatorname{\mathbf{Pr}}[\mathbf{r}_{\mathbf{m}} \notin\operatorname{\mathbf{Span}}(\mathbf{r}_{1},\ldots\mathbf{r}_{m-1}) \mid\textbf{Rank}(\mathbf{r}_{1},\ldots\mathbf{r}_{m-1})=m-1]\] \[\geq\left(1-\frac{1}{q^{d-m+1}}\right)\cdot\operatorname{ \mathbf{Pr}}[\mathbf{r}_{\mathbf{m}}\notin\operatorname{\mathbf{Span}}( \mathbf{r}_{1},\ldots\mathbf{r}_{m-1})\mid\textbf{Rank}(\mathbf{r}_{1},\ldots \mathbf{r}_{m-1})=m-1]\] \[=\left(1-\frac{1}{q^{d-m+1}}\right)\cdot\left(1-\frac{1}{q^{d-m+1 }}\right)\geq 1-\frac{2}{q^{d-m+1}}\] \[\geq 1-\frac{q}{q^{d-m+1}}=1-\frac{1}{q^{d-m}}.\]
Above, the first inequality follows from the induction hypothesis on \(m\). The second equality holds since \(\mathbf{r}_{m}\) is sampled independently of \(\mathbf{r}_{1},\ldots,\mathbf{r}_{m-1}\) and uniformly from \(\mathbb{F}_{q}^{d}\), and conditioned on \(\textbf{Rank}(\mathbf{r}_{1},\ldots,\mathbf{r}_{m-1})=m-1\), the subspace \(\operatorname{\mathbf{Span}}(\mathbf{r}_{1},\ldots,\mathbf{r}_{m-1})\) contains exactly \(q^{m-1}\) points. Hence, \(\operatorname{\mathbf{Pr}}[\mathbf{r}_{m}\notin\operatorname{\mathbf{Span}}(\mathbf{r}_{1},\ldots,\mathbf{r}_{m-1})\mid\textbf{Rank}(\mathbf{r}_{1},\ldots,\mathbf{r}_{m-1})=m-1]=1-\frac{q^{m-1}}{q^{d}}=1-\frac{1}{q^{d-m+1}}\).
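A quick Monte Carlo check of Lemma 4.2 (our own sketch, specialized to \(q=2\) for brevity): columns of a uniform \(R\in\mathbb{F}_{2}^{d\times m}\) are encoded as \(d\)-bit integers, and the rank is computed with the standard XOR linear basis.

```
import random

def rank_f2(masks):
    # Rank over F_2 via the standard XOR linear basis keyed on leading bit.
    basis = {}
    for v in masks:
        while v:
            lead = v.bit_length() - 1
            if lead not in basis:
                basis[lead] = v
                break
            v ^= basis[lead]
    return len(basis)

d, m, trials = 8, 5, 50_000
hits = 0
for _ in range(trials):
    # Columns of a uniform R in F_2^{d x m}; column rank equals Rank(R).
    cols = [random.getrandbits(d) for _ in range(m)]
    hits += rank_f2(cols) == m
print("empirical Pr[Rank = m]:", hits / trials, " lemma bound:", 1 - 2 ** -(d - m))
```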
The following observation, derived from the preceding lemma, essentially states that when the dimension of the range space is considerably larger than that of the domain space, a random linear map tends to become a random linear "embedding" (or an injective function) with a high probability. This observation plays a crucial role in addressing potential issues that could arise from the multiple generations of the same vector in later sections.
**Observation 4.3**.: _Given that the random matrix \(R\) possesses full column-rank, the mapping \(\Phi_{R}:\mathbb{F}_{q}^{m}\rightarrow\mathbb{F}_{q}^{d}\) is an injection, i.e., \(\Phi_{R}(\mathbf{v})=\Phi_{R}(\mathbf{u})\) holds true if and only if \(\mathbf{u}=\mathbf{v}\). Consequently, when \(m<d\), with probability at least \(1-\frac{1}{q^{d-m}}\), \(\Phi_{R}\) is one-to-one._
## 5 Limits of Pairwise Independent Contention Resolution
In this section, we show that there exists a matroid \(\mathcal{M}=(E,\mathcal{I})\) with rank \(2d\) and a pairwise independent distribution \(\mathcal{D}\in\Delta_{\operatorname{pw}}(2^{E})\) that does not admit an \(\omega\left(\frac{1}{d}\right)\)-balanced CRS. In particular, we consider the matroid \(\mathcal{M}^{d}=(\mathbb{F}_{q}^{2d}\times[d],\mathcal{I}^{d})\), which consists of \(d\) labeled copies of each element in the full linear matroid \(\mathbb{F}_{q}^{2d}\), and construct a distribution \(\mathcal{D}\) over \(2^{\mathbb{F}_{q}^{2d}\times[d]}\) described in Procedure 1. The following is the main theorem of this section:
**Theorem 5.1**.: _For \(d>2\) and some prime \(q>d\), there is no \(\frac{3}{d}\)-balanced CRS for full linear matroid over \(\mathbb{F}_{q}^{2d}\)._
The key idea here is to construct \(d\) many vectors uniformly from \(\mathbb{F}_{q}^{2d}\times[d]\) with rank \(O(1)\) such that each vector is included in the collection independently from any other vector. At a high level, we construct such a collection of vectors as follows: first, construct \(d\) many pairwise linearly independent vectors \(\Sigma\subseteq\mathbb{F}_{q}^{c}\) where \(c=2\). In Lemma 5.2, we show the existence of such \(\Sigma\) when \(q\) is large enough. Next, we consider the random linear map \(X=\Phi_{R}[\Sigma]=\{\mathbf{x}_{1},\ldots,\mathbf{x}_{d}\}\), where \(R\) is a uniformly random matrix in \(\mathbb{F}_{q}^{2d\times c}\).
Next, we consider a random permutation \(\pi^{\mathbf{v}}\) for each \(\mathbf{v}\in\mathbb{F}_{q}^{2d}\) independently and include each labeled copy \(\mathbf{v}^{i}\in\mathbb{F}_{q}^{2d}\times[d]\) into the set of active elements \(A\) if \(\mathbf{x}_{\pi^{\mathbf{v}}(i)}=\mathbf{v}\). Recall that Lemma 4.1 implies that the events \(\{\mathbf{x}_{i^{\prime}}=\mathbf{v}\}\) and \(\{\mathbf{x}_{j^{\prime}}=\mathbf{u}\}\) are independent for any \(\mathbf{u},\mathbf{v}\in\mathbb{F}_{q}^{2d}\) and \(i^{\prime}\neq j^{\prime}\in[d]\). However, for any distinct pair \(\mathbf{v}^{i},\mathbf{u}^{j}\in\mathbb{F}_{q}^{2d}\times[d]\), if \(\mathbf{v}^{i}\) is active, the probability of \(\mathbf{u}^{j}\) being active decreases, as \(A\) has a fixed size of \(d\). To overcome this hurdle, with a small probability, we let \(A=\mathbb{F}_{q}^{2d}\times[d]\). Combining everything, we describe our distribution in Procedure 1.
**Procedure 1**.: Pairwise Independent Set of Active Elements of \(\mathcal{M}_{q}^{2d}\)

**Dist I: \(\mathcal{D}_{1}\).** Initialize \(A\leftarrow\emptyset\).

1. Independently for each \(\mathbf{v}\in\mathbb{F}_{q}^{2d}\), consider a uniformly random permutation \(\pi^{\mathbf{v}}\sim\Pi[d]\).
2. Independently for each \(\mathbf{v}\in\mathbb{F}_{q}^{2d}\), define the set of available copies \(N^{\mathbf{v}}\) as follows \[N^{\mathbf{v}}=\begin{cases}\text{uniformly random subset of }[d]\text{ of size }d/2&\text{w.p. }a(d):=\frac{d\cdot(d-2)}{(d-1)^{2}}\\ \emptyset&\text{otherwise.}\end{cases}\]
3. Let the matrix \(R\in\mathbb{F}_{q}^{2d\times c}\) be a random matrix with entries \(r_{ij}\sim\text{Unif}\left\{0,1,\ldots,q-1\right\}\) sampled independently.
4. Let \(\Sigma^{\text{CRS}}\subseteq\mathbb{F}_{q}^{c}\) be a collection of \(d\) many pairwise linearly independent vectors.
5. Define \(X=\Phi_{R}[\Sigma^{\text{CRS}}]\) with vectors \(\mathbf{x}_{1},\ldots,\mathbf{x}_{d}\).
6. For all \(t\in[d]\), if \(\mathbf{v}=\mathbf{x}_{\pi^{\mathbf{v}}(t)}\), then \(\mathbf{v}^{t}\in A\) if and only if \(t\in N^{\mathbf{v}}\).

**Dist II: \(\mathcal{D}_{2}\).**

7. Set \(A\gets E^{d}\).

**Dist III: \(\mathcal{D}_{3}\).**

8. Set \(A\leftarrow\emptyset\).

Sample the set of active elements \(A\) from \(\mathcal{D}_{1}\) w.p. \(p:=1-\frac{1}{q^{2d}}\), from \(\mathcal{D}_{2}\) w.p. \(\delta\), and from \(\mathcal{D}_{3}\) w.p. \(1-p-\delta\).

We now show that the distribution \(\mathcal{D}\) described in Procedure 1 is a pairwise independent distribution in \(\Delta_{\mathrm{pw}}\left(2^{E^{d}}\right)\). More specifically, we want to show that
\[\mathbf{Pr}[\mathbf{v}^{i}\in A\wedge\mathbf{u}^{j}\in A]=\mathbf{Pr}[\mathbf{v} ^{i}\in A]\cdot\mathbf{Pr}[\mathbf{u}^{j}\in A] \tag{1}\]
for any pairs of distinct labeled vectors \(\mathbf{v}^{i},\mathbf{u}^{j}\in\mathbb{F}_{q}^{2d}\times[d]\). Observe that even if \(\mathbf{v}\) equals \(\mathbf{u}\), the indices \(i\) and \(j\) associated with them must be distinct. Before delving into probabilistic assurances, we first establish that \(\mathcal{D}_{1}\) is a valid distribution. The only non-trivial part in constructing \(\mathcal{D}_{1}\), specifically Step (4), is validated through the subsequent lemma.
**Lemma 5.2**.: _Given that \(d\cdot q\leq q^{c}\), there exists a set of vectors \(\Sigma^{\mathit{CRS}}\subset\mathbb{F}_{q}^{c}\), which consists of \(d\) pairwise linearly independent vectors._
Proof.: For any vector \(\mathbf{v}\) in \(\mathbb{F}_{q}^{c}\), the set of scalar multiples \(D_{\mathbf{v}}\) defined as \(\{i\cdot\mathbf{v}:i\in\{0,\ldots,q-1\}\}\) comprises all vectors linearly dependent on \(\mathbf{v}\). For any group of vectors \(\mathbf{v}_{1},\ldots,\mathbf{v}_{t}\in\mathbb{F}_{q}^{c}\) with \(t<d\), there exists a vector \(\mathbf{u}\in\mathbb{F}_{q}^{c}\setminus\bigcup_{j=1}^{t}D_{\mathbf{v}_{j}}\), i.e., a vector that is linearly independent from each of \(\mathbf{v}_{1},\ldots,\mathbf{v}_{t}\), since \(|D_{\mathbf{v}_{j}}|=q\) and \(|\mathbb{F}_{q}^{c}|=q^{c}\geq d\cdot q>t\cdot q\). Therefore, iterating this argument, a set \(\{\mathbf{v}_{1},\ldots,\mathbf{v}_{d}\}\) of pairwise linearly independent vectors exists within \(\mathbb{F}_{q}^{c}\).
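One explicit family realizing Lemma 5.2 (a standard choice, stated here as our own illustration) is the projective line in \(\mathbb{F}_{q}^{2}\): the \(q+1\) vectors \((0,1)\) and \((1,a)\) for \(a\in\mathbb{F}_{q}\) are pairwise linearly independent, so any \(d\leq q+1\) of them suffice. The sketch below verifies this exhaustively for a small prime.

```
from itertools import combinations

def pairwise_lin_indep(vectors, q):
    # Over F_q, two nonzero vectors are dependent iff one is a scalar multiple of the other.
    return not any(
        any(all((c * a - b) % q == 0 for a, b in zip(u, v)) for c in range(q))
        for u, v in combinations(vectors, 2)
    )

q = 13                                            # prime with q >= d
Sigma = [(0, 1)] + [(1, a) for a in range(q)]     # the q + 1 directions of the projective line
print(len(Sigma), pairwise_lin_indep(Sigma, q))   # 14 True
```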
For the rest of the section, we set \(q\) as the nearest prime larger than \(d\). Next, we continue by calculating the marginal probabilities for each element and the joint probabilities for distinct pairs, as outlined in the subsequent lemma. The proof of the lemma involves probabilistic calculations and is delegated to Appendix A.
**Lemma 5.3**.: _For any distinct pairs of labeled vectors \(\mathbf{v}^{i},\mathbf{u}^{j}\in\mathbb{F}_{q}^{2d}\times[d]\),_
\[\mathbf{Pr}[\mathbf{v}^{i}\in A]=\frac{p\cdot a(d)}{2\cdot q^{2d}}+\delta \qquad\text{and}\qquad\mathbf{Pr}[\mathbf{v}^{i},\mathbf{u}^{j}\in A]=\frac{ p}{4\cdot q^{4d}}\cdot\left(\frac{d-2}{d-1}\right)^{2}+\delta.\]
At this point, we are prepared to present the lemma that verifies that Procedure 1 generates a valid pairwise independent distribution. The proof of the lemma is delegated to Appendix A.
**Lemma 5.4**.: _There exists \(\delta>0\) such that_
1. \(p+\delta\leq 1\)_, in particular,_ \(\delta=O(\frac{1}{q^{4d}})\)_._
2. \(\mathbf{Pr}[\mathbf{v}^{i},\mathbf{u}^{j}\in A]=\mathbf{Pr}[\mathbf{v}^{i} \in A]\cdot\mathbf{Pr}[\mathbf{u}^{j}\in A]\) _for any two distinct items_ \(\mathbf{v}^{i},\mathbf{u}^{j}\in\mathbb{F}_{q}^{2d}\times[d]\)_._
We now demonstrate that \(A\), when sampled in accordance with Procedure 1, does not admit an \(\omega(1/d)\)-balanced contention resolution map. Due to Theorem 2.2, it is enough to prove that \(\mathbb{E}_{A\sim\mathcal{D}}[\mathbf{Rank}(A)]\leq O(1/d)\cdot\mathbb{E}_{A \sim\mathcal{D}}[|A|]\). The following lemma proves this fact.
**Lemma 5.5**.: \(\mathbb{E}[\mathbf{Rank}(A)]\leq\frac{c+1}{d}\cdot\mathbb{E}[|A|]=\frac{6}{ \mathbf{Rank}(\mathcal{M}^{d})}\cdot\mathbb{E}[|A|]\)_._
Proof.: We observe that
\[\mathbb{E}[\mathbf{Rank}(A)] =\mathbb{E}[\mathbf{Rank}(A)\mid A\sim\mathcal{D}_{1}]\cdot \mathbf{Pr}[A\sim\mathcal{D}_{1}]+\mathbf{Rank}(M)\cdot\mathbf{Pr}[A\sim \mathcal{D}_{2}]\] \[\leq\mathbb{E}[\mathbf{Rank}(A)\mid A\sim\mathcal{D}_{1}]+\frac{2 d}{q^{4d}} \left(\delta\leq\frac{1}{q^{4d}}\right)\] \[\leq c+\frac{2d}{q^{4d}}\leq c+1.\]
Since \(|A|\geq d\) with probability \(1\), we have \(\mathbb{E}[\mathbf{Rank}(A)]\leq\frac{c+1}{d}\cdot\mathbb{E}[|A|]\). This completes the proof as \(c=2\) and \(\mathbf{Rank}(\mathcal{M}^{d})=2d\).
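The rank-versus-size gap that drives Lemma 5.5 can be observed in a simplified Monte Carlo rendering of Dist I (our own sketch: the permutation \(\pi^{\mathbf{v}}\), the copy labels, and the mixing components \(\mathcal{D}_{2},\mathcal{D}_{3}\) are abstracted into one independent coin per element, so the constants differ from the precise procedure): the active set always has rank at most \(c=2\), while its expected size grows linearly in \(d\).

```
import random

def phi(R, sigma, q):
    # (Helper as in the sketch of Section 3.1.) Phi_R(sigma) = R * sigma over F_q.
    return tuple(sum(row[j] * sigma[j] for j in range(len(sigma))) % q for row in R)

def rank_fq(vectors, q):
    # (Helper as in the sketch of Section 3.1.) Rank over F_q, q prime.
    rows, r = [list(v) for v in vectors], 0
    for col in range(len(rows[0])):
        if r == len(rows):
            break
        piv = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        inv = pow(rows[r][col], q - 2, q)
        rows[r] = [x * inv % q for x in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                f = rows[i][col]
                rows[i] = [(a - f * b) % q for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

q, d, c, trials = 17, 15, 2, 2000                   # q prime with q > d, as in Theorem 5.1
Sigma = [(0, 1)] + [(1, a) for a in range(d - 1)]   # d pairwise lin. indep. vectors in F_q^c
act_prob = (d * (d - 2) / (d - 1) ** 2) / 2         # Pr[t in N^v], approximated per element
tot_rank = tot_size = 0
for _ in range(trials):
    R = [[random.randrange(q) for _ in range(c)] for _ in range(2 * d)]
    X = [phi(R, s, q) for s in Sigma]               # candidates; rank(X) <= c always
    A = [v for v in X if random.random() < act_prob]
    tot_size += len(A)
    tot_rank += rank_fq(A, q) if A else 0
print("E[Rank(A)] ~", tot_rank / trials, "  E[|A|] ~", tot_size / trials)
# Rank(A) never exceeds c = 2, while E[|A|] grows linearly in d, so the ratio is O(1/d).
```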
In the following, we demonstrate that the marginal probabilities of each element's inclusion in \(A\) reside within the matroid polytope, denoted as \(\mathcal{P}_{\mathcal{I}^{d}}\).
**Lemma 5.6**.: _Let \(\mu\) be the marginal probability vector of \(A\) where \(\mu(\mathbf{v}^{i})=\mathbf{Pr}_{A\sim\mathcal{D}}[\mathbf{v}^{i}\in A]\). Then, \(\mu\in P_{\mathcal{I}^{d}}\)._
Proof.: Lemma 5.3 implies that,
\[\mu(\mathbf{v}^{i})=\mathbf{Pr}[\mathbf{v}^{i}\in A]=\frac{p\cdot a(d)}{2\cdot q ^{2d}}+\delta\leq\frac{1}{q^{2d}}.\]
For any subset \(S\subseteq\mathbb{F}_{q}^{2d}\times[d]\), observe that
\[\mu(S):=\sum_{\mathbf{v}^{i}\in S}\mu(\mathbf{v}^{i})\leq|S|\cdot\frac{1}{q^{ 2d}}\leq\frac{d\cdot q^{\mathbf{Rank}(S)}}{q^{2d}}\leq\frac{2d\cdot q^{ \mathbf{Rank}(S)}}{\mathbf{Rank}(S)\cdot q^{2d}}\cdot\frac{\mathbf{Rank}(S)} {2}\leq\frac{\mathbf{Rank}(S)}{2}.\]
Above, the second inequality holds because \(|S|\leq d\cdot q^{\mathbf{Rank}(S)}\), as we have \(d\) copies of each \(\mathbf{v}\in\mathbb{F}_{q}^{2d}\) in \(\mathbb{F}_{q}^{2d}\times[d]\). The last inequality follows from the fact that \(x\cdot q^{-x}\) is a decreasing function of \(x\) and \(\mathbf{Rank}(S)\leq 2d\); hence, \(\frac{2d\cdot q^{\mathbf{Rank}(S)}}{\mathbf{Rank}(S)\cdot q^{2d}}\leq 1\). Thus, \(\mu\in P_{\mathcal{I}^{d}}\).
Now we are ready to prove Theorem 5.1.
Proof of Theorem 5.1.: From Lemma 5.6, we conclude that the set \(A\) sampled from the distribution described in Procedure 1 has marginals inside the polytope of the full linear matroid over \(\mathbb{F}_{q}^{2d}\), and this distribution is pairwise independent due to Lemma 5.4. Combining Lemma 5.5 with the characterization of CRSs in Lemma 2.2 and the fact that \(q>d\) completes the proof of the theorem.
## 6 Limits of Prophet Inequality with Pairwise Independent Priors
In this section, we construct an instance of the prophet inequality problem with pairwise independent priors that rules out \(\omega\left(\frac{1}{\log\mathbf{Rank}}\right)\)-competitive pairwise independent prophet inequality. We consider a full binary matroid of rank \(2d\) with \(\tau\) many copies of each element denoted as \(\mathcal{M}^{\tau}=(E^{\tau},\mathcal{I}^{\tau})\), i.e. \(E^{\tau}:=\mathbb{F}_{2}^{2d}\times[\tau]\). The exact value of \(\tau\) will be determined later. Our objective is to create a pairwise independent weight distribution over \(E^{\tau}\) such that any algorithm, which does not have prior knowledge of all weight realizations, fails to select an independent set with a high reward.
### Construction of Pairwise Independent Weight Distribution
Revisiting the key concept we discussed in Section 3.3, our approach involves constructing \(\kappa\) levels. Elements from \(E^{\tau}\) are randomly (though not necessarily uniformly) assigned into one of these levels, with elements at level \(\ell\) receiving a weight of \(2^{\ell}\). This randomized assignment of elements from \(E^{\tau}\) at level \(\ell\) is achieved by applying a random linear map \(\Phi_{R}:\mathbb{F}_{2}^{d}\rightarrow\mathbb{F}_{2}^{2d}\) to a strategically assembled, pairwise linearly independent (random) collection of vectors (in \(\mathbb{F}_{2}^{d}\)), denoted as \(\Sigma_{\ell}\). This random map yields a vector in \(\mathbb{F}_{2}^{2d}\) for each input vector from \(\Sigma_{\ell}\), and hence, an element from matroid \(\mathcal{M}^{\tau}\).
To construct \(\Sigma_{\ell}\), we start at the first level with the principal basis vectors of \(\mathbb{F}_{2}^{d}\), denoted as \(B_{1}=\{\mathbf{e}_{1},\ldots,\mathbf{e}_{d}\}\), and call them the "alive" basis vectors of level one. At each iteration \(\ell\), we randomly
select half of the alive basis vectors from \(B_{\ell-1}\) to form \(B_{\ell}\). It is crucial to understand that \(B_{\ell}\) is not selected uniformly at random from \(B_{\ell-1}\), but instead is cleverly engineered to "deceive" any prophet inequality algorithm. Subsequently, \(\Sigma_{\ell}\) is constructed as linear combinations of the alive basis vectors from \(B_{\ell}\). As we revisited in Section 3.3, \(\Sigma_{\ell}\) and \(B_{\ell}\) fulfill the following conditions:
1. \(\mathbf{Pr}[\sigma\in\mathbf{Span}(B_{\ell+1})\mid B_{\ell},\Sigma_{\ell}]= \frac{1}{2}\) for any vector \(\sigma\in\Sigma_{\ell}\)
2. \(\mathbf{Rank}(\Sigma_{\ell})=|\Sigma_{\ell}|=|B_{\ell}|/2\), i.e. \(\Sigma_{\ell}\) consists of linearly independent vectors of size \(|B_{\ell}|/2\),
3. \(\Sigma:=\bigcup_{\ell=1}^{\kappa}\Sigma_{\ell}\) forms a pairwise linearly independent collection of vectors.
Detailed construction of \(\Sigma_{\ell}\) and \(B_{\ell}\) is presented in Procedure 2. To help understand the process, we give a brief explanation of the construction. We initially identify the set of principal basis vectors as the alive basis vectors, which is denoted as \(B_{1}=\{\mathbf{e}_{1},\ldots\mathbf{e}_{d}\}\). We then partition \(B_{1}\) into \(\mathbf{P}_{1}=\{P_{1}(i):i\in[d/2]\}\), with each element \(P_{1}(i)\) containing two individual basis vectors, precisely, \(P_{1}(i)=\{\mathbf{e}_{2i-1},\mathbf{e}_{2i}\}\). We then assemble \(\Sigma_{1}=\{\mathbf{e}_{1},\mathbf{e}_{3},\ldots\mathbf{e}_{d-1}\}\) by selecting the first vector in each part.
Subsequently, we randomly pick half of the parts from \(\mathbf{P}_{1}\) to establish \(\overline{\mathbf{P}}_{\mathbf{1}}\). For simplicity in our discussion, we will renumber the selected parts in \(\overline{\mathbf{P}}_{\mathbf{1}}\) as \(\left\{\overline{P}_{1}(i):i\in[d/4]\right\}\). Following this, we combine consecutive parts in \(\overline{\mathbf{P}}_{\mathbf{1}}\) to create the second level partition, formally denoted as \(\mathbf{P}_{\mathbf{2}}=\{P_{2}(j):j\in[d/8]\}\), where \(P_{2}(j)=\overline{P}_{1}(2j-1)\cup\overline{P}_{1}(2j)\). Essentially, the collection of basis vectors that appear in a part of \(\overline{\mathbf{P}}_{\mathbf{1}}\) becomes the "alive" basis vectors at level two. Notice that each basis vector from the first level appears in the second level with a probability of \(\frac{1}{2}\) (with possible correlations across vectors).
Continuing with the construction of \(\Sigma_{2}\), we focus on a specific index \(j\in[d/8]\). We enumerate the vectors in \(P_{2}(j)\) as \(\{\mathbf{v}_{1}^{\prime},\mathbf{v}_{2}^{\prime},\mathbf{v}_{3}^{\prime},\mathbf{v}_{4}^{\prime}\}\), and construct \(\Sigma_{2}(j)=\{\sigma_{1}=\mathbf{v}_{1}^{\prime}+\mathbf{v}_{2}^{\prime},\sigma_{2}=\mathbf{v}_{2}^{\prime}+\mathbf{v}_{3}^{\prime}\}\). We then repeat this process for each index \(j\in[d/8]\), generating the sets \(\Sigma_{2}(j)\), and define \(\Sigma_{2}=\bigcup_{j=1}^{d/8}\Sigma_{2}(j)\). Note that every set \(\Sigma_{2}(j)\), and thereby \(\Sigma_{2}\), consists of linearly independent vectors, with \(|\Sigma_{2}|=|B_{2}|/2\). Moreover, the vectors in \(\Sigma_{2}\) and those in \(\Sigma_{1}\) are pairwise linearly independent, since \(\Sigma_{1}\) and \(\Sigma_{2}\) contain disjoint sets of vectors within \(\mathbb{F}_{2}^{d}\).
We inductively implement the following steps for each level \(\ell\) using the template provided by the construction of \(\mathbf{P}_{2}\) and \(\Sigma_{2}\): (i) we generate partitions \(\overline{\mathbf{P}}_{\ell-1}\) and then \(\mathbf{P}_{\ell}\) from \(\mathbf{P}_{\ell-1}\) in a random manner, (ii) we construct \(\Sigma_{\ell}(j)\) by obtaining the linear combination of \(2^{\ell-1}\) consecutive "alive" basis vectors from an arbitrary order of \(P_{\ell}(j)\) for each \(j\), (iii) take union of vectors in \(\Sigma_{\ell}(j)\) for each \(j\) to form \(\Sigma_{\ell}\). We refer the reader to Figure 1 for a visualization of the construction of \(B_{\ell}\), \(\mathbf{P}_{\ell}\), and \(\overline{\mathbf{P}}_{\ell}\) for the initial three levels. Also, Figure 2 provides a visual representation of the construction of vectors in \(\Sigma_{\ell}(j)\) for \(\ell=3\) and \(P_{\ell}(j)=\{\mathbf{v}_{1}^{\prime},\mathbf{v}_{2}^{\prime},\ldots,\mathbf{ v}_{8}^{\prime}\}\).
After the construction of "raw" vectors \(\Sigma_{\ell}\) for every \(\ell\in[\kappa]\), we apply a random linear map \(\Phi_{R}:\mathbb{F}_{2}^{d}\rightarrow\mathbb{F}_{2}^{2d}\) using a uniformly random matrix \(R\in\mathbb{F}_{2}^{2d\times d}\) to obtain \(X=\Phi_{R}[\Sigma]\). By demonstrating that \(\Sigma\) is composed of pairwise linearly independent vectors, as in Observation 6.2, we infer that any two different vectors \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) are independent due to Lemma 4.1. We arrange the vectors in \(\Sigma=\{\sigma_{1},\ldots,\sigma_{\tau}\}\) and define a level map \(\psi:[\tau]\rightarrow[\kappa]\), which maps an index of a vector in \(\Sigma\) to one of the levels in \([\kappa]\). For any specified index \(i\in[\tau]\), \(\psi\) outputs \(\ell\in[\kappa]\) if \(\sigma_{i}\in\Sigma_{\ell}\). As the vectors in \(\Sigma\) are pairwise linearly independent, they are distinct, therefore ensuring the map is well-defined. Here, we fix \(\tau=\sum_{\ell=1}^{\kappa}|\Sigma_{\ell}|\). The exact value of \(\kappa\) is determined in Theorem 6.6, which is \(O(\log d)\).
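For intuition, here is a minimal Python sketch of the random map \(\Phi_{R}\): vectors of \(\mathbb{F}_{2}^{d}\) are stored as integer bitmasks, a column of \(R\) is drawn per coordinate, and addition over \(\mathbb{F}_{2}\) is XOR. All names are illustrative.

```python
# A minimal sketch of Phi_R : F_2^d -> F_2^{2d} with a uniformly random
# matrix R, acting on bitmask-encoded vectors. XOR implements F_2 addition.
import random

def random_matrix(d):
    # column j of R is a uniformly random vector of F_2^{2d}
    return [random.getrandbits(2 * d) for _ in range(d)]

def phi(R, v):
    out = 0
    for j, col in enumerate(R):
        if (v >> j) & 1:       # coordinate v_j = 1 contributes column j
            out ^= col
    return out

random.seed(1)
R = random_matrix(d=8)
# linearity check: Phi_R(v + u) = Phi_R(v) + Phi_R(u) over F_2
print(phi(R, 0b101) == phi(R, 0b001) ^ phi(R, 0b100))
```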
Finally, we employ random vectors \(X\) and create a weight distribution over the matroid \(\mathcal{M}^{\tau}\). For each vector \(\mathbf{v}\in\mathbb{F}_{2}^{2d}\), we independently sample a uniformly random permutation \(\pi^{\mathbf{v}}\in\Pi[\tau]\). We
define \(w(\mathbf{v}^{i})=2^{\ell}\) if \(\mathbf{x}_{\pi^{\mathbf{v}}(i)}=\mathbf{v}\) and \(\psi(\pi^{\mathbf{v}}(i))=\ell\). Lemma 4.1 signifies that the events \(\left\{\mathbf{x}_{i}=\mathbf{v}\right\}\) and \(\left\{\mathbf{x}_{j}=\mathbf{u}\right\}\) are independent for any \(\mathbf{u},\mathbf{v}\in\mathbb{F}_{2}^{2d}\) and \(i\neq j\in[\tau]\). However, for any \(\mathbf{v}^{i}\in E^{\tau}\), upon conditioning on the event \(w(\mathbf{v}^{i})=2^{\ell}\), the likelihood of any other \(\mathbf{u}^{i}\in E^{\tau}\) taking the weight \(2^{\ell}\) decreases, as we have a fixed number of vectors at level \(\ell\). To overcome this obstacle, with small probability, we assign the weights of elements in \(E^{\tau}\) using positively correlated and independent distributions. The precise distribution is shown in Procedure 3.
**Procedure 2.** Construction of Pairwise Linearly Independent Collection of Vectors \(\Sigma\)
1. **Base level \(\ell=1\)**:
   1. Let \(B_{1}=\left\{\mathbf{e}_{1},\ldots,\mathbf{e}_{d}\right\}\) be the principal basis of \(\mathbb{F}_{2}^{d}\).
   2. Let \(\mathbf{P}_{1}=\left\{P_{1}(i):i\in[d/2]\right\}\) be a partition of \(B_{1}\) where \(P_{1}(i)=\left\{\mathbf{e}_{2i-1},\mathbf{e}_{2i}\right\}\).
2. **Level \(\ell>1\)**:
   1. Let \(\overline{\mathbf{P}}_{\ell-1}\) be a uniformly random half of \(\mathbf{P}_{\ell-1}\) and write \(\overline{\mathbf{P}}_{\ell-1}:=\left\{\overline{P}_{\ell-1}(i):i\in[d/2^{2\ell-2}]\right\}\).
   2. Define \(P_{\ell}(i)=\overline{P}_{\ell-1}(2i-1)\cup\overline{P}_{\ell-1}(2i)\) for each \(i\in[d/2^{2\ell-1}]\) and set \(\mathbf{P}_{\ell}=\left\{P_{\ell}(i):i\in[d/2^{2\ell-1}]\right\}\).
   3. Define \(B_{\ell}=\bigcup_{i=1}^{d/2^{2\ell-1}}P_{\ell}(i)\), which we call the alive basis vectors of level \(\ell\).
3. For any \(\ell\in[\kappa]\) and \(i\in[d/2^{2\ell-1}]\), define \(\Sigma_{\ell}(i)\subseteq\mathbb{F}_{2}^{d}\) as a collection of \(|\Sigma_{\ell}(i)|=2^{\ell-1}\) linearly independent vectors, each of which is a linear combination of \(2^{\ell-1}\) vectors of \(P_{\ell}(i)\).
4. Let \(\Sigma_{\ell}=\bigcup_{i=1}^{d/2^{2\ell-1}}\Sigma_{\ell}(i)\) and \(\Sigma=\bigcup_{\ell=1}^{\kappa}\Sigma_{\ell}\).
Figure 1: **A snapshot of the construction of \(B_{\ell}\) and partition \(\mathbf{P}_{\ell}\) for the initial \(3\) levels. In this illustration, vertically aligned dots in groups of three represent the same basis vector. At any given level \(\ell\), vectors indicated in solid colors denote \(B_{\ell}\), the alive basis vectors of level \(\ell\). Furthermore, the colored boxes at each level \(\ell\), along with their encompassed alive basis vectors, constitute the parts \(P_{\ell}(i)\) in the partition of \(\mathbf{P}_{\ell}\). Random parts \(\overline{\mathbf{P}}_{\ell}\) that survive through to the next level are marked without a cross.**
5. Given \(\tau=\sum_{\ell=1}^{\kappa}\frac{d}{2^{\ell}}\), let \(\Sigma=\{\sigma_{1},\ldots\sigma_{\tau}\}\) be the collection of "raw" vectors together with a level map \(\psi:[\tau]\to[\kappa]\) where \(\psi(i)=\ell\) if \(\sigma_{i}\in\Sigma_{\ell}\).
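The following is a minimal Python sketch of Procedure 2, assuming \(d\) is a suitable power of two; vectors of \(\mathbb{F}_{2}^{d}\) are encoded as bitmasks, and the linear combinations in Step (3) are the consecutive window sums justified by Lemma 6.1 below. All parameter choices are illustrative.

```python
# A minimal sketch of Procedure 2: parts are halved at random and merged in
# pairs, and Sigma_level collects XORs of 2^(level-1) consecutive alive
# basis vectors inside each part. Vectors are bitmask-encoded integers.
import random

def build_sigma(d, kappa):
    # level 1: parts P_1(i) = {e_{2i-1}, e_{2i}} of the principal basis
    parts = [[1 << (2 * i), 1 << (2 * i + 1)] for i in range(d // 2)]
    sigma_levels = []
    for level in range(1, kappa + 1):
        w = 2 ** (level - 1)
        sigma = []
        for part in parts:
            for t in range(w):            # window sums as in Lemma 6.1
                acc = 0
                for j in range(t, t + w):
                    acc ^= part[j % len(part)]
                sigma.append(acc)
        sigma_levels.append(sigma)
        # keep a uniformly random half of the parts, then merge pairs
        survivors = random.sample(parts, len(parts) // 2)
        parts = [survivors[2 * i] + survivors[2 * i + 1]
                 for i in range(len(survivors) // 2)]
    return sigma_levels

random.seed(0)
print([len(s) for s in build_sigma(d=32, kappa=2)])   # sizes d/2^level: [16, 8]
```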
Before moving forward, we prove the correctness of Step (3) of Procedure 2. The subsequent lemma demonstrates that the vectors \(\Sigma_{\ell}(i)\) are linearly independent. The construction of \(\Sigma_{\ell}(i)\) is visualized for \(\ell=3\) and some \(i\in[d/2^{5}]\) in Figure 2.
**Lemma 6.1**.: _For any \(\ell\geq 1\) with \(2^{\ell}\leq 2d\), let \(P\subseteq\mathbb{F}_{2}^{2d}\) be a collection of \(2^{\ell}\) linearly independent vectors. There exists a set of linearly independent vectors \(S\subseteq\mathbb{F}_{2}^{2d}\) of size \(2^{\ell-1}\) such that each vector \(\sigma\in S\) is a linear combination of \(2^{\ell-1}\) vectors from \(P\)._
Proof.: Let \(\mathbf{v}_{1}^{\prime},\ldots\mathbf{v}_{2^{\ell}}^{\prime}\) be an enumeration of \(P\) and assume that \(\mathbf{v}_{i}^{\prime}=\mathbf{v}_{j}^{\prime}\) if \(i=j\mod 2^{\ell}\) to avoid notational clutter. We introduce \(\sigma_{t}:=\sum_{i=t}^{t+2^{\ell-1}-1}\mathbf{v}_{i}^{\prime}\) for all \(t\in[2^{\ell-1}]\) and define \(S=\{\sigma_{t}:t\in[2^{\ell-1}]\}\).
We claim that \(S\) forms a linearly independent set. For contradiction, suppose there exist coefficients \(\mathbf{a}\in\mathbb{F}_{2}^{2^{\ell-1}}\), not all zero, such that \(\sum_{t=1}^{2^{\ell-1}}\mathbf{a}(t)\cdot\sigma_{t}=\mathbf{0}\). Define \(t^{*}\) as the minimum index with non-zero weight, i.e., \(t^{*}=\min\left\{t\in[2^{\ell-1}]:\mathbf{a}(t)=1\right\}\). Observe that \(\langle\mathbf{v}_{t^{*}}^{\prime},\sigma_{t^{*}}\rangle=||\mathbf{v}_{t^{*}}^{\prime}||^{2}\), so \(\mathbf{v}_{t^{*}}^{\prime}\) is not orthogonal to \(\sigma_{t^{*}}\). However, \(\mathbf{v}_{t^{*}}^{\prime}\) is orthogonal to any \(\sigma_{j}\) with \(j>t^{*}\), since \(\sigma_{j}=\sum_{i=j}^{j+2^{\ell-1}-1}\mathbf{v}_{i}^{\prime}\) and \(|P|=2^{\ell}\). Therefore, \(\sigma_{t^{*}}\notin\mathbf{Span}\left\{\sigma_{j}:j>t^{*}\right\}\), which yields a contradiction.
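The window construction can also be checked numerically; the sketch below computes the rank of the vectors \(\sigma_{t}\) over \(\mathbb{F}_{2}\) with a standard XOR-basis routine (all names illustrative).

```python
# A quick check of Lemma 6.1: the window sums sigma_t of 2^ell independent
# bitmask-encoded vectors form an independent set of size 2^(ell-1).
def gf2_rank(vectors):
    basis, rank = [], 0
    for v in vectors:
        for b in basis:
            v = min(v, v ^ b)          # reduce v against existing pivots
        if v:
            basis.append(v)
            basis.sort(reverse=True)
            rank += 1
    return rank

ell = 3
P = [1 << i for i in range(2 ** ell)]  # 2^ell independent vectors
w = 2 ** (ell - 1)
S = [0] * w
for t in range(w):
    for i in range(t, t + w):
        S[t] ^= P[i % len(P)]          # sigma_t = v'_t + ... + v'_{t+w-1}
print(gf2_rank(S) == w)                # True
```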
Next, we observe that the vectors in \(\Sigma\) are pairwise linearly independent with probability \(1\).
**Observation 6.2**.: \(\Sigma\) _consists of pairwise linearly independent vectors with probability \(1\)._
Proof.: Any two vectors in \(\Sigma\) are expressed as a linear combination of different sets of basis vectors, hence, \(\Sigma\) is pairwise linearly independent.
**Procedure 3**.: Pairwise Independent Weight Assignments to Matroid \(\mathcal{M}^{\tau}\)
**Dist I:** \(\mathcal{D}_{1}\)
Initialize all weights \(w(\mathbf{v}^{j})=0\) for all \(\mathbf{v}\in\mathbb{F}_{2}^{2d}\) and \(j\in[\tau]\).
1. Independently for each \(\mathbf{v}\in\mathbb{F}_{2}^{2d}\), consider uniformly random permutation \(\pi^{\mathbf{v}}\sim\Pi[\tau]\).
Figure 2: **Vector generation**.
2. Independently for each \(\mathbf{v}\in\mathbb{F}_{2}^{2d}\), define available copies \(N^{\mathbf{v}}\) as follows \[N^{\mathbf{v}}=\begin{cases}\text{Uniformly random subset of $[\tau]$ of size $\tau/2$}&\text{with prob. $\tau\cdot a(\tau)=\frac{\tau(\tau-2)}{(\tau-1)^{2}}$},\\ \emptyset&\text{otherwise }.\end{cases}\]
3. Let the matrix \(R\in\mathbb{F}_{2}^{2d\times d}\) be a random matrix with entries \(r_{ij}\sim\text{Ber}\left(\frac{1}{2}\right)\) independently.
4. Define \(X=\Phi_{R}[\Sigma]\) with vectors \(\mathbf{x}_{1},\ldots\mathbf{x}_{\tau}\), where \(\Sigma\) is constructed via Procedure 2.
5. Given enumeration \(\Sigma=\{\sigma_{1},\ldots\sigma_{\tau}\}\), define level map \(\psi:[\tau]\to[\kappa]\) such that \(\psi(i)=\ell\) if \(\sigma_{i}\in\Sigma_{\ell}\).
6. For all \(t\in[\tau]\), if \(\mathbf{v}=\mathbf{x}_{t}\) then \(w\left(\mathbf{v}^{\pi^{\mathbf{v}}(t)}\right)=2^{\ell}\) if and only if \(\pi^{\mathbf{v}}(t)\in N^{\mathbf{v}}\) and \(\psi(t)=\ell\).
**Dist II(\(\ell\)):** \(\mathcal{D}_{2}(\ell)\)
1. Set \(w(\mathbf{v}^{j})=2^{\ell}\) for all \(\mathbf{v}\in\mathbb{F}_{2}^{2d}\) and \(j\in[\tau]\).
**Dist III:** \(\mathcal{D}_{3}\)
1. Set \(w(\mathbf{v}^{j})=2^{\ell}\) with probability \(\frac{1}{2^{\ell}}\) independently for all \(\mathbf{v}^{j}\in E^{\tau}\).
**Dist IV:** \(\mathcal{D}_{4}\)
1. Set \(w(\mathbf{v}^{j})=0\) for all \(\mathbf{v}\in\mathbb{F}_{2}^{2d}\) and \(j\in[\tau]\).
Sample weights
\[w\sim\begin{cases}\mathcal{D}_{1}&\text{w.p. $p=1-\frac{d}{2^{d}}$},\\ \mathcal{D}_{2}(\ell)&\text{w.p. $p_{\ell}=\frac{p}{4}\cdot\frac{a^{2}(\tau) \cdot d}{2^{4d+\ell}}$},\\ \mathcal{D}_{3}&\text{w.p. $q$},\\ \mathcal{D}_{4}&\text{w.p. $1-p-\sum_{\ell\in[\kappa]}p_{\ell}-q$}.\end{cases}\]
In the following lemma, we conclude that the weight assignment constructed in Procedure 3 is pairwise independent. The proof of the lemma is rather technical, and we delegate it to Appendix B to avoid interrupting the flow of the presentation.
**Lemma 6.3**.: _There exists \(0<q\leq\frac{3\cdot d}{2^{3d}}\) such that for any \(\mathbf{v}^{i}\) and \(\mathbf{u}^{j}\in E^{\tau}\), random weights \(w(\mathbf{v}^{i})\) and \(w(\mathbf{u}^{j})\) are independently assigned by Procedure 3._
### Upper Bounding the Approximation Ratio
Next, we consider an instance of the prophet inequality problem for a matroid \(\mathcal{M}^{\tau}\) with rank \(2d\) and \(E^{\tau}=\mathbb{F}_{2}^{2d}\times[\tau]\), value distribution \(\mathcal{D}\) as detailed in Procedure 3, and arrival order \(\lambda\) in which elements with positive weights arrive in increasing order of their weights, followed by the elements with zero weight. Ties among arriving elements of the same weight are broken arbitrarily. We note that the arrival order \(\lambda\) can only be orchestrated by an adversary who knows all the weight assignments upfront.
#### 6.2.1 Lower bound for Prophet's Value
We first establish that a prophet, who is aware of all realizations within the probability space, can guarantee an expected reward of \(\Omega(\kappa\cdot d)\).
**Lemma 6.4**.: _Let \(\mathcal{M}^{\tau}=(E^{\tau},\mathcal{I}^{\tau})\) be a matroid where \(E=\mathbb{F}_{2}^{2d}\) and \(\mathcal{I}\) denotes all collections of linearly independent vectors in \(E\) for \(\kappa\geq 35\) and \(d\geq 2^{5\kappa}\). For the pairwise independent weight distribution \(\mathcal{D}\in\Delta_{pw}(\mathbb{R}_{\geq 0}^{E})\) described in Procedure 3,_
\[\mathbb{E}[\text{reward of prophet}]=\operatorname*{\mathbb{E}}_{w\sim \mathcal{D}}[\boldsymbol{Rank}_{w}(E^{\tau})]\geq\frac{\kappa\cdot d}{12}.\]
Proof.: We first focus on instances where the weight vector \(w\sim\mathcal{D}_{1}\) and the random matrix \(R\) has full column rank. Notice that these two events are independent, where the first one happens with probability \(p=1-\frac{d}{2^{d}}\) and the second with probability at least \(1-\frac{1}{2^{d}}\) due to Lemma 4.2. Moreover, Observation 4.3 ensures that at most one copy of each vector \(\mathbf{v}\in\mathbb{F}_{2}^{2d}\) has a non-zero value. We now examine the subsequent computations conditioned on the event \(\{w\sim\mathcal{D}_{1}\wedge\mathbf{Rank}(R)=d\}\).
Select an arbitrary vector \(\mathbf{v}\in X=\Phi_{R}[\Sigma]\), and let \(t_{X}(\mathbf{v})\) be the unique index satisfying \(\mathbf{x}_{t_{X}(\mathbf{v})}=\mathbf{v}\). Note that \(t_{X}:X\to[\tau]\) serves as a random function that maps vectors of \(X\) to their respective indices, and it is well defined when the random matrix \(R\) has full column rank. Furthermore, the only copy of \(\mathbf{v}\) has a non-zero value with probability \(\frac{\tau(\tau-2)}{2(\tau-1)^{2}}\), which occurs if \(\pi^{\mathbf{v}}(t_{X}(\mathbf{v}))\in N^{\mathbf{v}}\) (\(N^{\mathbf{v}}\) is defined in Procedure 3). Let us denote by \(\mathcal{E}(\mathbf{v})\) the event that \(\pi^{\mathbf{v}}(t_{X}(\mathbf{v}))\in N^{\mathbf{v}}\).
Next, let \(\Sigma^{\prime}_{\ell}=\Sigma_{\ell}\setminus\mathbf{Span}(B_{\ell+1})\) be the subset of vectors in \(\Sigma_{\ell}\) that do not belong to the span of \(B_{\ell+1}\). In essence, the vectors in \(\Sigma^{\prime}_{\ell}\) are orthogonal to \(B_{\ell+1}\) and hence to \(\Sigma_{\ell^{\prime}}\) for any \(\ell^{\prime}>\ell\). Now, we define \(X^{\prime}_{\ell}=\Phi_{R}[\Sigma^{\prime}_{\ell}]\) and a solution \(S=\bigcup_{\ell=1}^{\kappa}\{\mathbf{v}^{\pi^{\mathbf{v}}(t_{X}(\mathbf{v}))}:\mathbf{v}\in X^{\prime}_{\ell}\}\).
**Claim 6.5**.: \(S=\bigcup_{\ell=1}^{\kappa}\{\mathbf{v}^{\pi^{\mathbf{v}}(t_{X}(\mathbf{v}))} :\mathbf{v}\in X^{\prime}_{\ell}\}\) _is an independent set._
Proof of Claim 6.5.: Notice that the vectors in \(X^{\prime}_{\ell}\) are orthogonal to all vectors in \(\Phi_{R}[B_{\ell^{\prime}}]\) for any \(\ell^{\prime}>\ell\), since \(\Phi_{R}\) is a linear function and \(\Sigma^{\prime}_{\ell}\) is orthogonal to \(\Sigma_{\ell^{\prime}}\) for any \(\ell^{\prime}>\ell\). Thus, \(X^{\prime}_{\ell}\) is orthogonal to \(X^{\prime}_{\ell^{\prime}}\) for any \(\ell^{\prime}>\ell\). Furthermore, each \(X^{\prime}_{\ell}\) consists of linearly independent vectors when \(R\) has full column rank, as \(\Sigma_{\ell}\) is a collection of linearly independent vectors by Lemma 6.1. Therefore, \(S\) forms an independent set.
By using the fact that \(S\) is an independent set of the matroid \(\mathcal{M}^{\tau}\), we lower bound the expected reward of the prophet conditioned on the event \(\{w\sim\mathcal{D}_{1}\wedge\mathbf{Rank}(R)=d\}\).
\[\mathbb{E}_{w}[\mathbf{Rank}_{w}(E^{\tau})] \geq\mathbb{E}_{w}[\mathbf{Rank}_{w}(S)]\geq\sum_{\ell=1}^{\kappa}2^ {\ell}\cdot\mathbb{E}\left[\sum_{\mathbf{v}\in X_{\ell}^{\prime}}\mathbb{1}_{ \mathcal{E}(\mathbf{v})}\right]\] \[=\sum_{\ell=1}^{\kappa}2^{\ell}\cdot\sum_{\mathbf{v}\in\mathbb{F }_{2}^{2d}}\mathbf{Pr}[\mathcal{E}(\mathbf{v})\mid\mathbf{v}\in X_{\ell}^{ \prime}]\cdot\mathbf{Pr}[\mathbf{v}\in X_{\ell}^{\prime}]\] (Linearity of expectation) \[=\sum_{\ell=1}^{\kappa}2^{\ell}\cdot\frac{\tau(\tau-2)}{2(\tau-1) ^{2}}\cdot\sum_{\mathbf{v}\in\mathbb{F}_{2}^{2d}}\mathbf{Pr}[\mathbf{v}\in X_ {\ell}^{\prime}]\] \[=\sum_{\ell=1}^{\kappa}2^{\ell}\cdot\frac{\tau(\tau-2)}{2(\tau-1 )^{2}}\cdot\frac{d}{2^{\ell+1}} \left(|X_{\ell}^{\prime}|=\frac{d}{2^{\ell+1}}\text{ w.p. }1\right)\] \[\geq\sum_{\ell=1}^{\kappa}\frac{d}{2}\cdot\frac{1}{3} (\tau\geq 3)\] \[=\frac{\kappa\cdot d}{6}.\]
Finally, we remove the conditioning and compute the expected reward of the prophet as follows.
\[\mathbb{E}[\mathbf{Rank}_{w}(E^{\tau})] \geq\mathbb{E}[\mathbf{Rank}_{w}(E^{\tau})\mid w\sim\mathcal{D}_{1}\wedge\mathbf{Rank}(R)=d]\cdot\mathbf{Pr}[w\sim\mathcal{D}_{1}\wedge\mathbf{Rank}(R)=d]\] \[\geq\frac{\kappa\cdot d}{6}\cdot\mathbf{Pr}[w\sim\mathcal{D}_{1}]\cdot\mathbf{Pr}[\mathbf{Rank}(R)=d]\] \[\geq\frac{\kappa\cdot d}{6}\cdot\left(1-\frac{d}{2^{d}}\right)\cdot\left(1-\frac{1}{2^{d}}\right)\] \[\geq\frac{\kappa\cdot d}{6}\cdot\frac{1}{2} \qquad(d>10)\] \[=\frac{\kappa\cdot d}{12}.\]
#### 6.2.2 Upper Bound on the Performance of any Algorithm
Next, we show that the expected reward of the gambler cannot be larger than \(O(d)\). This implies that no algorithm can obtain a competitive ratio better than \(O\left(\frac{1}{\kappa}\right)=O\left(\frac{1}{\log d}\right)\). Throughout the section, we use the terms algorithm and gambler interchangeably.
**Theorem 6.6**.: _Let \(\mathcal{M}^{\tau}=(E^{\tau},\mathcal{I}^{\tau})\) be a matroid where \(E=\mathbb{F}_{2}^{2d}\) and \(\mathcal{I}\) denotes all collections of linearly independent vectors in \(E\) for \(d>35\) and \(\kappa=\frac{1}{10}\cdot\log d\). There is no \(\omega(1/\log d)\)-competitive strategy for the gambler against the prophet when weights are distributed according to Procedure 3._
Before we discuss the theorem's proof, we recall that with high probability, the weight vector \(w\) is sampled from \(\mathcal{D}_{1}\) and the random matrix \(R\) has full column rank. Hence, any strategy that competes with the prophet must exhibit strong performance on this event, which we denote as \(\mathcal{E}_{\mathrm{hard}}\). In what follows, we first analyze the performance of an arbitrary algorithm/gambler conditioned on \(\mathcal{E}_{\mathrm{hard}}\). Later, we will remove this restriction and complete the proof of the theorem.
In the previous section, we showed that for any level \(\ell\), the prophet's selection (or offline optimal solution) entails the selection of vectors that are linear combinations of the vectors from \(\Phi_{R}[B_{\ell}\setminus B_{\ell+1}]\), due to their orthogonality to vectors in \(\Phi_{R}[B_{\ell+1}]\). The central argument we are making here is that if an algorithm is ignorant of the "alive" basis \(B_{\ell^{\prime}}\) for \(\ell^{\prime}>\ell\) and selects a "large" number of vectors to compete against the prophet's utility, then it inadvertently spans a "large" fraction of the rank of \(\Phi_{R}(\Sigma_{\ell^{\prime}})\) at any higher level \(\ell^{\prime}>\ell\). Consequently, an algorithm can compete against the prophet's utility at only a constant number of levels before it spans nearly all vectors at a higher level, leading to the desired result.
Recall the first step of constructing \(\Sigma_{\ell}\) described in Procedure 2: we partition the "alive" basis vectors at level \(\ell\) (denoted as \(B_{\ell}\)) into \(d/2^{2\ell-1}\) subsets denoted as \(\mathbf{P}_{\ell}=\{P_{\ell}(i):i\in[d/2^{2\ell-1}]\}\). Given that the gambler's selection is \(S\), we say that the gambler has a _decision profile_ \(\rho\) if he or she selects \(\rho_{\ell}(i)\) elements among labeled copies of vectors in \(X_{\ell}(i)=\Phi_{R}[\Sigma_{\ell}(i)]\), that is, \(\rho_{\ell}(i)=|S\cap X_{\ell}(i)\times[\tau]|\). Note that \(\rho\) is random due to the randomness of both \(R\) and the selection \(S\) made by the gambler. We define \(\rho_{\ell}=\sum_{i}\rho_{\ell}(i)\), and obtain
\[\mathbb{E}[\mathbf{Rank}_{w}(S)\mid\mathcal{E}_{\mathrm{hard}}]=\sum_{\ell=1}^{\kappa}2^{\ell-1}\cdot\rho_{\ell}.\]
Remember that at any level \(\ell\), \(P_{\ell}(i)\) is formed by merging two parts from level \(\ell-1\), say \(P_{\ell-1}(j),P_{\ell-1}(j^{\prime})\in\mathbf{P}_{\ell-1}\) for some distinct \(j\) and \(j^{\prime}\). Therefore, iteratively applying the same argument, we observe that \(P_{\ell}(i)\) comprises \(2^{\ell-\ell^{\prime}}\) many "parts" from each lower level \(\ell^{\prime}\leq\ell\). Now, we define \(\gamma_{\ell}(i)\) as
\[\gamma_{\ell}(i)=\Bigg{|}\bigg{\{}\bigcup_{\ell^{\prime}\leq\ell}\ \bigcup_{P_{\ell^{\prime}}(i^{\prime})\subseteq P_{\ell}(i)}\Phi_{R}[\Sigma_{ \ell^{\prime}}(i^{\prime})]\bigg{\}}\bigcap S\Bigg{|}=\Bigg{|}\bigg{\{} \bigcup_{\ell^{\prime}\leq\ell}\ \bigcup_{P_{\ell^{\prime}}(i^{\prime})\subseteq P_{\ell}(i)}X_{\ell^{\prime}}( i^{\prime})\bigg{\}}\bigcap S\Bigg{|},\]
which is essentially the number of vectors selected by an algorithm that is in the span of the vectors \(\Phi_{R}[P_{\ell}(i)]\).
In addition, we can recursively define \(\gamma_{\ell}(i)\) as follows: if \(P_{\ell}(i)=P_{\ell-1}(j)\cup P_{\ell-1}(j^{\prime})\) for some distinct \(j\) and \(j^{\prime}\) at level \(\ell-1\), then \(\gamma_{\ell}(i)=\rho_{\ell}(i)+\gamma_{\ell-1}(j)+\gamma_{\ell-1}(j^{\prime})\). We define \(\gamma_{\ell}=\sum_{j}\gamma_{\ell}(j)\), which is essentially the number of vectors selected by the algorithm up to level \(\ell\) that are contained in \(\mathbf{Span}(\Phi_{R}[B_{\ell}])\). We observe that,
\[\gamma_{\ell}=\sum_{i}\left(\rho_{\ell}(i)+\sum_{P_{\ell-1}(i^{\prime})\subseteq P_{\ell}(i)}\gamma_{\ell-1}(i^{\prime})\right),\]
where a uniformly random half of the parts from level \(\ell-1\) is contained in \(P_{\ell}(i)\) for some \(i\in[d/2^{2\ell-1}]\).
On the other hand, in accordance with the weight assignment procedure outlined in Procedure 3, the weighted rank of any level is bounded by \(O(d)\) when \(\mathcal{E}_{\mathrm{hard}}\) happens. Consequently, any algorithm aiming for a constant competitive ratio must choose at least \(\frac{1}{\kappa}\cdot|B_{\ell}|\) vectors from level \(\ell\) for some \(\ell\in[\kappa]\), which implies \(\rho_{\ell}\geq\frac{1}{\kappa}\cdot|B_{\ell}|=\frac{1}{\kappa}\cdot\frac{d}{2^{\ell-1}}\) for some \(\ell\in[\kappa]\). Applying concentration inequalities and using the fact that \(d>2^{5\kappa}\), we can argue that whenever \(\gamma_{\ell}\geq\frac{1}{2\kappa}\cdot|B_{\ell}|\), the quantity \(\gamma_{\ell+1}\) is roughly \(\rho_{\ell+1}+\frac{\gamma_{\ell}}{2}\) with high probability. Intuitively, this observation says that once the span of the selected vectors generates a suitably large subspace within the span of \(\Phi_{R}[B_{\ell}]\), about half of these vectors also fall within the span of \(\Phi_{R}[B_{\ell+1}]\). In the next lemma, we formalize the above intuition.
**Lemma 6.7**.: _For any \(1<\ell\leq\kappa\), if \(\gamma_{\ell-1}\geq\frac{1}{\kappa}\cdot|B_{\ell-1}|\),_
\[\mathbf{Pr}\left[\gamma_{\ell}\geq\rho_{\ell}+\left(1-\frac{1}{\kappa^{2}} \right)\cdot\frac{\gamma_{\ell-1}}{2}\right]\geq 1-\frac{1}{\kappa^{2}}.\]
In the following lemma, we demonstrate that once an algorithm selects a sufficiently large number of vectors up to level \(\ell\), i.e., \(\gamma_{\ell}\) is large, it inadvertently spans a "large" portion of \(\mathbf{Span}(\Phi_{R}[\Sigma_{\ell^{\prime}}])\) for any \(\ell^{\prime}>\ell\) via the vectors selected up to level \(\ell\). In a way, this lemma illustrates the disadvantage any algorithm faces due to its lack of information about the alive basis \(B_{\ell^{\prime}}\) and \(\Phi_{R}(\Sigma_{\ell^{\prime}})\) at higher levels \(\ell^{\prime}>\ell\).
**Lemma 6.8**.: _Let \(t\geq 0\) be the minimum index with \(\rho_{t}\geq\frac{2}{\kappa}\cdot|B_{t}|\). Then, with probability \(1-\frac{1}{\kappa}\), for all \(\ell\geq t\),_
\[\gamma_{\ell}\geq\rho_{\ell}+\left(1-\frac{1}{\kappa^{2}}\right)\cdot\frac{ \gamma_{\ell-1}}{2}\qquad\text{and}\qquad\gamma_{\ell}\geq\left(1-\frac{1}{ \kappa}\right)\cdot\sum_{i=t}^{\ell-1}\frac{1}{2^{\ell-i}}\cdot\rho_{i}.\]
The two prior lemmas set the stage for the upcoming essential lemma that limits the performance of any algorithm, given that the event \(\mathcal{E}_{\text{hard}}\) takes place. Intuitively, any algorithm that aspires to be within a constant factor of the prophet's performance must select at least \(\frac{1}{\kappa}\) fraction of vectors (actually, much larger than \(\frac{1}{\kappa}\)) from \(\Omega(\kappa)\) levels. Nevertheless, the selection of \(\Omega\left(\frac{1}{\kappa}\right)\) fraction of vectors from any level eventually leads to spanning a large portion of vectors/rank at higher levels. This phenomenon prevents any algorithm from effectively competing against the prophet with high probability.
**Lemma 6.9**.: _For any collection of elements \(S\) selected by the gambler, we have_
\[\mathbb{E}[\textbf{Rank}_{w}(S)\mid\mathcal{E}_{\text{hard}}]\leq 7\cdot d.\]
Before we proceed with proofs of these lemmas, we utilize Lemma 6.9 to prove the main result of this section.
Proof of Theorem 6.6.: First of all, Lemma 4.2 and \(\mathbf{Pr}[w\sim\mathcal{D}_{1}]=1-\frac{d}{2^{d}}\) imply that
\[\mathbf{Pr}[\mathcal{E}_{\text{hard}}]=\mathbf{Pr}[w\sim\mathcal{D}_{1}\wedge\mathbf{Rank}(R)=d]\geq 1-\frac{d}{2^{d}}-\frac{1}{2^{d}}\geq 1-\frac{2}{\kappa}\]
since \(d>\kappa\geq 35\). This yields us
\[\mathbb{E}[\mathbf{Rank}_{w}(S)] \leq\mathbf{Pr}[\mathcal{E}_{\text{hard}}]\cdot\mathbb{E}[\mathbf{Rank}_{w}(S)\mid\mathcal{E}_{\text{hard}}]+\kappa\cdot d\cdot\mathbf{Pr}[\mathcal{E}_{\text{hard}}\text{ does not hold}]\] \[\leq\mathbb{E}[\mathbf{Rank}_{w}(S)\mid\mathcal{E}_{\text{hard}}]+\kappa\cdot d\cdot\frac{2}{\kappa}\] \[=\mathbb{E}[\mathbf{Rank}_{w}(S)\mid\mathcal{E}_{\text{hard}}]+2\cdot d\] \[\leq 9\cdot d\qquad\text{(Lemma 6.9)}.\]
On the other hand, Lemma 6.4 shows that the prophet's expected reward is at least \(\Omega(\kappa\cdot d)\). Therefore, no algorithm can achieve a competitive ratio better than \(O(1/\kappa)=O(1/\log d)\) for the prophet inequality problem when values are distributed according to Procedure 3 and elements are presented in the order \(\lambda\).
The rest of this section is dedicated to providing proof for the lemmas stated above. Before we proceed with proofs, we state a useful fact about the concentration of the sum of random variables which are sampled from a finite population without replacement.
**Proposition 6.10** (Hoeffding-type inequality [11]).: _Let \(\mathcal{X}=(x_{1},\ldots,x_{N})\) be a finite population of \(N\) points and \(X_{1},\ldots,X_{n}\) be a random sample drawn without replacement from \(\mathcal{X}\). Let_
\[a=\min_{1\leq i\leq N}x_{i}\quad\text{ and }\quad b=\max_{1\leq i\leq N}x_{i}.\]
_Then, for all \(\varepsilon>0\),_
\[\mathbb{P}\left(\sum_{i=1}^{n}X_{i}-n\cdot\mu\geq n\cdot\varepsilon\right) \leq\exp\left(-\frac{2n\varepsilon^{2}}{(b-a)^{2}}\right),\]
_where \(\mu=\frac{1}{N}\sum_{i=1}^{N}x_{i}\) is the mean of \(\mathcal{X}\)._
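For intuition, a quick Monte Carlo sanity check of Proposition 6.10 on a small 0/1 population, with illustrative parameters:

```python
# Compare the empirical tail Pr[sum - n*mu >= n*eps] for sampling without
# replacement against the bound exp(-2*n*eps^2 / (b-a)^2), here with b-a = 1.
import math
import random

population = [0] * 50 + [1] * 50        # N = 100, mu = 0.5
n, eps, trials = 20, 0.15, 20000
mu = sum(population) / len(population)
hits = sum(sum(random.sample(population, n)) - n * mu >= n * eps
           for _ in range(trials))
print(hits / trials, "<=", math.exp(-2 * n * eps ** 2))   # ~0.1 <= ~0.41
```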
Proof of Lemma 6.7.: For any fixed \(\ell\in[\kappa]\), recall that \(\gamma_{\ell}(i)\) is the count of vectors selected up to level \(\ell\) which lie in the span of \(\Phi_{R}[P_{\ell}(i)]\) for each \(i\in[d/2^{2\ell-1}]\). Define \(\Gamma_{\ell}=\{\gamma_{\ell}(i):P_{\ell}(i)\in\mathbf{P}_{\ell}\}\). For each \(P_{\ell}(i)\in\mathbf{P}_{\ell}\), let \(y_{i}=\mathbb{1}_{P_{\ell}(i)\in\overline{\mathbf{P}}_{\ell}}\) denote whether the part \(P_{\ell}(i)\) survives to the next level or not. Observe that \(\gamma_{\ell+1}-\rho_{\ell+1}\) is the sum of a random half of \(\Gamma_{\ell}\). Then,
\[\gamma_{\ell+1}=\rho_{\ell+1}+\sum_{i=1}^{d/2^{2\ell-1}}y_{i}\cdot\gamma_{\ell }(i).\]
Since each \(\gamma_{\ell}(i)\leq 2^{\ell-1}\), by invoking Proposition 6.10 with \(\varepsilon=\frac{1}{\kappa^{2}}\) and ensuring that \(d\geq 2^{5\kappa}\) and \(\kappa\geq 35\), we deduce
\[\gamma_{\ell+1}-\rho_{\ell+1}\geq\left(1-\frac{1}{\kappa^{2}}\right)\cdot\frac{\gamma_{\ell}}{2}\]
with probability at least \(1-\frac{1}{\kappa^{2}}\). The inequality holds since
\[d\geq\kappa^{4}\cdot 2^{4\kappa}\cdot\log\kappa^{2}\implies\exp\left(-\frac{2d /2^{2\ell-1}}{2^{2\ell-2}\cdot\kappa^{4}}\right)\leq\frac{1}{\kappa^{2}}\]
when \(d\geq 2^{5\kappa}\) and \(\kappa\geq 35\).
Proof of Lemma 6.8.: We start by defining an event \(\mathcal{E}_{\ell}\) for each level \(\ell\geq t\), which holds if \(\gamma_{\ell+1}\geq\rho_{\ell+1}+\left(1-\frac{1}{\kappa^{2}}\right)\cdot\frac{\gamma_{\ell}}{2}\). By using the chain rule of conditional probability, we obtain
\[\mathbf{Pr}\left[\bigwedge_{\ell=t}^{\kappa-1}\mathcal{E}_{\ell}\right] =\prod_{\ell=t}^{\kappa-1}\mathbf{Pr}[\mathcal{E}_{\ell}\mid\mathcal{E}_{t},\ldots,\mathcal{E}_{\ell-1}]\] \[=\prod_{\ell=t}^{\kappa-1}\mathbf{Pr}\left[\gamma_{\ell+1}\geq\rho_{\ell+1}+\left(1-\frac{1}{\kappa^{2}}\right)\cdot\frac{\gamma_{\ell}}{2}\;\Big{|}\;\mathcal{E}_{t},\ldots,\mathcal{E}_{\ell-1}\right]\] \[\geq\prod_{\ell=t}^{\kappa-1}\left(1-\frac{1}{\kappa^{2}}\right)\qquad\text{(Lemma 6.7)}\] \[\geq 1-\frac{1}{\kappa}.\]
Moreover, with probability \(1-\frac{1}{\kappa}\), we have
\[\gamma_{\ell} \geq\rho_{\ell}+\left(1-\frac{1}{\kappa^{2}}\right)\cdot\frac{\gamma_{\ell-1}}{2}\] \[\geq\rho_{\ell}+\sum_{i=t}^{\ell-1}\left(1-\frac{1}{\kappa^{2}}\right)^{\ell-i}\cdot\frac{1}{2^{\ell-i}}\cdot\rho_{i} \text{(Iteratively expanding the summand)}\] \[\geq\rho_{\ell}+\left(1-\frac{1}{\kappa}\right)\cdot\sum_{i=t}^{\ell-1}\frac{\rho_{i}}{2^{\ell-i}} \left(\left(1-\frac{1}{\kappa^{2}}\right)^{\ell-i}\geq\left(1-\frac{1}{\kappa^{2}}\right)^{\kappa}\geq\left(1-\frac{1}{\kappa}\right)\right).\]
Proof of Lemma 6.9.: Recall that \(\mathcal{E}_{\text{hard}}\) is the event that \(w\) is sampled according to \(\mathcal{D}_{1}\) and the random matrix \(R\) has full column rank. Next, we condition on \(\mathcal{E}_{\text{hard}}\) and compute an upper bound for \(\mathbb{E}[\mathbf{Rank}_{w}(S)\mid\mathcal{E}_{\text{hard}}]\).
First of all, observe that for any level \(\ell\), \(\gamma_{\ell}\leq\mathbf{Rank}(\Phi_{R}[B_{\ell}])=\mathbf{Rank}(B_{\ell})=d/2^{\ell-1}\). For simplicity of notation, we define the normalized values \(\overline{\rho}_{\ell}=\rho_{\ell}\cdot\frac{2^{\ell-1}}{d}\) and \(\overline{\gamma}_{\ell}=\gamma_{\ell}\cdot\frac{2^{\ell-1}}{d}\). Notice that for any \(1\leq\ell\leq\kappa\), we have \(0\leq\overline{\rho}_{\ell},\overline{\gamma}_{\ell}\leq 1\).
Let \(t\) be the minimum index such that \(\rho_{t}\geq\frac{2}{\kappa}\cdot|B_{t}|\). If there is no such index, then let \(t=\kappa\). By Lemma 6.8, with probability \(\left(1-\frac{1}{\kappa}\right)\), we have
\[\gamma_{\ell}\geq\rho_{\ell}+\left(1-\frac{1}{\kappa}\right)\cdot\sum_{j=t}^{ \ell-1}\frac{\rho_{j}}{2^{\ell-j}}\qquad\text{ or equivalently }\qquad\overline{\gamma}_{\ell}\geq\overline{\rho}_{\ell}+\left(1-\frac{1}{ \kappa}\right)\cdot\sum_{j=t}^{\ell-1}\overline{\rho}_{j}\]
for all \(\ell\geq t\). Therefore, with probability \(1-1/\kappa\), \(\overline{\rho}_{\ell}\leq 1-\left(1-\frac{1}{\kappa}\right)\cdot\sum_{j=t}^{ \ell-1}\overline{\rho}_{j}\) for all \(\ell\geq t\).
\[\mathbb{E}[\mathbf{Rank}_{w}(S)\mid\mathcal{E}_{\text{hard}}] =\sum_{\ell=1}^{\kappa}2^{\ell-1}\cdot\rho_{\ell}\] \[\leq\sum_{\ell=1}^{t-1}2^{\ell-1}\cdot\frac{2}{\kappa}\cdot\frac{d}{2^{\ell-1}}+\left(1-\frac{1}{\kappa}\right)\cdot\sum_{\ell=t}^{\kappa-1}2^{\ell-1}\cdot\rho_{\ell}+\frac{1}{\kappa}\cdot\sum_{\ell=t}^{\kappa-1}2^{\ell-1}\cdot\frac{d}{2^{\ell-1}}\] \[\leq 3\cdot d+\sum_{\ell=t}^{\kappa-1}2^{\ell-1}\cdot\rho_{\ell}\] \[\leq 3\cdot d+\sum_{\ell=t}^{\kappa-1}2^{\ell-1}\cdot\overline{\rho}_{\ell}\cdot\frac{d}{2^{\ell-1}} \left(\overline{\rho}_{\ell}=\rho_{\ell}\cdot\frac{2^{\ell-1}}{d}\right)\] \[\leq 3\cdot d+\sum_{\ell=t}^{\kappa-1}d\cdot\overline{\rho}_{\ell}\] \[\leq 3\cdot d+d\cdot\sum_{\ell=t}^{\kappa-1}\min\left\{\overline{\rho}_{\ell},1-\left(1-\frac{1}{\kappa}\right)\cdot\sum_{i=t}^{\ell-1}\overline{\rho}_{i}\right\}.\]
Let \(t^{\prime}\) be the minimum index such that \(\sum_{\ell=t}^{t^{\prime}}\overline{\rho}_{\ell}\geq 1-1/\kappa\). If there is no such index then set
\(t^{\prime}=\kappa-1\). Then,
\[\mathbb{E}[\textbf{Rank}_{w}(S)\mid\mathcal{E}_{\text{hard}}] \leq 3\cdot d+d\cdot\sum_{\ell=t}^{\kappa-1}\min\left\{\overline{\rho}_{\ell},1-\left(1-\frac{1}{\kappa}\right)\cdot\sum_{i=t}^{\ell-1}\overline{\rho}_{i}\right\}\] \[\leq 3\cdot d+d\cdot\left(\sum_{\ell=t}^{t^{\prime}}\overline{\rho}_{\ell}+\sum_{\ell=t^{\prime}+1}^{\kappa-1}\left(1-\left(1-\frac{1}{\kappa}\right)\cdot\sum_{i=t}^{\ell-1}\overline{\rho}_{i}\right)\right)\] \[\leq 3\cdot d+2\cdot d+d\cdot\sum_{\ell=t^{\prime}+1}^{\kappa-1}\left(1-\left(1-\frac{1}{\kappa}\right)\cdot\sum_{i=t}^{\ell-1}\overline{\rho}_{i}\right)\] \[\leq 5\cdot d+d\cdot\sum_{\ell=t^{\prime}+1}^{\kappa-1}\left(1-\left(1-\frac{1}{\kappa}\right)\cdot\left(1-\frac{1}{\kappa}\right)\right)\] \[\leq 5\cdot d+d\cdot\sum_{\ell=t^{\prime}+1}^{\kappa-1}\frac{2}{\kappa}\] \[\leq 7\cdot d.\]
Above, the third inequality holds because \(\sum_{\ell=t}^{t^{\prime}}\overline{\rho}_{\ell}=\sum_{\ell=t}^{t^{\prime}-1}\overline{\rho}_{\ell}+\overline{\rho}_{t^{\prime}}\leq 2-\frac{1}{\kappa}\leq 2\), since \(t^{\prime}\) is the smallest index such that \(\sum_{\ell=t}^{t^{\prime}}\overline{\rho}_{\ell}\geq 1-\frac{1}{\kappa}\). The fourth inequality holds because \(\sum_{i=t}^{t^{\prime}}\overline{\rho}_{i}\geq 1-\frac{1}{\kappa}\) if \(t^{\prime}<\kappa-1\). Otherwise, the second summation vanishes as \(t^{\prime}+1>\kappa-1\).
## 7 \(\Omega\left(\frac{1}{\log\textbf{Rank}}\right)\) Pairwise Independent Matroid Prophet Inequality
In this section, we give an algorithm for a pairwise independent prophet inequality problem that matches the upper bound obtained in Section 6. We first present useful local-lemma type result for pairwise independent events from [18].
Let \(\mathcal{E},\mathcal{F}\) be two events on the same probability space. We denote by \(\mathcal{E}\vee\mathcal{F}\) and \(\mathcal{E}\wedge\mathcal{F}\) the occurrence of at least one of, and both of, the events \(\mathcal{E},\mathcal{F}\), respectively. Given a set of pairwise independent events \(\{\mathcal{E}_{1},\ldots,\mathcal{E}_{k}\}\) on the same probability space, the following lemma from [18] lower bounds the probability that at least one of the events \(\{\mathcal{E}_{1},\ldots,\mathcal{E}_{k}\}\) occurs.
**Lemma 7.1** (Lemma 1 from [18]).: _Let \(\{\mathcal{E}_{i}\}_{i=1}^{k}\) be a collection of pairwise independent events and \(\mathcal{D}\) be a pairwise independent distribution over these events. Then_
\[\textbf{Pr}\left[\bigvee_{i=1}^{k}\mathcal{E}_{i}\right]\geq\frac{\sum_{i=1}^{ k}\textbf{Pr}[\mathcal{E}_{i}]}{1+\sum_{i=1}^{k}\textbf{Pr}[\mathcal{E}_{i}]}.\]
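The bound is easy to verify on the classical XOR construction of pairwise independent events; the following sketch (with illustrative events over two fair independent bits) compares both sides of Lemma 7.1.

```python
# The events {b1=1}, {b2=1}, {b1 XOR b2 = 1} over two independent fair bits
# are pairwise independent, each with probability 1/2, so Lemma 7.1 gives
# Pr[union] >= 1.5 / (1 + 1.5) = 0.6; the true union probability is 0.75.
from itertools import product

outcomes = list(product([0, 1], repeat=2))
events = [
    lambda b: b[0] == 1,
    lambda b: b[1] == 1,
    lambda b: (b[0] ^ b[1]) == 1,
]
union = sum(any(E(b) for E in events) for b in outcomes) / len(outcomes)
total = sum(sum(1 for b in outcomes if E(b)) / len(outcomes) for E in events)
print(union, ">=", total / (1 + total))
```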
The following theorem is the main result of the section.
**Theorem 7.2**.: _For any given matroid \(\mathcal{M}(E,\mathcal{I})\), there exists an \(O\left(\frac{1}{\log\textbf{Rank}}\right)\)-approximation algorithm for pairwise independent prophet inequality problem where **Rank** is a short-hand notation for \(\textbf{Rank}(\mathcal{M})\)._
Our algorithm defines a strategy for the gambler by dividing elements into _weight buckets_. Given a matroid \(\mathcal{M}=(E,\mathcal{I})\) and pairwise independent weight distribution \(w\sim\mathcal{D}\), we denote \(\mathbf{OPT}=\mathbb{E}_{\mathcal{D}}\left[\mathbf{Rank}_{w}(\mathcal{M})\right]\). Next, we set \(k=\lceil 4\log\mathbf{Rank}\rceil\) and define \(k+2\) buckets \(B_{0},B_{1},\ldots B_{k}\) and \(B_{\infty}\) as follows:
\[B_{0}=\left[0,\frac{\mathbf{OPT}}{\mathbf{Rank}^{2}}\right),\;B_{1}=\left[\frac{\mathbf{OPT}}{\mathbf{Rank}^{2}},\frac{2\,\mathbf{OPT}}{\mathbf{Rank}^{2}}\right),\;\ldots,\;B_{k}=\left[\frac{2^{k-1}\,\mathbf{OPT}}{\mathbf{Rank}^{2}},\frac{2^{k}\,\mathbf{OPT}}{\mathbf{Rank}^{2}}\right),\;B_{\infty}=\left[\frac{2^{k}\,\mathbf{OPT}}{\mathbf{Rank}^{2}},\infty\right).\]
Given any draw of the weights \(w\sim\mathcal{D}\), we partition elements into sets \(E_{0},E_{1},\ldots E_{k}\) and \(E_{\infty}\) as follows. We define \(E_{i}=\{e\in E:w(e)\in B_{i}\}\) for any \(i\in\{0,\ldots k,\infty\}\) as elements whose weight lies in bucket \(B_{i}\). Here, note that the sets \(E_{i}\) for \(i\in\{0,\ldots,k,\infty\}\) are random. We define the expected optimal reward from bucket \(B_{i}\) for any \(i\in\{0,1,\ldots,k,\infty\}\) as follows,
\[\mathbf{OPT}(B_{i})=\mathbb{E}\left[\max_{S\subseteq E_{i},S\in\mathcal{I}} \mathbf{Rank}_{w}(S)\right]\text{ for any }i\in\{0,\ldots k,\infty\}.\]
We can upper-bound the expected reward of the prophet by the total expected optimal rewards from each bucket.
\[\mathbf{OPT}\leq\sum_{i=0}^{k}\mathbf{OPT}(B_{i})+\mathbf{OPT}(B_{\infty}). \tag{2}\]
We start by showing that the contribution of bucket \(B_{0}\) can be ignored. More formally,
**Claim 7.3**.: _Given matroid \(\mathcal{M}=(E,\mathcal{I})\) and pairwise independent distribution \(w\sim\mathcal{D}\),_
\[\mathbf{OPT}(B_{0})\leq\frac{\mathbf{OPT}}{\mathbf{Rank}}.\]
Proof.: We can bound,
\[\mathbf{OPT}(B_{0})=\mathbb{E}\left[\max_{S\subseteq E_{0},S\in\mathcal{I}} \mathbf{Rank}_{w}(S)\right]\leq\mathbb{E}\left[|S|\cdot\frac{\mathbf{OPT}}{ \mathbf{Rank}^{2}}\right]\leq\frac{\mathbf{OPT}}{\mathbf{Rank}}.\]
The first inequality holds because \(w(e)\leq\frac{\mathbf{OPT}}{\mathbf{Rank}^{2}}\) for any element \(e\in E_{0}\) with probability 1. The latter follows from the fact that \(|S|\leq\mathbf{Rank}\) with probability 1.
Clearly, this claim implies that \(\sum_{i=1}^{k}\mathbf{OPT}(B_{i})+\mathbf{OPT}(B_{\infty})\geq\left(1-\frac{1} {\mathbf{Rank}}\right)\cdot\mathbf{OPT}\). Next, we propose an algorithm which guarantees at least \(\Omega\left(\frac{1}{\log\mathbf{Rank}}\right)\) fraction of \(\mathbf{OPT}\). The algorithm first determines the optimal bucket \(B^{*}\) which guarantees the maximum expected reward as follows:
\[B^{*}=\operatorname*{argmax}_{B_{i}\in\{B_{1},\ldots,B_{k},B_{\infty}\}} \mathbf{OPT}(B_{i}).\]
Later, it greedily picks all elements belonging to bucket \(B^{*}\); we refer to this strategy as Algorithm 1, and a minimal sketch is given after the next display. In the rest of the section, we aim to show that Algorithm 1 guarantees the desired approximation ratio. We first observe that
\[\mathbf{OPT}(B^{*}) \geq\frac{1}{4\log\mathbf{Rank}}\cdot\left(\sum_{i=1}^{k}\mathbf{OPT}(B_{i})+\mathbf{OPT}(B_{\infty})\right)\geq\frac{1}{4\log\mathbf{Rank}}\cdot\left(1-\frac{1}{\mathbf{Rank}}\right)\cdot\mathbf{OPT}\] \[\geq\frac{1}{8\log\mathbf{Rank}}\cdot\mathbf{OPT} \tag{3}\]
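For concreteness, here is a minimal sketch of the bucket-greedy strategy of Algorithm 1, assuming oracle access to the bucket rewards \(\mathbf{OPT}(B_{i})\) (e.g., estimated offline by simulation) and to a matroid independence oracle; all function names and signatures are illustrative.

```python
# A minimal sketch of Algorithm 1: build the buckets B_0, ..., B_k, B_inf,
# pick the bucket with the largest expected reward (ignoring B_0), and
# greedily accept arriving elements whose weight falls in that bucket.
import math

def make_buckets(opt, rank):
    k = math.ceil(4 * math.log2(rank))
    unit = opt / rank ** 2
    cuts = [0.0] + [unit * 2 ** i for i in range(k + 1)] + [math.inf]
    return list(zip(cuts[:-1], cuts[1:]))      # half-open intervals [lo, hi)

def algorithm_one(arrivals, bucket_rewards, buckets, is_independent):
    # B* = argmax over B_1, ..., B_k, B_inf of the oracle reward estimates
    b_star = max(range(1, len(buckets)), key=lambda i: bucket_rewards[i])
    lo, hi = buckets[b_star]
    S = []
    for element, weight in arrivals:           # elements arrive online
        if lo <= weight < hi and is_independent(S + [element]):
            S.append(element)                  # greedily keep the element
    return S
```

With a rank oracle, `is_independent(T)` can be realized as checking \(\mathbf{Rank}(T)=|T|\).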
Let \(S\) be the output of the algorithm for any given problem instance. Next, we aim to show that \(\mathbb{E}[w(S)]\geq c\cdot\mathbf{OPT}(B^{*})\) for some universal constant \(c\). We consider the following two cases separately: (i) \(B^{*}\in\{B_{1},\ldots,B_{k}\}\) and (ii) \(B^{*}=B_{\infty}\), and we start by showing the first.
**Lemma 7.4**.: _Let \(S\) be the output of Algorithm 1. If \(B^{*}\in\{B_{1},\ldots B_{k}\}\) then_
\[\mathbb{E}[w(S)]\geq\frac{1}{2}\cdot\mathbf{OPT}(B^{*}).\]
Proof.: Let \(B_{i}=B^{*}\) for some \(i\in[k]\) and \(\ell=2^{i-1}\cdot\frac{\mathbf{OPT}}{\mathbf{Rank}^{2}}\). Then, observe that \(\ell\leq w(e)\leq 2\cdot\ell\) for all \(e\in E_{i}\) and \(\mathbf{Rank}_{w}(E_{i})\leq 2\cdot\ell\cdot\mathbf{Rank}(E_{i})\). Since the greedy algorithm returns an independent set \(S\) with \(|S|=\mathbf{Rank}(E_{i})\) with probability \(1\), we have
\[\boldsymbol{\mathrm{Rank}}_{w}(S)\geq\ell\cdot|S|=\ell\cdot\boldsymbol{ \mathrm{Rank}}(E_{i})\geq\frac{1}{2}\cdot\boldsymbol{\mathrm{Rank}}_{w}(E_{i}).\]
which completes the proof.
Next, we analyze the case when \(B^{*}=B_{\infty}\). We start by observing that the probability of bucket \(B_{\infty}\) being nonempty, i.e., \(E_{\infty}\neq\emptyset\), is very small.
**Observation 7.5**.: _Given matroid \(\mathcal{M}=(E,\mathcal{I})\) and pairwise independent distribution \(w\sim\mathcal{D}\),_
\[\mathbf{Pr}\left[E_{\infty}\neq\emptyset\right]\leq\frac{1}{\boldsymbol{Rank} ^{2}}.\]
Proof.: Assume for a contradiction \(\mathbf{Pr}\left[E_{\infty}\neq\emptyset\right]>\frac{1}{\boldsymbol{ \mathrm{Rank}}^{2}}\), then
\[\operatorname*{\mathbb{E}}_{w\sim\mathcal{D}}[\boldsymbol{\mathrm{Rank}}_{w} (\mathcal{M})] \geq\operatorname*{\mathbb{E}}_{w\sim\mathcal{D}}\left[\max_{e\in E }w(e)\right]\] \[\geq 2^{k}\cdot\frac{\mathbf{OPT}}{\boldsymbol{\mathrm{Rank}}^{2}} \cdot\mathbf{Pr}[E_{\infty}\neq\emptyset]\] \[\geq\mathbf{OPT}\cdot\boldsymbol{\mathrm{Rank}}^{2}\cdot\mathbf{ Pr}[E_{\infty}\neq\emptyset] (k\geq 4\cdot\log\boldsymbol{\mathrm{Rank}})\] \[>\frac{\mathbf{OPT}\cdot\boldsymbol{\mathrm{Rank}}^{2}}{ \boldsymbol{\mathrm{Rank}}^{2}}\] (contradiction assumption) \[=\mathbf{OPT}\]
yields a contradiction.
Before we complete the proof of the second scenario, when \(B^{*}=B_{\infty}\), we state a fact.
**Proposition 7.6**.: _Given matroid \(\mathcal{M}=(E,\mathcal{I})\), pairwise independent distribution \(w\sim\mathcal{D}\) and \(e^{\prime}\in E\),_
\[\sum_{e\in E\setminus\{e^{\prime}\}}\mathbf{Pr}[w(e)\in B_{\infty} \mid w(e^{\prime})\in B_{\infty}]\leq\frac{\frac{1}{\mathbf{Rank}^{2} }}{1-\frac{1}{\mathbf{Rank}^{2}}}.\]
Proof.: First, we note that the set of events \(\{w(e)\in B_{\infty}\}_{e\in E}\) are pairwise independent. Hence, Lemma 7.1 implies that,
\[\mathbf{Pr}[E_{\infty}\neq\emptyset]=\mathbf{Pr}\left[\bigvee_{e \in E}\{w(e)\in B_{\infty}\}\right]\geq\frac{\sum_{e\in E}\mathbf{Pr}[w(e)\in B _{\infty}]}{1+\sum_{e\in E}\mathbf{Pr}[w(e)\in B_{\infty}]}.\]
Observe that this is equivalent to
\[\sum_{e\in E}\mathbf{Pr}[w(e)\in B_{\infty}]\leq\frac{\mathbf{Pr}[E_{\infty} \neq\emptyset]}{1-\mathbf{Pr}[E_{\infty}\neq\emptyset]}\leq\frac{\frac{1}{ \mathbf{Rank}^{2}}}{1-\frac{1}{\mathbf{Rank}^{2}}}.\]
Above, the last inequality follows due to Observation 7.5. Since distribution \(\mathcal{D}\) is pairwise independent,
\[\sum_{e\in E\setminus\{e^{\prime}\}}\mathbf{Pr}[w(e)\in B_{\infty} \mid w(e^{\prime})\in B_{\infty}]=\sum_{e\in E\setminus\{e^{\prime}\}} \mathbf{Pr}[w(e)\in B_{\infty}]\leq\frac{\frac{1}{\mathbf{Rank}^{2}} }{1-\frac{1}{\mathbf{Rank}^{2}}}\]
completes the proof.
The following lemma establishes the result of this section.
**Lemma 7.7**.: _Let \(S\) be the output of Algorithm 1,_
\[\mathbb{E}[w(S)\mid B^{*}=B_{\infty}]\geq\frac{2}{3}\cdot\mathbf{OPT}(B_{\infty}).\]
Proof.: First we can bound the \(\mathbf{OPT}(B_{\infty})\) as follows:
\[\mathbf{OPT}(B_{\infty})=\mathbb{E}\left[\max_{I\subseteq E_{ \infty},I\in\mathcal{I}}\mathbf{Rank}_{w}(I)\right]\leq\sum_{e\in E} \mathbb{E}\left[w(e)\mid w(e)\in B_{\infty}\right]\cdot\mathbf{Pr}[w(e)\in B _{\infty}].\]
We note that our algorithm always selects the first element in \(B_{\infty}\) presented by the adversary. Now, we define an event \(\mathcal{E}_{e}(x):=\{w(e)=x\}\wedge\bigwedge_{e^{\prime}\in E\setminus\{e\}}\{w(e^{\prime})\notin B_{\infty}\}\) for any \(x\geq\mathbf{Rank}^{2}\cdot\mathbf{OPT}\).
We can bound the probability of the event \(\mathcal{E}_{e}\) as follows:
\[\mathbf{Pr}[\mathcal{E}_{e}(x)] =\mathbf{Pr}\left[w(e)=x\wedge\bigwedge_{e^{\prime}\in E\setminus \{e\}}\{w(e^{\prime})\notin B_{\infty}\}\right]\] \[=\mathbf{Pr}\left[\bigwedge_{e^{\prime}\in E\setminus\{e\}}\{w(e ^{\prime})\notin B_{\infty}\}\mid w(e)=x\right]\cdot\mathbf{Pr}[w(e)=x]\] \[=(1-\mathbf{Pr}\left[E_{\infty}\setminus\{e\}\neq\emptyset\mid w (e)=x\right])\cdot\mathbf{Pr}[w(e)=x]\] \[\geq\left(1-\frac{\frac{1}{\mathbf{Rank}^{2}}}{1-\frac{1}{ \mathbf{Rank}^{2}}}\right)\cdot\mathbf{Pr}[w(e)=x].\] (Proposition 7.6 )
As the events \(\{\mathcal{E}_{e}(x)\}_{e\in E}\) are disjoint, we can lower bound the performance of the algorithm as
\[\mathbb{E}[w(S)\mid B^{*}=B_{\infty}] \geq\sum_{e\in E}\int_{x\geq\mathbf{Rank}^{2}\cdot\mathbf{OPT}}x \cdot d\,\mathbf{Pr}[\mathcal{E}_{e}(x)]\] \[\geq\left(1-\frac{\frac{1}{\mathbf{Rank}^{2}}}{1-\frac{1}{ \mathbf{Rank}^{2}}}\right)\cdot\sum_{e\in E}\int_{x\geq\mathbf{Rank}^{2}\cdot \mathbf{OPT}}x\cdot d\,\mathbf{Pr}[w(e)=x]\] \[=\left(1-\frac{\frac{1}{\mathbf{Rank}^{2}}}{1-\frac{1}{ \mathbf{Rank}^{2}}}\right)\cdot\sum_{e\in E}\mathbb{E}[w(e)\mid w(e)\geq\mathbf{ Rank}^{2}\cdot\mathbf{OPT}]\cdot\mathbf{Pr}[w(e)\geq\mathbf{Rank}^{2}\cdot \mathbf{OPT}]\] \[\geq\left(1-\frac{\frac{1}{\mathbf{Rank}^{2}}}{1-\frac{1}{ \mathbf{Rank}^{2}}}\right)\cdot\mathbf{OPT}(B_{\infty})\] \[\geq\frac{2}{3}\cdot\mathbf{OPT}(B_{\infty}).\]
Here, the last inequality follows when \(\mathbf{Rank}(\mathcal{M})\geq 2\).
We are now ready to prove the main theorem of the section:
Proof of Theorem 7.2.: Let \(S\) be the output of Algorithm 1. Combining Lemmas 7.4 and 7.7,
\[\mathbb{E}[w(S)]\geq\frac{1}{2}\cdot\mathbf{OPT}(B^{*})\geq\frac{1}{16\cdot\log\mathbf{Rank}}\cdot\mathbf{OPT}.\]
Above, the last inequality holds due to Equation (3). This concludes the proof.
**Remark 7.8**.: _We note that our algorithm is \(\Omega(1/\log\textbf{Rank})\)-approximate against even an "almighty" adversary from [28] who knows all the weight assignments or randomness of the algorithm in advance. This holds because our algorithm is deterministic and it picks a set with the total expected weight of at least \(\frac{1}{2}\mathbb{E}[\textbf{Rank}_{w}(B^{*})]\) for any arrival order._
## 8 Partition Property and Stochastic Selection Problems
A matroid \(\mathcal{M}=(E,\mathcal{I})\) is called a simple partition matroid if a partition \(E=\bigcup_{i=1}^{d}P_{i}\) exists such that \(I\in\mathcal{I}\) if and only if \(|I\cap P_{i}|\leq 1\) for all \(i\in[d]\). We recall the following partition property of matroids defined in [8].
**Definition 8.1** (Strong \(\alpha\)-Partition Property).: _Consider a matroid \(\mathcal{M}=(E,\mathcal{I})\). We say that \(\mathcal{M}\) satisfies an \(\alpha\)-partition property for some \(\alpha\in(0,1]\) if it is possible to determine a function \(f\) that generates a random simple partition matroid \(\mathcal{M}^{\prime}=(E^{\prime},\mathcal{I}^{\prime})\) on the set of elements \(E^{\prime}\subseteq E\) such that for any non-negative weight vector \(w\):_
1. \(\textbf{Rank}_{w}(\mathcal{M})\geq\mathbb{E}_{\mathcal{M}^{\prime}\sim f( \mathcal{M})}\left[\textbf{Rank}_{w}(\mathcal{M}^{\prime})\right]\geq\alpha \cdot\textbf{Rank}_{w}(\mathcal{M})\)_,_
2. \(\mathcal{I}^{\prime}\subseteq\mathcal{I}\) _for any_ \(\mathcal{M}^{\prime}\in\operatorname{supp}(f(\mathcal{M}))\)_._
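For concreteness, a minimal independence oracle for a simple partition matroid, with an illustrative element-to-part map:

```python
# A set is independent in a simple partition matroid iff it intersects each
# part in at most one element. The map `part_of` below is illustrative.
from collections import Counter

def is_independent(S, part_of):
    counts = Counter(part_of[e] for e in S)
    return all(c <= 1 for c in counts.values())

part_of = {"a": 0, "b": 0, "c": 1}
print(is_independent({"a", "c"}, part_of), is_independent({"a", "b"}, part_of))
# True False
```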
Several families of matroids have been shown to satisfy this property for a global constant or a "small" \(\alpha\) [8, 45]. Table 1 summarizes our results. We regard the function \(f\) as a black-box function that takes a matroid \(\mathcal{M}\) as input and returns a (random) simple partition matroid \(\mathcal{M}^{\prime}\) which guarantees the best possible \(\alpha\)-partition property for \(\mathcal{M}\).
### Partition Property and Pairwise Independent Matroid Prophet Inequalities
We reference a theorem from the work that investigates the prophet inequality problem with pairwise independent distributions [18]. This theorem illustrates the existence of a constant approximate pairwise independent prophet inequality algorithm for a one-uniform matroid.
**Theorem 8.2** ([18]).: _Given a one-uniform matroid over elements \(E\) and a pairwise independent value distribution \(\mathcal{D}\in\Delta_{\text{pw}}(\mathbb{R}_{\geq 0}^{|E|})\), there exists a \((1/3)\)-approximate prophet inequality algorithm._
Broadening this result, we devise an algorithm that admits an \(\frac{\alpha}{3}\)-approximate pairwise independent prophet inequality when the matroid upholds an \(\alpha\)-partition property. The algorithm operates in a straightforward manner: when given a matroid \(\mathcal{M}\) that satisfies the \(\alpha\)-partition property, it initially constructs a simple partition matroid \(\mathcal{M}^{\prime}\) and proceeds to execute a \(\frac{1}{3}\)-approximate algorithm within each part, treating each part as a one-uniform matroid; a minimal sketch follows.
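A minimal sketch of this reduction (Algorithm 2), assuming a black-box partition of the surviving elements realizing the \(\alpha\)-partition property and a per-part single-item rule, e.g., the \(1/3\)-approximate algorithm of Theorem 8.2; all names are illustrative.

```python
# A minimal sketch of Algorithm 2: elements dropped by the partition are
# ignored, and within each part at most one element is accepted according
# to a single-item prophet inequality rule supplied as a callback.
def algorithm_two(arrivals, partition, single_item_rule):
    part_of = {e: i for i, part in enumerate(partition) for e in part}
    taken, S = set(), []
    for element, weight in arrivals:          # elements arrive online
        i = part_of.get(element)
        if i is None or i in taken:
            continue                          # dropped by f, or part is used
        if single_item_rule(i, element, weight):
            taken.add(i)                      # accept at most one per part
            S.append(element)
    return S
```

A simple instantiation of `single_item_rule` is a per-part threshold rule that accepts the first element whose weight clears a precomputed threshold.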
| **Constraint** | **CRS** | **Prophet Inequality** | **Note and Reference for Partition Property** |
| --- | --- | --- | --- |
| Partition Matroid | \(\frac{1}{4}\)-balanced | \(\frac{1}{3}\)-competitive | Theorems 8.3, 8.6 |
| Graphic Matroid | \(\frac{1}{8}\)-balanced | \(\frac{1}{6}\)-competitive | Theorems 8.3, 8.6 and [8] |
| Co-Graphic Matroid | \(\frac{1}{12}\)-balanced | \(\frac{1}{9}\)-competitive | Theorems 8.3, 8.6 and [45] |
| Laminar Matroid | \(\frac{1}{12\sqrt{3}}\)-balanced | \(\frac{1}{9\sqrt{3}}\)-competitive | Theorems 8.3, 8.6 and [34] |
| Low Density | \(\frac{1}{8\gamma}\)-balanced | \(\frac{1}{6\gamma}\)-competitive | Theorems 8.3, 8.6 and [45], \(\gamma=\max_{S\subseteq E}\frac{|S|}{\mathbf{Rank}(S)}\) |
| Column \(k\) Sparse | \(\frac{1}{8k}\)-balanced | \(\frac{1}{6k}\)-competitive | Theorems 8.3, 8.6 and [45] |

Table 1: Summary of our results for matroids that satisfy the \(\alpha\)-partition property.
**Theorem 8.3**.: _Suppose \(\mathcal{M}=(E,\mathcal{I})\) is a matroid which satisfies an \(\alpha\)-partition property for a certain \(\alpha\in(0,1]\), and \(\mathcal{D}\in\Delta_{\text{pw}}(\mathbb{R}^{E}_{\geq 0})\) is a pairwise independent value distribution. Algorithm 2 yields an \(\frac{\alpha}{3}\)-approximate pairwise independent prophet inequality._
Proof.: Let's denote \(\mathcal{A}_{i}\) as the run of algorithm \(\mathcal{A}\) with inputs \(P_{i}\) and value distribution \(\{w(e):e\in P_{i}\}\). Given the random simple partition matroid \(\mathcal{M}^{\prime}\), we can observe that
\[\operatorname*{\mathbb{E}}_{w\sim\mathcal{D}}[w(S)\mid\mathcal{M}^ {\prime}] =\sum_{i=1}^{r}\operatorname*{\mathbb{E}}_{\mathcal{A}_{i},w\sim \mathcal{D}}[w(S_{i})\mid\mathcal{M}^{\prime}]\] (Linearity of expectation) \[\geq\sum_{i=1}^{r}\operatorname*{\mathbb{E}}_{\mathcal{A}_{i},w \sim\mathcal{D}}\left[\frac{1}{3}\cdot\operatorname{\mathbf{Rank}}_{w}^{ \mathcal{M}^{\prime}}(P_{i})\mid\mathcal{M}^{\prime}\right]\] (Theorem 8.2 ) \[=\frac{1}{3}\cdot\operatorname*{\mathbb{E}}_{w\sim\mathcal{D}} \left[\sum_{i=1}^{r}\operatorname{\mathbf{Rank}}_{w}^{\mathcal{M}^{\prime}}(P_ {i})\mid\mathcal{M}^{\prime}\right]\] (Linearity of expectation) \[=\frac{1}{3}\cdot\operatorname*{\mathbb{E}}_{w\sim\mathcal{D}} \left[\operatorname{\mathbf{Rank}}_{w}(\mathcal{M}^{\prime})\mid\mathcal{M}^ {\prime}\right].\] ( \[\mathcal{M}^{\prime}\] is a partition matroid)
If we drop the conditioning and use Fubini's theorem, we get that
\[\operatorname*{\mathbb{E}}_{\mathcal{M}^{\prime}}\left[\frac{1}{3}\cdot \operatorname*{\mathbb{E}}_{w\sim\mathcal{D}}\left[\operatorname{\mathbf{Rank} }_{w}(\mathcal{M}^{\prime})\mid\mathcal{M}^{\prime}\right]\right] =\frac{1}{3}\operatorname*{\mathbb{E}}_{w\sim\mathcal{D}} \left[\operatorname*{\mathbb{E}}_{\mathcal{M}^{\prime}}\left[\operatorname{ \mathbf{Rank}}_{w}(\mathcal{M}^{\prime})\mid\mathbf{w}\right]\right]\]
given that \(\operatorname{\mathbf{Rank}}_{w}(\mathcal{M}^{\prime})\) is a non-negative random variable with finite expectation. Above, the equality holds because \(\mathcal{M}^{\prime}\) is independent of the weight assignments \(w\sim\mathcal{D}\). Thus, we have
\[\operatorname*{\mathbb{E}}[w(S)] =\operatorname*{\mathbb{E}}_{\mathcal{M}^{\prime}}\left[ \operatorname*{\mathbb{E}}_{w\sim\mathcal{D}}[w(S)\mid\mathcal{M}^{\prime}]\right]\] \[\geq\frac{1}{3}\operatorname*{\mathbb{E}}_{w\sim\mathcal{D}} \left[\operatorname*{\mathbb{E}}_{\mathcal{M}^{\prime}}\left[\operatorname{ \mathbf{Rank}}_{w}(\mathcal{M}^{\prime})\mid w\right]\right]\] \[\geq\frac{1}{3}\operatorname*{\mathbb{E}}_{w\sim\mathcal{D}} \left[\alpha\cdot\operatorname{\mathbf{Rank}}_{w}(\mathcal{M})\right]\] ( \[\alpha\] -partition property) \[=\frac{\alpha}{3}\cdot\operatorname*{\mathbb{E}}_{w\sim \mathcal{D}}\left[\operatorname{\mathbf{Rank}}_{w}(\mathcal{M})\right].\]
**Remark 8.4**.: _In Theorem 8.3, our approximation guarantees hold even against an "almighty" adversary from [28] who knows all the weight assignments or randomness of the algorithm in advance. This holds because the pairwise independent prophet inequality algorithm for the one-uniform matroid from [18] is deterministic. Therefore, once we condition on \(\mathcal{M}^{\prime}\), Equation 4 holds against an almighty adversary._
### Partition Property and Pairwise Independent CRS for Matroids
In this section, we demonstrate an \(\frac{\alpha}{4}\)-balanced Contention Resolution Scheme (CRS) for matroids that exhibit an \(\alpha\)-partition property. We first present a straightforward \(\frac{1}{4}\)-balanced pairwise independent CRS for a one-uniform matroid.
**Lemma 8.5**.: _For a one-uniform matroid over elements \(E\), \(\mathbf{x}\in[0,1]^{|E|}\) with \(\sum_{e\in E}\mathbf{x}(e)\leq 1\), and a pairwise independent distribution \(\mathcal{D}\in\Delta_{\text{pw}}(2^{E})(\mathbf{x})\), there exists a \(\frac{1}{4}\)-balanced CRS for \(\mathcal{D}\)._
Proof.: Due to Lemma 2.2, it is enough to show that for any weight vector \(w\in\mathbb{R}_{\geq 0}^{|E|}\), \(\mathbb{E}_{A\sim\mathcal{D}}[\mathbf{Rank}_{w}(A)]\geq\frac{1}{4}\cdot\sum_{e \in E}w(e)\cdot\mathbf{x}(e)\). For the set \(A\sim\mathcal{D}\), we construct \(A^{\prime}\) by independently choosing each element from \(A\) with probability \(\frac{1}{2}\). Noting that \(\mathbf{Pr}[e\in A^{\prime}]=\frac{\mathbf{x}(e)}{2}\), we can formulate the following bound:
\[\mathop{\mathbb{E}}_{A\sim\mathcal{D}}[\mathbf{Rank}_{w}(A)] \geq\mathop{\mathbb{E}}_{A^{\prime}}[\mathbf{Rank}_{w}(A^{ \prime})]\] \[\geq\sum_{e\in E}w(e)\cdot\mathop{\mathbf{Pr}}_{A^{\prime}}[A^{ \prime}\cap E=\{e\}]\] \[=\sum_{e\in E}w(e)\cdot\mathop{\mathbf{Pr}}_{A^{\prime}}[e\in A^{ \prime}]\cdot\mathbf{Pr}[(A^{\prime}\cap E)\setminus\{e\}=\emptyset\mid e\in A ^{\prime}]\] \[\geq\sum_{e\in E}w(e)\cdot\mathop{\mathbf{Pr}}_{A^{\prime}}[e\in A ^{\prime}]\cdot\left(1-\sum_{e^{\prime}\in E\setminus\{e\}}\mathop{\mathbf{Pr}} _{A^{\prime}}[e^{\prime}\in A^{\prime}\mid e\in A^{\prime}]\right)\] (Union Bound) \[=\frac{1}{2}\cdot\sum_{e\in E}w(e)\cdot\mathbf{x}(e)\cdot\left(1- \frac{1}{2}\cdot\sum_{e^{\prime}\in E\setminus\{e\}}\mathbf{x}(e^{\prime})\right)\] (Pairwise Independence) \[\geq\frac{1}{4}\cdot\sum_{e\in E}w(e)\cdot\mathbf{x}(e)\]
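The subsampling argument above is easy to test numerically. The following sketch uses assumed toy data: for convenience the active set is drawn from a fully independent distribution with marginals \(\mathbf{x}\) (full independence implies pairwise independence), and the scheme's value is compared against the benchmark \(\frac{1}{4}\sum_{e}w(e)\cdot\mathbf{x}(e)\).

```python
import numpy as np

rng = np.random.default_rng(0)

def crs_one_uniform(active, rng):
    # Lemma 8.5's scheme: keep each active element independently with
    # probability 1/2, and accept a survivor only if it survived alone.
    survivors = [e for e in active if rng.random() < 0.5]
    return survivors[0] if len(survivors) == 1 else None

# Toy marginals x with sum(x) <= 1 and weights w (assumed test data).
x = np.array([0.3, 0.3, 0.2, 0.2])
w = np.array([1.0, 2.0, 0.5, 3.0])

vals = []
for _ in range(200_000):
    active = [e for e in range(4) if rng.random() < x[e]]
    kept = crs_one_uniform(active, rng)
    vals.append(w[kept] if kept is not None else 0.0)

print(np.mean(vals), 0.25 * float(w @ x))  # empirical value vs (1/4) * <w, x>
```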
Next, similarly to the prophet inequality setting, we reduce CRS for matroids with the \(\alpha\)-partition property to the one-uniform case. The following is the main theorem of this section:
**Theorem 8.6**.: _Let \(\mathcal{M}=(E,\mathcal{I})\) be a matroid that satisfies \(\alpha\)-partition property for some \(\alpha\in(0,1]\), \(\mathbf{x}\in P_{\mathcal{I}}\) and \(\mathcal{D}\in\Delta_{\text{pw}}(2^{E})(\mathbf{x})\). Then \(\mathcal{D}\) admits an \(\frac{\alpha}{4}\)-balanced CRS._
Before we prove the theorem, we prove a crucial lemma: for any matroid \(\mathcal{M}\) satisfying the \(\alpha\)-partition property and any \(\mathbf{x}\in\mathcal{P}_{\mathcal{I}}\), there exists \(\mathbf{x}^{\prime}\in\mathcal{P}_{\mathcal{I}^{\prime}}\) that approximates \(\mathbf{x}\) in expectation, where \(\mathcal{M}^{\prime}=(E,\mathcal{I}^{\prime})\) is a (random) simple partition matroid whose (expected) rank approximates that of \(\mathcal{M}\) within a factor of \(\alpha\).
**Lemma 8.7**.: _Let \(\mathcal{M}=(E,\mathcal{I})\) be a matroid that satisfies \(\alpha\)-partition property associated with the black box function \(f\), \(\mathbf{x}\in\mathcal{P}_{\mathcal{I}}\) and weight vector \(w\in\mathbb{R}_{\geq 0}^{E}\) and \(\mathcal{M}^{\prime}=(E,\mathcal{I}^{\prime})\) be a random partition matroid produced by \(f(\mathcal{M})\). Then there exists (random) \(\mathbf{x}^{\prime}\) such that:_
1. \(\mathbf{x}^{\prime}\in\mathcal{P}_{\mathcal{I}^{\prime}}\) _with probability_ \(1\)_,_
2. \(\mathbf{x}^{\prime}(e)\leq\mathbf{x}(e)\) _for all_ \(e\in E\) _with probability_ \(1\)_,_
3. \(\sum_{e\in E}w(e)\cdot\mathbb{E}[\mathbf{x}^{\prime}(e)]\geq\alpha\cdot\sum_{ e\in E}w(e)\cdot\mathbf{x}(e)\)_._
Proof.: Due to Caratheodory's theorem, we can express \(\mathbf{x}=\sum_{j=1}^{k}c_{j}\cdot\mathds{1}_{I_{j}}\), where \(\sum_{j=1}^{k}c_{j}=1\), \(I_{j}\in\mathcal{I}\) for every \(j\in[k]\), and \(k=|E|+1\). Next, let us define \(w^{j}=w\cdot\mathds{1}_{I_{j}}\), and notice that the \(\alpha\)-partition property implies
\[\mathbb{E}[\mathbf{Rank}_{w^{j}}(\mathcal{M}^{\prime})]\geq\alpha\cdot \mathbb{E}[\mathbf{Rank}_{w^{j}}(\mathcal{M})]=\alpha\cdot\sum_{e\in I_{j}}w(e) \tag{5}\]
where the equality follows because \(I_{j}\in\mathcal{I}\). For all \(\mathcal{M}^{\prime}=(E^{\prime},\mathcal{I}^{\prime})\in\operatorname{supp}( f(\mathcal{M}))\), we define \(\mathbf{x}^{\prime}\) such that for any \(e\in E\):
\[\mathbf{x}^{\prime}(e)=\sum_{j=1}^{k}c_{j}\cdot\mathds{1}\left[e\in\operatorname {argmax}_{S\in\mathcal{I}^{\prime}}w^{j}(S)\right], \tag{6}\]
Notice that \(\mathbf{x}^{\prime}\in\mathcal{P}_{\mathcal{I}^{\prime}}\) since \(\sum_{j=1}^{k}c_{j}=1\) and \(\mathbf{x}^{\prime}\) is a convex combination of indicator vectors of independent sets of \(\mathcal{M}^{\prime}\). Also, observe that \(\mathbf{x}^{\prime}(e)\leq\mathbf{x}(e)\), as \(\sum_{j=1}^{k}c_{j}\cdot\mathds{1}[e\in I_{j}]=\mathbf{x}(e)\) due to Caratheodory's theorem. In addition,
\[\sum_{e\in E}\mathbb{E}[\mathbf{x}^{\prime}(e)]\cdot w(e) =\sum_{e\in E}\sum_{j=1}^{k}c_{j}\cdot\mathbf{Pr}[e\in\operatorname {argmax}_{S\in\mathcal{I}^{\prime}}w^{j}(S)]\cdot w(e)\] (6) \[=\sum_{j=1}^{k}c_{j}\cdot\sum_{e\in E}\mathbf{Pr}[e\in\operatorname {argmax}_{S\in\mathcal{I}^{\prime}}w^{j}(S)]\cdot w(e)\] \[=\sum_{j=1}^{k}c_{j}\cdot\mathbb{E}[\mathbf{Rank}_{w^{j}}( \mathcal{M}^{\prime})]\] (Definition of \[\mathbf{Rank}_{w^{j}}(\cdot)\] ) \[\geq\sum_{j=1}^{k}c_{j}\cdot\alpha\sum_{e\in I_{j}}w(e)\] (5) \[=\alpha\cdot\sum_{e\in E}w(e)\cdot\mathbf{x}(e)\] (Caratheodory's decomp.)
We now move forward to provide a proof for the main theorem of this section.
Proof of Theorem 8.6.: Let \(w\in\mathbb{R}_{+}^{E}\) be an arbitrary weight assignment of elements, and \(\mathbf{x}\in\mathcal{P}_{\mathcal{I}}\) be an arbitrary vector of marginal probabilities. Let \(\mathbf{x}^{\prime}\) be the random vector that is generated as per Lemma 8.7. For any realization \(\mathcal{M}^{\prime}\), let us define \(N_{\mathcal{M}^{\prime}}\subseteq E\) as a randomly chosen set of elements, where each element \(e\in E\) is included with a probability of \(\frac{\mathbf{x}^{\prime}(e)}{\mathbf{x}(e)}\), independently. We observe that the set \(A\cap N_{\mathcal{M}^{\prime}}\) is sampled from a distribution belonging to \(\Delta_{\operatorname{pw}}(2^{E})(\mathbf{x}^{\prime})\) since \(\mathbf{x}^{\prime}(e)\leq\mathbf{x}(e)\) due to Lemma 8.7. We have:
\[\mathbb{E}[\mathbf{Rank}_{w}^{\mathcal{M}}(A)] \geq\mathbb{E}[\mathbf{Rank}_{w}^{\mathcal{M}^{\prime}}(A)] \qquad(\mathcal{I}^{\prime}\subseteq\mathcal{I})\] \[\geq\mathbb{E}[\mathbf{Rank}_{w}^{\mathcal{M}^{\prime}}(A\cap N_{ \mathcal{M}^{\prime}})] \qquad(\text{monotonicity of }\mathbf{Rank}_{w}(\cdot))\] \[=\mathbb{E}\left[\mathbb{E}[\mathbf{Rank}_{w}^{\mathcal{M}^{ \prime}}(A\cap N_{\mathcal{M}^{\prime}})\mid\mathcal{M}^{\prime}]\right]\] \[\geq\mathbb{E}\left[\frac{1}{4}\cdot\sum_{e\in E}w(e) \cdot\mathbf{x}(e)\cdot\mathbf{Pr}[e\in N_{\mathcal{M}^{\prime}}\mid\mathcal{ M}^{\prime}]\right] \qquad(\text{Lemma 8.5})\] \[=\mathbb{E}\left[\frac{1}{4}\cdot\sum_{e\in E}w(e)\cdot\mathbf{x} ^{\prime}(e)\right] \qquad(\text{Construction of }N_{\mathcal{M}^{\prime}})\] \[=\sum_{e\in E}\frac{1}{4}\cdot w(e)\cdot\mathbb{E}[\mathbf{x}^{ \prime}(e)] \qquad(\text{Linearity of Exp.})\] \[\geq\frac{\alpha}{4}\cdot\sum_{e\in E}w(e)\cdot\mathbf{x}(e) \qquad(\text{Lemma 8.7})\] \[=\frac{\alpha}{4}\cdot\mathop{\mathbb{E}}_{A\sim\mathcal{D}}[w(A)]\]
Concluding from Lemma 2.2, we find that the distribution on the set of active elements, if pairwise independent, admits an \(\frac{\alpha}{4}\)-balanced CRS when the matroid satisfies the \(\alpha\)-partition property.
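For intuition, the whole scheme behind Theorem 8.6 can be sketched as a composition of the preceding pieces. The helpers `partition_oracle` (a draw of \(\mathcal{M}'=f(\mathcal{M})\)) and `build_x_prime` (the construction of \(\mathbf{x}'\) from Lemma 8.7) are assumed; `crs_one_uniform` is the subsampling rule from the sketch after Lemma 8.5.

```python
def crs_alpha_partition(A, x, partition_oracle, build_x_prime, rng):
    """Sketch of the alpha/4-balanced CRS: thin A to A ∩ N_{M'}, then run
    the one-uniform CRS of Lemma 8.5 independently inside every part of M'."""
    parts = partition_oracle()                    # a draw of M' = f(M)
    xp = build_x_prime(parts, x)                  # Lemma 8.7: xp[e] <= x[e]
    thinned = {e for e in A
               if x[e] > 0 and rng.random() < xp[e] / x[e]}   # A ∩ N_{M'}
    kept = []
    for P in parts:                               # at most one element per part
        e = crs_one_uniform([u for u in P if u in thinned], rng)
        if e is not None:
            kept.append(e)
    return kept                                   # independent in M', hence in M
```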
**Remark 8.8**.: _Combining Theorem 8.6 with the impossibility result for CRS from Theorem 5.1, we conclude that the class of linear matroids does not satisfy the \(\alpha\)-partition property for \(\alpha\leq O(d)\). In addition, in the analysis of Theorem 5.1, by setting \(c=\log_{2}d\) and \(q=2\), we conclude that the full binary matroid does not satisfy the \(O(\log d/d)\)-partition property._
## 9 Open Questions
In this work, we explore the pairwise independent prophet inequality in scenarios where the arrival order is determined by an adversary who is aware of all the weight assignments beforehand. We ask whether a comparable upper bound on the approximation ratio holds for the worst-case arrival or random arrival models, in which the sequence of elements is either chosen by an adversary without advance access to the weight assignments, or is selected uniformly at random.
Due to the relation between the matroid secretary conjecture and the prophet secretary problem from [24], we note that demonstrating an upper bound for the random arrival order would refute the matroid secretary conjecture. Here, we would like to point out that our lower-bound construction for the matroid prophet inequality appears highly nontrivial even when considered in the random-order setting. Therefore, algorithmic attacks against our construction in the random-order setting could serve as a stepping stone towards the matroid secretary conjecture.
|
2304.08823 | Transfer to a Low-Resource Language via Close Relatives: The Case Study
on Faroese | Multilingual language models have pushed state-of-the-art in cross-lingual
NLP transfer. The majority of zero-shot cross-lingual transfer, however, use
one and the same massively multilingual transformer (e.g., mBERT or XLM-R) to
transfer to all target languages, irrespective of their typological,
etymological, and phylogenetic relations to other languages. In particular,
readily available data and models of resource-rich sibling languages are often
ignored. In this work, we empirically show, in a case study for Faroese -- a
low-resource language from a high-resource language family -- that by
leveraging the phylogenetic information and departing from the
'one-size-fits-all' paradigm, one can improve cross-lingual transfer to
low-resource languages. In particular, we leverage abundant resources of other
Scandinavian languages (i.e., Danish, Norwegian, Swedish, and Icelandic) for
the benefit of Faroese. Our evaluation results show that we can substantially
improve the transfer performance to Faroese by exploiting data and models of
closely-related high-resource languages. Further, we release a new web corpus
of Faroese and Faroese datasets for named entity recognition (NER), semantic
text similarity (STS), and new language models trained on all Scandinavian
languages. | Vésteinn Snæbjarnarson, Annika Simonsen, Goran Glavaš, Ivan Vulić | 2023-04-18T08:42:38Z | http://arxiv.org/abs/2304.08823v1 | # Transfer to a Low-Resource Language via Close Relatives: The Case Study on Faroese
###### Abstract
Multilingual language models have pushed state-of-the-art in cross-lingual NLP transfer. The majority of zero-shot cross-lingual transfer, however, use _one and the same_ massively multilingual transformer (e.g., mBERT or XLM-R) to transfer to _all_ target languages, irrespective of their typological, etymological, and phylogenetic relations to other languages. In particular, readily available data and models of resource-rich sibling languages are often ignored. In this work, we empirically show, in a case study for Faroese - a low-resource language from a high-resource language family - that by leveraging the phylogenetic information and departing from the 'one-size-fits-all' paradigm, one can improve cross-lingual transfer to low-resource languages. In particular, we leverage abundant resources of other Scandinavian languages (i.e., Danish, Norwegian, Swedish, and Icelandic) for the benefit of Faroese. Our evaluation results show that we can substantially improve the transfer performance to Faroese by exploiting data and models of closely-related high-resource languages. Further, we release a new web corpus of Faroese and Faroese datasets for named entity recognition (NER), semantic text similarity (STS), and new language models trained on all Scandinavian languages.
## 1 Introduction
Massively multilingual Transformer-based language models (MMTs) such as mBERT Devlin et al. (2019), XLM-RoBERTa Conneau et al. (2020) and mT5 Xue et al. (2021) have been the driving force of modern multilingual NLP, allowing for rapid bootstrapping of language technology for a wide range of low(er)-resource languages by means of (zero-shot or few-shot) cross-lingual transfer from high(er)-resource languages Lauscher et al. (2020); Hu et al. (2020); Xu and Murray (2022); Schmidt et al. (2022). Cross-lingual transfer with MMTs is not without drawbacks. MMTs' representation spaces are heavily skewed in favor of high-resource languages, for which they have been exposed to much more data in pretraining Joshi et al. (2020); Wu and Dredze (2020); combined with the 'curse of multilinguality' - i.e., limited per-language representation quality stemming from a limited capacity of the model Conneau et al. (2020); Pfeiffer et al. (2022) - this leads to lower representational quality for languages underrepresented in MMTs' pretraining. Cross-lingual transfer with MMTs thus fails exactly in settings in which it is needed the most: for low-resource languages with small digital footprint Zhao et al. (2021). Despite these proven practical limitations, the vast majority of work on cross-lingual transfer still relies on MMTs due to their appealing conceptual generality: in theory, they support transfer between any two languages seen in their pretraining. Such strict reliance on MMTs effectively ignores the linguistic phylogenetics and fails to directly leverage resources of resource-rich languages that are closely related to a target language of interest.
In this work, we attempt to mitigate the above limitations for a particular group of languages, departing from the 'one-size-fits-all' paradigm based on MMTs. We focus on a frequent and realistic setup in which the target language is a low-resource language but from a high-resource language family, i.e., with closely related resource-rich languages. A recent comprehensive evaluation of the languages used in Europe1 scores languages |
2305.07839 | The Geometry of Multilingual Language Models: An Equality Lens | Understanding the representations of different languages in multilingual
language models is essential for comprehending their cross-lingual properties,
predicting their performance on downstream tasks, and identifying any biases
across languages. In our study, we analyze the geometry of three multilingual
language models in Euclidean space and find that all languages are represented
by unique geometries. Using a geometric separability index we find that
although languages tend to be closer according to their linguistic family, they
are almost separable with languages from other families. We also introduce a
Cross-Lingual Similarity Index to measure the distance of languages with each
other in the semantic space. Our findings indicate that the low-resource
languages are not represented as good as high resource languages in any of the
models | Cheril Shah, Yashashree Chandak, Manan Suri | 2023-05-13T05:19:15Z | http://arxiv.org/abs/2305.07839v1 | # The Geometry of Multilingual Language Models: An Equality Lens
###### Abstract
Understanding the representations of different languages in multilingual language models is essential for comprehending their cross-lingual properties, predicting their performance on downstream tasks, and identifying any biases across languages. In our study, we analyze the geometry of three multilingual language models in Euclidean space and find that all languages are represented by unique geometries. Using a geometric separability index, we find that although languages tend to lie closer to languages from their own linguistic family, they are almost separable from languages of other families. We also introduce a Cross-Lingual Similarity Index to measure the distance between languages in the semantic space. Our findings indicate that low-resource languages are not represented as well as high-resource languages in any of the models.
## 1 Methodology
We use the XNLI-15way dataset Conneau et al. (2018) and sample 300 parallel sentences across the 15 languages for our analysis. We use common multilingual transformers, mBERT Devlin et al. (2019), MiniLM Wang et al. (2020), and XLMR Conneau et al. (2020a). More details about the models and dataset are available in Appendix A. We study the geometric properties of multilingual models using three methods:
1) We visualise the embedding space of a group of languages by taking the top 3 PCA components.
2) **Cross-lingual Similarity Index \(\Gamma\):** There have been many approaches to compute cross-lingual similarity such as Liu et al. (2020), however due to the extremely high anisotropy(average cosine similarity of any two randomly sampled words in the dataset) Ethayarajh (2019) of models like XLMR, it is difficult to make conclusions using cosine similarity. Thus we introduce a metric to quantify cross-lingual similarity. For a pair of languages, \(l1,l2\), we take embeddings \(s_{(l1,i)}\), \(s_{(l2,i)\forall i\in[0,299]}\) using model \(\mathcal{M}\) and calculate the Cross-lingual Similarity Index as follows:
\[\Gamma_{l1,l2}=\frac{\frac{1}{n}\sum_{i=1}^{n}cosine(s_{(l1,i)},s_{(l2,i)})}{ Anisotropy(\mathcal{M})} \tag{1}\]
For a model \(\mathcal{M}\), \(\Gamma\) ranges from \(-1/Anisotropy(\mathcal{M})\) to \(1/Anisotropy(\mathcal{M})\). Low model anisotropy and high average cosine similarity are desirable, so a higher \(\Gamma\) is better. Positive and negative values of \(\Gamma\) correspond to the average directional orientation between embeddings. \(|\Gamma|\leq 1\) would mean that the average similarity of parallel sentences is no higher than the similarity of random words. A sketch of how \(\Gamma\) might be computed follows.
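As an illustration, Equation (1) might be computed along the following lines, given matrices `E1`, `E2` of row-wise sentence embeddings for the two languages' parallel sentences; the encoder itself is not shown, and this anisotropy estimator, which samples random embedding pairs, is a simplified stand-in for the dataset-level word statistic used in the paper.

```python
import numpy as np

def _unit(E):
    return E / np.linalg.norm(E, axis=1, keepdims=True)

def anisotropy(embs, rng, n_pairs=10_000):
    # Rough estimate: average cosine similarity of random embedding pairs.
    U = _unit(embs)
    i, j = rng.integers(0, len(U), n_pairs), rng.integers(0, len(U), n_pairs)
    return float(np.mean(np.sum(U[i] * U[j], axis=1)))

def cross_lingual_similarity_index(E1, E2, model_anisotropy):
    # Equation (1): mean cosine similarity over the parallel pairs,
    # normalized by the model's anisotropy.
    A, B = _unit(E1), _unit(E2)
    return float(np.mean(np.sum(A * B, axis=1))) / model_anisotropy

# Example usage (with hypothetical embedding arrays):
# rng = np.random.default_rng(0)
# gamma = cross_lingual_similarity_index(E1, E2, anisotropy(E_all, rng))
```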
3) **Language Separability \(\Phi\):** We study the separation of different languages in embedding space by treating all the points belonging to a language as a single cluster and calculating the pairwise Geometric Separability Index Thornton (2008) between two languages. |
2307.11822 | Sign regular matrices and variation diminution: single-vector tests and
characterizations, following Schoenberg, Gantmacher-Krein, and Motzkin | Variation diminution (VD) is a fundamental property in total positivity
theory, first studied in 1912 by Fekete-P\'olya for one-sided P\'olya frequency
sequences, followed by Schoenberg, and by Motzkin who characterized sign
regular (SR) matrices using VD and some rank hypotheses. A classical theorem by
Gantmacher-Krein characterized the strictly sign regular (SSR) $m \times n$
matrices for $m>n$ using this property.
In this article we strengthen these results by characterizing all $m \times
n$ SSR matrices using VD. We further characterize strict sign regularity of a
given sign pattern in terms of VD together with a natural condition motivated
by total positivity. We then refine Motzkin's characterization of SR matrices
by omitting the rank condition and specifying the sign pattern. This concludes
a line of investigation on VD started by Fekete-P\'olya [Rend. Circ. Mat.
Palermo 1912] and continued by Schoenberg [Math. Z. 1930], Motzkin [PhD thesis,
1936], Gantmacher-Krein [1950 book], Brown-Johnstone-MacGibbon [J. Amer. Stat.
Assoc. 1981], and Choudhury [Bull. London Math. Soc. 2022, Bull. Sci. Math.
2023].
In fact we show stronger characterizations, by employing single test vectors
with alternating sign coordinates - i.e., lying in the alternating bi-orthant.
We also show that test vectors chosen from any other orthant will not work. | Projesh Nath Choudhury, Shivangi Yadav | 2023-07-21T18:00:15Z | http://arxiv.org/abs/2307.11822v3 | Sign regularity: refinement of gantmacher-krein-motzkin results on variation diminution and the linear preserver problem
###### Abstract.
Variation diminution (VD) is a fundamental property in total positivity theory, first studied by Fekete-Polya (1912) for one sided Polya frequency sequences, followed by Schoenberg (1930), and by Motzkin (1936) who characterized sign regular (SR) matrices using VD and some rank hypotheses. A classical theorem in 1950 by Gantmacher-Krein characterized the strictly sign regular (SSR) \(m\times n\) matrices for \(m>n\) using this property. In this article we strengthen their result by characterizing all \(m\times n\) SSR matrices using VD. We further characterize strict sign regularity of a given sign pattern in terms of VD together with a natural condition motivated by total positivity. We then refine Motzkin's characterization of SR matrices by omitting the rank condition and specifying the sign pattern. More strongly, these characterizations employ single test vectors with alternating sign coordinates - i.e., lying in an alternating bi-orthant.
The second contribution of our work includes the study of linear preservers of SR and SSR matrices. The linear preserver problem is an important question in matrix theory and operator theory. We classify all linear mappings \(\mathcal{L}:\mathbb{R}^{m\times n}\to\mathbb{R}^{m\times n}\) that preserve: (i) sign regularity and (ii) sign regularity with a given sign pattern, as well as strict versions of these.
Key words and phrases: Strict sign regularity, sign regularity, total positivity, variation diminishing property, linear preserver problem.

2020 Mathematics Subject Classification: 15B48, 15A86 (primary); 15A24 (secondary).
###### Contents
* 1 Introduction and main results
* 2 Theorems A and B: Variation diminution for SSR
* 3 Theorem C: Linear preserver problem for sign regularity
* 4 Theorem D: Linear preserver problem for sign regularity with given sign pattern
* Acknowledgments
## 1. Introduction and main results
Given integers \(m,n\geq k\geq 1\), an \(m\times n\) real matrix \(A\) is _strictly sign regular of order \(k\)_ (SSR\({}_{k}\)) if there exists a sequence of signs \(\epsilon_{r}\in\{1,-1\}\) such that every \(r\times r\) minor of \(A\) has sign \(\epsilon_{r}\) for all \(1\leq r\leq k\). An SSR\({}_{k}\) matrix \(A\) is _strictly sign regular_ (SSR) if \(k=\min\{m,n\}\). If minors are allowed to also vanish, then \(A\) is correspondingly said to be _sign regular of order \(k\)_ (SR\({}_{k}\)) and _sign regular_ (SR). For an SSR (respectively SR) matrix \(A\), if \(\epsilon_{r}=1\) for all \(r\geq 1\), then A is said to be _totally positive_ (TP) (respectively _totally non-negative_ (TN)). These matrices have numerous applications in analysis, approximation theory, cluster algebras, combinatorics, differential equations, Gabor analysis, integrable systems, matrix theory, probability and statistics, Lie theory, and representation theory [1, 3, 4, 6, 7, 11, 13, 14, 16, 18, 19, 22, 24, 25, 26, 29, 30, 35, 38, 41, 42, 45].
SR and SSR matrices \(A\) enjoy the variation diminishing (VD) property, which states that the number of sign changes of \(A\mathbf{x}\) is bounded above by the number of sign changes of \(\mathbf{x}\), where \(\mathbf{x}\) is a vector. Variation diminution is considered to have originated from the famous 1883 memoir of Laguerre [27]. Polya coined the phrase "variation diminishing" ("variationsvermindernd" in German).
_In fact, these are also equivalent to the following assertion with a severely reduced test set:_
1. _For every contiguous square submatrix_ \(A_{k}\) _of_ \(A\) _of size_ \(k\times k\)_, where_ \(1\leq k\leq\min\{m,n\}\)_, and given any fixed vector_ \(\mathbf{0}\neq\mathbf{v}:=(\alpha_{1},-\alpha_{2},\ldots,(-1)^{k-1}\alpha_{k})^ {T}\in\mathbb{R}^{k}\) _with all_ \(\alpha_{j}\geq 0\)_, we define the vector_ \[\mathbf{x}^{A_{k}}:=\operatorname{adj}(A_{k})\mathbf{v}.\] (1.1) _Then_ \(S^{+}(A_{k}\mathbf{x}^{A_{k}})\leq S^{-}(\mathbf{x}^{A_{k}})\)_. Moreover, if_ \(S^{+}(A_{k}\mathbf{x}^{A_{k}})=S^{-}(\mathbf{x}^{A_{k}})=r\)_, where_ \(0\leq r\leq k-1\)_, then the sign of the first (last) component of_ \(A_{k}\mathbf{x}^{A_{k}}\) _(if zero, the sign given in determining_ \(S^{+}(A_{k}\mathbf{x}^{A_{k}})\)_) agrees with_ \(\epsilon_{r}\epsilon_{r+1}\) _times the sign of the first (last) non-zero component of_ \(\mathbf{x}^{A_{k}}\)_._
**Remark 1.2**.: As a consequence of Theorem B, one can reduce the test set in Theorem A as well, to a single vector \(\mathbf{x}^{A_{k}}:=\operatorname{adj}(A_{k})\mathbf{v}\) for each contiguous submatrix, and where \(\mathbf{v}\) is as in Theorem B, with alternate signed coordinates.
In a sense, Theorems A and B are the culmination of many previous results, extending work on SR matrices (Schoenberg [40], Motzkin [33] and Gantmacher-Krein [17]) and TP/TN matrices (Gantmacher-Krein [17], Brown-Johnstone-MacGibbon [8] and Choudhury [9]).
Along with variation diminution, the study of linear preserver problems is another fundamental question. A _linear preserver problem_ asks, given a subset \(S\) of a vector space \(V\), to characterize all linear transformations \(\mathcal{L}:V\to V\) such that \(\mathcal{L}(S)=S\). Such a linear transformation \(\mathcal{L}\) is called a _linear preserver of \(S\)_ or _\(S\)-preserver_. In 1887, Frobenius [15] characterized the general form of all determinant preserving linear maps on matrix algebras, which is regarded as the first result on linear preserver problems. The linear preserver problem has since been widely studied in matrix theory and operator theory. Some examples include the problem of spectrum preserving transformations on a space of bounded linear operators over a Banach space (Jafarian-Sourour [21]), linear preservers of the unitary group in \(\mathbb{C}^{n\times n}\) (Marcus [31]) and on arbitrary \(C^{*}\)-algebras (Russo-Dye [39]), linear transformations preserving operators with fixed rank (Beasley [2] and Omladic-Semrl [34]), linear transformations on operator algebras preserving absolute values (Radjabalipour-Seddighi-Taghavi [37]), and linear maps preserving invertibility (Sourour [44]). For more details about linear preserver problems and some techniques to tackle them, we refer to [20, 28, 32]. The classification of linear preservers of various forms of positivity has long been studied and is an important problem in the preserver literature - for instance, linear preservers of positive semi-definite matrices (all of whose principal minors are non-negative). Coming to the related notion of total positivity, the linear preservers of TP and TN matrices were classified for square matrices by Berman-Hershkowitz-Johnson [5]. Our next two results extend their work in several ways. First, the results below hold for arbitrary sizes. Second, we classify linear preservers of SSR and SR matrices (allowing all sign patterns). Third, we also do this for every fixed sign pattern.
**Definition 1.3**.: We need the following definitions and notations in order to state these results.
1. A square matrix is said to be an _exchange matrix_ if it is an antidiagonal 0-1 matrix of the form \[P_{n}:=\begin{pmatrix}0&0&\ldots&0&1\\ 0&0&\ldots&1&0\\ \vdots&\vdots&\iddots&\vdots&\vdots\\ 0&1&\ldots&0&0\\ 1&0&\ldots&0&0\end{pmatrix}_{n\times n}\quad.\]
2. Let \(\mathcal{SR}\) denote the class of SR matrices of a given fixed size. Similarly we define \(\mathcal{SR}_{2}\), \(\mathcal{SR}(\epsilon)\), \(\mathcal{SR}_{2}(\epsilon)\), \(\mathcal{SSR}\), and \(\mathcal{SSR}(\epsilon)\), where \(\mathcal{SR}_{2}(\epsilon)\) is only concerned with the signs \(\epsilon_{1}\) and \(\epsilon_{2}\).
3. Let \(P(S)\) denote the set of \(S\)-preservers, for \(S\) among \(\mathcal{SR},\mathcal{SR}_{2},\mathcal{SR}(\epsilon),\mathcal{SR}_{2}(\epsilon)\).
Our first theorem in this regard characterizes all linear preservers for the class of SR matrices. Moreover, we show that, surprisingly, to classify the linear preservers of \(\mathcal{SR}\) it suffices to examine the linear preservers of \(\mathcal{SR}_{2}\).
**Theorem C**.: _Let \(\mathcal{L}:\mathbb{R}^{m\times n}\to\mathbb{R}^{m\times n}\) be a linear transformation, where either \(m=n\geq 3\) or \(m\neq n\) and \(m,n\geq 2\). Then the following statements are equivalent._
1. \(\mathcal{L}\) _maps the class of SR matrices onto itself._
2. \(\mathcal{L}\) _maps the class of SR_\({}_{2}\) _matrices onto itself._
3. \(\mathcal{L}\) _is a composition of one or more of the following types of transformations:_ 1. \(A\mapsto FAE\)_, in which_ \(F_{m\times m}\) _and_ \(E_{n\times n}\) _are diagonal matrices with positive diagonal entries;_ 2. \(A\mapsto-A\)_;_ 3. \(A\mapsto P_{m}A\)_, in which_ \(P_{m}\) _is an exchange matrix;_ 4. \(A\mapsto AP_{n}\)_; and_ 5. \(A\mapsto A^{T}\)_, provided_ \(m=n\)_._
_Moreover, the theorem is also true if SR is replaced by SSR._
Note that if \(m=n\) then (3)(d) is not needed, given (c) and (e).
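To make the generators of Theorem C tangible, here is a minimal numerical sketch: it builds the exchange matrix of Definition 1.3 and checks, by brute force over all minors of a small TP (hence SSR) example, that each generator maps an SSR matrix to an SSR matrix. The example matrices are assumed for illustration, and the floating-point sign checks are reliable only for such small integer-valued instances.

```python
import numpy as np
from itertools import combinations

def exchange_matrix(n):
    # The antidiagonal 0-1 matrix P_n of Definition 1.3.
    return np.fliplr(np.eye(n))

def is_ssr(A):
    # Brute force: for each size r, every r x r minor must be non-zero
    # and all minors of that size must share one sign.
    m, n = A.shape
    for r in range(1, min(m, n) + 1):
        signs = {int(np.sign(np.linalg.det(A[np.ix_(rows, cols)])))
                 for rows in combinations(range(m), r)
                 for cols in combinations(range(n), r)}
        if signs not in ({1}, {-1}):
            return False
    return True

A = np.array([[1.0, 2.0], [3.0, 7.0], [4.0, 11.0]])   # a 3 x 2 TP matrix
F = np.diag([1.0, 2.0, 0.5])                          # positive diagonals
E = np.diag([3.0, 1.0])
images = [F @ A @ E, -A, exchange_matrix(3) @ A, A @ exchange_matrix(2)]
print(is_ssr(A), all(is_ssr(T) for T in images))      # True True
# Transpose (type (e)) is omitted here since A is not square.
```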
**Remark 1.4**.: If \(m\neq n\) and either \(m=1\) or \(n=1\) in Theorem C, then the problem reduces to classifying linear \(\mathcal{SR}_{1}\)-preservers. In this case Theorem C still holds, but now the second statement is \(\mathcal{L}\in P(\mathcal{SR}_{1})\) in place of \(\mathcal{L}\in P(\mathcal{SR}_{2})\), and \(P_{m},P_{n}\) can be any permutation matrices instead of exchange matrices in the third statement. If \(m=n=2\), then there is only one \(2\times 2\) minor and since we are considering the set \(S\) of all sign patterns of SR matrices in Theorem C, the problem again reduces to classifying linear \(\mathcal{SR}_{1}\)-preservers. We consider this case in Theorem 3.10.
Theorem C guarantees that if \(\mathcal{L}\in P(\mathcal{SR})\) then \(A\in\mathcal{SR}\) if and only if \(\mathcal{L}(A)\in\mathcal{SR}\). However, it does not assure that the sign patterns of \(A\) and \(\mathcal{L}(A)\) will be identical. In our final main result, our objective is to characterize linear preservers for the class of SR(\(\epsilon\)) matrices for any size \(m\times n\) and any given sign pattern \(\epsilon\). Further, we show that it is again sufficient to study the linear preservers of \(\mathcal{SR}_{2}(\epsilon)\) in order to characterize linear \(\mathcal{SR}(\epsilon)\)-preservers.
**Theorem D**.: _Let \(\epsilon\) be a given sign pattern and \(\mathcal{L}:\mathbb{R}^{m\times n}\to\mathbb{R}^{m\times n}\) be a linear transformation, where \(m,n\geq 2\). Then the following statements are equivalent._
1. \(\mathcal{L}\) _maps the class of SR_\((\epsilon)\) _matrices onto itself._
2. \(\mathcal{L}\) _maps the class of SR_\({}_{2}(\epsilon)\) _matrices onto itself._
3. \(\mathcal{L}\) _is a composition of one or more of the following types of transformations:_ 1. \(A\mapsto FAE\)_, in which_ \(F_{m\times m}\) _and_ \(E_{n\times n}\) _are diagonal matrices with positive diagonal entries;_ 2. \(A\mapsto P_{m}AP_{n}\)_, where_ \(P_{m}\) _and_ \(P_{n}\) _are exchange matrices; and_ 3. \(A\mapsto A^{T}\)_, provided_ \(m=n\)_._
_Moreover, the theorem is also true if SR_\((\epsilon)\) _is replaced by SSR_\((\epsilon)\)_._
In particular, by taking \(m=n\) and \(\epsilon_{k}=1\) for all \(k\), Theorem D gives the linear preservers for the class of TP and TN matrices of order \(n\) as a special case which were characterized by Berman-Hershkowitz-Johnson [5].
**Remark 1.5**.: In both Theorems C and D, the linear preservers of \(\mathcal{SR}\) and \(\mathcal{SR}_{2}\) (respectively \(\mathcal{SR}(\epsilon)\) and \(\mathcal{SR}_{2}(\epsilon)\)) are automatically the set of all linear preservers of \(\mathcal{SR}_{k}\) (\(\mathcal{SR}_{k}(\epsilon)\)) for \(2\leq k\leq\min\{m,n\}\).
**Organization of the paper:** The remaining sections of the paper are devoted to proving our main results. In Section 2, we prove Theorems A and B. In fact, Theorem B uses a single test
vector whose coordinates alternate in sign. After proving Theorem B, we show that strict sign regularity with a given sign pattern can not be characterized by the variation diminishing property using test vectors from any open orthant other than the alternating bi-orthant - see Theorem 2.2. In Section 3 we prove Theorem C, which classifies all linear maps that preserve SR/SSR matrices. In the final section, we prove Theorem D.
## 2. Theorems A and B: Variation diminution for SSR
We begin by proving Theorems A and B for SSR matrices. In Section 2.1, we show a similar result to Theorem B for SR, which in particular strengthens Motzkin's result for each fixed sign pattern \(\epsilon\) by removing Motzkin's rank-hypothesis. For this section, we define \(\mathbf{d}_{p}:=(1,-1,\ldots,(-1)^{p-1})^{T}\in\mathbb{R}^{p},\) for any integer \(p\geq 1.\)
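Since the rest of this section manipulates \(S^{-}\) and \(S^{+}\) constantly, a small sketch of both counters may help. The \(S^{+}\) implementation below enumerates all sign assignments to zero entries, which is exponential in the number of zeros but fine for the hand-sized test vectors used here (the example inputs are assumed, illustrative data).

```python
import numpy as np

def s_minus(x):
    # S^-(x): sign changes after discarding all zero entries.
    s = [np.sign(v) for v in x if v != 0]
    return sum(1 for a, b in zip(s, s[1:]) if a != b)

def s_plus(x):
    # S^+(x): maximum number of sign changes over all ways of replacing
    # each zero entry by +1 or -1 (brute force over the zero entries).
    best = 0
    def rec(i, prev, changes):
        nonlocal best
        if i == len(x):
            best = max(best, changes)
            return
        options = (1.0, -1.0) if x[i] == 0 else (np.sign(x[i]),)
        for s in options:
            rec(i + 1, s, changes + (prev is not None and s != prev))
    rec(0, None, 0)
    return best

print(s_minus([1, 0, 1]), s_plus([1, 0, 1]))     # 0 2
print(s_minus([1, 0, -2]), s_plus([1, 0, -2]))   # 1 1
d4 = [1, -1, 1, -1]                              # the vector d_4 above
print(s_minus(d4), s_plus(d4))                   # 3 3
```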
Proof of Theorem A.: Let \(A\in\mathbb{R}^{m\times n}\) be such that \(S^{+}(A\mathbf{x})\leq S^{-}(\mathbf{x})\) for all \(\mathbf{0}\neq\mathbf{x}\in\mathbb{R}^{n}\). Our aim is to show all minors of the same size of \(A\) have the same sign. The proof is by induction on the size of the minors. First we show that all \(p\times p\) minors of \(A\) are non-zero for \(1\leq p\leq\min\{m,n\}\). Assume to the contrary that \(\det A\begin{pmatrix}i_{1},\cdots,i_{p}\\ j_{1},\ldots,j_{p}\end{pmatrix}=0,\) where (henceforth) \(1\leq i_{1}<\cdots<i_{p}\leq m\) index the rows of the minor and \(1\leq j_{1}<\cdots<j_{p}\leq n\) index the columns. Then there exists \(\mathbf{0}\neq\mathbf{z}\in\mathbb{R}^{p}\) such that \(A\begin{pmatrix}i_{1},\cdots,i_{p}\\ j_{1},\ldots,j_{p}\end{pmatrix}\mathbf{z}=\mathbf{0}.\) Define \(\mathbf{x}\in\mathbb{R}^{n}\) such that the \(j_{l}\) component of \(\mathbf{x}\) is \(z_{l}\) for \(l=1,\ldots,p,\) and all other components are zero. Then \(S^{-}(\mathbf{x})\leq p-1\) while \(S^{+}(A\mathbf{x})\geq p,\) a contradiction. This shows the claim.
Next we show that all entries of \(A\) are of the same sign. For each \(j\leq n,\)\(S^{-}(\mathbf{e}^{j})=0,\) and thus \(S^{+}(A\mathbf{e}^{j})=0.\) Therefore, all the elements of \(A\) present in the \(j^{th}\) column are non-zero and further they all are of the same sign. Now, we will show that no two columns of \(A\) have different signs. On the contrary, assume without loss of generality, that the \(i_{1}\) column of \(A\) is positive while the \(i_{2}\) column is negative. We can choose positive real numbers \(x_{i_{1}}\) and \(x_{i_{2}}\) such that at least one entry of the vector \(x_{i_{1}}\mathbf{a}^{i_{1}}+x_{i_{2}}\mathbf{a}^{i_{2}}\in\mathbb{R}^{m}\) is zero where \(\mathbf{a}^{i_{1}}\) and \(\mathbf{a}^{i_{2}}\) are the \(i_{1}\) and \(i_{2}\) columns of \(A,\) respectively. Take \(\mathbf{0}\neq\mathbf{x}\in\mathbb{R}^{n}\) with \(i_{1}\) and \(i_{2}\) entries as \(x_{i_{1}}\) and \(x_{i_{2}}\), respectively and all other entries zero. Then \(S^{-}(\mathbf{x})=0\) whereas \(S^{+}(A\mathbf{x})\geq 1\), a contradiction. Thus \(A\) is SSR\({}_{1}.\) Next, we assume that \(A\) is SSR\({}_{p-1}\) for \(1<p\leq\min\{m,n\}\). We will show that \(A\) is SSR\({}_{p}.\)
The proof strategy is broadly as follows: To prove all \(p\times p\) minors of \(A\) have the same sign, we will choose arbitrary \(p\) columns of \(A\), say \(1\leq j_{1}<\cdots<j_{p}\leq n\) and then show that all \(p\times p\) minors of submatrix of \(A\) included in rows \(1,\ldots,m\) and columns \(j_{1},\ldots,j_{p}\) have the same sign. To complete the proof, we will further show that the sign of all \(p\times p\) minors of \(A\) is independent of the particular choice of the columns \(j_{1},\ldots,j_{p}\) and depends only on \(p.\)
Fix \(1\leq j_{1}<\cdots<j_{p}\leq n\). We will first prove that \(\det A\begin{pmatrix}i_{1},\ldots,i_{p}\\ j_{1},\ldots,j_{p}\end{pmatrix}\) for all \(1\leq i_{1}<\cdots<i_{p}\leq m\) have the same sign. If \(p=m,\) the conclusion is clear; if \(p<m,\) it suffices to show this for the \((p+1)\) minors of size \(p\times p\) that can be formed from an arbitrary choice of \((p+1)\) rows, \(i_{1}<\cdots<i_{p+1}\) (say). Consider the following \(p\times p\) submatrices of \(A\) which are obtained by deleting the \(i_{l}^{th}\) row of \(A\begin{pmatrix}i_{1},\ldots,i_{p+1}\\ j_{1},\ldots,j_{p}\end{pmatrix}\) for \(l=1,\ldots,p+1\):
\[A_{l}=\begin{pmatrix}a_{i_{1}j_{1}}&a_{i_{1}j_{2}}&\dots&a_{i_{1}j_{p}}\\ \vdots&\vdots&\ddots&\vdots\\ a_{i_{l-1}j_{1}}&a_{i_{l-1}j_{2}}&\dots&a_{i_{l-1}j_{p}}\\ a_{i_{l+1}j_{1}}&a_{i_{l+1}j_{2}}&\dots&a_{i_{l+1}j_{p}}\\ \vdots&\vdots&\ddots&\vdots\\ a_{i_{p+1}j_{1}}&a_{i_{p+1}j_{2}}&\dots&a_{i_{p+1}j_{p}}\end{pmatrix}_{p\times p}.\]
We will show that \(\det A_{1}=\det A_{l}\) for \(l=2,\dots,p+1\). Fix \(l\neq 1\), the determinant of the \(p\times p\) matrix \(A_{l}\) is given by
\[\det A_{l}=a_{i_{1}j_{1}}A_{l}^{11}-a_{i_{1}j_{2}}A_{l}^{12}+\dots+(-1)^{p-1}a _{i_{1}j_{p}}A_{l}^{1p},\]
where \(A_{l}^{11},\dots,A_{l}^{1p}\) are non-zero \((p-1)\times(p-1)\) minors of \(A\) and hence by the induction hypothesis they all have the same sign. Define \(\mathbf{0}\neq\mathbf{x}\in\mathbb{R}^{n}\) such that the entries in positions \(j_{1},j_{2},\dots,j_{p}\) are \(A_{l}^{11},-A_{l}^{12},\dots,(-1)^{p-1}A_{l}^{1p}\), respectively and zero elsewhere. Therefore, \(S^{-}(\mathbf{x})=p-1\). Now, the first \((p+1)\) entries of the vector \(A\mathbf{x}\) are given by
\[\det A_{l},0,\dots,0,(-1)^{l}\det A_{1},0,\dots,0\]
where \((-1)^{l}\det A_{1}\) is present in the \(l^{th}\) position. Suppose that \(\det A_{1}\) and \(\det A_{l}\) are of opposite signs. Note that for a sequence to have alternate signs, the signs of its component in the first and odd positions should be the same, respectively the opposite in the first and even positions. Therefore by our construction, the elements of the vector \(A\mathbf{x}\) in positions \(1\) and \(l\) have different signs for \(l\) even and the same sign for \(l\) odd. Hence \(S^{+}(A\mathbf{x})\geq p\) which is not possible. Thus \(\det A_{1}=\det A_{l}\) for all \(l=2,\dots,p+1.\) Therefore, the minors
\[\det A\begin{pmatrix}i_{1},\dots,i_{p}\\ j_{1},\dots,j_{p}\end{pmatrix}\]
for all \(1\leq i_{1}<\dots<i_{p}\leq m\) have the same signs. We denote this common sign of the minors by \(\varepsilon(j_{1},\dots,j_{p})\). Note that if \(p=n\), then we are done. To complete the proof, we assume \(p<n\) and show that this sign is independent of the particular choice of \(p\) columns \(j_{1},\dots,j_{p}\) and depends only on \(p\). Let us take \((p+1)\) columns of \(A\), with \(1\leq j_{1}<\dots<j_{p+1}\leq n\), and let \(\varepsilon_{r}=\varepsilon(j_{1},\dots,j_{r-1},j_{r+1},\dots,j_{p+1}).\) It suffices to prove \(\varepsilon_{r}=\varepsilon_{r+1}\) for \(r=1,\dots,p\).
Define a vector \(\mathbf{x}\in\mathbb{R}^{n}\) whose coordinates in position \(j_{k}\) are given by
\[x_{j_{k}}:=(-1)^{k-1}\det A\begin{pmatrix}1,2,\dots&\dots p-1,p\\ j_{1},\dots,j_{k-1},j_{k+1},\dots,j_{p+1}\end{pmatrix}\]
for \(k=1,\dots,p+1\) and zero elsewhere. Therefore, we have
\[\sum_{k=1}^{p+1}a_{ij_{k}}x_{j_{k}}=0\quad\text{for}\;i=1,\dots,p\]
and hence \(S^{+}(A\mathbf{x})\geq p\). If \(\varepsilon_{r}\varepsilon_{r+1}<0\) for some \(r\), then \(x_{j_{r}}x_{j_{r+1}}>0\). Thus, \(S^{-}(\mathbf{x})\leq p-1\) which is a contradiction. Hence, \(A\) is SSR.
To prove the converse, let \(A\in\mathbb{R}^{m\times n}\) be SSR. Take a non-zero vector \(\mathbf{x}\in\mathbb{R}^{n}\) and assume that \(S^{-}(\mathbf{x})=r\leq n-1\). Therefore, we can partition \(\mathbf{x}\) into \((r+1)\) contiguous components such that no two components having different signs belong to the same partition:
\[(x_{1},\dots,x_{s_{1}}),\quad(x_{s_{1}+1},\dots,x_{s_{2}}),\quad\dots,\quad(x_ {s_{r}+1},\dots,x_{n}).\]
We assume without loss of generality that the sign of the non-zero elements in the \(k^{th}\) partition is given by \((-1)^{k-1}\). Also, note that there is at least one non-zero element in each partition, otherwise
\(S^{-}(\mathbf{x})<r\). We can write \(A\) as \(A=(\mathbf{a}^{1}|\ldots|\mathbf{a}^{n})\) where \(\mathbf{a}^{i}\in\mathbb{R}^{m}\) for all \(i=1,\ldots n\). Therefore,
\[A\mathbf{x}=\sum_{k=1}^{n}x_{k}\mathbf{a}^{k}=\sum_{i=1}^{r+1}(-1)^{i-1} \mathbf{y}^{i}\]
where
\[\mathbf{y}^{i}=\sum_{k=s_{i-1}+1}^{s_{i}}|x_{k}|\mathbf{a}^{k}\;\;\text{for}\; \;i=1,\ldots,r+1,\quad s_{0}=0\;\;\text{and}\;\;s_{r+1}=n.\]
Let \(Y:=(\mathbf{y}^{1}|\ldots|\mathbf{y}^{r+1})\in\mathbb{R}^{m\times(r+1)}\). Using basic properties of determinants, we have
\[\det Y\begin{pmatrix}i_{1},\ldots,i_{p}\\ j_{1},\ldots,j_{p}\end{pmatrix}=\sum_{k_{1}=s_{j_{1}-1}+1}^{s_{j_{1}}}\ldots \sum_{k_{p}=s_{jp-1}+1}^{s_{jp}}|x_{k_{1}}|\cdots|x_{k_{p}}|\det A\begin{pmatrix} i_{1},\ldots,i_{p}\\ k_{1},\ldots,k_{p}\end{pmatrix}.\]
Since \(A\) is SSR, the terms \(\det A\begin{pmatrix}i_{1},\ldots,i_{p}\\ k_{1},\ldots,k_{p}\end{pmatrix}\) have the same sign, for all choices of \(1\leq i_{1}<\cdots<i_{p}\leq m\), \(1\leq k_{1}<\cdots<k_{p}\leq n\); and \(x_{k_{1}}\cdots x_{k_{p}}\neq 0\) for some choice of admissible \(\{k_{1},\ldots,k_{p}\}\) in the above sum. Hence \(Y\) is SSR. Note that the minors of \(A\) and \(Y\) have the same sign pattern.
With the above analysis in hand, we will show that \(S^{+}(A\mathbf{x})\leq r\). Consider the following two cases.
**Case I.**\(m\leq r+1\).
If \(m=r+1\), then \(A\mathbf{x}\neq\mathbf{0}\), since \(A\mathbf{x}=Y\mathbf{d}_{r+1}\) and \(Y\) is SSR. Hence \(S^{+}(A\mathbf{x})\leq r\). Otherwise, \(m\leq r\). Then clearly \(S^{+}(A\mathbf{x})\leq m\leq S^{-}(\mathbf{x})\).
**Case II.**\(m>r+1\).
Define \(w:=A\mathbf{x}=Y\mathbf{d}_{r+1}\) and to the contrary assume that \(S^{+}(A\mathbf{x})\geq r+1\). Thus there exist indices \(1\leq i_{1}<\cdots<i_{r+2}\leq m\) and \(\theta\in\{1,-1\}\) such that \(\theta(-1)^{j-1}w_{i_{j}}\geq 0\) for \(j=1,\ldots,r+2\). Note that at least two of the \(w_{i_{j}}\) are non-zero since \(Y\) has rank \((r+1)\). Now, consider the following determinant
\[\det\begin{pmatrix}w_{i_{1}}&y_{i_{1}1}&\ldots&y_{i_{1}r+1}\\ w_{i_{2}}&y_{i_{2}1}&\ldots&y_{i_{2}r+1}\\ \vdots&\vdots&\ddots&\vdots\\ w_{i_{r+2}}&y_{i_{r+2}1}&\ldots&y_{i_{r+2}r+1}\end{pmatrix}.\]
This vanishes since the first column is an alternating sum of the rest. Expanding this determinant along the first column gives
\[0=\sum_{j=1}^{r+2}(-1)^{j-1}w_{i_{j}}\det Y\begin{pmatrix}i_{1},\ldots,i_{j-1},i_{j+1},\ldots,i_{r+2}\\ 1,2,\ldots&\ldots,r,r+1\end{pmatrix}.\]
But the right hand side is non-zero, since \(Y\) is SSR, \((-1)^{j-1}w_{i_{j}}\) has the same sign \(\theta\) for \(j=1,\ldots,r+2\), and not all \(w_{i_{j}}\) are zero. This contradiction implies that \(S^{+}(A\mathbf{x})\leq r=S^{-}(\mathbf{x})\).
To prove our next main result, we recall a 1968 seminal result by Karlin for strict sign regularity, first proved by Fekete (1912) for total positivity and later refined by Schoenberg (1955) to total positivity of order \(k\).
**Theorem 2.1** (Karlin [23]).: _Suppose \(1\leq k\leq m,n\) are integers and matrix \(A\in\mathbb{R}^{m\times n}\). Then \(A\) is SSR\({}_{k}\) if and only if all contiguous minors of order \(r\in\{1,\ldots,k\}\) have the same sign._
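Karlin's criterion cuts the number of minors to inspect dramatically: only contiguous minors are needed. A brief sketch follows; the floating-point sign checks are reliable only away from round-off, and the Gaussian-kernel example (which reappears at the end of this section) is an assumed illustration.

```python
import numpy as np

def is_ssr_k_contiguous(A, k):
    # Theorem 2.1 (Karlin): A is SSR_k iff, for each r = 1, ..., k, all
    # contiguous r x r minors of A are non-zero and share one sign.
    m, n = A.shape
    for r in range(1, k + 1):
        signs = {int(np.sign(np.linalg.det(A[i:i + r, j:j + r])))
                 for i in range(m - r + 1) for j in range(n - r + 1)}
        if signs not in ({1}, {-1}):
            return False
    return True

n, sigma = 5, 1.0
D = np.subtract.outer(np.arange(n), np.arange(n))
F = np.exp(-sigma * D ** 2)          # the TP Gaussian kernel used below
print(is_ssr_k_contiguous(F, n))     # True
```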
Proof of Theorem B.: We will show a cyclic chain of implications: \((1)\implies(2)\implies(3)\implies(1)\). In addition, we also show that \((2)\implies(1)\) (although it is not needed) in keeping with previous proofs of the equivalence of SSR and VD. This will also be the strategy in our proof of Theorem 2.5 below.
We begin by showing \((1)\Longrightarrow(2)\). If \(A\in\mathbb{R}^{m\times n}\) is \(\operatorname{SSR}(\epsilon)\), that \(S^{+}(A\mathbf{x})\leq S^{-}(\mathbf{x})\) for all \(\mathbf{0}\neq\mathbf{x}\in\mathbb{R}^{n}\) immediately follows from Theorem A. It only remains to show the second part of the assertion. For \(0\leq r\leq\min\{m,n\}-1\), let \(S^{+}(A\mathbf{x})=S^{-}(\mathbf{x})=r\) with \(A\mathbf{x}\neq\mathbf{0}\). To proceed, we adopt the notation in the second subcase of the preceding proof's converse part; note that now there exist \(r+1\) indices instead of \(r+2\). We claim that if \(\theta(-1)^{j-1}w_{i_{j}}\geq 0\) for \(j=1,\ldots,r+1\), then \(\theta=\epsilon_{r}\epsilon_{r+1}\). To show this, we use the following system of equations
\[Y\begin{pmatrix}i_{1},\ldots,i_{r+1}\\ 1,\ldots,r+1\end{pmatrix}\mathbf{d}_{r+1}=\begin{pmatrix}w_{i_{1}}\\ \vdots\\ w_{i_{r+1}}\end{pmatrix}. \tag{2.1}\]
Since \(Y\in\mathbb{R}^{m\times(r+1)}\) is \(\operatorname{SSR}\), every set of \((r+1)\) rows of \(Y\) is linearly independent. Using Cramer's rule to solve the system (2.1) for the first component gives
\[1=\frac{\det\begin{pmatrix}w_{i_{1}}&y_{i_{1}2}&\ldots&y_{i_{1}r+1}\\ w_{i_{2}}&y_{i_{2}2}&\ldots&y_{i_{2}r+1}\\ \vdots&\vdots&\ddots&\vdots\\ w_{i_{r+1}}&y_{i_{r+1}2}&\ldots&y_{i_{r+1}r+1}\end{pmatrix}}{\det Y\begin{pmatrix} i_{1},\ldots,i_{r+1}\\ 1,\ldots,r+1\end{pmatrix}}.\]
Expanding the numerator along the first column gives
\[1=\frac{\sum_{j=1}^{r+1}(-1)^{j-1}w_{i_{j}}\det Y\begin{pmatrix} i_{1},\ldots,i_{j-1},i_{j+1},\ldots,i_{r+1}\\ 2,3,\ldots&\ldots,r,r+1\end{pmatrix}}{\det Y\begin{pmatrix}i_{1},\ldots,i_{r+ 1}\\ 1,\ldots,r+1\end{pmatrix}}.\]
Note that \(\theta(-1)^{j-1}w_{i_{j}}\geq 0\) for \(i=1,\ldots,r+1\) and all \(p\times p\) minors of \(Y\) have sign \(\epsilon_{p}\) for \(p=1,\ldots,r+1\). Multiplying both sides of the above equation by \(\theta\), we have \(\theta=\epsilon_{r}\epsilon_{r+1}\).
We next show that \((2)\Longrightarrow(1)\). Our proof is by induction on the size of the minors. Let \(\mathbf{e}^{1},\ldots,\mathbf{e}^{n}\in\mathbb{R}^{n}\) denote the standard basis. Then \(S^{+}(A\mathbf{e}^{j})\leq S^{-}(\mathbf{e}^{j})=0\), so \(A\mathbf{e}^{j}\) can not have zero entries - in fact all entries have sign \(\epsilon_{1}\) by (2).
We next claim that all minors of \(A\) are non-zero. Indeed, suppose for contradiction that \(\det A\begin{pmatrix}i_{1},\ldots,i_{p}\\ j_{1},\ldots,j_{p}\end{pmatrix}=0\), where \(2\leq p\leq\min\{m,n\}\), \(1\leq i_{1}<\cdots<i_{p}\leq m\) and \(1\leq j_{1}<\cdots<j_{p}\leq n\). Then there exists \(\mathbf{0}\neq\mathbf{z}\in\mathbb{R}^{p}\) such that \(A\begin{pmatrix}i_{1},\ldots,i_{p}\\ j_{1},\ldots,j_{p}\end{pmatrix}\mathbf{z}=\mathbf{0}.\) Define \(\mathbf{x}\in\mathbb{R}^{n}\) such that the \(j_{l}^{th}\) component of \(\mathbf{x}\) is \(z_{l}\) for \(l=1,\ldots,p\), and all other components are zero. Then \(S^{-}(\mathbf{x})\leq p-1\) while \(S^{+}(A\mathbf{x})\geq p\), a contradiction. Thus all minors of \(A\) are non-zero. Finally, we claim by induction on \(1\leq p\leq\min\{m,n\}\) that \(A\) is \(\operatorname{SSR}_{p}\) with sign pattern \(\epsilon_{1},\ldots,\epsilon_{p}\), with the base case \(p=1\) shown above. For the induction step, suppose \(1\leq i_{1}<\cdots<i_{p}\leq m\) and \(1\leq j_{1}<\cdots<j_{p}\leq n\) as above. Since \(A\begin{pmatrix}i_{1},\ldots,i_{p}\\ j_{1},\ldots,j_{p}\end{pmatrix}\) is non-singular, there exists \(\mathbf{z}=(z_{1},\ldots,z_{p})\in\mathbb{R}^{p}\) such that
\[A\begin{pmatrix}i_{1},\ldots,i_{p}\\ j_{1},\ldots,j_{p}\end{pmatrix}\mathbf{z}=\mathbf{d}_{p}. \tag{2.2}\]
Again, extend \(\mathbf{z}\) to \(\mathbf{x}\in\mathbb{R}^{n}\) by embedding in positions \(j_{1},\ldots,j_{p}\) and padding by zeros elsewhere. Then \(p-1\leq S^{-}(A\mathbf{x})\leq S^{+}(A\mathbf{x})\leq S^{-}(\mathbf{x})\leq p-1\). It follows that \(S^{+}(A\mathbf{x})=S^{-}(\mathbf{x})=p-1\). From this we conclude: (i) the coordinates of \(\mathbf{z}\) alternate in sign; (ii) all coordinates of \(A\mathbf{x}\) in positions
\(1,\ldots,i_{1}\) are positive; and (iii) \(\epsilon_{p-1}\epsilon_{p}z_{1}>0\) by the hypothesis. We now return to equation (2.2) and solve for \(z_{1}\) using Cramer's rule to obtain
\[z_{1}=\frac{\sum_{l=1}^{p}\det A\begin{pmatrix}i_{1},\ldots,i_{l-1},i_{l+1} \ldots,i_{p}\\ j_{2},j_{3},\ldots\ \ldots,j_{p-1},j_{p}\end{pmatrix}}{\det A\begin{pmatrix}i_{1}, \ldots,i_{p}\\ j_{1},\ldots,j_{p}\end{pmatrix}}.\]
Multiplying both sides by \(\epsilon_{p-1}\epsilon_{p}\), we obtain
\[0<\epsilon_{p-1}\epsilon_{p}z_{1}=\epsilon_{p-1}\epsilon_{p}\frac{\sum_{l=1}^ {p}\det A\begin{pmatrix}i_{1},\ldots,i_{l-1},i_{l+1}\ldots,i_{p}\\ j_{2},j_{3},\ldots\ \ldots,j_{p-1},j_{p}\end{pmatrix}}{\det A\begin{pmatrix}i_{1}, \ldots,i_{p}\\ j_{1},\ldots,j_{p}\end{pmatrix}}.\]
The sign of the numerator is \(\epsilon_{p-1}\) by the induction hypothesis, and hence the sign of \(\det A\begin{pmatrix}i_{1},\ldots,i_{p}\\ j_{1},\ldots,j_{p}\end{pmatrix}\) is \(\epsilon_{p}\). This concludes \((2)\Longrightarrow(1)\).
Now we will show that \((2)\Longrightarrow(3)\), where \(A_{k}\) is not necessarily required to be a contiguous submatrix of \(A\) and \(\mathbf{0}\neq\mathbf{x}^{A_{k}}\in\mathbb{R}^{k}\) can be arbitrary. Let \(A_{k}=A\begin{pmatrix}i_{1},\ldots,i_{k}\\ j_{1},\ldots,j_{k}\end{pmatrix}\), where \(1\leq i_{1}<\cdots<i_{k}\leq m\) and \(1\leq j_{1}<\cdots<j_{k}\leq n\). Define \(\mathbf{x}\in\mathbb{R}^{n}\) whose coordinates are \(x_{l}^{A_{k}}\) at position \(j_{l}\) for \(l=1,\ldots,k\) and zero elsewhere. By the hypothesis, we have
\[S^{+}(A_{k}\mathbf{x}^{A_{k}})\leq S^{+}(A\mathbf{x})\leq S^{-}(\mathbf{x})=S ^{-}(\mathbf{x}^{A_{k}})\quad\implies\quad S^{+}(A_{k}\mathbf{x}^{A_{k}})\leq S ^{-}(\mathbf{x}^{A_{k}}).\]
Now suppose that \(S^{+}(A_{k}\mathbf{x}^{A_{k}})=S^{-}(\mathbf{x}^{A_{k}})=r\) where \(0\leq r\leq k-1\). Then \(A_{k}\mathbf{x}^{A_{k}}\neq\mathbf{0}\) and
\[S^{+}(A\mathbf{x})=S^{-}(\mathbf{x})=r.\]
Since \(A_{k}\mathbf{x}^{A_{k}}\neq\mathbf{0}\) is a subvector of \(A\mathbf{x}\), we have \(A\mathbf{x}\neq\mathbf{0}\). Also, since \(S^{+}(A\mathbf{x})=S^{+}(A_{k}\mathbf{x}^{A_{k}})\), by considering any \(S^{+}\)-filling of \(A\mathbf{x}\) we see that all coordinates of \(A\mathbf{x}\) in positions \(1,\ldots,i_{1}\) (respectively \(i_{k},\ldots,m\)) have the same sign. Note that the first non-zero component of \(\mathbf{x}^{A_{k}}\) is the same as that of \(\mathbf{x}\). Since \(S^{+}(A\mathbf{x})=S^{-}(\mathbf{x})\) and \(A\mathbf{x}\neq\mathbf{0}\), by (2) the sign of the first (last) component of \(A_{k}\mathbf{x}^{A_{k}}\) agrees with \(\epsilon_{r}\epsilon_{r+1}\) times the sign of the first (last) non-zero component of \(\mathbf{x}^{A_{k}}\).
To complete the proof, we show that \((3)\Longrightarrow(1)\). By Karlin's Theorem 2.1, it suffices to show that the sign of \(\det A_{r}\) is \(\epsilon_{r}\) for all \(r\times r\) contiguous submatrices \(A_{r}\) of \(A\), where \(1\leq r\leq\min\{m,n\}\). We prove this by induction on \(r\). If \(r=1\), then \(\operatorname{adj}A_{1}=(1)_{1\times 1}\) and
\[0\leq S^{+}(A_{1}x^{A_{1}})\leq S^{-}(x^{A_{1}})=0.\]
Thus \(A_{1}x^{A_{1}}\neq 0\) and by the hypothesis the sign of \(A_{1}x^{A_{1}}\) is \(\epsilon_{1}\) times the sign of \(x^{A_{1}}\). Hence all the components of \(A\) have sign \(\epsilon_{1}\).
For the induction step fix \(1\leq r\leq\min\{m,n\}\), and suppose that all contiguous \(p\times p\) minors of \(A\) have signs \(\epsilon_{p}\) for \(1\leq p\leq r-1\). Let \(A_{r}\) be an \(r\times r\) contiguous submatrix of \(A\). By Theorem 2.1, \(A_{r}\) is \(\operatorname{SSR}_{r-1}\). Therefore, all entries of \(\operatorname{adj}A_{r}\) are non-zero and have a checkerboard sign pattern. Now, define a vector \(\mathbf{x}^{A_{r}}:=\operatorname{adj}A_{r}\mathbf{v}\), as in (1.1). Note that all entries of \(\mathbf{x}^{A_{r}}\) are non-zero with alternating signs. We first show that \(A_{r}\) is non-singular. If \(A_{r}\) is singular, then
\[A_{r}\mathbf{x}^{A_{r}}=(\det A_{r})\mathbf{v}=\mathbf{0}\in\mathbb{R}^{r} \implies S^{+}(A_{r}\mathbf{x}^{A_{r}})=r>r-1=S^{-}(\mathbf{x}^{A_{r}}),\]
a contradiction. Thus, \(A_{r}\) is invertible.
Next, we show that \(\det A_{r}\) has sign \(\epsilon_{r}\). Since \(A_{r}\mathbf{x}^{A_{r}}=(\det A_{r})\mathbf{v}\), we have
\[r-1=S^{-}(\mathbf{x}^{A_{r}})\geq S^{+}(A_{r}\mathbf{x}^{A_{r}})=S^{+}((\det A _{r})\mathbf{v}).\]
Now, note that even if some entries of the vector \(\mathbf{v}\) are zero, the conditions on the \(\alpha_{j}\) imply that \(\mathbf{v}\) can be \(S^{+}\)-completed to a vector with all non-zero entries and alternating signs. In particular, \(S^{+}((\det A_{r})\mathbf{v})=r-1\). Thus, we have
\[S^{+}(A_{r}\mathbf{x}^{A_{r}})=S^{-}(\mathbf{x}^{A_{r}})=r-1.\]
Since the sign of the first component of \(A_{r}\mathbf{x}^{A_{r}}\) (or \(S^{+}\)-completion of \(A_{r}\mathbf{x}^{A_{r}}\)) is \(\operatorname{sign}(\alpha_{1}\det A_{r})\), therefore by (3), \(\operatorname{sign}(\alpha_{1}\det A_{r})\) and the first non-zero component of \(\mathbf{x}^{A_{r}}\) bear the following relation:
\[\operatorname{sign}(\alpha_{1}\det A_{r})=\epsilon_{r-1}\epsilon_{r} \operatorname{sign}(\sum_{j=1}^{r}\alpha_{j}A_{r}^{j1}),\]
where \(A_{r}^{ij}\) denotes the determinant of the \((r-1)\times(r-1)\) submatrix of \(A_{r}\) formed by deleting the \(i^{th}\) row and \(j^{th}\) column of \(A_{r}\). Observe that the sign of each summand on the right of the above equation is \(\epsilon_{r-1}\), since \(\operatorname{sign}(A_{r}^{j1})=\epsilon_{r-1}\) for \(j=1,\ldots,r\), \(\alpha_{j}\geq 0\) for \(j=1,\ldots,r\), and not all \(\alpha_{j}\) are zero. Hence \(\operatorname{sign}(\det A_{r})=\epsilon_{r}\). This completes our proof.
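As a quick sanity check of the single-vector test (1.1), one can compute \(\mathbf{x}^{A_{r}}=\operatorname{adj}(A_{r})\mathbf{v}\) for a small SSR block and confirm the sign-change pattern; `s_minus` and `s_plus` are the counters from the earlier sketch, and the example matrix is an assumed toy instance.

```python
import numpy as np

A2 = np.array([[1.0, 2.0], [3.0, 7.0]])         # entries > 0, det = 1: SSR
adj = np.linalg.inv(A2) * np.linalg.det(A2)     # adj(A2) = det(A2) * A2^{-1}
v = np.array([1.0, -1.0])                       # alternating test vector
x = adj @ v                                     # x^{A_2} as in (1.1)
y = A2 @ x                                      # equals det(A2) * v
print(x, y)                                     # [ 9. -4.]  [ 1. -1.]
print(s_minus(x), s_plus(y))                    # 1 1, matching the proof
```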
From part (3) of Theorem B, we have seen that given an \(m\times n\) matrix \(A\) and a sign pattern \(\epsilon\), the variation diminishing (VD) property at a single test vector, which is drawn from the alternating bi-orthant for each contiguous square submatrix of \(A\), suffices to show the strict sign regularity of \(A\) with sign pattern \(\epsilon\). Our next result shows that Theorem B is the "best possible" in the following sense: any \(n\times n\) singular \(\text{SSR}_{n-1}\) matrix also satisfies the VD property on every vector in \(\mathbb{R}^{n}\) outside the alternating bi-orthant. Thus a characterization of strict sign regularity with a given sign pattern in terms of the VD property can not hold with test vectors in any open bi-orthant other than the alternating bi-orthant. The proof is similar to (1) \(\implies\) (2) of Theorem B.
**Theorem 2.2**.: _Suppose all coordinates of the vector \(\mathbf{x}\in\mathbb{R}^{n}\) are non-zero and at least two successive coordinates have the same sign. Let \(A\in\mathbb{R}^{n\times n}\) be \(\text{SSR}_{n-1}\). Then \(A\) satisfies \(S^{+}(A\mathbf{x})\leq S^{-}(\mathbf{x})\). Further, for all integers \(0\leq r\leq n-1\) and \(\mathbf{x}\in\mathbb{R}^{n}\) with \(S^{-}(\mathbf{x})=r\), if \(S^{+}(A\mathbf{x})=S^{-}(\mathbf{x})=r\), then the first (last) component of \(A\mathbf{x}\) (if zero, the sign given in determining \(S^{+}(A\mathbf{x})\)) has the sign same as \(\epsilon_{r}\epsilon_{r+1}\) times the sign of the first (last) non-zero component of \(\mathbf{x}\)._
It remains to show that for each positive integer \(n\), we can always construct a real \(n\times n\) matrix \(A\) such that \(A\) is \(\text{SSR}_{n-1}\) while \(\det A=0.\) This follows from a result of Gantmacher-Krein (see Theorem 2.3 below). The steps to construct such a matrix are given as follows:
1. Take any \(B\in\mathbb{R}^{(n-1)\times(n-1)}\) such that \(B\) is SSR. Clearly, rank \(B=n-1\).
2. Let \(B^{\prime}:=B\oplus\{0\}\in\mathbb{R}^{n\times n}\). Therefore, \(B^{\prime}\) is an \(n\times n\) SR matrix whose rank is \((n-1)\).
3. Define \(A:=F_{\sigma}^{(n)}B^{\prime}F_{\sigma}^{(n)}\) where \(F_{\sigma}^{(n)}=(e^{-\sigma(i-j)^{2}})_{i,j=1}^{n}\) for \(\sigma>0\) is TP. Therefore, \(A\) is \(\text{SSR}_{n-1}\) with rank \((n-1)\) and hence \(\det A=0\). A numerical sketch of these steps follows.
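A minimal numerical sketch of the three steps above, using the Gaussian kernel itself as the concrete SSR choice for \(B\) in step 1 (any \((n-1)\times(n-1)\) SSR matrix would do; function names are illustrative):

```python
import numpy as np

def gaussian_kernel(n, sigma=1.0):
    # F_sigma^(n) = (e^{-sigma (i-j)^2}), a totally positive matrix.
    D = np.subtract.outer(np.arange(n), np.arange(n))
    return np.exp(-sigma * D ** 2)

def singular_ssr_one_less(n, sigma=1.0):
    B = gaussian_kernel(n - 1, sigma)      # step 1: B is SSR (indeed TP)
    Bp = np.zeros((n, n))
    Bp[:n - 1, :n - 1] = B                 # step 2: B' = B ⊕ (0), rank n - 1
    F = gaussian_kernel(n, sigma)
    return F @ Bp @ F                      # step 3: SSR_{n-1} with det A = 0

A = singular_ssr_one_less(4)
print(np.linalg.matrix_rank(A))            # 3, so A is singular
# is_ssr_k_contiguous(A, 3) from the earlier sketch returns True here.
```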
### Variation Diminution for Sign Regular Matrices
We conclude this section by characterizing all \(m\times n\) sign regular matrices with a given sign pattern using the variation diminishing property. The proof requires a density theorem proved by Gantmacher-Krein in 1950 using the total positivity of the Gaussian kernel.
**Theorem 2.3** (Gantmacher-Krein [17]).: _Let \(m,n\geq k\geq 1\) be integers. Given a sign pattern \(\epsilon=(\epsilon_{1},\ldots,\epsilon_{k})\), the set of \(m\times n\)\(\text{SSR}(\epsilon)\) matrices of order \(k\) is dense in the set of \(m\times n\)\(\text{SR}(\epsilon)\) matrices of order \(k\)._
The following basic lemma on sign changes of limits of vectors will also be needed in proving the theorem that follows it.
**Lemma 2.4**.: _[_35_, Lemma 3.2]_ _If \(\lim_{k\to\infty}\mathbf{x}_{k}=\mathbf{x}\neq\mathbf{0}\in\mathbb{R}^{n}\), then_
\[\varliminf_{k\to\infty}S^{-}(\mathbf{x}_{k})\geq S^{-}(\mathbf{x})\;\;\text{and}\;\;\varlimsup_{k\to\infty}S^{+}(\mathbf{x}_{k})\leq S^{+}(\mathbf{x}).\]
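Since the counts \(S^{-}\) and \(S^{+}\) are used throughout this section, the following small self-contained sketch (ours, assuming the standard definitions) may help fix ideas:

```python
# A small self-contained sketch (ours, assuming the standard definitions) of
# the two sign-change counts: S^-(x) discards zero entries, while S^+(x)
# maximizes over all ways of replacing each zero entry by +1 or -1.
from itertools import product

def s_minus(x):
    signs = [1 if v > 0 else -1 for v in x if v != 0]
    return sum(a * b < 0 for a, b in zip(signs, signs[1:]))

def s_plus(x):
    zeros = [i for i, v in enumerate(x) if v == 0]
    best = 0
    for choice in product((1.0, -1.0), repeat=len(zeros)):
        y = list(x)
        for i, s in zip(zeros, choice):
            y[i] = s
        best = max(best, s_minus(y))
    return best

x = [1.0, 0.0, 2.0, -1.0]
print(s_minus(x), s_plus(x))  # 1 3: a zero between like-signed
                              # neighbours raises the count by 2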
**Theorem 2.5**.: _Given \(A\in\mathbb{R}^{m\times n}\) and \(\epsilon=(\epsilon_{1},\ldots,\epsilon_{\min\{m,n\}})\), the following are equivalent._
1. \(A\) _is_ \(\text{SR}(\epsilon)\)_._
2. _For all_ \(\mathbf{x}\in\mathbb{R}^{n}\)_, we have_ \(S^{-}(A\mathbf{x})\leq S^{-}(\mathbf{x})\)_. Moreover, for all integers_ \(0\leq r\leq\min\{m,n\}-1\) _and_ \(\mathbf{x}\in\mathbb{R}^{n}\) _with_ \(S^{-}(\mathbf{x})=r\)_, if_ \(S^{-}(A\mathbf{x})=S^{-}(\mathbf{x})\) _and_ \(A\mathbf{x}\neq\mathbf{0}\)_, then the sign of the first (last) non-zero component of_ \(A\mathbf{x}\) _agrees with_ \(\epsilon_{r}\epsilon_{r+1}\) _times the sign of the first (last) non-zero component of_ \(\mathbf{x}\)_._
3. _For every square submatrix_ \(A_{k}\) _of_ \(A\) _of size_ \(k\times k\)_, where_ \(1\leq k\leq\min\{m,n\}\)_, and given any fixed vector_ \(\boldsymbol{\alpha}:=(\alpha_{1},-\alpha_{2},\ldots,(-1)^{k-1}\alpha_{k})^{T} \in\mathbb{R}^{k}\) _with all_ \(\alpha_{j}>0\)_, we define the vector_ \[\mathbf{y}^{A_{k}}:=\operatorname{adj}(A_{k})\boldsymbol{\alpha}.\] _Then_ \(S^{-}(A_{k}\mathbf{y}^{A_{k}})\leq S^{-}(\mathbf{y}^{A_{k}})\)_. Moreover, if_ \(S^{-}(A_{k}\mathbf{y}^{A_{k}})=S^{-}(\mathbf{y}^{A_{k}})=r\)_, where_ \(0\leq r\leq k-1\) _and_ \(A_{k}\mathbf{y}^{A_{k}}\neq\mathbf{0}\)_, then the sign of the first (last) non-zero component of_ \(A_{k}\mathbf{y}^{A_{k}}\) _agrees with_ \(\epsilon_{r}\epsilon_{r+1}\) _times the sign of the first (last) non-zero component of_ \(\mathbf{y}^{A_{k}}\)_._
Proof.: We first show that \((1)\Longrightarrow(2)\). Since \(A\in\mathbb{R}^{m\times n}\) is \(\text{SR}(\epsilon)\), by the Gantmacher-Krein Theorem 2.3, there exists a sequence of \(m\times n\)\(\text{SSR}(\epsilon)\) matrices \((A_{k})\) converging entrywise to \(A\). For \(\mathbf{0}\neq\mathbf{x}\in\mathbb{R}^{n}\), using Theorem B and Lemma 2.4, we have
\[S^{-}(A\mathbf{x})\leq\varliminf_{k\to\infty}S^{-}(A_{k}\mathbf{x})\leq \varliminf_{k\to\infty}S^{+}(A_{k}\mathbf{x})\leq\varliminf_{k\to\infty}S^{-} (\mathbf{x})=S^{-}(\mathbf{x}).\]
If \(S^{-}(A\mathbf{x})=S^{-}(\mathbf{x})=r\) where \(0\leq r\leq\min\{m,n\}-1\) and \(A\mathbf{x}\neq\mathbf{0}\), then for all \(k\) sufficiently large we necessarily have
\[r=S^{-}(A\mathbf{x})\leq S^{-}(A_{k}\mathbf{x})\leq S^{+}(A_{k}\mathbf{x}) \leq S^{-}(\mathbf{x})=r\]
using Theorem B and Lemma 2.4. Therefore, for large \(k\), \(S^{-}(A_{k}\mathbf{x})=S^{+}(A_{k}\mathbf{x})\), i.e., the sign pattern of \(A_{k}\mathbf{x}\) does not depend on any zero entries. Since \(S^{+}(A_{k}\mathbf{x})=S^{-}(\mathbf{x})\) and \(A_{k}\mathbf{x}\neq\mathbf{0}\), the sign pattern of \(A_{k}\mathbf{x}\) agrees with \(\epsilon_{r}\epsilon_{r+1}\) times the sign of \(\mathbf{x}\) by Theorem B. By a limiting argument, the non-zero sign pattern of \(A\mathbf{x}\) agrees with that of \(A_{k}\mathbf{x}\) for all large \(k\), which proves the claim.
That \((2)\Longrightarrow(1)\) is shown similarly to the proof of Theorem B by induction on the size \(p\times p\), where \(1\leq p\leq\min\{m,n\}\). Again observe that \(S^{-}(A\mathbf{e}^{j})\leq S^{-}(\mathbf{e}^{j})=0\) for \(1\leq j\leq n\). Since the first non-zero component of \(\mathbf{e}^{j}\) is positive, by the hypothesis all non-zero components of \(A\mathbf{e}^{j}\) have sign \(\epsilon_{1}\) and hence all non-zero entries of \(A\) are of sign \(\epsilon_{1}\).
We now assume that all \((p-1)\times(p-1)\) minors of \(A\) have sign \(\epsilon_{p-1}\), where \(2\leq p\leq\min\{m,n\}\). Consider the \(p\times p\) submatrix of \(A\) indexed by rows \(1\leq i_{1}<\cdots<i_{p}\leq m\) and columns \(1\leq j_{1}<\cdots<j_{p}\leq n\). If the determinant of this submatrix is zero, then we are done. Therefore, assume that this minor is non-zero and hence there exists \(\mathbf{z}=(z_{1},\ldots,z_{p})^{T}\in\mathbb{R}^{p}\) satisfying
\[A\begin{pmatrix}i_{1},\ldots,i_{p}\\ j_{1},\ldots,j_{p}\end{pmatrix}\mathbf{z}=\mathbf{d}_{p}.\]
By repeating the corresponding part of the proof of Theorem B, with \(\mathbf{x}\in\mathbb{R}^{n}\) the vector whose coordinate in position \(j_{l}\) is \(z_{l}\) for \(l=1,\ldots,p\) and zero elsewhere, we have \(S^{-}(A\mathbf{x})=S^{-}(\mathbf{x})=p-1\) and hence \(A\mathbf{x}\neq\mathbf{0}\), which further implies \(\epsilon_{p-1}\epsilon_{p}z_{1}>0\). Now by Cramer's rule,
\[z_{1}=\frac{\sum_{l=1}^{p}\det A\begin{pmatrix}i_{1},\ldots,i_{l-1},i_{l+1},\ldots,i_{p}\\ j_{2},j_{3},\ldots,j_{p-1},j_{p}\end{pmatrix}}{\det A\begin{pmatrix}i_{1},\ldots,i_{p}\\ j_{1},\ldots,j_{p}\end{pmatrix}}.\]
Multiplying both sides by \(\epsilon_{p-1}\epsilon_{p}\), we obtain
\[0<\epsilon_{p-1}\epsilon_{p}z_{1}=\epsilon_{p-1}\epsilon_{p}\frac{\sum_{l=1}^{p}\det A\begin{pmatrix}i_{1},\ldots,i_{l-1},i_{l+1},\ldots,i_{p}\\ j_{2},j_{3},\ldots,j_{p-1},j_{p}\end{pmatrix}}{\det A\begin{pmatrix}i_{1},\ldots,i_{p}\\ j_{1},\ldots,j_{p}\end{pmatrix}}.\]
By the induction hypothesis, the sign of the numerator is \(\epsilon_{p-1}\). Thus the sign of \(\det A\begin{pmatrix}i_{1},\ldots,i_{p}\\ j_{1},\ldots,j_{p}\end{pmatrix}\) is \(\epsilon_{p}\). This completes the induction step.
Now we show that \((2)\Longrightarrow(3)\); in fact, the argument below works for an arbitrary vector \(\mathbf{0}\neq\mathbf{y}^{A_{k}}\in\mathbb{R}^{k}\), not only for \(\mathbf{y}^{A_{k}}=\operatorname{adj}(A_{k})\boldsymbol{\alpha}\). The proof is similar to that of Theorem B. Fix \(1\leq k\leq\min\{m,n\}\), and let \(A_{k}\) be an arbitrary \(k\times k\) submatrix of \(A\) whose rows and columns are indexed by \(1\leq i_{1}<\cdots<i_{k}\leq m\) and \(1\leq j_{1}<\cdots<j_{k}\leq n\), respectively. Let us take \(\mathbf{x}\in\mathbb{R}^{n}\) whose coordinates are \(y_{l}^{A_{k}}\) at position \(j_{l}\) for \(l=1,\ldots,k\) and zero elsewhere. Then
\[S^{-}(A_{k}\mathbf{y}^{A_{k}})\leq S^{-}(A\mathbf{x})\leq S^{-}(\mathbf{x})= S^{-}(\mathbf{y}^{A_{k}}).\]
Now suppose \(S^{-}(A_{k}\mathbf{y}^{A_{k}})=S^{-}(\mathbf{y}^{A_{k}})=r\) where \(0\leq r\leq k-1\) with \(A_{k}\mathbf{y}^{A_{k}}\neq\mathbf{0}\). Then
\[S^{-}(A_{k}\mathbf{y}^{A_{k}})=S^{-}(\mathbf{y}^{A_{k}})=S^{-}(\mathbf{x})=S^ {-}(A\mathbf{x})=r\ \,\,\text{and}\,\,\,A\mathbf{x}\neq\mathbf{0}.\]
Assume without loss of generality that the first and the last non-zero entries of \(A_{k}\mathbf{y}^{A_{k}}\) are in positions \(s,t\in\{1,\ldots,k\}\), respectively. Since \(S^{-}(A\mathbf{x})=S^{-}(A_{k}\mathbf{y}^{A_{k}})=r\) and \(A_{k}\mathbf{y}^{A_{k}}\) is a subvector of \(A\mathbf{x}\), all non-zero coordinates of \(A\mathbf{x}\) in positions \(1,2,\ldots,i_{s}\) (respectively \(i_{t},i_{t}+1,\ldots,m\)) have the same sign, which agrees with \(\epsilon_{r}\epsilon_{r+1}\) times the sign of the first (respectively last) non-zero component of \(\mathbf{x}\), that is, the sign of the first (respectively last) non-zero component of \(\mathbf{y}^{A_{k}}\).
Finally we show that \((3)\Longrightarrow(1)\). We show by induction on \(r\) that the sign of \(\det A_{r}\) is \(\epsilon_{r}\) for all \(r\times r\) non-singular submatrices of \(A\), where \(1\leq r\leq\min\{m,n\}\). For the base case \(r=1\), if \(A_{1}=(0)_{1\times 1}\), there is nothing to prove. Otherwise, \(\operatorname{adj}(A_{1})=(1)_{1\times 1}\) which further implies \(y^{A_{1}}=(\alpha_{1})_{1\times 1}\), where \(\alpha_{1}>0\). Since \(S^{-}(A_{1}y^{A_{1}})=S^{-}(y^{A_{1}})=0\) and \(A_{1}y^{A_{1}}\neq 0\), by the hypothesis the sign of \(A_{1}y^{A_{1}}\) is \(\epsilon_{1}\) times the sign of \(y^{A_{1}}\). Therefore, all non-zero components of \(A\) have sign \(\epsilon_{1}\).
For the induction step, let \(A_{r}\) be an \(r\times r\) submatrix of \(A\), where \(A\) is \(\operatorname{SR}_{r-1}\) with sign pattern \((\epsilon_{1},\ldots,\epsilon_{r-1})\) by the induction hypothesis. If \(\det A_{r}=0\), we are done; else we assume \(\det A_{r}\neq 0\). Let
\[\boldsymbol{\alpha}=(\alpha_{1},-\alpha_{2},\ldots,(-1)^{r-1}\alpha_{r})^{T} \in\mathbb{R}^{r}\,\,\,\text{with}\,\,\,\text{all}\,\,\,\alpha_{j}>0\]
and define
\[\mathbf{y}^{A_{r}}:=\operatorname{adj}(A_{r})\boldsymbol{\alpha}.\]
Note that no row of \(\operatorname{adj}(A_{r})\) is zero as \(\det A_{r}\neq 0\). Also, since \(A_{r}\) is \(\operatorname{SR}_{r-1}\) with sign pattern \(\epsilon=(\epsilon_{1},\ldots,\epsilon_{r-1})\), \(\operatorname{adj}A_{r}\) has entries whose signs are in a checkerboard pattern. Further, as all coordinates of \(\boldsymbol{\alpha}\) are non-zero with alternating signs, it follows that \(S^{-}(\mathbf{y}^{A_{r}})=r-1\). Since \(\mathbf{0}\neq A_{r}\mathbf{y}^{A_{r}}=(\det A_{r})\boldsymbol{\alpha}\), we have
\[S^{-}(A_{r}\mathbf{y}^{A_{r}})=S^{-}(\mathbf{y}^{A_{r}})=r-1.\]
By the hypothesis, the sign of the first non-zero entry of \(A_{r}\mathbf{y}^{A_{r}}\) and the first non-zero entry of \(\mathbf{y}^{A_{r}}\) have the following relation:
\[\operatorname{sign}(\alpha_{1}\det A_{r})=\epsilon_{r-1}\epsilon_{r}\operatorname{sign}\bigg(\sum_{j=1}^{r}\alpha_{j}A_{r}^{j1}\bigg).\]
The sum on the right has sign \(\epsilon_{r-1}\): each summand has sign \(\epsilon_{r-1}\) or vanishes, and not all vanish, since otherwise expanding \(\det A_{r}\) along its first column would give \(\det A_{r}=0\). As \(\alpha_{1}>0\), it follows that \(\operatorname{sign}(\det A_{r})=\epsilon_{r-1}\epsilon_{r}\cdot\epsilon_{r-1}=\epsilon_{r}\). This completes our proof.
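To illustrate the finite test in part (3) of Theorem 2.5, here is a small numerical sketch (ours, assuming numpy): for a totally positive matrix \(A\) (hence SR with all \(\epsilon_{i}=+1\)), the test vector \(\mathbf{y}^{A}=\operatorname{adj}(A)\boldsymbol{\alpha}\) built from an alternating \(\boldsymbol{\alpha}\) satisfies \(S^{-}(A\mathbf{y}^{A})\leq S^{-}(\mathbf{y}^{A})\).

```python
# A small numerical sketch (ours, assuming numpy) of the finite test in part
# (3) of Theorem 2.5: for a square matrix A and an alternating vector alpha,
# the single test vector y = adj(A) @ alpha satisfies S^-(A y) <= S^-(y).
import numpy as np

def s_minus(x, tol=1e-12):
    signs = [np.sign(v) for v in x if abs(v) > tol]
    return sum(a * b < 0 for a, b in zip(signs, signs[1:]))

def adjugate(A):
    k = A.shape[0]
    C = np.zeros_like(A)
    for i in range(k):
        for j in range(k):
            M = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(M)
    return C.T  # the adjugate is the transpose of the cofactor matrix

A = np.array([[1.0, 1.0, 1.0],      # a Vandermonde matrix with nodes 1 < 2 < 3;
              [1.0, 2.0, 4.0],      # such matrices are totally positive, hence
              [1.0, 3.0, 9.0]])     # SR with all epsilon_i = +1
alpha = np.array([1.0, -1.0, 1.0])  # alternating test vector
y = adjugate(A) @ alpha
assert s_minus(A @ y) <= s_minus(y)
print(s_minus(y), s_minus(A @ y))   # 2 2: A y = (det A) alpha alternates
```

Here the inequality is tight, since \(A\mathbf{y}^{A}=(\det A)\boldsymbol{\alpha}\) alternates just as \(\mathbf{y}^{A}\) does.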
## 3. Theorem C: Linear preserver problem for sign regularity
The goal of this section is to prove Theorem C: Classify all linear sign regularity preservers. That \((1)\implies(2)\) is immediate, while \((3)\implies(1)\) follows from straightforward calculations. To complete the proof we need to show \((2)\implies(3)\). Note that the map \(A\mapsto A^{T}\) does not preserve the domain \(\mathcal{SR}\cap\mathbb{R}^{m\times n}\) for \(m\neq n\). So we need to treat the cases \(m=n\) and \(m\neq n\) separately. To proceed, we need some basic notations and a preliminary result.
1. Let \(E_{ij}\) denote the matrix whose \((i,j)\) entry is \(1\) and zero elsewhere.
2. \(S_{ij}\) is defined as \(S_{ij}:=\{E_{pq}:\mathcal{L}(E_{ij})_{pq}\neq 0\}\).
3. Let \(J=J_{m\times n}\) be the \(m\times n\) matrix with all entries \(1\). We write \(\mathcal{L}(J):=Y=(y_{ij})\), and we specify the order \(m\times n\) as and when we use the matrix \(J\).
4. Let \(\epsilon_{i}:=\epsilon_{i}(A)\) denote the sign of the \(i\times i\) non-zero minors of \(A\in\mathcal{SR}\).
5. We say that a real matrix \(A=(a_{ij})\geq 0\) if each \(a_{ij}\geq 0\). Similarly for \(A\leq 0\).
6. A real square matrix \(A\) is called a _monomial matrix_ if every row and every column of \(A\) contains exactly one non-zero entry, which is moreover positive. It is well-known that these are precisely the matrices \(A\) such that \(A,A^{-1}\geq 0\) (see the check after this list).
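Here is that quick numerical check of the fact in item (6) (ours, assuming numpy; the example matrix is an arbitrary choice):

```python
# A quick numerical check (ours, assuming numpy) of the fact in item (6):
# a monomial matrix and its inverse are both entrywise non-negative.
import numpy as np

A = np.array([[0.0, 2.0, 0.0],   # a monomial matrix: exactly one positive
              [0.0, 0.0, 5.0],   # entry in each row and each column
              [3.0, 0.0, 0.0]])
Ainv = np.linalg.inv(A)
print(bool(np.all(A >= 0) and np.all(Ainv >= -1e-12)))  # True
```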
The following preliminary lemma will be handy in classifying linear \(\mathcal{SSR}\)-preservers using linear \(\mathcal{SR}\)-preservers. We provide the proof for completeness.
**Lemma 3.1**.: _[_5_, Lemma 1]_ _Let \(S\) be a subset of a finite-dimensional real vector space \(V\). Then \(P(S)\subseteq P(\overline{S})\subseteq P(\operatorname{span}(S))\)._
Proof.: Let \(\mathcal{L}\in P(S)\). Since \(\mathcal{L}(S)=S\), we get \(\mathcal{L}(\operatorname{span}(S))=\operatorname{span}(S)\), and as \(\dim V<\infty\), \(\mathcal{L}\) maps \(\operatorname{span}(S)\) homeomorphically onto itself. As \(\overline{S}\subseteq\operatorname{span}(S)\), it follows that \(\operatorname{span}(S)=\operatorname{span}(\overline{S})\) and \(\mathcal{L}(\overline{S})=\overline{\mathcal{L}(S)}=\overline{S}\).
First, we will prove certain propositions which will be used in proving Theorem C for both the \(m=n\) and \(m\neq n\) case.
Let \(\mathcal{L}:\mathbb{R}^{m\times n}\rightarrow\mathbb{R}^{m\times n}\) be a linear transformation such that \(\mathcal{L}\) maps \(\mathcal{SR}_{2}\) onto itself, where either \(m=n\geq 3\) or \(m\neq n\) and \(m,n\geq 2\). If \(m\neq n\), then we assume without loss of generality that \(m>n\), i.e., there are more rows than columns. The case \(m<n\) is handled similarly by pre- and post-composing \(\mathcal{L}\) with \(A\mapsto A^{T}\) to obtain a linear preserver of \(\mathcal{SR}_{2}\cap\mathbb{R}^{n\times m}\) with \(n>m\).
Note that \(E_{ij}\in\mathcal{SR}_{2}\) for all \(1\leq i\leq m\), \(1\leq j\leq n\). By Lemma 3.1, \(\mathcal{L}\) maps \(\mathbb{R}^{m\times n}\) onto itself. Hence \(\mathcal{L}^{-1}\) exists and \(\mathcal{L}^{-1}\in P(\mathcal{SR}_{2})\).
**Proposition 3.2**.: _Let \(\mathfrak{B}=\{E_{11},\ldots,E_{nn};E_{12},\ldots,E_{mn}\}\) be an ordered basis of \(\mathbb{R}^{m\times n}\). Then \(\epsilon_{1}(\mathcal{L}(E_{ij}))\) has the same sign for all \(E_{ij}\in\mathfrak{B}\)._
Proof.: Indeed, suppose that there exist two distinct \(E_{ef}\) and \(E_{kl}\) in \(\mathfrak{B}\) such that \(\mathcal{L}(E_{ef})=U=(u_{ij})_{i,j=1}^{m,n}\) and \(\mathcal{L}(E_{kl})=V=(v_{ij})_{i,j=1}^{m,n}\) but \(\epsilon_{1}(U)\neq\epsilon_{1}(V)\). Without loss of generality, assume that \(\epsilon_{1}(U)=1\) and \(\epsilon_{1}(V)=-1\). Therefore, \(u_{ij}\geq 0\geq v_{ij}\) for all \(1\leq i\leq m\), \(1\leq j\leq n\).
Note the following facts about the matrices \(U\) and \(V\):
1. \(U,V\neq 0_{m\times n}\), since \(\mathcal{L}\) is invertible.
2. \(u_{st}\neq 0\) _if and only if_ \(v_{st}\neq 0\) _where_ \(1\leq s\leq m\), \(1\leq t\leq n\). Assume to the contrary that some \(u_{st}>0\) but \(v_{st}=0\). Note that for all \(c\geq 0\), \(E_{ef}+cE_{kl}\in\mathcal{SR}_{2}\), and thus, \(\mathcal{L}(E_{ef}+cE_{kl})=U+cV\in\mathcal{SR}_{2}\). Now, \((U+cV)_{st}=u_{st}>0\). Since \(V\neq 0\) therefore there exist \(1\leq q\leq m\), \(1\leq r\leq n\) such that \(v_{qr}\neq 0\). Hence, we can choose \(c>0\) such that \((U+cV)_{qr}<0\), a contradiction.
3. _There exist at least two non-zero entries in \(U\) and \(V\)_. Suppose instead that \(u_{ij}\) is the only non-zero entry in \(U\). Again, we can choose \(c>0\) such that \(\mathcal{L}(E_{ef}+cE_{kl})=0_{m\times n}\), a contradiction.
4. \(U\neq\alpha V\) for any \(\alpha\in\mathbb{R}\).
From the above observations, there exist \(1\leq i,s\leq m\) and \(1\leq j,t\leq n\) such that \(u_{ij},v_{ij},u_{st},v_{st}\neq 0\) and \(\frac{u_{ij}}{v_{ij}}\neq\frac{u_{st}}{v_{st}}\). Without loss of generality, assume that \(\frac{-u_{ij}}{v_{ij}}<\frac{-u_{st}}{v_{st}}\). Now choose \(c\) such that \(0<\frac{-u_{ij}}{v_{ij}}<c<\frac{-u_{st}}{v_{st}}\). For this \(c>0\), \(\mathcal{L}(E_{ef}+cE_{kl})\in\mathcal{SR}_{2}\) has a positive and a negative entry, a contradiction. Hence \(\epsilon_{1}(\mathcal{L}(E_{ij}))\) has the same sign for all \(i,j\).
**Remark 3.3**.: Note that \(\mathcal{L}\) is a linear \(\mathcal{SR}_{2}\)-preserver if and only if \(\mathcal{L}\) composed with the map \(A\mapsto-A\) is an \(\mathcal{SR}_{2}\)-preserver. Hence, without loss of generality we assume that \(\epsilon_{1}(\mathcal{L}(E_{ij}))=1\) for all \(E_{ij}\in\mathfrak{B}\).
**Proposition 3.4**.: _Let \(L\in\mathbb{R}^{mn\times mn}\) be the matrix representation of \(\mathcal{L}\) with respect to the ordered basis \(\mathfrak{B}\) above. Then \(L\) is a monomial matrix._
Proof.: From Remark 3.3, \(\epsilon_{1}(\mathcal{L}(E_{ij}))=1\) for all \(E_{ij}\in\mathfrak{B}.\) Thus, \(L\geq 0\). Since \(\mathcal{L}^{-1}\) is also a linear \(\mathcal{SR}_{2}\)-preserver, by Proposition 3.2 either \(L^{-1}\geq 0\) or \(L^{-1}\leq 0\). But, \(L\geq 0\) and \(LL^{-1}=I\), therefore \(L^{-1}\geq 0\). Hence \(L\) is a monomial matrix.
**Remark 3.5**.: Since \(L\) is a monomial matrix, \(S_{ij}\) is a non-empty singleton set for all \(1\leq i\leq m\), \(1\leq j\leq n\). Also, \(S_{ij}\cap S_{kl}=\emptyset\) for \((i,j)\neq(k,l)\).
We now prove Theorem C using Proposition 3.4 for \(m=n\geq 3\).
Proof of Theorem C for \(m=n\).: To show \((2)\implies(3)\), let \(\mathcal{L}:\mathbb{R}^{n\times n}\to\mathbb{R}^{n\times n}\) for \(n\geq 3\) be a linear map sending the class of \(\mathrm{SR}_{2}\) matrices onto itself. Note that Proposition 3.4 now holds. We now split the proof of assertion (3) into several propositions.
**Proposition 3.6**.: _For \(\mathcal{L}\in P(\mathcal{SR}_{2})\), the element in each of the sets \(S_{11}\), \(S_{nn}\), \(S_{1n}\), and \(S_{n1}\) must be among the following:_
\[E_{11},\;E_{nn},\;E_{1n},\;\text{or}\;E_{n1}.\]
Proof.: First, we will prove the result for \(S_{11}\). Suppose that
\[S_{11}\neq\{E_{11}\},\{E_{nn}\},\{E_{1n}\},\{E_{n1}\}.\]
Let \(J(c)\) be the matrix of size \(n\times n\) obtained by multiplying the \((1,1)\) entry of \(J=J_{n\times n}\) by \(c\). Then \(J(c)\in\mathcal{SR}_{2}\) for \(c>0\) and hence \(\mathcal{L}(J(c)):=Y(c)\in\mathcal{SR}_{2}\). Note that all the entries of \(Y(c)\) are non-zero by Remark 3.5. Now, consider the following cases.
If \(S_{11}=\{E_{1k}\}\) where \(k\neq 1,n\), then consider the following two minors of size \(2\times 2\) of \(Y(c)\) included in
* rows \(1,n\) and columns \(1,k\): \(\det\begin{pmatrix}y_{11}&cy_{1k}\\ y_{n1}&y_{nk}\end{pmatrix}=y_{11}y_{nk}-cy_{1k}y_{n1}\), and
* rows \(1,n\) and columns \(k,n\): \(\det\begin{pmatrix}cy_{1k}&y_{1n}\\ y_{nk}&y_{nn}\end{pmatrix}=cy_{1k}y_{nn}-y_{1n}y_{nk}\).
It is always possible to choose \(c>1\), as well as \(0<c<1\), such that the above two minors have opposite signs, a contradiction.
Similarly, we can show that \(S_{11}\neq\{E_{k1}\},\{E_{nk}\},\{E_{kn}\}\) for \(k\neq 1,n\).
If \(S_{11}=\{E_{ij}\}\) where \(1<i<n\) and \(1<j<n\), then the following two minors of size \(2\times 2\) of \(Y(c)\) included in
* rows \(1,i\) and columns \(1,j\): \(\det\begin{pmatrix}y_{11}&y_{1j}\\ y_{i1}&cy_{ij}\end{pmatrix}\), and
* rows \(i,n\) and columns \(1,j\): \(\det\begin{pmatrix}y_{i1}&cy_{ij}\\ y_{n1}&y_{nj}\end{pmatrix}\)
give us a contradiction.
Similarly, we can show this for each of \(S_{nn}\), \(S_{1n}\), and \(S_{n1}\) by multiplying the \((n,n)\), \((1,n)\), and \((n,1)\) entries, respectively, of \(J=J_{n\times n}\) by \(c>0\).
**Proposition 3.7**.: _For \(\mathcal{L}\in P(\mathcal{SR}_{2})\), the following pairwise combinations are possible._
* \(S_{11}=\{E_{11}\}\) _and_ \(S_{nn}=\{E_{nn}\}\)_, or_ \(S_{11}=\{E_{nn}\}\) _and_ \(S_{nn}=\{E_{11}\}\)_, or_ \(S_{11}=\{E_{1n}\}\) _and_ \(S_{nn}=\{E_{n1}\}\)_, or_ \(S_{11}=\{E_{n1}\}\) _and_ \(S_{nn}=\{E_{1n}\}\)_._
* \(S_{1n}=\{E_{1n}\}\) _and_ \(S_{n1}=\{E_{n1}\}\)_, or_ \(S_{1n}=\{E_{n1}\}\) _and_ \(S_{n1}=\{E_{1n}\}\)_, or_ \(S_{1n}=\{E_{11}\}\) _and_ \(S_{n1}=\{E_{nn}\}\)_, or_ \(S_{1n}=\{E_{nn}\}\) _and_ \(S_{n1}=\{E_{11}\}\)_._
Proof.: First, we prove (i). From Proposition 3.6, we have that \(S_{11}\) equals one of the sets \(\{E_{11}\},\{E_{nn}\},\{E_{1n}\},\{E_{n1}\}\), and similarly \(S_{nn}\) equals one of \(\{E_{11}\},\{E_{nn}\},\{E_{n1}\},\{E_{1n}\}\). Now, out of sixteen possible combinations of \(S_{11}\) and \(S_{nn}\), the four cases wherein \(S_{11}=S_{nn}\) are discarded straightaway because of the "monomiality" of \(L\). It remains to show that the eight other combinations of \(S_{11}\) and \(S_{nn}\) mentioned below are not possible:
\[S_{11}=\{E_{11}\}\text{ and }S_{nn}=\{E_{n1}\},\quad S_{11}=\{E_{11}\}\text{ and }S_{nn}=\{E_{1n}\},\]
\[S_{11}=\{E_{nn}\}\text{ and }S_{nn}=\{E_{n1}\},\quad S_{11}=\{E_{nn}\}\text{ and }S_{nn}=\{E_{1n}\},\]
\[S_{11}=\{E_{n1}\}\text{ and }S_{nn}=\{E_{11}\},\quad S_{11}=\{E_{n1}\}\text{ and }S_{nn}=\{E_{nn}\},\]
\[S_{11}=\{E_{1n}\}\text{ and }S_{nn}=\{E_{11}\},\quad S_{11}=\{E_{1n}\}\text{ and }S_{nn}=\{E_{nn}\}.\]
Indeed, assume that \(S_{11}=\{E_{11}\}\) and \(S_{nn}=\{E_{n1}\}\). Let \(J(c)\) be the \(n\times n\) matrix obtained by multiplying the \((1,1)\) and \((n,n)\) entries of \(J=J_{n\times n}\) by \(c\). Then \(J(c)\in\mathcal{SR}_{2}\) for \(c>0\) and hence \(\mathcal{L}(J(c)):=Y(c)\in\mathcal{SR}_{2}\). Now, consider the following two minors of size \(2\times 2\) of \(Y(c)\) included in
* rows \(1,2\) and columns \(1,2\): \(\det\begin{pmatrix}cy_{11}&y_{12}\\ y_{21}&y_{22}\end{pmatrix}=cy_{11}y_{22}-y_{12}y_{21}\), and
* rows \(2,n\) and columns \(1,2\): \(\det\begin{pmatrix}y_{21}&y_{22}\\ cy_{n1}&y_{n2}\end{pmatrix}=y_{21}y_{n2}-cy_{22}y_{n1}\).
We can always choose \(c>0\) such that the above two minors are of opposite signs, which is a contradiction. Similarly, for the remaining seven cases, we can always find two non-zero \(2\times 2\) minors of \(Y(c)\) having opposite signs.
Adapting the same argument as in the preceding half of this proof shows that (ii) holds.
**Remark 3.8**.: In part (i) of Proposition 3.7, we can assume without loss of generality that
\[S_{11}=\{E_{11}\}\text{ and }S_{nn}=\{E_{nn}\} \tag{3.1}\]
since \(\mathcal{L}\in P(\mathcal{SR}_{2})\) if and only if \(\mathcal{L}\) composed with the maps \(A\mapsto AP_{n}\) and \(A\mapsto P_{n}A\) is a linear \(\mathcal{SR}_{2}\)-preserver. Now, using Proposition 3.4 and (3.1) in part (ii) of Proposition 3.7, we have either
\[S_{11}=\{E_{11}\},\;S_{nn}=\{E_{nn}\},\;S_{1n}=\{E_{1n}\},\;\text{and}\;S_{n1} =\{E_{n1}\}\;\;\text{or}\]
\[S_{11}=\{E_{11}\},\;S_{nn}=\{E_{nn}\},\;S_{1n}=\{E_{n1}\},\;\text{and}\;S_{n1} =\{E_{1n}\}.\]
We henceforth assume that
\[S_{11}=\{E_{11}\},\;S_{nn}=\{E_{nn}\},\;S_{1n}=\{E_{1n}\},\;\text{and}\;S_{n1} =\{E_{n1}\} \tag{3.2}\]
without loss of generality, since \(\mathcal{L}\in P(\mathcal{SR}_{2})\) if and only if \(\mathcal{L}\) composed with the map \(A\mapsto A^{T}\) is a linear \(\mathcal{SR}_{2}\)-preserver.
**Proposition 3.9**.: _Let \(\mathcal{L}\in P(\mathcal{SR}_{2})\)._
* _Then_ \(\mathcal{L}\) _must map the first (and last) row and column of its arguments entirely to the first (and last) row and column, respectively._
* _Further,_ \(\mathcal{L}\) _must map all rows and columns of its arguments entirely to some row and column, respectively._
Proof.: We will first show that \(\mathcal{L}\) must map the first row of its arguments entirely to the first row. Again, let \(J(c)\) be the matrix of size \(n\times n\) obtained by multiplying the first row of \(J=J_{n\times n}\) by \(c>0\). Then \(J(c)\in\mathcal{SR}_{2}\) with \(\epsilon_{1}=1\) and hence \(\mathcal{L}(J(c)):=Y(c)\in\mathcal{SR}_{2}\). Assume that \(\mathcal{L}\) does not map the first row of its arguments entirely to the first row. Thus, there exists \(1<k<n\) such that the \((1,k)\) position of the matrix \(Y(c)\) is not occupied by the image of any element from the first row of \(J(c)\). Using (3.2), we can obtain the following two minors of size \(2\times 2\) of \(Y(c)\) included in
* rows \(1,n\) and columns \(1,k\): \(\det\begin{pmatrix}cy_{11}&y_{1k}\\ y_{n1}&\alpha y_{nk}\end{pmatrix}=c\alpha y_{11}y_{nk}-y_{1k}y_{n1}\), and
* rows \(1,n\) and columns \(k,n\): \(\det\begin{pmatrix}y_{1k}&cy_{1n}\\ \alpha y_{nk}&y_{nn}\end{pmatrix}=y_{1k}y_{nn}-c\alpha y_{1n}y_{nk}\),
where either \(\alpha=c\) or \(\alpha=1\).
Choosing \(c\) large enough makes the above two minors of \(Y(c)\) have opposite signs, which is a contradiction. Similarly, the remaining assertions can be established by multiplying the relevant row or column of \(J=J_{n\times n}\) by \(c>0\). This proves (i).
To prove (ii), we begin by showing \(\mathcal{L}\) must map the \(k^{th}\) row of its arguments entirely to some row for \(1<k<n\). Again, let \(J(c)\) be the \(n\times n\) matrix obtained by multiplying the \(k^{th}\) row of \(J=J_{n\times n}\) by \(c>0\). Then \(J(c)\in\mathcal{SR}_{2}\) and hence \(\mathcal{L}(J(c)):=Y(c)\in\mathcal{SR}_{2}\). By Proposition 3.9(i), \(\mathcal{L}\) must map the first column of its arguments entirely to the first column and hence the \((k,1)\) element of \(J(c)\) will be mapped to the first column. Let us say \(\mathcal{L}\) maps it to the \((s,1)\) position of \(Y(c)\), where \(1<s<n\). Suppose that \(\mathcal{L}\) does not map the \(k^{th}\) row of \(J(c)\) entirely to some row. Then there exists \(1<j\leq n\) such that the \((s,j)\) position of the matrix \(Y(c)\) is not occupied by the image of any element from the \(k^{th}\) row of \(J(c)\). By Proposition 3.9(i) and sufficiently large \(c\), we have the following two minors of size \(2\times 2\) of \(Y(c)\) included in
* rows \(1,s\) and columns \(1,j\): \(\det\begin{pmatrix}y_{11}&y_{1j}\\ cy_{s1}&y_{sj}\end{pmatrix}<0\), and
* rows \(s,n\) and columns \(1,j\): \(\det\begin{pmatrix}cy_{s1}&y_{sj}\\ y_{n1}&y_{nj}\end{pmatrix}>0\).
This contradicts \(Y(c)\in\mathcal{SR}_{2}\). Hence, \(\mathcal{L}\) must map every row of its argument entirely to some row. Similarly, one can show that \(\mathcal{L}\) must map every column of its argument entirely to some column by multiplying the \(k^{th}\) column of the matrix \(J=J_{n\times n}\) by \(c>0\), where \(1<k<n\).
To summarize, we have used the transformations \(A\mapsto-A\), \(A\mapsto P_{n}A\), \(A\mapsto AP_{n}\), and \(A\mapsto A^{T}\) to assume that \(\mathcal{L}\in P(\mathcal{SR}_{2})\) has the following properties:
1. \(\epsilon_{1}(\mathcal{L}(E_{ij}))=1\) for all \(1\leq i,j\leq n\).
2. \(S_{11}=\{E_{11}\},\;S_{nn}=\{E_{nn}\},\;S_{1n}=\{E_{1n}\},\;\text{and}\;S_{n1 }=\{E_{n1}\}\).
3. \(\mathcal{L}\) maps the entire first (and last) row and column of its arguments to the first (and last) row and column, respectively.
4. \(\mathcal{L}\) maps every other row and column to some row and column, respectively.
With the above information in hand, we are now ready to use induction on \(n\). We first prove the base case, when \(n=3\). Let \(\mathcal{L}:\mathbb{R}^{3\times 3}\to\mathbb{R}^{3\times 3}\) be a linear map such that \(\mathcal{L}(\mathcal{SR}_{2})=\mathcal{SR}_{2}\).
Let \(\mathfrak{B}=\{E_{11},E_{22},E_{33},E_{12},E_{13},E_{21},E_{23},E_{31},E_{32}\}\) be an ordered basis of \(\mathbb{R}^{3\times 3}\). We have \(S_{ij}=\{E_{ij}\}\) for all \(1\leq i,j\leq 3\) using Propositions 3.4 and 3.9. Thus, \(\mathcal{L}(E_{11})=l_{1}E_{11},\mathcal{L}(E_{22})=l_{2}E_{22},\ldots,\mathcal{ L}(E_{32})=l_{9}E_{32}\in\mathcal{SR}_{2}\). Note that \(l_{i}>0\) for all \(1\leq i\leq 9\) by Remark 3.3 and Proposition 3.4. Consider the following rank \(1\) matrix
\[J=\begin{pmatrix}1&1&1\\ 1&1&1\\ 1&1&1\end{pmatrix}_{3\times 3}\in\mathcal{SR}_{2}\implies\mathcal{L}(J)= \begin{pmatrix}l_{1}&l_{4}&l_{5}\\ l_{6}&l_{2}&l_{7}\\ l_{8}&l_{9}&l_{3}\end{pmatrix}\in\mathcal{SR}_{2}.\]
Next, we claim that the rank of \(\mathcal{L}(J)\) is \(1\). If \(\mathcal{L}(J)\) is not a rank \(1\) matrix, then it has at least one non-zero minor of size \(2\times 2\) and further all of these minors are of the same sign, say non-negative without loss of generality.
For \(c>1\), let
\[J(c):=\begin{pmatrix}1&1&c\\ 1&1&1\\ 1&1&1\end{pmatrix}\in\mathcal{SR}_{2}\implies\mathcal{L}(J(c))=\begin{pmatrix}l _{1}&l_{4}&cl_{5}\\ l_{6}&l_{2}&l_{7}\\ l_{8}&l_{9}&l_{3}\end{pmatrix}\in\mathcal{SR}_{2}.\]
But we can choose \(c\) sufficiently large such that \(l_{4}l_{7}-cl_{5}l_{2}<0\). Since the \(2\times 2\) minors of \(\mathcal{L}(J(c))\) drawn from the first two columns do not involve \(c\) and were assumed non-negative, they must all vanish. Thus the first two columns of \(\mathcal{L}(J(c))\), and hence of \(\mathcal{L}(J)\), are linearly dependent.
Again for \(c^{\prime}>1\), let
\[J(c^{\prime}):=\begin{pmatrix}1&1&1\\ 1&1&1\\ c^{\prime}&1&1\end{pmatrix}\in\mathcal{SR}_{2}\implies\mathcal{L}(J(c^{\prime} ))=\begin{pmatrix}l_{1}&l_{4}&l_{5}\\ l_{6}&l_{2}&l_{7}\\ c^{\prime}l_{8}&l_{9}&l_{3}\end{pmatrix}\in\mathcal{SR}_{2}.\]
Hence \(l_{6}l_{9}-c^{\prime}l_{2}l_{8}<0\) for \(c^{\prime}\) large enough. Thus, the last two columns of \(\mathcal{L}(J)\) are linearly dependent. Hence, \(\mathcal{L}(J)\) is a rank \(1\) matrix.
Now, let
\[B=\begin{pmatrix}b_{11}&b_{12}&b_{13}\\ b_{21}&b_{22}&b_{23}\\ b_{31}&b_{32}&b_{33}\end{pmatrix}\in\mathcal{SR}_{2}\implies\mathcal{L}(B)= \begin{pmatrix}l_{1}b_{11}&l_{4}b_{12}&l_{5}b_{13}\\ l_{6}b_{21}&l_{2}b_{22}&l_{7}b_{23}\\ l_{8}b_{31}&l_{9}b_{32}&l_{3}b_{33}\end{pmatrix}\in\mathcal{SR}_{2}.\]
Since rank of \(\mathcal{L}(J)\) is \(1\), we can write \(\mathcal{L}(B)\) as
\[\mathcal{L}(B)=\begin{pmatrix}l_{1}&0&0\\ 0&l_{6}&0\\ 0&0&l_{8}\end{pmatrix}\begin{pmatrix}b_{11}&b_{12}&b_{13}\\ b_{21}&b_{22}&b_{23}\\ b_{31}&b_{32}&b_{33}\end{pmatrix}\begin{pmatrix}1&0&0\\ 0&l_{4}/l_{1}&0\\ 0&0&l_{7}/l_{6}\end{pmatrix}\]
which is a positive diagonal equivalence, i.e., \(\mathcal{L}\) maps \(B\mapsto FBE\) where \(F=\begin{pmatrix}l_{1}&0&0\\ 0&l_{6}&0\\ 0&0&l_{8}\end{pmatrix}\) and \(E=\begin{pmatrix}1&0&0\\ 0&l_{4}/l_{1}&0\\ 0&0&l_{7}/l_{6}\end{pmatrix}\). This completes the proof for the base case.
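As a sanity check of the base case, the following sketch (ours, assuming numpy; the scalars \(l_{1},\ldots,l_{9}\) are arbitrary positive numbers subject to the rank-\(1\) constraints derived above) verifies that the entrywise action of \(\mathcal{L}\) coincides with the diagonal equivalence \(B\mapsto FBE\):

```python
# A sanity check (ours, assuming numpy) of the base case above: once the
# coefficient matrix (l1 l4 l5; l6 l2 l7; l8 l9 l3) has rank 1, the entrywise
# action of L coincides with the diagonal equivalence B |-> F B E.
import numpy as np

l1, l4, l6, l8 = 2.0, 3.0, 5.0, 7.0    # free positive parameters
r2, r3 = l4 / l1, 1.5                  # column ratios forced by rank 1
l5, l2, l7 = l1 * r3, l6 * r2, l6 * r3
l9, l3 = l8 * r2, l8 * r3

C = np.array([[l1, l4, l5],
              [l6, l2, l7],
              [l8, l9, l3]])           # rank-1 coefficient matrix
B = np.random.rand(3, 3)
F = np.diag([l1, l6, l8])
E = np.diag([1.0, l4 / l1, l7 / l6])
assert np.allclose(C * B, F @ B @ E)   # L(B) = C o B = F B E entrywise
```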
For the induction step, let \(\mathcal{L}:\mathbb{R}^{n\times n}\to\mathbb{R}^{n\times n}\) with \(n>3\) be a linear \(\mathcal{SR}_{2}\)-preserver. Let \(A\in\mathcal{SR}_{2}\). By Proposition 3.9(i), the leading principal submatrix of \(A\) of size \((n-1)\times(n-1)\) must be transformed to the leading principal submatrix of size \((n-1)\times(n-1)\) of \(\mathcal{L}(A)\). Since every \(\mathrm{SR}_{2}\) matrix \(\widehat{A}\) of size \((n-1)\times(n-1)\) is a leading principal submatrix of the \(\mathrm{SR}_{2}\) matrix \(\widehat{A}\oplus\{0\}\) of size \(n\times n\), the natural restriction of \(\mathcal{L}\) onto the \((n-1)\times(n-1)\) leading principal submatrix is a linear \(\mathcal{SR}_{2}\)-preserver on \(\mathbb{R}^{(n-1)\times(n-1)}\). By the induction hypothesis, the restriction of \(\mathcal{L}\) is a composition of one or more of the following maps: (i) \(X\mapsto-X\), (ii) \(X\mapsto XP_{n-1}\), (iii) \(X\mapsto P_{n-1}X\), (iv) \(X\mapsto X^{T}\), and (v) \(X\mapsto FXE\). In fact, it is a positive diagonal equivalence since the first row is transformed to the first row and \(\epsilon_{1}(\mathcal{L}(E_{ij}))=1\) for all \(1\leq i,j\leq n\). Thus,
\[S_{ij}=\{E_{ij}\}\;\;\text{for}\;\;\text{all}\;1\leq i,j\leq n-1.\]
By Proposition 3.9, \(\mathcal{L}\) maps every row and column of its arguments entirely to some row and column, respectively, and hence
\[S_{ij}=\{E_{ij}\}\;\;\text{for}\;\;\text{all}\;1\leq i,j\leq n. \tag{3.3}\]
Since we may compose the inverse positive diagonal equivalence relative to the leading principal submatrix of size \((n-1)\times(n-1)\) with \(\mathcal{L}\), we may assume without loss of generality, that
\[\mathcal{L}(A)\begin{pmatrix}1,\ldots,n-1\\ 1,\ldots,n-1\end{pmatrix}=A\begin{pmatrix}1,\ldots,n-1\\ 1,\ldots,n-1\end{pmatrix}.\]
By using (3.3), we have
\[\mathcal{L}(E_{in})=c_{i}E_{in},\;\mathcal{L}(E_{ni})=k_{i}E_{ni}\;\;\text{for }\;1 \leq i\leq n-1,\;\;\text{and}\;\;\mathcal{L}(E_{nn})=dE_{nn}\]
for some positive scalars \(c_{i}\), \(k_{i}\), and \(d\). We next claim that
\[c_{1}=\cdots=c_{n-1},\;k_{1}=\cdots=k_{n-1},\;\;\text{and}\;\;d=c_{1}k_{1}.\]
Consider the following rank \(1\) matrix
\[J=\begin{pmatrix}1&\ldots&1&1\\ \vdots&\ddots&\vdots&\vdots\\ 1&\ldots&1&1\\ 1&\ldots&1&1\end{pmatrix}_{n\times n}\in\mathcal{SR}_{2}\implies\mathcal{L}(J) =\begin{pmatrix}1&\ldots&1&c_{1}\\ \vdots&\ddots&\vdots&\vdots\\ 1&\ldots&1&c_{n-1}\\ k_{1}&\ldots&k_{n-1}&d\end{pmatrix}\in\mathcal{SR}_{2}.\]
We can show that \(\mathcal{L}(J)\) is a rank \(1\) matrix, similar to the case when \(n=3\). Hence, all \(2\times 2\) minors of \(\mathcal{L}(J)\) are zero. Thus
\[c_{1}=\cdots=c_{n-1},\;k_{1}=\cdots=k_{n-1},\;\;\text{and}\;\;d=c_{1}k_{1}.\]
Hence, \(\mathcal{L}\) maps \(A\) to \(FAE\) for some positive diagonal matrices \(F\) and \(E\). This concludes the induction step and the proof.
To complement Theorem C, we now classify all linear \(\mathcal{SR}\)-preservers on \(\mathbb{R}^{2\times 2}\).
**Theorem 3.10**.: _Let \(\mathcal{L}:\mathbb{R}^{2\times 2}\to\mathbb{R}^{2\times 2}\) be a linear transformation. Then the following statements are equivalent._
1. \(\mathcal{L}\) _maps the class of SR matrices onto itself._
2. \(\mathcal{L}\) _maps the class of SR_\({}_{1}\) _matrices onto itself._
3. \(\mathcal{L}\) _is a composition of one or more of the following types of transformations: (a)_ \(A\mapsto H\circ A\)_, where_ \(H\) _is an entrywise positive matrix; (b)_ \(A\mapsto-A\)_; (c)_ \(A\mapsto P_{2}A\)_, in which_ \(P_{2}\) _is an exchange matrix; (d)_ \(A\mapsto AP_{2}\)_; (e)_ \(A\mapsto A^{T}\)_; and (f)_ \(\begin{pmatrix}a_{11}&a_{12}\\ a_{21}&a_{22}\end{pmatrix}\mapsto\begin{pmatrix}a_{11}&a_{12}\\ a_{22}&a_{21}\end{pmatrix}\)_._
Proof.: \((1)\implies(2)\) and \((3)\implies(1)\) are trivial. It only remains to show \((2)\implies(3)\). Let \(\mathcal{L}:\mathbb{R}^{2\times 2}\to\mathbb{R}^{2\times 2}\) be a linear map such that \(\mathcal{L}(\mathcal{SR}_{1})=\mathcal{SR}_{1}\) and let \(\mathfrak{B}=\{E_{11},E_{22},E_{12},E_{21}\}\) be an ordered basis of \(\mathbb{R}^{2\times 2}\). Since \(E_{ij}\in\mathcal{SR}_{1}\) for all \(1\leq i,j\leq 2\) and \(\mathcal{L}\) maps \(\mathcal{SR}_{1}\) onto itself, therefore \(\mathcal{L}^{-1}\) exists, and further \(\mathcal{L}^{-1}\in P(\mathcal{SR}_{1})\). Notice that the proof of Proposition 3.2 holds for \(\mathcal{L}\in P(\mathcal{SR}_{1})\). Thus \(\epsilon_{1}(\mathcal{L}(E_{ij}))\) has the same sign for all \(i,j\). We can assume without loss of generality that \(\epsilon_{1}(\mathcal{L}(E_{ij}))=1\) for all \(i,j\) since \(\mathcal{L}\) is a linear \(\mathcal{SR}_{1}\)-preserver if and only if \(\mathcal{L}\) composed with the map \(A\mapsto-A\) is a linear \(\mathcal{SR}_{1}\)-preserver. By Proposition 3.4 and Remark 3.5, \(S_{ij}\) is a non-empty singleton set for all \(1\leq i,j\leq 2\) and \(S_{ij}\cap S_{kl}=\emptyset\) for \((i,j)\neq(k,l)\).
Thus the element in each of the sets \(S_{11}\), \(S_{12}\), \(S_{21}\), and \(S_{22}\) must be among the following:
\[E_{11},E_{12},E_{21},\;\text{and}\;E_{22}.\]
Thus, there are twenty-four possible combinations of the sets \(S_{11}\), \(S_{12}\), \(S_{21}\), and \(S_{22}\). They are listed in the table below.
| S.No. | \(S_{11}\) | \(S_{12}\) | \(S_{21}\) | \(S_{22}\) | \(\mathcal{SR}_{1}\)-preservers |
| --- | --- | --- | --- | --- | --- |
| I. | \(E_{11}\) | \(E_{12}\) | \(E_{21}\) | \(E_{22}\) | - |
| | \(E_{11}\) | \(E_{21}\) | \(E_{12}\) | \(E_{22}\) | \(A\mapsto A^{T}\) |
| | \(E_{12}\) | \(E_{11}\) | \(E_{22}\) | \(E_{21}\) | \(A\mapsto AP_{2}\) |
| | \(E_{12}\) | \(E_{22}\) | \(E_{11}\) | \(E_{21}\) | \(A\mapsto AP_{2}\mapsto(AP_{2})^{T}\) |
| | \(E_{21}\) | \(E_{11}\) | \(E_{22}\) | \(E_{12}\) | \(A\mapsto P_{2}A\mapsto(P_{2}A)^{T}\) |
| | \(E_{21}\) | \(E_{22}\) | \(E_{11}\) | \(E_{12}\) | \(A\mapsto P_{2}A\) |
| | \(E_{22}\) | \(E_{21}\) | \(E_{12}\) | \(E_{11}\) | \(A\mapsto P_{2}AP_{2}\mapsto(P_{2}AP_{2})^{T}\) |
| II. | \(E_{11}\) | \(E_{12}\) | \(E_{22}\) | \(E_{21}\) | - |
| | \(E_{11}\) | \(E_{21}\) | \(E_{22}\) | \(E_{12}\) | \(A\mapsto A^{T}\) |
| | \(E_{12}\) | \(E_{11}\) | \(E_{21}\) | \(E_{22}\) | \(A\mapsto AP_{2}\) |
| | \(E_{12}\) | \(E_{22}\) | \(E_{21}\) | \(E_{11}\) | \(A\mapsto AP_{2}\mapsto(AP_{2})^{T}\) |
| | \(E_{21}\) | \(E_{11}\) | \(E_{12}\) | \(E_{22}\) | \(A\mapsto P_{2}A\mapsto(P_{2}A)^{T}\) |
| | \(E_{21}\) | \(E_{22}\) | \(E_{12}\) | \(E_{11}\) | \(A\mapsto P_{2}A\) |
| | \(E_{22}\) | \(E_{21}\) | \(E_{11}\) | \(E_{12}\) | \(A\mapsto P_{2}AP_{2}\mapsto(P_{2}AP_{2})^{T}\) |
| | \(E_{22}\) | \(E_{12}\) | \(E_{11}\) | \(E_{21}\) | \(A\mapsto P_{2}AP_{2}\mapsto(P_{2}AP_{2})^{T}\) |
| III. | \(E_{11}\) | \(E_{22}\) | \(E_{12}\) | \(E_{21}\) | - |
| | \(E_{11}\) | \(E_{22}\) | \(E_{21}\) | \(E_{12}\) | \(A\mapsto A^{T}\) |
| | \(E_{12}\) | \(E_{21}\) | \(E_{11}\) | \(E_{22}\) | \(A\mapsto AP_{2}\) |
| | \(E_{12}\) | \(E_{21}\) | \(E_{22}\) | \(E_{11}\) | \(A\mapsto AP_{2}\mapsto(AP_{2})^{T}\) |
| | \(E_{21}\) | \(E_{12}\) | \(E_{11}\) | \(E_{22}\) | \(A\mapsto P_{2}A\mapsto(P_{2}A)^{T}\) |
| | \(E_{21}\) | \(E_{12}\) | \(E_{22}\) | \(E_{11}\) | \(A\mapsto P_{2}A\) |
| | \(E_{22}\) | \(E_{11}\) | \(E_{21}\) | \(E_{12}\) | \(A\mapsto P_{2}AP_{2}\mapsto(P_{2}AP_{2})^{T}\) |
In the last column of the above table, we have listed the \(\mathcal{SR}_{1}\)-preservers that we have used to assume without loss of generality:
1. \(S_{11}=\{E_{11}\},S_{12}=\{E_{12}\},S_{21}=\{E_{21}\},\) and \(S_{22}=\{E_{22}\};\) (3.4)
2. \(S_{11}=\{E_{11}\},S_{12}=\{E_{12}\},S_{21}=\{E_{22}\},\) and \(S_{22}=\{E_{21}\};\) (3.5)
3. \(S_{11}=\{E_{11}\},S_{12}=\{E_{22}\},S_{21}=\{E_{12}\},\) and \(S_{22}=\{E_{21}\}.\) (3.6)
Further, using the map \(\mathcal{M}:\begin{pmatrix}a_{11}&a_{12}\\ a_{21}&a_{22}\end{pmatrix}\mapsto\begin{pmatrix}a_{11}&a_{12}\\ a_{22}&a_{21}\end{pmatrix}\) in (3.5) and the composition of the maps \(\mathcal{M}\) and \(A\mapsto A^{T}\) in (3.6), we assume (3.4) without loss of generality, since \(\mathcal{L}\in P(\mathcal{SR}_{1})\) if and only if \(\mathcal{L}\) composed with the map \(\mathcal{M}\) is a linear \(\mathcal{SR}_{1}\)-preserver.
Let \(\mathcal{L}(E_{11})=l_{1}E_{11}\), \(\mathcal{L}(E_{22})=l_{2}E_{22}\), \(\mathcal{L}(E_{12})=l_{3}E_{12}\), and \(\mathcal{L}(E_{21})=l_{4}E_{21}\), where \(l_{i}>0\) for all \(1\leq i\leq 4\) as \(\epsilon_{1}(\mathcal{L}(E_{ij}))=1\) for all \(i,j\). Thus for any \(2\times 2\) matrix \(B=\begin{pmatrix}b_{11}&b_{12}\\ b_{21}&b_{22}\end{pmatrix}\in\mathcal{SR}_{1}\), \(\mathcal{L}(B)=\begin{pmatrix}l_{1}b_{11}&l_{3}b_{12}\\ l_{4}b_{21}&l_{2}b_{22}\end{pmatrix}=H\circ B\), where \(H=\begin{pmatrix}l_{1}&l_{3}\\ l_{4}&l_{2}\end{pmatrix}\). Therefore, the linear \(\mathcal{SR}_{1}\)-preservers on \(\mathbb{R}^{2\times 2}\) are compositions of one or more of the following maps:
(a) \(A\mapsto H\circ A\), where \(H\in\mathbb{R}^{2\times 2}\) is entrywise positive, (b) \(A\mapsto-A\), (c) \(A\mapsto A^{T}\), (d) \(A\mapsto AP_{2}\), (e) \(A\mapsto P_{2}A\), and (f) \(\begin{pmatrix}a_{11}&a_{12}\\ a_{21}&a_{22}\end{pmatrix}\mapsto\begin{pmatrix}a_{11}&a_{12}\\ a_{22}&a_{21}\end{pmatrix}\).
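The equivalence of (1) and (2) above rests on the observation that a \(2\times 2\) matrix has a single \(2\times 2\) minor, so being SR reduces to \(\mathrm{SR}_{1}\), i.e., to a condition on the multiset of entry signs; any entry permutation, such as the map (f), therefore preserves it. A tiny sketch of this (ours, assuming numpy):

```python
# A tiny sketch (ours, assuming numpy): on 2x2 matrices there is only one 2x2
# minor, so SR reduces to SR_1, a condition on the multiset of entry signs.
# The map (f) permutes entries, hence preserves SR_1 (and so SR).
import numpy as np

def is_sr1(A, tol=1e-12):
    nz = A[np.abs(A) > tol]
    return nz.size == 0 or bool(np.all(nz > 0)) or bool(np.all(nz < 0))

rng = np.random.default_rng(0)
for _ in range(1000):
    A = rng.standard_normal((2, 2))
    fA = np.array([[A[0, 0], A[0, 1]],
                   [A[1, 1], A[1, 0]]])   # the map (f)
    assert is_sr1(A) == is_sr1(fA)
```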
Next, we prove Theorem C for \(m>n\geq 2\) using the propositions stated at the beginning of this section.
Proof of Theorem C for \(m\neq n\).: Let \(m>n\geq 2\) and \(\mathcal{L}:\mathbb{R}^{m\times n}\to\mathbb{R}^{m\times n}\) be a linear map sending \(\mathcal{SR}_{2}\) onto itself. We claim that \(\mathcal{L}\) is a composition of one or more of the transformations listed in Theorem C(3). For convenience, again the proof is split into several propositions. The statements of these propositions are similar to the case \(m=n\geq 3\), but the proofs differ for the \(m>n\geq 2\) case.
**Proposition 3.11**.: _For \(\mathcal{L}\in P(\mathcal{SR}_{2})\), the element in each of the sets \(S_{11}\), \(S_{mn}\), \(S_{1n}\), and \(S_{m1}\) must be one of the following:_
\[E_{11},\;E_{mn},\;E_{1n},\;\mathrm{or}\;E_{m1}.\]
Proof.: We prove this by contradiction. Suppose that
\[S_{11}\neq\{E_{11}\},\{E_{mn}\},\{E_{1n}\},\{E_{m1}\}.\]
First, we consider the case \(n=2\). Let
\[J(c):=\begin{pmatrix}c&1\\ 1&1\\ \vdots&\vdots\\ 1&1\end{pmatrix}_{m\times 2}\in\mathcal{SR}_{2}\implies\text{either}\;\; \mathcal{L}(J(c))=\begin{pmatrix}y_{11}&y_{12}\\ \vdots&\vdots\\ cy_{i1}&y_{i2}\\ \vdots&\vdots\\ y_{m1}&y_{m2}\end{pmatrix}\;\;\mathrm{or}\;\;\mathcal{L}(J(c))=\begin{pmatrix}y_ {11}&y_{12}\\ \vdots&\vdots\\ y_{j1}&cy_{j2}\\ \vdots&\vdots\\ y_{m1}&y_{m2}\end{pmatrix}.\]
Since \(m>2\), we can find two minors of size \(2\times 2\) of \(\mathcal{L}(J(c))\) which are of opposite signs for large \(c\), a contradiction. Now, for \(m>n\geq 3\), the same arguments that were used in Proposition 3.6 can be used here.
Similarly, we can show this for each of \(S_{mn}\), \(S_{1n}\), and \(S_{m1}\) by multiplying the \((m,n)\), \((1,n)\), and \((m,1)\) entries, respectively of \(J=J_{m\times n}\) by \(c>0\).
**Proposition 3.12**.: _For \(\mathcal{L}\in P(\mathcal{SR}_{2})\), the following pairwise combinations are possible._
* \(S_{11}=\{E_{11}\}\;\mathrm{and}\;S_{mn}=\{E_{mn}\}\)_, or_ \(S_{11}=\{E_{mn}\}\;\mathrm{and}\;S_{mn}=\{E_{11}\}\)_, or_ \(S_{11}=\{E_{m1}\}\;\mathrm{and}\;S_{mn}=\{E_{1n}\}\)_, or_ \(S_{11}=\{E_{1n}\}\;\mathrm{and}\;S_{mn}=\{E_{m1}\}\)_._
* \(S_{1n}=\{E_{1n}\}\;\mathrm{and}\;S_{m1}=\{E_{m1}\}\)_, or_ \(S_{1n}=\{E_{m1}\}\;\mathrm{and}\;S_{m1}=\{E_{1n}\}\)_, or_ \(S_{1n}=\{E_{11}\}\;\mathrm{and}\;S_{m1}=\{E_{mn}\}\)_, or_ \(S_{1n}=\{E_{mn}\}\;\mathrm{and}\;S_{m1}=\{E_{11}\}\)_._
Proof.: First, we prove (i). The proof is similar to that of Proposition 3.7, except for the case \(n=2\). From Proposition 3.11, we have that \(S_{11}\) equals one of the sets \(\{E_{11}\},\{E_{mn}\},\{E_{m1}\},\{E_{1n}\}\), and similarly \(S_{mn}\) equals one of \(\{E_{11}\},\{E_{mn}\},\{E_{m1}\},\{E_{1n}\}\). Since \(L\) is a monomial matrix, out of sixteen possible combinations of \(S_{11}\) and \(S_{mn}\), the four cases wherein \(S_{11}=S_{mn}\) are discarded straightaway. We next show that the following eight combinations of \(S_{11}\) and \(S_{mn}\) are also not possible:
\[S_{11}=\{E_{m1}\}\;\mathrm{and}\;S_{mn}=\{E_{mn}\},\quad S_{11}= \{E_{11}\}\;\mathrm{and}\;S_{mn}=\{E_{1n}\},\] \[S_{11}=\{E_{mn}\}\;\mathrm{and}\;S_{mn}=\{E_{m1}\},\quad S_{11}= \{E_{mn}\}\;\mathrm{and}\;S_{mn}=\{E_{1n}\},\] \[S_{11}=\{E_{m1}\}\;\mathrm{and}\;S_{mn}=\{E_{11}\},\quad S_{11}= \{E_{11}\}\;\mathrm{and}\;S_{mn}=\{E_{m1}\},\] \[S_{11}=\{E_{1n}\}\;\mathrm{and}\;S_{mn}=\{E_{11}\},\quad S_{11}= \{E_{1n}\}\;\mathrm{and}\;S_{mn}=\{E_{mn}\}.\]
Suppose \(S_{11}=\{E_{m1}\}\) and \(S_{mn}=\{E_{mn}\}\). Let \(n=2\) and for \(c>0\), define
\[J(c):=\begin{pmatrix}1+c&c\\ 1&1\\ \vdots&\vdots\\ 1&1\end{pmatrix}_{m\times 2}\in\mathcal{SR}\implies\mathcal{L}(J(c))\in \mathcal{SR}.\]
By Propositions 3.4 and 3.11, \(\mathcal{L}(J(c))\) can be either of the following:
\[\begin{pmatrix}y_{11}&cy_{12}\\ y_{21}&y_{22}\\ \vdots&\vdots\\ (1+c)y_{m1}&y_{m2}\end{pmatrix}\text{ or }\begin{pmatrix}cy_{11}&y_{12}\\ y_{21}&y_{22}\\ \vdots&\vdots\\ (1+c)y_{m1}&y_{m2}\end{pmatrix}.\]
For the first case, using the fact that \(\mathcal{L}(J)\) is a rank \(1\) matrix, the following two minors of size \(2\times 2\) of \(Y(c)\) have opposite signs for an appropriate choice of \(c\):
\[\det\begin{pmatrix}y_{11}&cy_{12}\\ y_{21}&y_{22}\end{pmatrix}>0\text{ and }\det\begin{pmatrix}y_{21}&y_{22}\\ (1+c)y_{m1}&y_{m2}\end{pmatrix}<0.\]
For the other case, we can choose \(c\) large enough to obtain two minors of size \(2\times 2\) of \(Y(c)\) having opposite signs. Hence, \(S_{11}=\{E_{m1}\}\) and \(S_{mn}=\{E_{mn}\}\) cannot hold simultaneously when \(n=2\). Now the proof for \(m>n\geq 3\) follows similarly to the proof of Proposition 3.7.
A similar argument shows that (ii) holds.
**Remark 3.13**.: In Proposition 3.12(i), we can assume without loss of generality that
\[S_{11}=\{E_{11}\}\text{ and }S_{mn}=\{E_{mn}\} \tag{3.7}\]
since \(\mathcal{L}\in P(\mathcal{SR}_{2})\) if and only if \(\mathcal{L}\) composed with the maps \(A\mapsto AP_{n}\) and \(A\mapsto P_{m}A\) is a linear \(\mathcal{SR}_{2}\)-preserver. Now, using Proposition 3.4 and (3.7) in Proposition 3.12(ii), we have either
\[S_{11}=\{E_{11}\},\;S_{mn}=\{E_{mn}\},\;S_{1n}=\{E_{1n}\},\;\text{and}\;S_{m1 }=\{E_{m1}\}\text{ or }\]
\[S_{11}=\{E_{11}\},\;S_{mn}=\{E_{mn}\},\;S_{1n}=\{E_{m1}\},\;\text{and}\;S_{m1 }=\{E_{1n}\}.\]
Next, we show that
\[S_{11}=\{E_{11}\},\;S_{mn}=\{E_{mn}\},\;S_{1n}=\{E_{m1}\},\;\text{and}\;S_{m1 }=\{E_{1n}\}\]
does not hold. To the contrary, suppose it holds. Let \(J(c)\) be the matrix obtained by multiplying the first row of \(J=J_{m\times n}\) by \(c>0\). Then \(J(c)\in\mathcal{SR}_{2}\) and so \(\mathcal{L}(J(c)):=Y(c)\in\mathcal{SR}_{2}\). Since \(m>n\), even if \(\mathcal{L}\) maps all elements of the first row of \(J(c)\) into the first column of \(Y(c)\), there exists \(1<j<m\) such that the \((j,1)\) position of \(Y(c)\) is not occupied by the image of any element from the first row of \(J(c)\). Now consider the following two minors of size \(2\times 2\) of \(Y(c)\) included in
* rows \(1,j\) and columns \(1,n\): \(\det\begin{pmatrix}cy_{11}&y_{1n}\\ y_{j1}&\alpha y_{jn}\end{pmatrix}\), and
* rows \(j,m\) and columns \(1,n\): \(\det\begin{pmatrix}y_{j1}&\alpha y_{jn}\\ cy_{m1}&y_{mn}\end{pmatrix}\),
where either \(\alpha=c\) or \(\alpha=1\). It is always possible to choose \(c\) large enough such that the above two minors of \(Y(c)\) are of opposite signs, which is a contradiction. Thus, we have
\[S_{11}=\{E_{11}\},\;S_{mn}=\{E_{mn}\},\;S_{1n}=\{E_{1n}\},\;\text{and}\;S_{m1 }=\{E_{m1}\}. \tag{3.8}\]
**Proposition 3.14**.: _Let \(\mathcal{L}\in P(\mathcal{SR}_{2})\) such that (3.8) holds._
* _Then_ \(\mathcal{L}\) _must map the first (and last) row and column of its arguments entirely to the first (and last) row and column, respectively._
* _Moreover,_ \(\mathcal{L}\) _must map all rows and columns of its arguments entirely to some row and column, respectively._
Proof.: We begin by showing (i). We will first show that \(\mathcal{L}\) must map the first (and last) row of its arguments entirely to the first (and last) row. For \(n=2\), this follows directly from (3.8). For \(m>n\geq 3\), the proof is similar to that of Proposition 3.9.
Our next aim is to prove that \(\mathcal{L}\) maps the entire first column of its arguments to the first column. Let \(J(c)\) be the matrix obtained by multiplying the first column of \(J=J_{m\times n}\) by \(c>0\). Assume to the contrary that there exists \(k\) where \(1<k<m\) such that the \((k,1)\) position of the matrix \(Y(c)\) is not occupied by the image of any element from the first column of \(J(c)\). Since \(\mathcal{L}\) maps the entire first and last row of its arguments to the first and the last row, respectively, the following two minors of size \(2\times 2\) of \(Y(c)\) give us a contradiction:
* rows \(1,k\) and columns \(1,n\): \(\det\begin{pmatrix}cy_{11}&y_{1n}\\ y_{k1}&\alpha y_{kn}\end{pmatrix}\), and
* rows \(k,m\) and columns \(1,n\): \(\det\begin{pmatrix}y_{k1}&\alpha y_{kn}\\ cy_{m1}&y_{mn}\end{pmatrix}\),
where either \(\alpha=c\) or \(\alpha=1\).
We can similarly prove the corresponding assertion for the last column. This shows (i). Now we will show (ii). The proof is similar to Proposition 3.9(ii), except for the case \(n=2\). By part (i), the claim for columns holds trivially. Next we claim that \(\mathcal{L}\) maps every row of its argument to some row. Let \(J(c)\) be the \(m\times 2\) matrix obtained by multiplying the \(k^{th}\) row of \(J=J_{m\times 2}\) by \(c>0\), where \(1<k<m\). By part (i), \(\mathcal{L}\) maps the entire first (and last) column of its arguments to the first (and last) column, and hence the \((k,1)\) element of \(J(c)\) will be mapped to the first column; let us say \(\mathcal{L}\) maps it to the \((p,1)\) position of \(Y(c)\). To the contrary, assume that \(\mathcal{L}\) does not map the entire \(k^{th}\) row of its argument to some row; then the \((p,2)\) position of \(Y(c)\) is not occupied by the image of the element of \(J(c)\) in the \((k,2)\) position. Using part (i), we obtain the following two minors of size \(2\times 2\) of \(Y(c)\) having opposite signs for an appropriate choice of \(c\), included in
* rows \(1,p\) and columns \(1,2\): \(\det\begin{pmatrix}y_{11}&y_{12}\\ cy_{p1}&y_{p2}\end{pmatrix}<0\), and
* rows \(p,m\) and columns \(1,2\): \(\det\begin{pmatrix}cy_{p1}&y_{p2}\\ y_{m1}&y_{m2}\end{pmatrix}>0\).
This completes the proof.
So far, we have used the transformations \(A\mapsto-A\), \(A\mapsto P_{m}A\), and \(A\mapsto AP_{n}\) to assume that \(\mathcal{L}\in P(\mathcal{SR}_{2})\) has the following properties:
1. \(\epsilon_{1}(\mathcal{L}(E_{ij}))=1\) for all \(1\leq i\leq m\), \(1\leq j\leq n\).
2. \(S_{11}=\{E_{11}\},\;S_{mn}=\{E_{mn}\},\;S_{1n}=\{E_{1n}\},\;\text{and}\;S_{m1 }=\{E_{m1}\}\).
3. \(\mathcal{L}\) maps the entire first (and last) row and column of its arguments to the first (and last) row and column, respectively.
4. \(\mathcal{L}\) maps every other row and column to some row and column, respectively.
With the above analysis in hand, we now complete the proof by induction on the sizes of the matrices. We first apply induction on \(m>2\) and show that the result holds for all \(m\times 2\) matrices. For the base case, let \(m=3\) and let \(\mathcal{L}:\mathbb{R}^{3\times 2}\to\mathbb{R}^{3\times 2}\) be a linear map such that \(\mathcal{L}(\mathcal{SR}_{2})=\mathcal{SR}_{2}\).
Let \(\mathfrak{B}=\{E_{11},E_{22},E_{12},E_{21},E_{31},E_{32}\}\) be an ordered basis of \(\mathbb{R}^{3\times 2}\). We have \(S_{ij}=\{E_{ij}\}\) for all \(1\leq i\leq 3\), \(1\leq j\leq 2\) because of Proposition 3.14. Thus, \(\mathcal{L}(E_{11})=l_{1}E_{11},\mathcal{L}(E_{22})=l_{2}E_{22},\ldots, \mathcal{L}(E_{32})=l_{6}E_{32}\in\mathcal{SR}_{2}\), where \(l_{i}>0\) for all \(1\leq i\leq 6\) by Remark 3.3 and Proposition 3.4. Let
\[J=\begin{pmatrix}1&1\\ 1&1\\ 1&1\end{pmatrix}_{3\times 2},\quad\text{and we claim that}\quad\mathcal{L}(J)=\begin{pmatrix}l_{1}&l_{3}\\ l_{4}&l_{2}\\ l_{5}&l_{6}\end{pmatrix}\text{ is a rank }1\text{ matrix.}\]
Indeed, if rank of \(\mathcal{L}(J)\) is \(2\), then it has at least one non-zero minor of size \(2\times 2\) and further all of them are of the same sign, say non-negative without loss of generality.
For \(c>1\), let
\[J(c):=\begin{pmatrix}1&c\\ 1&1\\ 1&1\end{pmatrix}\in\mathcal{SR}_{2}\implies\mathcal{L}(J(c))=\begin{pmatrix}l_{1}&cl_{3}\\ l_{4}&l_{2}\\ l_{5}&l_{6}\end{pmatrix}\in\mathcal{SR}_{2}.\]
But we can choose \(c\) sufficiently large such that \(l_{1}l_{2}-cl_{3}l_{4}<0\). Since the minor \(l_{4}l_{6}-l_{2}l_{5}\) of \(\mathcal{L}(J(c))\) does not involve \(c\) and was assumed non-negative, we must have \(l_{4}l_{6}-l_{2}l_{5}=0\).
Again for \(c^{\prime}>1\), let
\[J(c^{\prime}):=\begin{pmatrix}1&1\\ 1&1\\ c^{\prime}&1\end{pmatrix}\in\mathcal{SR}_{2}\implies\mathcal{L}(J(c^{\prime}))= \begin{pmatrix}l_{1}&l_{3}\\ l_{4}&l_{2}\\ c^{\prime}l_{5}&l_{6}\end{pmatrix}\in\mathcal{SR}_{2}.\]
As above, choose \(c^{\prime}\) sufficiently large such that \(l_{4}l_{6}-c^{\prime}l_{2}l_{5}<0.\) Thus, we have \(l_{1}l_{2}-l_{3}l_{4}=0\). So \(l_{1}l_{6}-l_{3}l_{5}=0\) and hence \(\mathcal{L}(J)\) is a rank \(1\) matrix.
Let \(B=\begin{pmatrix}b_{11}&b_{12}\\ b_{21}&b_{22}\\ b_{31}&b_{32}\end{pmatrix}\in\mathcal{SR}_{2}.\) Then \(\mathcal{L}(B)=\begin{pmatrix}l_{1}b_{11}&l_{3}b_{12}\\ l_{4}b_{21}&l_{2}b_{22}\\ l_{5}b_{31}&l_{6}b_{32}\end{pmatrix}\in\mathcal{SR}_{2}.\) Since \(\mathcal{L}(J)\) is a rank \(1\) matrix, we can write \(\mathcal{L}(B)\) as
\[\mathcal{L}(B)=\begin{pmatrix}l_{1}&0&0\\ 0&l_{4}&0\\ 0&0&l_{5}\end{pmatrix}\begin{pmatrix}b_{11}&b_{12}\\ b_{21}&b_{22}\\ b_{31}&b_{32}\end{pmatrix}\begin{pmatrix}1&0\\ 0&l_{3}/l_{1}\end{pmatrix}\]
which is a positive diagonal equivalence. This completes the base case.
Strategy for applying induction: We first prove the result for the space of \(m\times 2\) matrices, assuming it holds for \((m-1)\times 2\) matrices, where \(m>3\). After this, we induct on \(n\): for fixed \(m\), we prove the result for \(m\times n\) matrices assuming it holds for \(m\times(n-1)\) matrices, where \(m>n\).
Let \(A\in\mathcal{SR}_{2}\) have size \(m\times 2\). By Proposition 3.14(i), the submatrix of \(A\) formed by the first \((m-1)\) rows and both columns must be transformed to the first \((m-1)\) rows of \(\mathcal{L}(A)\). Since every SR\({}_{2}\) matrix \(\widehat{A}\) of size \((m-1)\times 2\) is a submatrix of the SR\({}_{2}\) matrix \((\widehat{A}^{T}|\mathbf{0})^{T}\in\mathbb{R}^{m\times 2}\), the natural restriction of \(\mathcal{L}\) onto the \((m-1)\times 2\) leading submatrix is a linear \(\mathcal{SR}_{2}\)-preserver on \(\mathbb{R}^{(m-1)\times 2}\). By the induction hypothesis, the restriction of \(\mathcal{L}\) is a composition of one or more of the following maps: (i) \(X\mapsto-X\), (ii) \(X\mapsto XP_{2}\), (iii) \(X\mapsto P_{m-1}X\), and (iv) \(X\mapsto FXE\). In fact, it is a positive diagonal equivalence since the first row and column are transformed to the first row and column, respectively with \(\epsilon_{1}(\mathcal{L}(E_{ij}))=1\) for all \(i,j\). Thus
\[S_{ij}=\{E_{ij}\}\;\;\text{for}\;\;\text{all}\;1\leq i\leq m-1,\;1\leq j\leq 2.\]
By Proposition 3.14(i), \(\mathcal{L}\) maps the first (and last) column of its arguments entirely to the first (and last) column and hence
\[S_{ij}=\{E_{ij}\}\;\;\text{for}\;\;\text{all}\;1\leq i\leq m,\;1\leq j\leq 2. \tag{3.9}\]
Since we may compose the inverse positive diagonal equivalence relative to the upper left submatrix of size \((m-1)\times 2\) with \(\mathcal{L}\), we may assume without loss of generality, that
\[\mathcal{L}(A)\begin{pmatrix}1,\cdots,m-1\\ 1,2\end{pmatrix}=A\begin{pmatrix}1,\cdots,m-1\\ 1,2\end{pmatrix}.\]
Using (3.9), we have
\[\mathcal{L}(E_{mi})=k_{i}E_{mi}\;\;\text{for}\;\;1\leq i\leq 2,\]
for some positive scalars \(k_{1}\) and \(k_{2}\). We next claim that \(k_{1}=k_{2}.\) Consider the following rank \(1\) matrix
\[J=\begin{pmatrix}1&1\\ \vdots&\vdots\\ 1&1\\ 1&1\end{pmatrix}_{m\times 2}\in\mathcal{SR}_{2}\implies\mathcal{L}(J)= \begin{pmatrix}1&1\\ \vdots&\vdots\\ 1&1\\ k_{1}&k_{2}\end{pmatrix}\in\mathcal{SR}_{2}.\]
We can show \(\mathcal{L}(J)\) is a rank \(1\) matrix, similar to the case when \(m=3\) and \(n=2\). Hence all \(2\times 2\) minors of \(\mathcal{L}(J)\) are zero, and thus \(k_{1}=k_{2}.\) Thus \(\mathcal{L}\) is a positive diagonal equivalence. This completes the induction step for \(m\).
Next, fix arbitrary \(m\) and suppose the assertion holds for linear \(\mathcal{SR}_{2}\)-preservers from \(\mathbb{R}^{m\times(n-1)}\) to \(\mathbb{R}^{m\times(n-1)},\) where \(m>n\). Let \(\mathcal{L}:\mathbb{R}^{m\times n}\to\mathbb{R}^{m\times n}\) such that \(\mathcal{L}\in P(\mathcal{SR}_{2})\). For \(A\in\mathcal{SR}_{2}\) of size \(m\times n\), the submatrix of \(A\) formed by all rows and first \((n-1)\) columns must be transformed to the first \((n-1)\) columns of \(\mathcal{L}(A)\) because of Proposition 3.14(i). Since every \(\mathrm{SR}_{2}\) matrix \(\widehat{A}\) of size \(m\times(n-1)\) is a submatrix of the \(\mathrm{SR}_{2}\) matrix \((\widehat{A}|\mathbf{0})\in\mathbb{R}^{m\times n}\), the natural restriction of \(\mathcal{L}\) onto the \(m\times(n-1)\) left submatrix is a linear \(\mathcal{SR}_{2}\)-preserver on \(\mathbb{R}^{m\times(n-1)}\). By the induction hypothesis, it is a composition of one or more of the following maps: (i) \(X\mapsto-X\), (ii) \(X\mapsto XP_{n-1}\), (iii) \(X\mapsto P_{m}X\), and (iv) \(X\mapsto FXE\). By the same arguments in the preceding part, we have
\[S_{ij}=\{E_{ij}\}\;\;\text{for}\;\;\text{all}\;1\leq i\leq m,\;1\leq j\leq n-1.\]
By Proposition 3.14, \(\mathcal{L}\) maps each row of its argument entirely to some row and hence
\[S_{ij}=\{E_{ij}\}\;\;\text{for}\;\;\text{all}\;1\leq i\leq m,\;1\leq j\leq n. \tag{3.10}\]
Since we may compose the inverse positive diagonal equivalence relative to the upper left submatrix of size \(m\times(n-1)\) with \(\mathcal{L}\), we may assume without loss of generality, that
\[\mathcal{L}(A)\begin{pmatrix}1,\ldots,m\\ 1,\ldots,n-1\end{pmatrix}=A\begin{pmatrix}1,\ldots,m\\ 1,\ldots,n-1\end{pmatrix}.\]
Using (3.10), we have
\[\mathcal{L}(E_{in})=c_{i}E_{in}\;\;\text{for}\;\;1\leq i\leq m,\;\text{for some positive scalar}\;c_{i}.\]
Now, to complete the proof, we must show that \(c_{1}=\cdots=c_{m}.\)
Consider the following rank \(1\) matrix
\[J=\begin{pmatrix}1&\ldots&1&1\\ 1&\ldots&1&1\\ \vdots&\ddots&\vdots&\vdots\\ 1&\ldots&1&1\end{pmatrix}_{m\times n}\in\mathcal{SR}\implies\mathcal{L}(J)= \begin{pmatrix}1&\ldots&1&c_{1}\\ 1&\ldots&1&c_{2}\\ \vdots&\ddots&\vdots&\vdots\\ 1&\ldots&1&c_{m}\end{pmatrix}\in\mathcal{SR}.\]
We can show that \(\mathcal{L}(J)\) is a rank \(1\) matrix similar to the case when \(m=n\geq 3\). Hence, all \(2\times 2\) minors of \(\mathcal{L}(J)\) are zero. Therefore, we have \(c_{1}=\cdots=c_{m}.\) Thus, \(\mathcal{L}\) maps \(A\) to \(FAE\) for some positive diagonal matrices \(F\) and \(E\). This concludes the induction step and the proof.
**Remark 3.15**.: Theorem C also holds for the class of SSR matrices. Since the five linear transformations specified in Theorem C map \(\mathcal{SSR}\) onto \(\mathcal{SSR}\), we have \(P(\mathcal{SR})\subseteq P(\mathcal{SSR})\). By Theorem 2.3, \(\mathcal{SR}=\overline{\mathcal{SSR}}\), and by Lemma 3.1, it follows that \(P(\mathcal{SSR})\subseteq P(\mathcal{SR})\). Therefore, \(P(\mathcal{SSR})=P(\mathcal{SR})\).
## 4. Theorem D: Linear preserver problem for sign regularity with given sign pattern
In the final section of this article, we show Theorem D, i.e., we characterize all linear maps preserving sign regularity with a given sign pattern. Again, that \((1)\implies(2)\) is immediate, while \((3)\implies(1)\) is a straightforward verification. To show \((2)\implies(3)\), we must handle the cases
\(m=n\) and \(m\neq n\) separately, since \(A\mapsto A^{T}\) does not map \(m\times n\) matrices to \(m\times n\) matrices. First, we will prove certain propositions which will be used in proving Theorem D in both cases.
Let \(\mathcal{L}:\mathbb{R}^{m\times n}\to\mathbb{R}^{m\times n}\) for \(m,n\geq 2\) be a linear transformation such that \(\mathcal{L}\) maps \(\mathcal{SR}_{2}(\epsilon)\) onto itself. If \(m\neq n\), then we assume without loss of generality that \(m>n\). The case \(m<n\) follows similarly. Also, assume without loss of generality that \(\epsilon_{1}>0.\) For \(\epsilon_{1}<0\), one can take the negatives of the basis elements below and proceed similarly.
Note that \(E_{ij}\in\mathcal{SR}_{2}(\epsilon)\) for all \(1\leq i\leq m\), \(1\leq j\leq n\); thus \(\mathcal{L}^{-1}\) exists and further \(\mathcal{L}^{-1}\in P(\mathcal{SR}_{2}(\epsilon))\). Let \(\mathfrak{B}=\{E_{11},E_{22},\ldots,E_{nn};E_{12},\ldots,E_{mn}\}\) be an ordered basis of \(\mathbb{R}^{m\times n}\) and \(L\) be the matrix which represents the linear transformation \(\mathcal{L}\) with respect to this basis.
**Proposition 4.1**.: \(L\) _is a monomial matrix._
Proof.: Since \(\mathcal{L},\mathcal{L}^{-1}\in P(\mathcal{SR}_{2}(\epsilon))\) and \(\epsilon_{1}>0\), the matrices \(L\) and \(L^{-1}\) are entrywise non-negative; a non-negative matrix whose inverse is also non-negative is necessarily a monomial matrix.
Note that Remark 3.5 holds and all the entries of \(\mathcal{L}(J)=Y\) are non-zero.
**Proposition 4.2**.: _For \(\mathcal{L}\in P(\mathcal{SR}_{2}(\epsilon))\) with \(\epsilon=(\epsilon_{1},\epsilon_{2})\), the following pairwise combinations are possible._
1. \(S_{11}=\{E_{11}\}\) and \(S_{mn}=\{E_{mn}\},\) or \(S_{11}=\{E_{mn}\}\) and \(S_{mn}=\{E_{11}\}.\)
2. \(S_{1n}=\{E_{1n}\}\) and \(S_{m1}=\{E_{m1}\},\) or \(S_{1n}=\{E_{m1}\}\) and \(S_{m1}=\{E_{1n}\}.\)
Proof.: Let us begin by proving (i). To the contrary suppose that
\[S_{11}\neq\{E_{11}\}\;\text{ and }\;S_{11}\neq\{E_{mn}\}.\]
Let \(J(c)\) be the matrix obtained by multiplying the \((1,1)\) entry of \(J=J_{m\times n}\) by \(c>0\). Then \(J(c)\in\mathcal{SR}_{2}(\epsilon)\) with \(\epsilon_{1}=\epsilon_{2}=1\) for \(c>1\), and with \(\epsilon_{1}=1\), \(\epsilon_{2}=-1\) for \(0<c<1\). Thus, \(\mathcal{L}(J(c)):=Y(c)\in\mathcal{SR}_{2}(\epsilon)\). Now, consider the following cases.
If \(S_{11}=\{E_{1k}\}\) where \(k\neq 1\), then an appropriate choice of \(c>0\) gives us
\[\epsilon_{2}\det\begin{pmatrix}y_{11}&cy_{1k}\\ y_{m1}&y_{mk}\end{pmatrix}<0,\text{ a contradiction.}\]
Similarly, we can show that \(S_{11}\neq\{E_{k1}\}\) for \(k\neq 1\), \(S_{11}\neq\{E_{mk}\}\) for \(k\neq n\), and \(S_{11}\neq\{E_{kn}\}\) for \(k\neq m\).
If \(S_{11}=\{E_{ij}\}\) where \((i,j)\neq(1,1)\) and \((m,n)\), then an appropriate choice of \(c>0\) gives us
\[\epsilon_{2}\det\begin{pmatrix}y_{i1}&cy_{ij}\\ y_{m1}&y_{mj}\end{pmatrix}<0,\text{ a contradiction.}\]
Hence, we conclude that either \(S_{11}=\{E_{11}\}\) or \(S_{11}=\{E_{mn}\}.\)
Again, to the contrary, suppose that
\[S_{mn}\neq\{E_{11}\}\;\text{ and }\;S_{mn}\neq\{E_{mn}\}.\]
In this case, let \(J(c)\) be the matrix obtained by multiplying the \((m,n)\) entry of \(J=J_{m\times n}\) by \(c>0\). Proceeding as in the previous case shows that our assumption is false. Hence,
\[S_{mn}=\{E_{11}\}\,\text{ or }\,S_{mn}=\{E_{mn}\}.\]
By Proposition 4.1, for any \(\epsilon=(\epsilon_{1},\epsilon_{2})\), we have either
\[S_{11}=\{E_{11}\}\,\text{ and }\,S_{mn}=\{E_{mn}\},\,\text{ or }\,S_{11}=\{E_{mn}\}\,\text{ and }\,S_{mn}=\{E_{11}\}.\]
The argument from the first half of this proof can be adapted to show that (ii) holds.
**Proposition 4.3**.: _Let \(\mathcal{L}\in P(\mathcal{SR}_{2}(\epsilon))\) with \(\epsilon=(\epsilon_{1},\epsilon_{2})\) such that_
\[S_{11}=\{E_{11}\},\;S_{mn}=\{E_{mn}\},\;S_{1n}=\{E_{1n}\}\,\text{ and }\,S_{m1}=\{E_{m1}\}. \tag{4.1}\]
1. _Then_ \(\mathcal{L}\) _must map the first (and last) row and column of its arguments entirely to the first (and last) row and column, respectively._
2. _Moreover,_ \(\mathcal{L}\) _must map all rows and columns of its arguments entirely to some row and column, respectively._
Proof.: First, we show that \(\mathcal{L}\) must map the first row of its arguments entirely to the first row. Note this holds trivially for \(n=2\) by (4.1). Therefore, let \(n\geq 3\). Let \(J(c)\) be the matrix obtained by multiplying the first row of \(J=J_{m\times n}\) by \(c>0\). Then
\[J(c)\in\mathcal{SR}_{2}(\epsilon)\;\;\text{with}\;\;\epsilon_{1}=1\quad \implies\quad\mathcal{L}(J(c)):=Y(c)\in\mathcal{SR}_{2}(\epsilon)\;\;\text{ with}\;\;\epsilon_{1}=1.\]
Assume that \(\mathcal{L}\) does not map the first row of its arguments entirely to the first row. Thus, there exists \(1<p<n\) such that the \((1,p)\) position of the matrix \(Y(c)\) is not occupied by the image of any element from the first row of \(J(c)\). Using (4.1) and an appropriate choice of \(c>0\) gives us
\[\epsilon_{2}\det\begin{pmatrix}cy_{11}&y_{1p}\\ y_{m1}&\alpha y_{mp}\end{pmatrix}=\epsilon_{2}(c\alpha y_{11}y_{mp}-y_{1p}y_{m 1})<0,\]
where either \(\alpha=c\) or \(\alpha=1\), a contradiction. Similarly, we can prove the other cases by multiplying the corresponding row or column of \(J=J_{m\times n}\) by \(c>0\). This shows part (i).
To prove part (ii), we will first show that \(\mathcal{L}\) must map the \(k^{th}\) row of its arguments entirely to some row, where \(1<k<m\). Let \(J(c)\) be the matrix obtained by multiplying the \(k^{th}\) row of \(J=J_{m\times n}\) by \(c>0\). By Proposition 4.3(i), \(\mathcal{L}\) maps the \((k,1)\) element of \(J(c)\) to the first column; let us say \(\mathcal{L}\) maps it to the \((s,1)\) position of \(Y(c)\), where \(1<s<m\). If \(\mathcal{L}\) does not map the entire \(k^{th}\) row of \(J(c)\) to the \(s^{th}\) row of \(Y(c)\), then there exists \(1<j\leq n\) such that the \((s,j)\) position of the matrix \(Y(c)\) is not occupied by the image of any element from the \(k^{th}\) row of \(J(c)\). Using Proposition 4.3(i) and an appropriate choice of \(c>0\) gives us
\[\epsilon_{2}\det\begin{pmatrix}cy_{s1}&y_{sj}\\ y_{m1}&y_{mj}\end{pmatrix}<0,\;\;\text{a contradiction.}\]
Similarly, we can prove this for columns.
Now, we will use the above proposition for \(m=n\geq 2\) to prove Theorem D.
Proof of Theorem D for \(m=n\).: To prove \((2)\implies(3)\), let \(n\geq 2\) and \(\mathcal{L}:\mathbb{R}^{n\times n}\to\mathbb{R}^{n\times n}\) be a linear transformation such that \(\mathcal{L}\) maps \(\mathcal{SR}_{2}(\epsilon)\) onto itself. By Proposition 4.2, we can assume without loss of generality, that
\[S_{11}=\{E_{11}\},\;S_{nn}=\{E_{nn}\},\;S_{1n}=\{E_{1n}\},\;\;\text{and}\;\;S _{n1}=\{E_{n1}\} \tag{4.2}\]
since \(\mathcal{L}\in P(\mathcal{SR}_{2}(\epsilon))\) if and only if \(\mathcal{L}\) composed with the maps \(A\mapsto P_{n}AP_{n}\) and \(A\mapsto A^{T}\) is a linear \(\operatorname{SR}_{2}(\epsilon)\)-preserver. Hence, Proposition 4.3 holds.
We now complete the proof by induction on \(n\) with the base case \(n=2\). Let \(\mathcal{L}:\mathbb{R}^{2\times 2}\to\mathbb{R}^{2\times 2}\) be a linear map such that \(\mathcal{L}(\mathcal{SR}_{2}(\epsilon))=\mathcal{SR}_{2}(\epsilon)\) and let \(\mathfrak{B}=\{E_{11},E_{22},E_{12},E_{21}\}\) be an ordered basis of \(\mathbb{R}^{2\times 2}\). By Propositions 4.1 and 4.3, we have \(S_{ij}=\{E_{ij}\}\) for all \(1\leq i,j\leq 2\). Thus, \(\mathcal{L}(E_{11})=l_{1}E_{11},\ldots,\mathcal{L}(E_{21})=l_{4}E_{21}\in \mathcal{SR}_{2}(\epsilon)\) with \(\epsilon_{1}=1\). By Proposition 4.1, \(l_{i}>0\) for all \(1\leq i\leq 4\).
Again, since \(J=\begin{pmatrix}1&1\\ 1&1\end{pmatrix}_{2\times 2}\in\mathcal{SR}_{2}(\epsilon)\;\;\text{with}\;\;\epsilon_{1}=1\), we have
\[\mathcal{L}(J)=\begin{pmatrix}l_{1}&l_{3}\\ l_{4}&l_{2}\end{pmatrix}\in\mathcal{SR}_{2}(\epsilon)\;\;\text{and}\;\; \mathcal{L}^{-1}(J)=\begin{pmatrix}\frac{1}{l_{1}}&\frac{1}{l_{3}}\\ \frac{1}{l_{4}}&\frac{1}{l_{2}}\end{pmatrix}\in\mathcal{SR}_{2}(\epsilon)\]
and hence, comparing the signs \(\epsilon_{2}(l_{1}l_{2}-l_{3}l_{4})\geq 0\) and \(\epsilon_{2}\big(\tfrac{1}{l_{1}l_{2}}-\tfrac{1}{l_{3}l_{4}}\big)\geq 0\) of the corresponding determinants,
\[l_{1}l_{2}-l_{3}l_{4}=0. \tag{4.3}\]
Now, let
\[B=\begin{pmatrix}b_{11}&b_{12}\\ b_{21}&b_{22}\end{pmatrix}\in\mathcal{SR}_{2}(\epsilon)\implies\mathcal{L}(B)= \begin{pmatrix}l_{1}b_{11}&l_{3}b_{12}\\ l_{4}b_{21}&l_{2}b_{22}\end{pmatrix}\in\mathcal{SR}_{2}(\epsilon).\]
Using (4.3), we can write \(\mathcal{L}(B)\) as
\[\mathcal{L}(B)=\begin{pmatrix}l_{1}&0\\ 0&l_{4}\end{pmatrix}\begin{pmatrix}b_{11}&b_{12}\\ b_{21}&b_{22}\end{pmatrix}\begin{pmatrix}1&0\\ 0&l_{3}/l_{1}\end{pmatrix}\]
which is a positive diagonal equivalence. This completes the proof for \(n=2\).
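As a quick symbolic cross-check of the factorization just used (a sketch assuming sympy; not part of the proof): once \(l_{1}l_{2}=l_{3}l_{4}\), the entrywise scaling of \(B\) is exactly the stated positive diagonal equivalence.

```python
import sympy as sp

l1, l3, l4, b11, b12, b21, b22 = sp.symbols('l1 l3 l4 b11 b12 b21 b22', positive=True)
l2 = l3 * l4 / l1                      # impose relation (4.3)

B = sp.Matrix([[b11, b12], [b21, b22]])
scaled = sp.Matrix([[l1 * b11, l3 * b12], [l4 * b21, l2 * b22]])
factored = sp.diag(l1, l4) * B * sp.diag(1, l3 / l1)
print(sp.simplify(scaled - factored))  # zero matrix
```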
For the induction step, let \(\mathcal{L}:\mathbb{R}^{n\times n}\to\mathbb{R}^{n\times n}\) be a linear \(\mathcal{SR}_{2}(\epsilon)\)-preserver with \(n>2\). Let \(A\in\mathcal{SR}_{2}(\epsilon)\). By Proposition 4.3(i), the leading principal submatrix of \(A\) of size \((n-1)\times(n-1)\) must be transformed to the leading principal submatrix of size \((n-1)\times(n-1)\) of \(\mathcal{L}(A)\). Since every \(\mathrm{SR}_{2}(\epsilon)\) matrix \(\widehat{A}\) of size \((n-1)\times(n-1)\) is a leading principal submatrix of the \(\mathrm{SR}_{2}(\epsilon)\) matrix \(\widehat{A}\oplus\{0\}\) of size \(n\times n\), the natural restriction of \(\mathcal{L}\) onto the \((n-1)\times(n-1)\) leading principal submatrix is a linear \(\mathcal{SR}_{2}(\epsilon)\)-preserver on \(\mathbb{R}^{(n-1)\times(n-1)}\). By the induction hypothesis, it is a composition of one or more of the following maps: (i) \(X\mapsto P_{n-1}XP_{n-1}\), (ii) \(X\mapsto X^{T}\), and (iii) \(X\mapsto FXE\). In fact, it is a positive diagonal equivalence since the first row is transformed to the first row. Thus
\[S_{ij}=\{E_{ij}\}\;\;\text{for all}\;\;1\leq i,j\leq n-1.\]
By Proposition 4.3, \(\mathcal{L}\) maps each row and each column of its arguments entirely to some row and column, respectively, and hence
\[S_{ij}=\{E_{ij}\}\;\;\text{for all}\;\;1\leq i,j\leq n. \tag{4.4}\]
Since we may compose the inverse positive diagonal equivalence relative to the upper left principal submatrix of size \((n-1)\times(n-1)\) with \(\mathcal{L}\), we may assume without loss of generality that
\[\mathcal{L}(A)\begin{pmatrix}1,\ldots,n-1\\ 1,\ldots,n-1\end{pmatrix}=A\begin{pmatrix}1,\ldots,n-1\\ 1,\ldots,n-1\end{pmatrix}.\]
Using (4.4), we have
\[\mathcal{L}(E_{in})=c_{i}E_{in},\;\;\mathcal{L}(E_{ni})=k_{i}E_{ni},\;\;\text {for}\;\;1\leq i\leq n-1,\;\;\text{and}\;\;\mathcal{L}(E_{nn})=dE_{nn}\]
for some positive scalars \(c_{i},k_{i}\), and \(d\). We next claim that
\[c_{1}=\cdots=c_{n-1},\;k_{1}=\cdots=k_{n-1},\;\;\text{and}\;\;d=c_{1}k_{1}. \tag{4.5}\]
Let \(J=J_{n\times n}\). Then
\[\mathcal{L}(J)=\begin{pmatrix}1&\ldots&1&c_{1}\\ \vdots&\ddots&\vdots&\vdots\\ 1&\ldots&1&c_{n-1}\\ k_{1}&\ldots&k_{n-1}&d\end{pmatrix}\in\mathcal{SR}_{2}(\epsilon)\;\;\text{and} \;\;\mathcal{L}^{-1}(J)=\begin{pmatrix}1&\ldots&1&\frac{1}{c_{1}}\\ \vdots&\ddots&\vdots&\vdots\\ 1&\ldots&1&\frac{1}{c_{n-1}}\\ \frac{1}{k_{1}}&\ldots&\frac{1}{k_{n-1}}&\frac{1}{d}\end{pmatrix}\in\mathcal{ SR}_{2}(\epsilon).\]
Thus, comparing the signs of the corresponding \(2\times 2\) minors of \(\mathcal{L}(J)\) and \(\mathcal{L}^{-1}(J)\) as before, (4.5) holds. Hence, \(\mathcal{L}\) maps \(A\) to \(FAE\) for some positive diagonal matrices \(F\) and \(E\). This concludes the induction step and the proof.
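The comparison of minors behind (4.5) can also be verified mechanically for \(n=3\); in the sketch below (sympy, illustrative only), each product of matching \(2\times 2\) minors of \(\mathcal{L}(J)\) and \(\mathcal{L}^{-1}(J)\) simplifies to \(-(\cdot)^{2}/(\text{positive})\leq 0\), while sign regularity of both matrices forces it to be \(\geq 0\), so all minors vanish.

```python
import itertools
import sympy as sp

c1, c2, k1, k2, d = sp.symbols('c1 c2 k1 k2 d', positive=True)
M  = sp.Matrix([[1, 1, c1], [1, 1, c2], [k1, k2, d]])
Mi = M.applyfunc(lambda e: 1 / e)            # entrywise reciprocal, as in L^{-1}(J)

idx = [(list(r), list(s))
       for r in itertools.combinations(range(3), 2)
       for s in itertools.combinations(range(3), 2)]

# each product is 0 or -(difference)**2 / (positive):
print([sp.simplify(M[r, s].det() * Mi[r, s].det()) for r, s in idx])
# hence all minors of M vanish; solving recovers (4.5):
print(sp.solve([M[r, s].det() for r, s in idx], [c2, k2, d], dict=True))
# [{c2: c1, d: c1*k1, k2: k1}]
```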
Next, we prove Theorem D for \(m>n\geq 2\), using the propositions stated at the beginning of this section.
Proof of Theorem D for \(m\neq n\).: To show \((2)\implies(3)\), let \(\mathcal{L}:\mathbb{R}^{m\times n}\to\mathbb{R}^{m\times n}\) where \(m>n\geq 2\) be a linear transformation such that \(\mathcal{L}\) maps \(\mathcal{SR}_{2}(\epsilon)\) onto itself. Since \(\mathcal{L}\) is an \(\mathcal{SR}_{2}(\epsilon)\)-preserver if and only if \(\mathcal{L}\) composed with the map \(A\mapsto P_{m}AP_{n}\) is also one, by Proposition 4.2(i), we can assume without loss of generality that
\[S_{11}=\{E_{11}\}\;\;\text{and}\;\;S_{mn}=\{E_{mn}\}. \tag{4.6}\]
Therefore, from (4.6) and part (ii) of Proposition 4.2, we have either
\[S_{11}=\{E_{11}\},\;S_{mn}=\{E_{mn}\},\;S_{1n}=\{E_{1n}\},\;\;\text{and}\;\;S_{ m1}=\{E_{m1}\}\;\;\text{or}\]
\[S_{11}=\{E_{11}\},\;S_{mn}=\{E_{mn}\},\;S_{1n}=\{E_{m1}\},\;\;\text{and}\;\;S_{ m1}=\{E_{1n}\}.\]
Next, we show that
\[S_{11}=\{E_{11}\},\;S_{mn}=\{E_{mn}\},\;S_{1n}=\{E_{m1}\},\;\;\text{and}\;\;S_{m1 }=\{E_{1n}\}\]
is not possible. Assume to the contrary that it holds. Let \(J(c)\) be the matrix obtained by multiplying the first row of \(J=J_{m\times n}\) by \(c>0\). Then \(J(c)\in\mathcal{SR}_{2}(\epsilon)\) and hence \(\mathcal{L}(J(c)):=Y(c)\in\mathcal{SR}_{2}(\epsilon)\) with \(\epsilon_{1}=1\). Since \(m>n\), even if \(\mathcal{L}\) maps all elements of the first row of \(J(c)\) to the first column of \(Y(c)\), there exists \(1<j<m\) such that the \((j,1)\) position of \(Y(c)\) is not occupied by the image of any element from the first row of \(J(c)\). Therefore, by our assumption and an appropriate choice of \(c>0\),
\[\epsilon_{2}\det\begin{pmatrix}cy_{11}&y_{1n}\\ y_{j1}&\alpha y_{jn}\end{pmatrix}<0,\]
where either \(\alpha=c\) or \(\alpha=1\), a contradiction. Thus
\[S_{11}=\{E_{11}\},\;S_{mn}=\{E_{mn}\},\;S_{1n}=\{E_{1n}\},\;\text{and}\;S_{m1 }=\{E_{m1}\}. \tag{4.7}\]
Hence, Proposition 4.3 holds.
We now complete the proof by induction on the size of the matrices, with the base case \(m=3\) and \(n=2\). Let \(\mathcal{L}:\mathbb{R}^{3\times 2}\to\mathbb{R}^{3\times 2}\) such that \(\mathcal{L}(\mathcal{SR}_{2}(\epsilon))=\mathcal{SR}_{2}(\epsilon)\).
Let \(\mathfrak{B}=\{E_{11},E_{22},E_{12},E_{21},E_{31},E_{32}\}\) be an ordered basis of \(\mathbb{R}^{3\times 2}\). We have \(S_{ij}=\{E_{ij}\}\) for all \(1\leq i\leq 3,\;\;1\leq j\leq 2\) because of Proposition 4.3. Thus, \(\mathcal{L}(E_{11})=l_{1}E_{11},\ldots,\mathcal{L}(E_{32})=l_{6}E_{32}\in \mathcal{SR}_{2}(\epsilon)\) with \(\epsilon_{1}=1\). By Proposition 4.1, \(l_{i}>0\) for all \(1\leq i\leq 6\).
Since \(A_{1}:=\begin{pmatrix}1&1\\ 1&1\\ 0&0\end{pmatrix}\in\mathcal{SR}_{2}(\epsilon)\),
\[\mathcal{L}(A_{1})=\begin{pmatrix}l_{1}&l_{3}\\ l_{4}&l_{2}\\ 0&0\end{pmatrix}\in\mathcal{SR}_{2}(\epsilon)\;\;\text{and}\;\;\mathcal{L}^{-1} (A_{1})=\begin{pmatrix}\frac{1}{l_{1}}&\frac{1}{l_{3}}\\ \frac{1}{l_{4}}&\frac{1}{l_{2}}\\ 0&0\end{pmatrix}\in\mathcal{SR}_{2}(\epsilon)\]
and hence
\[l_{1}l_{2}-l_{3}l_{4}=0. \tag{4.8}\]
Again, \(A_{2}:=\begin{pmatrix}1&1\\ 0&0\\ 1&1\end{pmatrix}\in\mathcal{SR}_{2}(\epsilon)\) and hence
\[\mathcal{L}(A_{2})=\begin{pmatrix}l_{1}&l_{3}\\ 0&0\\ l_{5}&l_{6}\end{pmatrix}\in\mathcal{SR}_{2}(\epsilon)\;\;\text{and}\;\; \mathcal{L}^{-1}(A_{2})=\begin{pmatrix}\frac{1}{l_{1}}&\frac{1}{l_{3}}\\ 0&0\\ \frac{1}{l_{5}}&\frac{1}{l_{6}}\end{pmatrix}\in\mathcal{SR}_{2}(\epsilon).\]
Thus
\[l_{1}l_{6}-l_{3}l_{5}=0. \tag{4.9}\]
Now let
\[B=\begin{pmatrix}b_{11}&b_{12}\\ b_{21}&b_{22}\\ b_{31}&b_{32}\end{pmatrix}\in\mathcal{SR}_{2}(\epsilon)\implies\mathcal{L}(B)= \begin{pmatrix}l_{1}b_{11}&l_{3}b_{12}\\ l_{4}b_{21}&l_{2}b_{22}\\ l_{5}b_{31}&l_{6}b_{32}\end{pmatrix}\in\mathcal{SR}_{2}(\epsilon).\]
Using (4.8) and (4.9), we can write \(\mathcal{L}(B)\) as
\[\mathcal{L}(B)=\begin{pmatrix}l_{1}&0&0\\ 0&l_{4}&0\\ 0&0&l_{5}\end{pmatrix}\begin{pmatrix}b_{11}&b_{12}\\ b_{21}&b_{22}\\ b_{31}&b_{32}\end{pmatrix}\begin{pmatrix}1&0\\ 0&l_{3}/l_{1}\end{pmatrix}\]
which is a positive diagonal equivalence. This completes the proof for the base case.
For the induction step, let \(\mathcal{L}:\mathbb{R}^{m\times 2}\to\mathbb{R}^{m\times 2}\) be a linear map such that \(\mathcal{L}\in P(\mathcal{SR}_{2}(\epsilon))\). Let \(A\in\mathcal{SR}_{2}(\epsilon)\) of size \(m\times 2\). By Proposition 4.3(i), the submatrix of \(A\) formed by the first \((m-1)\)
rows and both columns must be transformed to the first \((m-1)\) rows of \(\mathcal{L}(A)\). Since every \(\mathrm{SR}_{2}(\epsilon)\) matrix \(\widehat{A}\) of size \((m-1)\times 2\) is a submatrix of the \(\mathrm{SR}_{2}(\epsilon)\) matrix \((\widehat{A}^{T}|\mathbf{0})^{T}\in\mathbb{R}^{m\times 2}\), the natural restriction of \(\mathcal{L}\) onto the \((m-1)\times 2\) top submatrix is a linear \(\mathcal{SR}_{2}(\epsilon)\)-preserver on \(\mathbb{R}^{(m-1)\times 2}\). By the induction hypothesis, the restriction of \(\mathcal{L}\) is a composition of one or more of the following maps: (i) \(X\mapsto P_{m-1}XP_{2}\) and (ii) \(X\mapsto FXE\). In fact, it is a positive diagonal equivalence since the first row and column are transformed to the first row and column, respectively. Thus
\[S_{ij}=\{E_{ij}\}\;\;\text{for all}\;1\leq i\leq m-1,\;1\leq j\leq 2.\]
By Proposition 4.3(i), \(\mathcal{L}\) maps the first (and last) column of its arguments entirely to the first (and last) column, and hence
\[S_{ij}=\{E_{ij}\}\;\;\text{for all}\;1\leq i\leq m,\;1\leq j\leq 2. \tag{4.10}\]
Since we may compose the inverse positive diagonal equivalence relative to the upper left submatrix of size \((m-1)\times 2\) with \(\mathcal{L}\), we may assume without loss of generality that
\[\mathcal{L}(A)\begin{pmatrix}1,\ldots,m-1\\ 1,2\end{pmatrix}=A\begin{pmatrix}1,\ldots,m-1\\ 1,2\end{pmatrix}.\]
Using (4.10), we have
\[\mathcal{L}(E_{mi})=k_{i}E_{mi}\;\;\text{for}\;\;1\leq i\leq 2,\]
for some positive scalars \(k_{1}\) and \(k_{2}\). We next claim that \(k_{1}=k_{2}.\) Let \(J=J_{m\times 2}\). Then
\[\mathcal{L}(J)=\begin{pmatrix}1&1\\ \vdots&\vdots\\ 1&1\\ k_{1}&k_{2}\end{pmatrix}\in\mathcal{SR}_{2}(\epsilon)\;\;\text{and}\;\; \mathcal{L}^{-1}(J)=\begin{pmatrix}1&1\\ \vdots&\vdots\\ 1&1\\ \frac{1}{k_{1}}&\frac{1}{k_{2}}\end{pmatrix}\in\mathcal{SR}_{2}(\epsilon).\]
Therefore, we have \(k_{1}=k_{2}\) and hence \(\mathcal{L}\) is a positive diagonal equivalence. This completes the induction step for \(m\) and the result holds for all linear \(\mathcal{SR}_{2}(\epsilon)\)-preservers from \(\mathbb{R}^{m\times 2}\) to \(\mathbb{R}^{m\times 2}\).
Now, fix an arbitrary \(m\) and suppose the claim holds for all linear \(\mathcal{SR}_{2}(\epsilon)\)-preservers from \(\mathbb{R}^{m\times(n-1)}\) to \(\mathbb{R}^{m\times(n-1)}\). Let \(\mathcal{L}:\mathbb{R}^{m\times n}\to\mathbb{R}^{m\times n}\) be a linear \(\mathcal{SR}_{2}(\epsilon)\)-preserver. For \(A\in\mathcal{SR}_{2}(\epsilon)\) of size \(m\times n\), the submatrix of \(A\) formed by all rows and the first \((n-1)\) columns must be transformed to the first \((n-1)\) columns of \(\mathcal{L}(A)\) because of Proposition 4.3(i). Since every \(\mathrm{SR}_{2}(\epsilon)\) matrix \(\widehat{A}\) of size \(m\times(n-1)\) is a submatrix of the \(\mathrm{SR}_{2}(\epsilon)\) matrix \((\widehat{A}|\mathbf{0})\in\mathbb{R}^{m\times n}\), the natural restriction of \(\mathcal{L}\) onto the \(m\times(n-1)\) left submatrix is a linear \(\mathcal{SR}_{2}(\epsilon)\)-preserver on \(\mathbb{R}^{m\times(n-1)}\). By the induction hypothesis, it is a composition of one or more of the following maps: (i) \(X\mapsto P_{m}XP_{n-1}\), and (ii) \(X\mapsto FXE\). By the same argument as in the preceding part, we have
\[S_{ij}=\{E_{ij}\}\;\;\text{for all}\;1\leq i\leq m,\;1\leq j\leq n-1.\]
By Proposition 4.3, \(\mathcal{L}\) maps all rows of its arguments entirely to some row and hence
\[S_{ij}=\{E_{ij}\}\;\;\text{for all}\;1\leq i\leq m,\;1\leq j\leq n. \tag{4.11}\]
Since we may compose the inverse positive diagonal equivalence relative to the upper left submatrix of size \(m\times(n-1)\) with \(\mathcal{L}\), we may assume without loss of generality that
\[\mathcal{L}(A)\begin{pmatrix}1,\ldots,m\\ 1,\ldots,n-1\end{pmatrix}=A\begin{pmatrix}1,\ldots,m\\ 1,\ldots,n-1\end{pmatrix}.\]
Using (4.11), we have
\[\mathcal{L}(E_{in})=c_{i}E_{in}\;\;\text{for some positive scalars}\;c_{i},\;1\leq i\leq m.\]
Now, to complete the proof, we must show that \(c_{1}=\cdots=c_{m}.\) Let \(J=J_{m\times n}\). Then
\[\mathcal{L}(J)=\begin{pmatrix}1&\ldots&1&c_{1}\\ 1&\ldots&1&c_{2}\\ \vdots&\ddots&\vdots&\vdots\\ 1&\ldots&1&c_{m}\end{pmatrix}\in\mathcal{SR}_{2}(\epsilon)\,\,\,\text{and}\,\, \,\mathcal{L}^{-1}(J)=\begin{pmatrix}1&\ldots&1&\frac{1}{c_{1}}\\ 1&\ldots&1&\frac{1}{c_{2}}\\ \vdots&\ddots&\vdots&\vdots\\ 1&\ldots&1&\frac{1}{c_{m}}\end{pmatrix}\in\mathcal{SR}_{2}(\epsilon).\]
Therefore, we have \(c_{1}=\cdots=c_{m}.\) Thus, \(\mathcal{L}\) maps \(A\) to \(FAE\) for some positive diagonal matrices \(F\) and \(E\). This concludes the induction step and the proof.
**Remark 4.4**.: Theorem D holds also for the class of \(\operatorname{SSR}(\epsilon)\) matrices. The proof is verbatim that of Remark 3.15.
## Acknowledgments
We thank Apoorva Khare for a detailed reading of an earlier draft and for providing valuable feedback. The first author was partially supported by INSPIRE Faculty Fellowship research grant DST/INSPIRE/04/2021/002620 (DST, Govt. of India), and IIT Gandhinagar Internal Project: IP/IITGN/MATH/PNC/2223/25.
|
2302.06558 | Boundedness of log Fano pairs with certain K-stability | We prove several boundedness results for log Fano pairs with certain
K-stability. In particular, we prove that K-semistable log Fano pairs of Maeda
type form a log bounded family. We also compute K-semistable domains for some
examples. | Konstantin Loginov, Chuyu Zhou | 2023-02-13T17:57:19Z | http://arxiv.org/abs/2302.06558v1 | # Boundedness of log Fano pairs with certain K-stability
###### Abstract.
We prove several boundedness results for log Fano pairs with certain K-stability. In particular, we prove that K-semistable log Fano pairs of Maeda type in each dimension form a log bounded family. We also compute K-semistable domains for some examples.
Key words and phrases: Log Fano pair, Maeda type, boundedness, K-stability, K-semistable domain.

2010 Mathematics Subject Classification: 14J45.

_Competing interests_: This work started while the authors were enjoying the SinG school in Geometry in Trento (Italy) during January 23-27, 2023, where the hospitality is gratefully acknowledged.
We also consider another set of log pairs with certain K-stability. Fix two positive integers \(d\) and \(k\), a positive number \(v\), and a finite set \(I\) of non-negative rational numbers, we consider the set \(\mathcal{E}:=\mathcal{E}(d,k,v,I)\) of log pairs \((X,\sum_{i=1}^{k}D_{i})\) satisfying the following conditions:
1. \(X\) is a Fano variety of dimension \(d\) and \((-K_{X})^{d}=v\);
2. \(0\leq D_{i}\sim_{\mathbb{Q}}-K_{X}\) for every \(1\leq i\leq k\);
3. the coefficients of \(D_{i}\) are contained in \(I\);
4. there exists \((c_{1},...,c_{k})\in\Delta^{k}\) such that \((X,\sum_{i}c_{i}D_{i})\) is K-semistable, where \(\Delta^{k}:=\{(c_{1},...,c_{k})\ |\ c_{i}\in[0,1)\cap\mathbb{Q}\) and \(0\leq\sum_{i}c_{i}<1\}\).
We confirm the boundedness of \(\mathcal{E}\).
**Theorem 1.3**.: (= Theorem 4.1) _The set \(\mathcal{E}\) is log bounded._
When there is only one component in the boundary, i.e. \(k=1\), the log boundedness of \(\mathcal{E}\) is confirmed in [23, Section 5].
In the last section, we also compute K-semistable domains (see Definition 5.1) for various examples. In particular, we describe the domains for two special classes of pairs as follows.
**Theorem 1.4**.: (Theorem 5.8, Example 5.10) _For \(n\geq 2\), the K-semistable domain for \((\mathbb{P}^{n},Q+L)\) is a polytope generated by the following three K-semistable log pairs_
\[\mathbb{P}^{n},\quad(\mathbb{P}^{n},\frac{n+1}{2n}Q),\quad(\mathbb{P}^{n}, \frac{n}{2(n-1)}Q+\frac{1}{n-1}L).\]
_Here \(Q\) is a smooth quadric hypersurface and \(L\) is a hyperplane such that \((\mathbb{P}^{n},Q+L)\) is log smooth._
**Theorem 1.5**.: (Theorem 5.9, Example 5.11) _For \(n\geq 3\), the K-semistable domain for \((\mathbb{P}^{n},Q+Q^{\prime})\) is a polytope generated by the following four K-semistable log pairs_
\[\mathbb{P}^{n},\quad(\mathbb{P}^{n},\frac{n+1}{2n}Q),\quad(\mathbb{P}^{n}, \frac{n+1}{2n}Q^{\prime}),\quad(\mathbb{P}^{n},\frac{n+1}{2(n-1)}Q+\frac{n+1} {2(n-1)}Q^{\prime}).\]
_Here \(Q,Q^{\prime}\) are smooth quadric hypersurfaces such that \((\mathbb{P}^{n},Q+Q^{\prime})\) is log smooth._
**Acknowledgement.** We are grateful to Julia Schneider for her help on Latex techniques. C. Zhou is supported by the grant of European Research Council (ERC-804334). K. Loginov is supported by Russian Science Foundation under grant 21-71-00112.
## 2. Preliminaries
For the standard definitions of birational geometry, including the concepts of klt, lc, dlt and \(\epsilon\)-lc singularities, we refer to [11, 12].
We say that \((X,\Delta)\) is a _log pair_ if \(X\) is a normal projective variety and \(\Delta\) is an effective \(\mathbb{Q}\)-divisor on \(X\) such that \(K_{X}+\Delta\) is \(\mathbb{Q}\)-Cartier. The log pair \((X,\Delta)\) is called _log Fano_ if it admits lc singularities and \(-(K_{X}+\Delta)\) is ample; if \(\Delta=0\), we just say \(X\) is a _Fano variety_. The log pair \((X,\Delta)\) is called a _log Calabi-Yau pair_ if \(K_{X}+\Delta\sim_{\mathbb{Q}}0\).
### K-stability
Let \((X,\Delta)\) be a log Fano pair. Suppose \(f\colon Y\to X\) is a proper birational morphism between normal varieties and \(E\) is a prime divisor on \(Y\), we say that \(E\) is a prime divisor over \(X\) and we define the following invariant
\[A_{(X,\Delta)}(E):=1+\operatorname{ord}_{E}(K_{Y}-f^{*}(K_{X}+\Delta)),\]
which is called the _log discrepancy_ of \(E\) associated to the log pair \((X,\Delta)\). If \((X,\Delta)\) is a log Fano pair, we define the following invariant
\[S_{(X,\Delta)}(E):=\frac{1}{\operatorname{vol}(-K_{X}-\Delta)}\int_{0}^{ \infty}\operatorname{vol}(-f^{*}(K_{X}+\Delta)-xE)\mathrm{d}x.\]
Denote \(\beta_{(X,\Delta)}(E):=A_{(X,\Delta)}(E)-S_{(X,\Delta)}(E)\). By the works [11, 12], one can define K-stability of a log Fano pair by the beta criterion as follows.

**Definition 2.1**.: Let \((X,\Delta)\) be a klt log Fano pair. We say that \((X,\Delta)\) is _K-semistable_ if \(\beta_{(X,\Delta)}(E)\geq 0\) for any prime divisor \(E\) over \(X\).
**Definition 2.2**.: Let \((X,\Delta)\) be a log Calabi-Yau pair, i.e. \(K_{X}+\Delta\sim_{\mathbb{Q}}0\). We say \((X,\Delta)\) is _K-semistable_ if \((X,\Delta)\) is log canonical (this is equivalent to saying that \(\beta_{(X,\Delta)}(E)\geq 0\) for any prime divisor \(E\) over \(X\) since \(S_{(X,\Delta)}(E)=0\) in this case, see [1, Corollary 9.4], [10]).
**Remark 2.3**.: Let \((X,\Delta)\) be a log Fano pair and \(0\leq D\sim_{\mathbb{Q}}-K_{X}-\Delta\). We will use the following well-known results.
1. For a rational number \(0\leq c<1\), we have \[S_{(X,\Delta+cD)}(E)=(1-c)S_{(X,\Delta)}(E),\] where \(E\) is any prime divisor over \(X\) (see the sketch following this remark).
2. Suppose \((X,\Delta)\) is K-semistable and \(\dim X=d\), then we have (e.g. [1]) \[\alpha(X,\Delta)\geq\frac{1}{d+1}.\] Here \(\alpha(X,\Delta)\) is the alpha-invariant of the log Fano pair \((X,\Delta)\).
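As a quick illustration of item (1) (a sympy sketch, not needed anywhere below): take \(X=\mathbb{P}^{2}\), \(\Delta=0\), \(D\sim_{\mathbb{Q}}-K_{X}\sim 3H\) and \(E=L\) a line, so that \(S_{X}(L)=1\).

```python
import sympy as sp

c, x = sp.symbols('c x', positive=True)
v = 3 * (1 - c)                                    # -K_X - cD ~ v*H on P^2
S = sp.integrate((v - x)**2, (x, 0, v)) / v**2     # S_{(X, cD)}(L)
print(sp.simplify(S))                              # 1 - c = (1 - c) * S_X(L)
```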
### Complements
**Definition 2.4**.: Let \((X,\Delta)\) be a log Fano pair. We say a \(\mathbb{Q}\)-divisor \(D\geq 0\) is a _complement_ of \((X,\Delta)\) if \((X,\Delta+D)\) is log canonical and \(K_{X}+\Delta+D\sim_{\mathbb{Q}}0\); we say \(D\) is an _\(N\)-complement_ for some positive integer \(N\) if \(D\) is a complement and \(N(K_{X}+\Delta+D)\sim 0\).
**Theorem 2.5**.: ([1]) _Let \((X,\Delta)\) be a klt log Fano pair, then there exists a positive integer \(N\) depending only on the dimension of \(X\) and the coefficients of \(\Delta\) such that \((X,\Delta)\) admits an \(N\)-complement._
### Boundedness of Fano varieties
We recall the following results on boundedness of Fano varieties.
**Theorem 2.6**.: ([1, 12]) _Fix a positive integer \(d\) and a positive real number \(\epsilon>0\). Then the following set lies in a bounded family:_
\[\{X\ |\ X\text{ is of dimension }d\text{ and }(X,\Delta)\text{ is }\epsilon\text{-lc log Fano for some }\mathbb{Q}\text{-divisor }\Delta\text{ on }X\}.\]
**Theorem 2.7**.: ([10]) _Fix a positive integer \(d\) and a positive real number \(v>0\). Then the following set lies in a bounded family:_
\[\{X\ |\ X\text{ is a K-semistable Fano variety of dimension }d\text{ with }(-K_{X})^{d}=v\}.\]
## 3. Boundedness I
The goal of this section is to prove Theorem 1.2. We recall the following definition introduced in [11]:
**Definition 3.1**.: A _log Fano pair of Maeda type_ is a pair \((X,\sum_{i=1}^{k}c_{i}D_{i})\) where \((X,\sum_{i=1}^{k}D_{i})\) is a log Fano manifold, \(c_{i}\in[0,1)\cap\mathbb{Q}\), and \(-K_{X}-\sum_{i=1}^{k}c_{i}D_{i}\) is ample. Here by a _log Fano manifold_ we mean a log smooth pair \((X,\sum_{i=1}^{k}D_{i})\), where the \(D_{i}\) are distinct prime divisors on \(X\) and \(-K_{X}-\sum_{i=1}^{k}D_{i}\) is ample.
To give an example, the log pair \((\mathbb{P}^{2},\frac{1}{2}(L_{1}+L_{2}))\), where \(L_{1}\) and \(L_{2}\) are two different lines on \(\mathbb{P}^{2}\), is of Maeda type, while the log pair \((\mathbb{P}^{2},\frac{1}{2}(L+Q))\), where \(L\) is a general line and \(Q\) is a general conic on \(\mathbb{P}^{2}\), is not of Maeda type.
**Definition 3.2**.: Let \(X\) be a projective normal variety and \(L\) a pseudo-effective \(\mathbb{Q}\)-line bundle on \(X\). Let \(D\) be a nonzero effective divisor on \(X\); then the _pseudo-effective threshold_ of \(D\) with respect to \(L\) is defined as
\[\tau(D;L):=\sup\{t\in\mathbb{R}\ |\ L-tD\text{ is pseudo-effective}\}.\]
**Lemma 3.3**.: _Let \((X,D:=\sum_{i=1}^{k}D_{i})\) be a log smooth pair such that \(D_{i}\) are prime divisors on \(X\) and \(-K_{X}-\sum_{i=1}^{k}D_{i}\) is big and nef, then there exists a positive number \(a_{d}\) depending only on the dimension \(d\) such that_
\[\tau(D;-K_{X}-\sum_{i}D_{i})\geq a_{d}.\]
Proof.: Suppose, for contradiction, that there exists a pair \((X,D)\) as in the statement such that
\[\tau(D;-K_{X}-\sum_{i}D_{i})<\frac{1}{d+1}.\]
Set \(M:=-K_{X}-D\). Then \(K_{X}+lM\) is not pseudo-effective for \(1\leq l\leq d+1\): indeed, \(K_{X}+lM=(l-1)M-D\), so \(K_{X}+M=-D\) is not pseudo-effective, and for \(2\leq l\leq d+1\) pseudo-effectivity of \((l-1)M-D\) would give \(\tau(D;M)\geq\frac{1}{l-1}\geq\frac{1}{d}\), contradicting our assumption. This implies the following vanishing for \(1\leq l\leq d+1\)
\[H^{0}(X,K_{X}+lM)=0.\]
On the other hand, by Kawamata-Viehweg vanishing, we have
\[H^{i}(X,K_{X}+lM)=0\]
for any \(i\geq 1\) and \(l\geq 1\). Hence we see
\[\chi(X,K_{X}+lM)=0\]
for \(1\leq l\leq d+1\). By the Hirzebruch-Riemann-Roch formula, the Euler characteristic \(\chi(X,K_{X}+lM)\) is a polynomial in \(l\) of degree \(d\) with leading term \(\frac{M^{d}}{d!}l^{d}\), and \(M^{d}>0\) since \(M\) is big and nef. This leads to a contradiction, as a nonzero polynomial of degree \(d\) cannot have \(d+1\) distinct roots.
We are done by taking \(a_{d}=\frac{1}{d+1}\).
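To illustrate the count in the simplest case (not needed for the proof): take \(X=\mathbb{P}^{2}\) and \(D=L_{1}+L_{2}\) two distinct lines, so that \(M=-K_{X}-D\sim H\). Then \(\chi(X,K_{X}+lM)=\chi(\mathbb{P}^{2},\mathcal{O}(l-3))=\frac{(l-1)(l-2)}{2}\), which vanishes exactly at \(l=1,2\); consistently, \(K_{X}+3M\sim 0\) is pseudo-effective, and indeed \(\tau(D;M)=\frac{1}{2}\geq\frac{1}{3}=a_{2}\).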
**Theorem 3.4**.: _K-semistable log Fano pairs of Maeda type in dimension \(d\) form a log bounded family._
Proof.: Let \((X,\sum_{i=1}^{k}c_{i}D_{i})\) be a K-semistable log pair of Maeda type; we first show that \(X\) belongs to a bounded family. We claim that there exists a positive number \(\epsilon_{d}\) depending only on the dimension \(d\) such that \(1-c_{i}\geq\epsilon_{d}\) for each \(i\). We focus on \(c_{1}\).
By Lemma 3.3, there exists a positive number \(a_{d}\) depending only on the dimension \(d\) such that
\[\tau(D_{1};-K_{X}-\sum_{i}c_{i}D_{i})>\tau(D;-K_{X}-\sum_{i}D_{i})\geq a_{d}.\]
Thus there exists \(0\leq\Delta\sim_{\mathbb{Q}}-K_{X}-\sum_{i}c_{i}D_{i}\) such that \(\operatorname{ord}_{D_{1}}(\Delta)\geq a_{d}\). Let \(\alpha:=\alpha(X,\sum_{i}c_{i}D_{i})\) be the alpha invariant of the log Fano pair \((X,\sum_{i}c_{i}D_{i})\), then the log pair \((X,\sum_{i}c_{i}D_{i}+\frac{\alpha}{2}\Delta)\) is log canonical. Thus we see
\[c_{1}+\frac{\alpha}{2}\cdot a_{d}\leq 1.\]
Since \((X,\sum_{i}c_{i}D_{i})\) is K-semistable, we have \(\alpha\geq\frac{1}{d+1}\) by [1]. Taking \(\epsilon_{d}:=\frac{a_{d}}{2(d+1)}\), we see that
\[1-c_{1}\geq\epsilon_{d}.\]
Similarly, this inequality also holds for other \(c_{i}\), which implies that \((X,\sum_{i}c_{i}D_{i})\) is \(\epsilon_{d}\)-lc. By [1, 1], we see that \(X\) belongs to a bounded family.
To derive the log boundedness, we take a very ample line bundle \(A_{X}\) on \(X\) such that \(A_{X}^{d}\) is upper bounded by \(M(d)\), which only depends on the dimension \(d\). Note that \(-K_{X}A_{X}^{d-1}\) is also upper bounded by a positive number \(N(d)\) which depends only on the dimension \(d\), then we see
\[\sum_{i}D_{i}.A_{X}^{d-1}<-K_{X}A_{X}^{d-1}\leq N(d)\]
since \(-K_{X}-\sum_{i}D_{i}\) is ample. This means that the degree of \(\sum_{i}D_{i}\) is upper bounded. By a standard argument on the Chow scheme, we see that \((X,\sum_{i}D_{i})\) belongs to a log bounded family.
**Example 3.5**.: Note that if a log Fano pair \((X,\sum D_{i})\) is not log smooth, then even if \((X,\sum a_{i}D_{i})\) is K-semistable for some numbers \(a_{i}\), it is not true that such pairs \((X,\sum D_{i})\) form a log bounded family.
Indeed, for any \(n\geq 1\) put \(X=\mathbb{P}(1,1,n)\) with coordinates \(x,y,z\), and \(D=\{z=0\}\). Note that \(D\sim nH\) where \(H\) is a generator of the class group of \(X\). It is easy to see that the pair \((X,D)\) is a dlt log Fano pair. Then \((X,(1-\frac{1}{n})D)\) is a K-semistable log Fano pair (e.g. [1, Prop 2.11]), while the pairs \((X,D)\) for \(n\geq 1\) clearly do not form a log bounded family.
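To sketch why the K-semistability claim holds (via the cone criterion [1, Prop 2.11], stated as Lemma 5.7 below): \(X=\mathbb{P}(1,1,n)\) is the projective cone over \(V=\mathbb{P}^{1}\) with respect to \(L=\mathcal{O}_{\mathbb{P}^{1}}(n)\sim_{\mathbb{Q}}-\frac{1}{r}K_{\mathbb{P}^{1}}\) for \(r=\frac{2}{n}\), and \(D=\{z=0\}\) is the divisor at infinity; since \(\mathbb{P}^{1}\) is K-polystable, so is \((X,(1-\frac{r}{2})D)=(X,(1-\frac{1}{n})D)\). Meanwhile, the cone point of \(\mathbb{P}(1,1,n)\) is a \(\frac{1}{n}(1,1)\)-singularity, so the varieties \(X\) alone already fail to lie in a bounded family as \(n\) grows.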
We conclude by formulating the following question:
**Question 3.6**.: Fix a natural number \(d\) and a rational number \(\epsilon>0\). Suppose that \((X,D=\sum D_{i})\) is a dlt log Fano pair of dimension \(d\), and \((X,(1-\epsilon)D)\) is \(\epsilon\)-lc. Also, assume that
\((X,\sum c_{i}D_{i})\) is K-semistable for some rational numbers \(c_{i}\in[0,1)\). Is it true that \((X,D)\) is log bounded?
## 4. Boundedness II
Fix two positive integers \(d\) and \(k\), a positive number \(v\), and a finite set \(I\) of non-negative rational numbers, we consider the set \(\mathcal{E}:=\mathcal{E}(d,k,v,I)\) of log pairs \((X,\sum_{i=1}^{k}D_{i})\) satisfying the following conditions:
1. \(X\) is a Fano variety of dimension \(d\) and \((-K_{X})^{d}=v\);
2. \(0\leq D_{i}\sim_{\mathbb{Q}}-K_{X}\) for every \(1\leq i\leq k\);
3. the coefficients of \(D_{i}\) are contained in \(I\);
4. there exists \((c_{1},...,c_{k})\in\Delta^{k}\) such that \((X,\sum_{i}c_{i}D_{i})\) is K-semistable, where \(\Delta^{k}:=\{(c_{1},...,c_{k})\ |\ c_{i}\in[0,1)\cap\mathbb{Q}\) and \(0\leq\sum_{i}c_{i}<1\}\).
**Theorem 4.1**.: _The set \(\mathcal{E}\) is log bounded._
Proof.: We divide the proof into several steps.
_Step 1._ In this step, we explain that it is enough to show that the set
\[\mathcal{G}:=\{X\ |\ (X,\sum_{i=1}^{k}D_{i})\in\mathcal{E}\}\]
is bounded. Suppose \(\mathcal{G}\) is bounded, then for each \((X,\sum_{i}D_{i})\in\mathcal{E}\), there exists a very ample line bundle \(A_{X}\) on \(X\) such that \(A_{X}^{d}\leq M(d)\) and \(-K_{X}A_{X}^{d-1}\leq N(d)\), where \(M(d)\) and \(N(d)\) are positive numbers depending only on the dimension \(d\). For each \(i\), it is clear that \(D_{i}A_{X}^{d-1}\leq N(d)\). As the coefficients of \(D_{i}\) are contained in a fixed set \(I\), we see that the degree of each component of \(D_{i}\) is upper bounded. Applying a standard argument on the Chow scheme, we see that \(D_{i}\) lies in a bounded family; hence \(\mathcal{E}\) is log bounded. The rest of the proof is devoted to the boundedness of \(\mathcal{G}\).
_Step 2._ In this step, we want to replace \((X,\sum_{i=1}^{k}D_{i})\in\mathcal{E}\) with \((X,\sum_{i=1}^{k}D_{i}^{\prime})\) such that \((X,\sum_{i=1}^{k}D_{i}^{\prime})\in\mathcal{E}\) and \((X,D_{i}^{\prime})\) is a log canonical Calabi-Yau pair for every \(1\leq i\leq k\). By [3], there exists a positive integer \(m(d)\) depending only on the dimension \(d\) such that \(X\) admits \(m(d)\)-complements. We may choose \(m(d)\) properly such that \(m(d)\cdot I\subset\mathbb{Z}\). Put \(\mathbb{P}:=\frac{1}{m(d)}|-m(d)K_{X}|\), then we see that \(D_{i}\in\mathbb{P}\) for every \(1\leq i\leq k\). Since \((X,\sum_{i=1}^{k}D_{i})\in\mathcal{E}\), there exist some rational numbers \(0\leq c_{i}<1\) such that \((X,\sum_{i}c_{i}D_{i})\) is K-semistable. We are ready to replace \(D_{1}\) with some \(D_{1}^{\prime}\). Let \(\mathcal{D}\subset X\times\mathbb{P}\) be the universal divisor associated to the linear system \(\frac{1}{m(d)}|-m(d)K_{X}|\) and consider the universal family
\[(X\times\mathbb{P},c_{1}\mathcal{D}+\sum_{j=2}^{k}c_{j}D_{j}\times\mathbb{P} )\rightarrow\mathbb{P}.\]
Since there exists a fiber which is K-semistable, i.e. \((X,\sum_{i=1}^{k}c_{i}D_{i})\), one could find an open subset \(U\subset\mathbb{P}\) such that \((X,c_{1}\mathcal{D}_{t}+\sum_{j=2}^{k}c_{j}D_{j})\) is K-semistable for any \(t\in U\) (see [20, 1]). Recall that \(X\) admits \(m(d)\)-complements, so we conclude that \((X,\mathcal{D}_{t})\) is log canonical for general \(t\in U\). Replacing \(D_{1}\) with such a \(\mathcal{D}_{t}\) and expanding \(I\) appropriately, we obtain \(D_{1}^{\prime}\) as required. In the same way, we can replace the other \(D_{j}\), \(2\leq j\leq k\), step by step.
_Step 3._ By Step 2, we may assume that \((X,\sum_{i=1}^{k}D_{i})\in\mathcal{E}\) satisfies that \((X,D_{i})\) is log canonical for every \(1\leq i\leq k\). For such a log pair \((X,\sum_{i}D_{i})\), we define the following invariant:
\[\mu(X,\sum_{i=1}^{k}D_{i}):=\inf\Bigg{\{}\sum_{i=1}^{k}c_{i}\ |\ (X,\sum_{i}c_{i}D_{i}) \text{ is K-semistable}\Bigg{\}}.\]
It is clear that \(0\leq\mu(X,\sum_{i}D_{i})<1\). We aim to show that there is a gap between \(\mu(X,\sum_{i}D_{i})\) and \(1\). More precisely, there exists a positive number \(0<\epsilon_{0}(d,I)<1\) depending only on \(d\) and \(I\) such that
\[1-\mu(X,\sum_{i=1}^{k}D_{i})\geq\epsilon_{0}.\]
Suppose not; then we could find a sequence of pairs \((X_{j},\sum_{i=1}^{k}D_{ji})\) satisfying the following conditions:
1. \((X_{j},\sum_{i=1}^{k}D_{ji})\in\mathcal{E}\) for every \(j\),
2. \((X_{j},D_{ji})\) is log canonical for every \(1\leq i\leq k\) and \(j\),
3. \(\mu_{j}:=\mu(X_{j},\sum_{i=1}^{k}D_{ji})\) is an increasing sequence tending to \(1\).
By the definition of \(\mu_{j}\), one could find an increasing sequence of rational numbers \(a_{j}<\mu_{j}\) tending to \(1\) such that
\[(X_{j},\sum_{i=1}^{k}\frac{a_{j}}{k}D_{ji})\]
is K-unstable for every \(j\). By [1, 1], there exists a prime divisor \(E_{j}\) over \(X_{j}\) such that \(E_{j}\) computes the delta invariant of \((X_{j},\sum_{i=1}^{k}\frac{a_{j}}{k}D_{ji})\) and \(E_{j}\) induces a special test configuration
\[(\mathcal{X}_{j},\sum_{i=1}^{k}\frac{a_{j}}{k}\mathcal{D}_{ji})\to\mathbb{A} ^{1}.\]
Discarding those \(a_{j}\) which are not close enough to \(1\), we may assume that the central fiber of the test configuration (after changing the coefficients), i.e. \((\mathcal{X}_{j,0},\sum_{i=1}^{k}\frac{1}{k}\mathcal{D}_{ji,0})\), is log canonical. This means that the test configuration degenerates the log canonical Calabi-Yau pair \((X_{j},\sum_{i=1}^{k}\frac{1}{k}D_{ji})\) to another log canonical Calabi-Yau pair \((\mathcal{X}_{j,0},\sum_{i=1}^{k}\frac{1}{k}\mathcal{D}_{ji,0})\). By [13, Lemma 2.8], \(E_{j}\) is an lc place of \((X_{j},\sum_{i=1}^{k}\frac{1}{k}D_{ji})\), which forces \(E_{j}\) to be an lc place of \((X_{j},D_{ji})\) for every \(1\leq i\leq k\); in particular, \(\operatorname{ord}_{E_{j}}(D_{ji})=A_{X_{j}}(E_{j})\) for each \(i\). Recall that \(E_{j}\) computes the delta invariant of the K-unstable
log Fano pair \((X_{j},\sum_{i=1}^{k}\frac{a_{j}}{k}D_{ji})\), we have
\[\beta_{(X_{j},\sum_{i=1}^{k}\frac{a_{j}}{k}D_{ji})}(E_{j}) = A_{(X_{j},\sum_{i=1}^{k}\frac{a_{j}}{k}D_{ji})}(E_{j})-S_{(X_{j}, \sum_{i=1}^{k}\frac{a_{j}}{k}D_{ji})}(E_{j})\] \[= (1-a_{j})\{A_{X_{j}}(E_{j})-S_{X_{j}}(E_{j})\}\] \[< 0.\]
On the other hand, \((X_{j},\sum_{i=1}^{k}c_{ji}D_{ji})\) is K-semistable for some rational \(0\leq c_{ji}<1\), therefore,
\[\beta_{(X_{j},\sum_{i=1}^{k}c_{ji}D_{ji})}(E_{j}) = A_{(X_{j},\sum_{i=1}^{k}c_{ji}D_{ji})}(E_{j})-S_{(X_{j},\sum_{i=1 }^{k}c_{ji}D_{ji})}(E_{j})\] \[= (1-\sum_{i=1}^{k}c_{ji})\{A_{X_{j}}(E_{j})-S_{X_{j}}(E_{j})\}\] \[\geq 0,\]
which is a contradiction. This proves the existence of the desired gap.
_Step 4._ Combining step 2 and step 3, we see that for each \((X,\sum_{i=1}^{k}D_{i})\in\mathcal{E}\), one can find another log pair \((X,\sum_{i=1}^{k}D_{i}^{\prime})\in\mathcal{E}\) such that \((X,\sum_{i}c_{i}D_{i}^{\prime})\) is K-semistable for some numbers \(0\leq c_{i}<1\) with
\[1-\sum_{i=1}^{k}c_{i}\geq\epsilon_{0}(d,I),\]
where \(\epsilon_{0}(d,I)\) depends only on \(d,I\). Thus we have
\[\frac{A_{X}(E)}{(1-\sum_{i}c_{i})S_{X}(E)} \geq \frac{A_{(X,\sum_{i=1}^{k}c_{i}D_{i})}(E)}{S_{(X,\sum_{i=1}^{k}c_{ i}D_{i})}(E)}\geq 1\]
for any prime divisor \(E\) over \(X\). Hence,
\[\frac{A_{X}(E)}{S_{X}(E)}\geq 1-\sum_{i=1}^{k}c_{i}\geq\epsilon_{0}(d,I),\]
for any prime divisor \(E\) over \(X\). This says that the delta invariant of \(X\) is bounded from below by \(\epsilon_{0}(d,I)\). By [10], the set \(\mathcal{G}\) defined in step 1 lies in a bounded family. The proof is finished.
## 5. On K-semistable domains
We have studied the boundedness of two kinds of log Fano pairs with certain K-stability. It is then natural to ask what the K-semistable domains for these log Fano pairs are. We present some examples suggesting that the domains should be polytopes for log pairs in \(\mathcal{E}\), and we will explore this problem from a theoretical viewpoint in future work.
**Definition 5.1**.: Let \((X,\sum_{i=1}^{k}D_{i})\) be a log Fano manifold or a log pair in the set \(\mathcal{E}\). We define the _K-semistable domain_ of \((X,\sum_{i=1}^{k}D_{i})\) as follows:
\[\operatorname{Kss}(X,\sum_{i=1}^{k}D_{i}):=\overline{\{(c_{1},...,c_{k})\ |\ c_{i} \in[0,1)\cap\mathbb{Q}\ \text{and}\ (X,\sum_{i=1}^{k}c_{i}D_{i})\ \text{is K-semistable}\}}.\]
The overline in the definition means taking the closure.
Before we present the examples, let us first note the following interpolation property for K-stability, which we will use frequently: if \((X,\Delta_{1})\) and \((X,\Delta_{2})\) are both K-semistable log pairs (log Fano or log Calabi-Yau), where \(\Delta_{i}\) are proportional to \(-K_{X}\), then \((X,t\Delta_{1}+(1-t)\Delta_{2})\) is also K-semistable for any \(t\in[0,1]\cap\mathbb{Q}\).
**Example 5.2**.: Consider a log pair \((X,D_{1}+D_{2}):=(\mathbb{P}^{2},3L_{1}+3L_{2})\), where \(L_{1},L_{2}\) are two distinct lines in \(\mathbb{P}^{2}\). Then
\[\operatorname{Kss}(X,D_{1}+D_{2})=\{(0,0)\}.\]
To see this, we suppose \((x,y)\in\operatorname{Kss}(X,D_{1}+D_{2})\), then \((\mathbb{P}^{2},3xL_{1}+3yL_{2})\) is K-semistable. Applying the beta criterion we have
\[\beta_{(\mathbb{P}^{2},3xL_{1}+3yL_{2})}(L_{1}) =\ 1-3x-\frac{1}{(3-3x-3y)^{2}}\int_{0}^{3-3x-3y}(3-3x-3y-t)^{2} \mathrm{dt}\] \[=\ 1-3x-(1-x-y)\] \[=\ (x+y)-3x\geq 0.\]
Similarly,
\[\beta_{(\mathbb{P}^{2},3xL_{1}+3yL_{2})}(L_{2}) =\ 1-3y-\frac{1}{(3-3x-3y)^{2}}\int_{0}^{3-3x-3y}(3-3x-3y-t)^{2} \mathrm{dt}\] \[=\ 1-3y-(1-x-y)\] \[=\ (x+y)-3y\geq 0.\]
The above two inequalities, together with \(x,y\geq 0\), have the unique solution \((0,0)\) (indeed, \(y\geq 2x\) and \(x\geq 2y\) give \(x\geq 4x\)), which agrees with Theorem 1.1.
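The computation is easy to reproduce symbolically; a sympy sketch (illustrative, not part of the argument):

```python
import sympy as sp

x, y, t = sp.symbols('x y t', nonnegative=True)
v = 3 - 3*x - 3*y                                # -K - 3x*L1 - 3y*L2 ~ v*H
S = sp.integrate((v - t)**2, (t, 0, v)) / v**2   # S(L_i) for either line
beta1 = sp.simplify(1 - 3*x - S)                 # y - 2x
beta2 = sp.simplify(1 - 3*y - S)                 # x - 2y
print(beta1, beta2)
print(sp.solve([sp.Eq(beta1, 0), sp.Eq(beta2, 0)], [x, y]))  # {x: 0, y: 0}
```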
**Example 5.3**.: Consider a log pair \((X,D_{1}+D_{2}):=(\mathbb{P}^{2},\frac{3}{2}Q_{1}+\frac{3}{2}Q_{2})\), where \(Q_{1},Q_{2}\) are two general smooth conics in \(\mathbb{P}^{2}\). We show that \(\operatorname{Kss}(X,D_{1}+D_{2})\) is given by the following polytope:
\[\begin{cases}x\geq 0\\ y\geq 0\\ y\leq\frac{1}{2}x+\frac{1}{2}\\ y\geq 2x-1\\ x+y\leq 1\end{cases}\]
(Figure omitted: the polytope is a pentagon in the \((x,y)\)-plane.)
The extremal points of this polytope correspond to the pairs
\[\mathbb{P}^{2},\quad(\mathbb{P}^{2},\frac{3}{4}Q_{2}),\quad(\mathbb{P}^{2},\frac {1}{2}Q_{1}+Q_{2}),\quad(\mathbb{P}^{2},Q_{1}+\frac{1}{2}Q_{2}),\quad(\mathbb{P} ^{2},\frac{3}{4}Q_{1}),\]
all of which are K-semistable (note that a log Calabi-Yau pair is K-semistable if and only if it is log canonical). Thus the polytope is contained in \(\operatorname{Kss}(X,D_{1}+D_{2})\) by the interpolation property of K-stability. To see that it is exactly the K-semistable domain, it is enough to show that the points in
\[\begin{cases}0<x<\frac{1}{3}\\ \frac{1}{2}<y<\frac{2}{3}\\ y>\frac{1}{2}x+\frac{1}{2}\end{cases}\quad\text{ and }\quad\begin{cases}\frac{1}{2}<x< \frac{2}{3}\\ 0<y<\frac{1}{3}\\ y<2x-1\end{cases}\]
are not K-semistable. These two domains are presented below:
Suppose \((x,y)\in\operatorname{Kss}(X,D_{1}+D_{2})\), then \((\mathbb{P}^{2},\frac{3}{2}xQ_{1}+\frac{3}{2}yQ_{2})\) is K-semistable. Applying the beta criterion, we have
\[\beta_{(\mathbb{P}^{2},\frac{3}{2}xQ_{1}+\frac{3}{2}yQ_{2})}(Q_{1}) = 1-\frac{3}{2}x-\frac{1}{(3-3x-3y)^{2}}\int_{0}^{\frac{3-3x-3y}{2} }(3-3x-3y-2t)^{2}\mathrm{dt}\] \[= 1-\frac{3}{2}x-\frac{1-x-y}{2}\] \[= \frac{1}{2}y-x+\frac{1}{2}\geq 0.\]
Similarly,
\[\beta_{(\mathbb{P}^{2},\frac{3}{2}xQ_{1}+\frac{3}{2}yQ_{2})}(Q_{2}) = 1-\frac{3}{2}y-\frac{1}{(3-3x-3y)^{2}}\int_{0}^{\frac{3-3x-3y}{2} }(3-3x-3y-2t)^{2}\mathrm{dt}\] \[= 1-\frac{3}{2}y-\frac{1-x-y}{2}\] \[= -y+\frac{1}{2}x+\frac{1}{2}\geq 0.\]
It is clear that the points of the two domains cannot satisfy the above two inequalities simultaneously, which implies that they are not contained in the K-semistable domain.
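The five extremal points can be double-checked by enumerating the vertices of the polytope directly (a small exact-arithmetic sketch; the encoding of the constraints is ours):

```python
import itertools
from fractions import Fraction as F

# each constraint encoded as a*x + b*y <= c
cons = [(-1, 0, F(0)),           # x >= 0
        (0, -1, F(0)),           # y >= 0
        (-F(1, 2), 1, F(1, 2)),  # y <= x/2 + 1/2
        (2, -1, F(1)),           # y >= 2x - 1
        (1, 1, F(1))]            # x + y <= 1

verts = set()
for (a1, b1, c1), (a2, b2, c2) in itertools.combinations(cons, 2):
    det = a1 * b2 - a2 * b1
    if det == 0:
        continue
    x, y = (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det
    if all(a * x + b * y <= c for a, b, c in cons):
        verts.add((x, y))
print(sorted(verts))
# [(0, 0), (0, 1/2), (1/3, 2/3), (1/2, 0), (2/3, 1/3)]
```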
**Example 5.4**.: Consider a log pair \((X,D_{1}+D_{2}):=(\mathbb{P}^{2},\frac{3}{2}Q+3L)\), where \(Q\) is a smooth conic and \(L\) is a line such that \((\mathbb{P}^{2},Q+L)\) is log smooth. We show that \(\operatorname{Kss}(X,D_{1}+D_{2})\) is the polytope generated by the points
\[(0,0),\quad(\frac{1}{2},0),\quad(\frac{2}{3},\frac{1}{3}).\]
It is clear that the three points correspond to the the three log pairs
\[\mathbb{P}^{2},\quad(\mathbb{P}^{2},\frac{3}{4}Q),\quad(\mathbb{P}^{2},Q+L).\]
We denote this polytope by \(P\). To see \(\operatorname{Kss}(X,D_{1}+D_{2})=P\), first note that the three pairs are all K-semistable, thus \(P\subset\operatorname{Kss}(X,D_{1}+D_{2})\). To see the converse inclusion, we apply the beta criterion to \(Q\), \(L\). Suppose \((x,y)\in\operatorname{Kss}(X,D_{1}+D_{2})\), then \((\mathbb{P}^{2},\frac{3}{2}xQ+3yL)\) is K-semistable. Thus
\[\beta_{(\mathbb{P}^{2},\frac{3}{2}xQ+3yL)}(Q) = 1-\frac{3}{2}x-\frac{1}{(3-3x-3y)^{2}}\int_{0}^{\frac{3-3x-3y}{2 }}(3-3x-3y-2t)^{2}\mathrm{dt}\] \[= 1-\frac{3}{2}x-\frac{1-x-y}{2}\] \[= \frac{1}{2}y-x+\frac{1}{2}\geq 0,\]
\[\beta_{(\mathbb{P}^{2},\frac{3}{2}xQ+3yL)}(L) = 1-3y-\frac{1}{(3-3x-3y)^{2}}\int_{0}^{3-3x-3y}(3-3x-3y-t)^{2}\mathrm{ dt}\] \[= 1-3y-(1-x-y)\] \[= x-2y\geq 0.\]
The polytope defined by
\[\begin{cases}0\leq x\leq\frac{2}{3}\\ 0\leq y\leq\frac{1}{3}\\ \frac{1}{2}y-x+\frac{1}{2}\geq 0\\ x-2y\geq 0\end{cases}\]
is exactly \(P\), thus \(P=\mathrm{Kss}(X,D_{1}+D_{2})\).
We now plan to work out a class of examples in higher dimensions. Before that, we first list some results we will use.
Let \(V\) be a projective Fano manifold of dimension \(n\), and let \(S\) be a smooth divisor on \(V\) such that \(S\sim_{\mathbb{Q}}-\lambda K_{V}\) for some positive rational number \(\lambda\). Recall that
\[\mathrm{Kss}(V,S)=\overline{\{a\in[0,1)\cap\mathbb{Q}\ |\ (V,aS)\text{ is K- semistable}\}}.\]
**Lemma 5.5**.: ([22]) _Notation as above, suppose V and S are both K-semistable and \(0<\lambda<1\), then \(\mathrm{Kss}(V,S)=[0,1-\frac{r}{n}]\), where \(r=\frac{1}{\lambda}-1\). In particular, if \(V\) and \(S\) are both K-polystable, then \((V,aS)\) is K-polystable for any \(a\in[0,1-\frac{r}{n})\)._
As a special case, we have
**Lemma 5.6**.: _Let \((\mathbb{P}^{n},S_{d})\) be a log pair where \(S_{d}\) is a hypersurface of degree \(1\leq d\leq n\). Suppose \(S_{d}\) is K-semistable (this is expected to be true for smooth hypersurfaces). Then we have \(\mathrm{Kss}(\mathbb{P}^{n},S_{d})=[0,1-\frac{r}{n}]\), where \(r=\frac{n+1-d}{d}\)._
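For instance, plugging \(d=2\) into the endpoint \(1-\frac{r}{n}\) recovers the threshold \(\frac{n+1}{2n}\) for \((\mathbb{P}^{n},aQ)\) used repeatedly below (a one-line sympy check):

```python
import sympy as sp

n = sp.symbols('n', positive=True)
d = 2
r = (n + 1 - d) / sp.Integer(d)          # r = (n + 1 - d)/d
print(sp.simplify(1 - r / n))            # (n + 1)/(2*n)
```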
The following lemma is also well known, see e.g. [1, Prop 2.11].
**Lemma 5.7**.: _Let \((V,\Delta)\) be an n-dimensional log Fano pair, and L an ample line bundle on V such that \(L\sim_{\mathbb{Q}}-\frac{1}{r}(K_{V}+\Delta)\) for some \(0<r\leq n+1\). Suppose Y is the projective cone over V associated to L, with divisor at infinity \(V_{\infty}\); then \((V,\Delta)\) is K-semistable (resp. K-polystable) if and only if \((Y,\Delta_{Y}+(1-\frac{r}{n+1})V_{\infty})\) is K-semistable (resp. K-polystable), where \(\Delta_{Y}\) is the natural extension of \(\Delta\) to \(Y\)._
Applying Lemmas 5.5 and 5.6, a simple computation tells us the following three facts:
1. Let \(Q\subset\mathbb{P}^{n}\) be a smooth quadric, then \((\mathbb{P}^{n},aQ)\) is K-semistable (resp. K-polystable) for any \(a\in[0,\frac{n+1}{2n}]\) (resp. \(a\in[0,\frac{n+1}{2n})\)). Moreover, \((\mathbb{P}^{n},\frac{n+1}{2n}Q)\) is K-semistable but not K-polystable, and it admits a K-polystable degeneration \((X,\frac{n+1}{2n}D)\). Here \(X\) is a hypersurface in \(\mathbb{P}(1^{n+1},2)\) (with coordinates \((x_{0},...,x_{n},z)\)) defined by \(x_{0}^{2}+x_{1}^{2}+...+x_{n}^{2}=0\), and \(D=X\cap\{z=0\}\).
2. Let \(Q_{l}\subset\mathbb{P}^{l+1}\) be a smooth quadric of dimension \(l\), then \((Q_{l},aQ_{l-1})\) is K-semistable for any \(a\in[0,\frac{1}{l}]\).
3. Let \(Q_{l},Q^{\prime}_{l}\subset\mathbb{P}^{l+1}\) be two smooth quadric hypersurfaces such that \((\mathbb{P}^{l+1},Q_{l}+Q^{\prime}_{l})\) is log smooth, then \((Q_{l},aQ^{\prime}_{l}|_{Q_{l}})\) is K-semistable for any \(a\in[0,\frac{(l+1)+3}{2(l+1)}]\).
We are now ready to prove the following K-semistability results.
**Theorem 5.8**.: _For \(n\geq 2\), the log pair \((\mathbb{P}^{n},\frac{n}{2(n-1)}Q+\frac{1}{n-1}L)\) is K-semistable, where \(Q\) is a smooth quadric hypersurface and \(L\) is a hyperplane such that \((\mathbb{P}^{n},Q+L)\) is log smooth._
Proof.: The case \(n=2\) is clear. We assume \(n\geq 3\). As we have observed, \((\mathbb{P}^{n},\frac{n+1}{2n}Q)\) is K-semistable and admits a K-polystable degeneration \((X,\frac{n+1}{2n}D)\), where \(X\) is a hypersurface in \(\mathbb{P}(1^{n+1},2)\) (with coordinates \((x_{0},...,x_{n},z)\)) defined by \(x_{0}^{2}+x_{1}^{2}+...+x_{n}^{2}=0\), and \(D=X\cap\{z=0\}\). Denote by \(D^{\prime}\) the corresponding degeneration of \(L\) under the test configuration.
It suffices to show that the log pair \((X,\frac{n}{2(n-1)}D+\frac{1}{n-1}D^{\prime})\) is a K-semistable log Fano pair. To see this, first note that \((X,\frac{n}{2(n-1)}D+\frac{1}{n-1}D^{\prime})\) is the projective cone over \((Q,\frac{1}{n-1}L\cap Q)\) with respect to the polarization \(\mathcal{O}_{Q}(2):=i^{*}\mathcal{O}_{\mathbb{P}^{n}}(2)\), where \(i:Q\to\mathbb{P}^{n}\) is the natural embedding. We have the following computation:
\[\mathcal{O}_{Q}(2)\sim_{\mathbb{Q}}-\frac{1}{r}(K_{Q}+\frac{1}{n-1}L\cap Q) \quad\text{and}\quad\frac{n}{2(n-1)}=1-\frac{r}{n}\]
for
\[r=\frac{n-1-\frac{1}{n-1}}{2}.\]
By our notation, we have
\[(Q_{n-1},\frac{1}{n-1}Q_{n-2})=(Q,\frac{1}{n-1}L\cap Q),\]
which is K-semistable as we have seen before. By Lemma 5.7, we see that \((X,\frac{n}{2(n-1)}D+\frac{1}{n-1}D^{\prime})\) is K-semistable.
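The coefficient identities used here, and the analogous ones in Theorem 5.9 below, are routine to verify (a sympy sketch):

```python
import sympy as sp

n = sp.symbols('n', positive=True)
r1 = (n - 1 - 1 / (n - 1)) / 2               # Theorem 5.8
r2 = (n - 1 - (n + 1) / (n - 1)) / 2         # Theorem 5.9
print(sp.simplify(1 - r1/n - n/(2*(n - 1))))         # 0
print(sp.simplify(1 - r2/n - (n + 1)/(2*(n - 1))))   # 0
```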
**Theorem 5.9**.: _For \(n\geq 3\), the log pair \((\mathbb{P}^{n},\frac{n+1}{2(n-1)}Q+\frac{n+1}{2(n-1)}Q^{\prime})\) is K-semistable, where \(Q,Q^{\prime}\) are smooth quadric hypersurfaces such that \((\mathbb{P}^{n},Q+Q^{\prime})\) is log smooth._
Proof.: The case \(n=3\) is clear. We assume \(n\geq 4\). The same as before, \((\mathbb{P}^{n},\frac{n+1}{2n}Q)\) is K-semistable and admits a K-polystable degeneration \((X,\frac{n+1}{2n}D)\), where \(X\) is a hypersurface in \(\mathbb{P}(1^{n+1},2)\) (with coordinates \((x_{0},...,x_{n},z)\)) defined by \(x_{0}^{2}+x_{1}^{2}+...+x_{n}^{2}=0\), and \(D=X\cap\{z=0\}\). Denote by \(D^{\prime}\) the corresponding degeneration of \(Q^{\prime}\) under the test configuration.
It suffices to show that the log pair \((X,\frac{n+1}{2(n-1)}D+\frac{n+1}{2(n-1)}D^{\prime})\) is a K-semistable log Fano pair. To see this, first note that \((X,\frac{n+1}{2(n-1)}D+\frac{n+1}{2(n-1)}D^{\prime})\) is the projective cone over \((Q,\frac{n+1}{2(n-1)}Q^{\prime}|_{Q})\) with respect to the polarization \(\mathcal{O}_{Q}(2):=i^{*}\mathcal{O}_{\mathbb{P}^{n}}(2)\), where \(i:Q\to\mathbb{P}^{n}\) is the natural embedding. We have the following computation:
\[\mathcal{O}_{Q}(2)\sim_{\mathbb{Q}}-\frac{1}{r}(K_{Q}+\frac{n+1}{2(n-1)}Q^{ \prime}|_{Q})\quad\text{and}\quad\frac{n+1}{2(n-1)}=1-\frac{r}{n}\]
for
\[r=\frac{n-1-\frac{n+1}{n-1}}{2}.\]
Since \(\frac{n+1}{2(n-1)}\leq\frac{n+3}{2n}\), we know that \((Q,\frac{n+1}{2(n-1)}Q^{\prime}|_{Q})\) is K-semistable. By Lemma 5.7, we see that \((X,\frac{n+1}{2(n-1)}D+\frac{n+1}{2(n-1)}D^{\prime})\) is K-semistable.
Armed with Theorems 5.8 and 5.9, we consider the following two classes of examples.
**Example 5.10**.: Consider a pair \((\mathbb{P}^{n},Q+L)\) for \(n\geq 2\), where \(Q\) is a smooth quadric hypersurface and \(L\) is a hyperplane such that \((\mathbb{P}^{n},Q+L)\) is log smooth. We want to compute \(\mathrm{Kss}(\mathbb{P}^{n},Q+L)\).
Suppose \((x,y)\in\mathrm{Kss}(\mathbb{P}^{n},Q+L)\), then \((\mathbb{P}^{n},xQ+yL)\) is K-semistable. Applying the beta criterion, we have
\[\beta_{(\mathbb{P}^{n},xQ+yL)}(Q) = 1-x-\frac{1}{(n+1-2x-y)^{n}}\int_{0}^{\frac{n+1-2x-y}{2}}(n+1-2x- y-2t)^{n}\mathrm{dt}\] \[= 1-x-\frac{n+1-2x-y}{2(n+1)}\] \[= \frac{1}{2}-\frac{nx}{n+1}+\frac{y}{2(n+1)}\geq 0.\]
Similarly,
\[\beta_{(\mathbb{P}^{n},xQ+yL)}(L) = 1-y-\frac{1}{(n+1-2x-y)^{n}}\int_{0}^{n+1-2x-y}(n+1-2x-y-t)^{n} \mathrm{dt}\] \[= 1-y-\frac{n+1-2x-y}{(n+1)}\] \[= \frac{2x}{n+1}-\frac{ny}{n+1}\geq 0.\]
It is clear that the polytope given by
\[\begin{cases}0\leq x\leq 1\\ 0\leq y\leq 1\\ \frac{1}{2}-\frac{nx}{n+1}+\frac{y}{2(n+1)}\geq 0\\ \frac{2x}{n+1}-\frac{ny}{n+1}\geq 0\end{cases}\]
is generated by the extremal points
\[(0,0),\quad(\frac{n+1}{2n},0),\quad(\frac{n}{2(n-1)},\frac{1}{n-1}).\]
These three points correspond to log pairs
\[\mathbb{P}^{n},\quad(\mathbb{P}^{n},\frac{n+1}{2n}Q),\quad(\mathbb{P}^{n}, \frac{n}{2(n-1)}Q+\frac{1}{n-1}L),\]
which are all K-semistable by Theorem 5.8. Thus the polytope is exactly \(\mathrm{Kss}(\mathbb{P}^{n},Q+L)\). We also mention that when \(n=3\), the example corresponds to [13, Theorem 1 (1)].
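The general-\(n\) computation can be reproduced with sympy (a sketch; we first evaluate the \(S\)-integrals with an abstract positive volume parameter \(v\), then substitute \(v=n+1-2x-y\)):

```python
import sympy as sp

n, x, y, t, v = sp.symbols('n x y t v', positive=True)

S_Q = sp.simplify(sp.integrate((v - 2*t)**n, (t, 0, v/2)) / v**n)  # v/(2*(n+1))
S_L = sp.simplify(sp.integrate((v - t)**n, (t, 0, v)) / v**n)      # v/(n+1)

vv = n + 1 - 2*x - y                      # -K - xQ - yL ~ vv*H
betaQ = sp.expand(1 - x - S_Q.subs(v, vv))
betaL = sp.expand(1 - y - S_L.subs(v, vv))
print(betaQ, betaL)
print(sp.solve([sp.Eq(betaQ, 0), sp.Eq(betaL, 0)], [x, y]))
# {x: n/(2*n - 2), y: 1/(n - 1)}
```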
**Example 5.11**.: Consider a log pair \((\mathbb{P}^{n},Q+Q^{\prime})\) for \(n\geq 3\), where \(Q,Q^{\prime}\) are smooth quadric hypersurfaces such that \((\mathbb{P}^{n},Q+Q^{\prime})\) is log smooth. We want to compute \(\mathrm{Kss}(\mathbb{P}^{n},Q+Q^{\prime})\).
Suppose \((x,y)\in\operatorname{Kss}(\mathbb{P}^{n},Q+Q^{\prime})\), then \((\mathbb{P}^{n},xQ+yQ^{\prime})\) is K-semistable. Applying the beta criterion, we have
\[\beta_{(\mathbb{P}^{n},xQ+yQ^{\prime})}(Q) = 1-x-\frac{1}{(n+1-2x-2y)^{n}}\int_{0}^{\frac{n+1-2x-2y}{2}}(n+1-2x -2y-2t)^{n}\mathrm{dt}\] \[= 1-x-\frac{n+1-2x-2y}{2(n+1)}\] \[= \frac{1}{2}-\frac{nx}{n+1}+\frac{y}{n+1}\geq 0.\]
Similarly,
\[\beta_{(\mathbb{P}^{n},xQ+yQ^{\prime})}(Q^{\prime}) = 1-y-\frac{1}{(n+1-2x-2y)^{n}}\int_{0}^{\frac{n+1-2x-2y}{2}}(n+1- 2x-2y-2t)^{n}\mathrm{dt}\] \[= 1-y-\frac{n+1-2x-2y}{2(n+1)}\] \[= \frac{1}{2}-\frac{ny}{n+1}+\frac{x}{n+1}\geq 0.\]
It is clear that the polytope given by
\[\begin{cases}0\leq x\leq 1\\ 0\leq y\leq 1\\ \frac{1}{2}-\frac{nx}{n+1}+\frac{y}{n+1}\geq 0\\ \frac{1}{2}-\frac{ny}{n+1}+\frac{x}{n+1}\geq 0\end{cases}\]
is generated by the extremal points
\[(0,0),\quad(\frac{n+1}{2n},0),\quad(0,\frac{n+1}{2n}),\quad(\frac{n+1}{2(n-1)},\frac{n+1}{2(n-1)}).\]
These four points correspond to log pairs
\[\mathbb{P}^{n},\quad(\mathbb{P}^{n},\frac{n+1}{2n}Q),\quad(\mathbb{P}^{n}, \frac{n+1}{2n}Q^{\prime}),\quad(\mathbb{P}^{n},\frac{n+1}{2(n-1)}Q+\frac{n+1}{ 2(n-1)}Q^{\prime}),\]
which are all K-semistable by Theorem 5.9. Thus the polytope is exactly \(\operatorname{Kss}(\mathbb{P}^{n},Q+Q^{\prime})\).
**Remark 5.12**.: For all examples we treat above, the K-semistable domains are polytopes. In [11], the K-semistable domains for some log Fano pairs of Maeda type in dimension three are computed, but it is hard to say whether these sets are convex or polytopes.
|
2307.05857 | FAIRO: Fairness-aware Adaptation in Sequential-Decision Making for
Human-in-the-Loop Systems | Achieving fairness in sequential-decision making systems within
Human-in-the-Loop (HITL) environments is a critical concern, especially when
multiple humans with different behavior and expectations are affected by the
same adaptation decisions in the system. This human variability factor adds
more complexity since policies deemed fair at one point in time may become
discriminatory over time due to variations in human preferences resulting from
inter- and intra-human variability. This paper addresses the fairness problem
from an equity lens, considering human behavior variability, and the changes in
human preferences over time. We propose FAIRO, a novel algorithm for
fairness-aware sequential-decision making in HITL adaptation, which
incorporates these notions into the decision-making process. In particular,
FAIRO decomposes this complex fairness task into adaptive sub-tasks based on
individual human preferences through leveraging the Options reinforcement
learning framework. We design FAIRO to generalize to three types of HITL
application setups that have the shared adaptation decision problem.
Furthermore, we recognize that fairness-aware policies can sometimes conflict
with the application's utility. To address this challenge, we provide a
fairness-utility tradeoff in FAIRO, allowing system designers to balance the
objectives of fairness and utility based on specific application requirements.
Extensive evaluations of FAIRO on the three HITL applications demonstrate its
generalizability and effectiveness in promoting fairness while accounting for
human variability. On average, FAIRO can improve fairness compared with other
methods across all three applications by 35.36%. | Tianyu Zhao, Mojtaba Taherisadr, Salma Elmalaki | 2023-07-12T00:35:19Z | http://arxiv.org/abs/2307.05857v2 | # FAIRO: Fairness-aware Adaptation in Sequential-Decision Making for Human-in-the-Loop Systems
###### Abstract.
Achieving fairness in sequential-decision making systems within Human-in-the-Loop (HITL) environments is a critical concern, especially when multiple humans with different behavior and expectations are affected by the same adaptation decisions in the system. This human variability factor adds more complexity since policies deemed fair at one point in time may become discriminatory over time due to variations in human preferences resulting from inter- and intra-human variability. This paper addresses the fairness problem from an equity lens, considering human behavior variability, and the changes in human preferences over time. We propose FAIRO, a novel algorithm for fairness-aware sequential-decision making in HITL adaptation, which incorporates these notions into the decision-making process. In particular, FAIRO decomposes this complex fairness task into adaptive sub-tasks based on individual human preferences through leveraging the Options reinforcement learning framework. We design FAIRO to generalize to three types of HITL application setups that have the shared adaptation decision problem.
Furthermore, we recognize that fairness-aware policies can sometimes conflict with the application's utility. To address this challenge, we provide a fairness-utility tradeoff in FAIRO, allowing system designers to balance the objectives of fairness and utility based on specific application requirements. Extensive evaluations of FAIRO on the three HITL applications demonstrate its generalizability and effectiveness in promoting fairness while accounting for human variability. On average, FAIRO can improve fairness compared with other methods across all three applications by 35.36%.
sequential-decision making, fairness, human-in-the-loop, adaptation, equity
|
2307.14782 | On the Hilbert scheme of smooth curves of degree $15$ and genus $14$ in
$\mathbb{P}^5$ | We denote by $\mathcal{H}_{d,g,r}$ the Hilbert scheme of smooth curves, which
is the union of components whose general point corresponds to a smooth
irreducible and non-degenerate curve of degree $d$ and genus $g$ in
$\mathbb{P}^r$. In this article, we show that $\mathcal{H}_{15,14,5}$ is non
empty and reducible with two components of the expected dimension hence
generically reduced. We also study the birationality of the moduli map up to
projective motion and several key properties such as gonality of a general
element as well as specifying smooth elements of each components. | Edoardo Ballico, Changho Keem | 2023-07-27T11:23:41Z | http://arxiv.org/abs/2307.14782v2 | # On the Hilbert scheme of smooth curves of degree \(15\) and genus \(14\) in \(\mathbb{P}^{5}\)+
###### Abstract
We denote by \(\mathcal{H}_{d,g,r}\) the Hilbert scheme of smooth curves, which is the union of components whose general point corresponds to a smooth irreducible and non-degenerate curve of degree \(d\) and genus \(g\) in \(\mathbb{P}^{r}\). In this article, we show that \(\mathcal{H}_{15,14,5}\) is non-empty and reducible, with two components of the expected dimension, hence generically reduced. We also study the birationality of the moduli map up to projective motion and several key properties, such as the gonality of a general element, as well as specify the smooth elements of each component.
## 1 Introduction
We denote by \(\mathcal{H}_{d,g,r}\) the Hilbert scheme of smooth curves of degree \(d\) and genus \(g\) in \(\mathbb{P}^{r}\). \(\mathcal{H}_{d,g,r}^{\mathcal{L}}\) is the subscheme of \(\mathcal{H}_{d,g,r}\) consisting of components of \(\mathcal{H}_{d,g,r}\) whose general element is linearly normal.
In this paper we study a certain peculiar Hilbert scheme of smooth curves in \(\mathbb{P}^{5}\) of degree \(d=15\) and genus \(g=14\). Specifically, we determine the number of components and study further properties of \(\mathcal{H}_{15,14,5}\), such as the gonality of the smooth elements in each component.
Determining the irreducibility of a given Hilbert scheme is a rather non-trivial task, which goes back to the era of Severi [27], who asserted that the Hilbert scheme \(\mathcal{H}_{d,g,r}\) is irreducible for those triples \((d,g,r)\) in the range
(i) \(d\geq g+r\) or
in the following Brill-Noether range which is much wider
\[\text{(ii) }\rho(d,g,r):=g-(r+1)(g-d+r)\geq 0.\]
The assertion of Severi turns out to be true for \(r=3,4\) under the condition (i); cf. [10, 11]. It is also known that \(\mathcal{H}_{d,g,3}\) is irreducible in an extended range \(d\geq g\); cf. [21] and references therein. For \(r=4\), irreducibility of \(\mathcal{H}_{d,g,4}^{\mathcal{L}}\) also holds in the range \(d\geq g+1\) except for some sporadic small values of the genus \(g\); cf. [23, 22].
For \(r=5\) the irreducibility of \(\mathcal{H}_{d,g,5}\) is not known in the range \(d\geq g+5\) conjectured by Severi; the best known result so far regarding the irreducibility of \(\mathcal{H}_{d,g,5}\) is the result of H. Iliev who showed that \(\mathcal{H}_{d,g,5}\) is irreducible whenever \(d\geq\max\{\frac{11}{10}g+2,g+5\}\); cf. [18].
Shifting our attention to the family of linearly normal curves, it makes sense to talk about the Hilbert scheme of linearly normal curves \(\mathcal{H}_{d,g,r}^{\mathcal{L}}\) only if \(g-d+r\geq 0\), by the Riemann-Roch formula; otherwise \(\mathcal{H}_{d,g,r}^{\mathcal{L}}=\emptyset\). For the case \(r=5\) and in the non-trivial range \(d\leq g+5\), \(\mathcal{H}_{d,g,5}^{\mathcal{L}}\) is in general better understood than \(\mathcal{H}_{d,g,5}\). Moreover, when the degree \(d\) of the curve is relatively high with respect to the genus \(g\), \(\mathcal{H}_{d,g,5}^{\mathcal{L}}\) behaves in a quite reasonable manner;
1. \(\mathcal{H}_{g+5,g,5}^{\mathcal{L}}\neq\emptyset\) and is irreducible; [4, Theorem 2.1].
2. \(\mathcal{H}_{g+4,g,5}^{\mathcal{L}}\neq\emptyset\) and is irreducible if and only if \(g\geq 6\); [4, Theorem 2.2].
3. \(\mathcal{H}_{g+3,g,5}^{\mathcal{L}}\neq\emptyset\) and is irreducible if and only if \(g\geq 8\); [4, Theorem 2.3, Remark 2.4].
4. \(\mathcal{H}_{g+2,g,5}^{\mathcal{L}}\neq\emptyset\) if and only if \(g\geq 10\) and is irreducible if and only if \(g\geq 10\) & \(g\neq 10,12\); cf. [19, Remark 3.2 (ii), Prop 3.3, Theorem 3.4, Theorem 3.7], [4, Theorem 2.5, Remark 2.6].
5. \(\mathcal{H}_{g+1,g,5}^{\mathcal{L}}\neq\emptyset\) if and only if \(g\geq 12\) by [20, Theorem 2.4]. However the irreducibility of \(\mathcal{H}_{g+1,g,5}^{\mathcal{L}}\) is known only for very low genus \(g\): it is reducible for \(g=12\) and irreducible for \(g=13\); cf. [20, proof of Theorem 2.4 & Theorem 3.4]
In this paper we focus our attention on the next case \(\mathcal{H}_{15,14,5}^{\mathcal{L}}\) as an initial attempt for a better understanding of \(\mathcal{H}_{g+1,g,5}^{\mathcal{L}}\) for \(g\geq 14\). We also study the birationality of the moduli map up to projective motion (Proposition 6.5 and Proposition 6.7) and several key properties such as the gonality of a general element of any component (Proposition 5.1 and Remark 6.6) as well as identifying all smooth elements of each component; e.g. Proposition 5.2 and Remark 6.6. The main result of this paper is the following theorem.
**Theorem 1.1**.: \(\mathcal{H}_{15,14,5}\) _is reducible with two components of the (same) expected dimension._
Indeed the case \(g=14\) on which we focus is the first non-trivial case concerning the irreducibility of \(\mathcal{H}_{g+1,g,5}\). For curves with higher genus \(g\geq 15\), virtually nothing is known about the irreducibility of \(\mathcal{H}_{g+1,g,5}\). However the
main result of the paper suggests that the irreducibility of \({\cal H}_{d,g,5}\) beyond the conjectured range \(d\geq g+5\) does not hold for \(d\) not too much below \(g+5\).
It is also worthwhile to mention that \({\cal H}_{15,14,5}={\cal H}_{15,14,5}^{\cal L}\) in our particular case \((d,g,r)=(15,14,5)\); every smooth curve in \({\mathbb{P}}^{5}\) of degree \(d=15\) and genus \(g=14\) is linearly normal by Castelnuovo genus bound, i.e. genus \(g=14\) is too large for a non-degenerate curve of degree \(d=15\) sitting inside \({\mathbb{P}}^{6}\).
The organization of this paper is as follows. In the next section, we prepare some basic preliminaries required for our study. We also determine the lower bound of the gonality of a smooth curve in \({\cal H}_{15,14,5}\). In the subsequent section we consider smooth curves \(X\in{\cal H}_{15,14,5}\) whose **dual curves**\(C\subset{\mathbb{P}}^{3}\) - by definition the image curve of a morphism induced by the residual series \(K_{X}(-1)\) - lie on quadric surfaces and identify one of the possible components of \({\cal H}_{15,14,5}\). In section 4, we consider \(X\in{\cal H}_{15,14,5}\) whose dual curve does not lie on a quadric surface and identify further possible component of \({\cal H}_{15,14,5}\). In the last two sections we study the gonality of some/all curves and prove the uniqueness of a complete very ample linear series \(g_{15}^{5}\) for a general curve in each component of \({\cal H}_{15,14,5}\) (Propositions 6.5 and 6.7). One component is formed by 5-gonal curves (Remark 6.6). The general element of the other component is 7-gonal (Proposition 6.7). No element of \({\cal H}_{15,14,5}\) has gonality \(\leq 4\) (Propositions 2.7 and 5.2).
For notation and conventions, we follow those in [2] and [3]; e.g. \(\pi(d,r)\) is the maximal possible arithmetic genus of an irreducible, non-degenerate and reduced curve of degree \(d\) in \(\mathbb{P}^{r}\), which is usually referred to as the first Castelnuovo genus bound. \(\pi_{1}(d,r)\) is the so-called second Castelnuovo genus bound, which is the maximal possible arithmetic genus of an irreducible, non-degenerate and reduced curve of degree \(d\) in \(\mathbb{P}^{r}\) not lying on a surface of minimal degree \(r-1\); cf. [14, page 99], [2, page 123].
Following classical terminology, a linear series of degree \(d\) and dimension \(r\) on a smooth curve \(C\) is denoted by \(g_{d}^{r}\). A base-point-free linear series \(g_{d}^{r}\) (\(r\geq 2\)) on a smooth curve \(C\) is called **birationally very ample** when the morphism \(C\to{\mathbb{P}}^{r}\) induced by the \(g_{d}^{r}\) is generically one-to-one onto (or is birational to) its image curve. A base-point-free linear series \(g_{d}^{r}\) on \(C\) is said to be compounded of an involution (**compounded** for short) if the morphism induced by the linear series gives rise to a non-trivial covering map \(C\to C^{\prime}\) of degree \(k\geq 2\). Throughout we work exclusively over the field of complex numbers.
## 2 Some generalities and easy remarks
**Remark 2.1**.:
1. Let \(\Gamma\) be any irreducible component of the Hilbert scheme \({\cal H}_{d,g,r}\) of smooth curves of degree \(d\) and genus \(g\) in \({\mathbb{P}}^{r}\). We recall that \[\dim\Gamma\geq 3g-3+\rho(d,g,r)+\dim{\rm Aut}({\mathbb{P}}^{r})=(r+1)d+(3-r)(g-1).\]
In particular each irreducible component of \(\mathcal{H}_{15,14,5}\) has dimension at least \(64\); the arithmetic is spelled out right after this remark. Thus an irreducible family \(\mathcal{F}\subset\mathcal{H}_{15,14,5}\) such that \(\dim\mathcal{F}\leq 63\) is always contained in a larger family.
2. Let \(\xi\) be the natural functorial map \(\xi:\mathcal{H}_{15,14,5}\to\mathcal{M}_{14}\). Let \(\Gamma\subset\mathcal{H}_{15,14,5}\) be an irreducible component. Let \(\mathcal{G}_{\Gamma}\) be the irreducible family consisting of pairs \((g^{3}_{11},p)\); \(g^{3}_{11}=|K_{X}(-1)|\), \(X\subset\mathbb{P}^{5}\) corresponds to \(\xi^{-1}(p)\), \(p\in\mathrm{Image}\ \xi(\Gamma)\subset\mathcal{M}_{14}\).
3. To each irreducible component \(\Gamma\subset\mathcal{H}_{15,14,5}\) (hence \(\dim\Gamma\geq 64\)), there is an irreducible family \(\mathcal{G}_{\Gamma}\) such that \[\dim\mathcal{G}_{\Gamma}=\dim\Gamma-\dim\mathrm{Aut}(\mathbb{P}^{5})\geq 29.\] In other words, if an irreducible family \(\mathcal{H}\subset\mathcal{H}_{15,14,5}\) up to projective motion of \(\mathbb{P}^{5}\) followed by residualization of the hyperplane series (which is one-to-one) produces a family of linear series \(g^{3}_{11}\) of dimension strictly less than \(29\), then such a family \(\mathcal{H}\) does not constitute a full irreducible component.
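For concreteness, here is the arithmetic announced in (1) and (3), a worked check using only the formulas of this remark: for \((d,g,r)=(15,14,5)\) we have \(\rho(15,14,5)=g-(r+1)(g-d+r)=14-6\cdot 4=-10\), so
\[\dim\Gamma\geq(r+1)d+(3-r)(g-1)=6\cdot 15+(-2)\cdot 13=90-26=64,\]
and, subtracting \(\dim\operatorname{Aut}(\mathbb{P}^{5})=35\),
\[\dim\mathcal{G}_{\Gamma}\geq 64-35=29=3g-3+\rho(15,14,5).\]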
For the existence part of particular components, we utilize Example 3.1 and Example 4.1 and then use Remark 2.1 to see that there are exactly two irreducible components of \(\mathcal{H}_{15,14,5}\) having the same expected dimension \(64\) and hence the first cohomology of the normal bundle of \(X\) vanishes for a general element \(X\) in each of the \(2\) irreducible components.
Fix a smooth \(X\in\mathcal{H}_{15,14,5}\) and let \(B\) be the base locus of \(|K_{X}(-1)|\). Set \(b:=\deg(B)\). By Riemann-Roch \(|K_{X}(-1)(-B)|\) induces a morphism \(u_{X}:X\to\mathbb{P}^{3}\). We say that \(C:=u_{X}(X)\) is the **dual curve1** of \(X\). Instead of working directly with a family in \(\mathcal{H}_{15,14,5}\), we will be working with the family of (possibly singular) curves in \(\mathbb{P}^{3}\) which are the family of dual curves \(\{u_{X}(X);X\in\mathcal{H}_{15,14,5}\}\). In fact, as we will see, this turns out to be more effective and easier to deal with; \(\deg u_{X}(X)<\deg X\), the dimension of the ambient projective space gets lower, etc.
Footnote 1: Readers are advised not to be confused with another notion of “dual curve” in the sense of Plücker.
Note that \(\deg(C)=\deg u_{X}(X)=11-\deg B\) if \(u_{X}\) is birational onto its image. We need the following key result concerning a family of nodal curves on a Hirzebruch surface \(\mathbb{F}_{e}\); we only need it for \(e=0\), the smooth quadric surface in \(\mathbb{P}^{3}\).
**Remark 2.2**.:
1. Take a Hirzebruch surface \(\mathbb{F}_{e}\), \(e\geq 0\) and a line bundle \(L\cong\mathcal{O}_{\mathbb{F}_{e}}(ah+bf)\) on \(\mathbb{F}_{e}\) such that \(a>0\) and \(b\geq ae\); \(h^{2}=-e,h\cdot f=1,f^{2}=0\). A general element of \(|L|\) is a smooth connected curve of genus \(q=(a-1)(b-1-\frac{1}{2}ae)\); a derivation via adjunction is sketched right after this remark.
2. Fix an integer \(g\) such that \(0\leq g\leq q\). Let \(\Sigma_{g}\) be the locus of all integral curves \(Y\in|L|\) with geometric genus \(g\). Let \(\widetilde{\Sigma}_{g}\) be the locus of all nodal \(Y\in\Sigma_{g}\) with \(\delta:=q-g\) nodes as its only singularities. By [8, Cor. 4.4] a general member of every irreducible component of \(\Sigma_{g}\) is nodal curve. For a fixed \(p\in\mathbb{F}_{e}\), being nodal at \(p\) imposes three conditions on \(|L|\) (or in any subvariety of \(|L|\)) and hence the family of curves in
\(|L|\) with one node has codimension \(3-\dim\mathbb{F}_{e}=1\) in \(\Sigma_{g}\). Therefore \(\widetilde{\Sigma}_{g}\) has codimension \(\delta\) in \(|L|\) and for a general finite subset \(S\subset\mathbb{F}_{e}\) with \(\operatorname{Card}(S)=\delta\) there is an integral nodal curve \(T\in|L|\) with \(\operatorname{Sing}(T)=S\). By [28] the locus \(\widetilde{\Sigma}_{g}\) is irreducible of dimension \(\dim|L|-\delta\). Hence \(\Sigma_{g}\) - which is the closure of \(\widetilde{\Sigma}_{g}\) - is also irreducible of dimension \(\dim|L|-\delta\).
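For the record, here is the adjunction computation behind the genus formula in (1): on \(\mathbb{F}_{e}\) one has \(K_{\mathbb{F}_{e}}=-2h-(e+2)f\), so for a smooth \(C\in|ah+bf|\),
\[2q-2=C\cdot(C+K_{\mathbb{F}_{e}})=(2ab-a^{2}e)+(ae-2a-2b),\]
which rearranges to \(q=(a-1)(b-1)-\frac{1}{2}ae(a-1)=(a-1)(b-1-\frac{1}{2}ae)\).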
The following inequality - known as Castelnuovo-Severi inequality - shall be used occasionally; cf. [1, Theorem 3.5].
**Remark 2.3** (Castelnuovo-Severi inequality).: Let \(C\) be a curve of genus \(g\) which admits coverings onto curves \(E_{h}\) and \(E_{q}\) with respective genera \(h\) and \(q\) of degrees \(m\) and \(n\) such that these two coverings admit no common non-trivial factorization; if \(m\) and \(n\) are primes this will always be the case. Then
\[g\leq mh+nq+(m-1)(n-1).\]
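As a sample application that recurs below: a smooth curve \(X\) of genus \(g=14\) cannot carry both a base point free \(g_{4}^{1}\) and a base point free \(g_{5}^{1}\). Indeed the two coverings of \(\mathbb{P}^{1}\) admit no common non-trivial factorization since \(\gcd(4,5)=1\), so the inequality with \(m=4\), \(n=5\), \(h=q=0\) would give
\[g\leq(4-1)(5-1)=12<14.\]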
The following simple fact regarding spanned linear systems on smooth Del Pezzo surfaces will be used in the course of the proof of our main theorem. It should be noted that this fact is widely known as folklore. However the authors could not find an adequate source in the literature.
**Remark 2.4**.: Let \(S\) be a smooth Del Pezzo surface. A line bundle on \(S\) is spanned (resp. very ample) if and only if it is nef (resp. ample) ([9, Cor. 4.7]). Let \(T\subset S\) be an integral projective curve such that \(T^{2}>0\). Since \(T\) is integral and \(T^{2}>0\), \(T\cdot D\geq 0\) for all curves \(D\). Thus the line bundle \(\mathcal{O}_{S}(T)\) is nef and hence it is spanned. Thus a general element of \(|T|\) is smooth by the Bertini theorem.
**Remark 2.5**.: We recall that - as we have indicated in the introduction - by Castelnuovo's upper bound for the arithmetic genus of an integral non-degenerate curve in \(\mathbb{P}^{r}\), \(r\geq 6\) ([14, Th. 3.13]; take \(d=15\), \(r:=6\) and hence \(m=2\), \(\varepsilon=4\) to get \(\pi(15,6)=13\)), each \(X\in\mathcal{H}_{15,14,5}\) is linearly normal.
**Remark 2.6**.: Being very ample is an open condition in any irreducible family of line bundles with prescribed degree and number of sections on a family of smooth curves of prescribed genus.
**Proposition 2.7**.: \(\mathcal{H}_{15,14,5}\) _contains no trigonal curve._
Proof.: Assume the existence of a trigonal curve \(X\in\mathcal{H}_{15,14,5}\). Let \(R\) be the trigonal line bundle on \(X\) and let \(m\) be the Maroni invariant of \(X\), i.e. the first integer such that \(h^{0}(R^{\otimes(m+2)})\geq m+4\) ([24, eq. 1.2]). By [24, eq. 1.1], \((14-4)/3<4\leq m\leq(14-2)/2=6\). Since \(m\geq 4\), \(h^{0}(R^{\otimes 5})=6\) and \(R^{\otimes 5}\) is compounded. Since \(\mathcal{O}_{X}(1)\) is very ample, \(\mathcal{O}_{X}(1)\neq R^{\otimes 5}\).
1. Assume that \(u_{X}\) is induced by \(|R|^{\otimes 3}\), i.e. that \(b=2\) and \(\deg(u_{X})=3\). Thus \(\mathcal{O}_{X}(1)\cong K_{X}\otimes(R^{\otimes 3})^{\vee}(-B)\) for some \(B=p+p^{\prime}\in X_{2}\). Let \(B^{\prime}\in X_{2}\) be such that \(p^{\prime}+B^{\prime}\in|R|\); \(\mathcal{O}_{X}(1)(-B^{\prime})\cong K_{X}\otimes(R^{\otimes 4})^{\vee}(-p)\). Since \(m\geq 4\), \(h^{0}(R^{\otimes 4})=5\) and hence \(h^{0}(K_{X}\otimes(R^{\otimes 4})^{\vee}))=6\) and therefore \(h^{0}(\mathcal{O}_{X}(1)(-B^{\prime}))=h^{0}(K_{X}\otimes(R^{\otimes 4})^{\vee}(-p))\geq 5\), contradicting the very ampleness of \(\mathcal{O}_{X}(1)\).
2. Now assume that \(u_{X}\) is not induced by \(g_{3}^{1}\). Thus the dual curve has degree \(11-b\) and \(K_{X}(-1)(-B)\) is not a multiple of \(g_{3}^{1}\), contradicting [24, Prop. 1] and the fact that \(\mathcal{O}_{X}(1)\) is not a multiple of \(g_{3}^{1}\).
**Lemma 2.8**.: \(u_{X}\) _is birational onto its image._
Proof.: Suppose \(u_{X}\) is not birational onto its image. Since \(11\) is a prime and \(C=u_{X}(X)\) spans \(\mathbb{P}^{3}\), we have the following two cases:
\[\begin{cases}b=1,\deg(u_{X})=2,\deg(C)=5\\ b=2,\deg(u_{X})=3,\deg(C)=3;\text{this case is excluded by Proposition \ref{prop:2.2}.}\end{cases}\]
Now assume \(\deg(u_{X})=2\) and hence \(\deg(C)=5\). Since \(|K_{X}(-1)(-B)|\) is complete, \(C\) has geometric genus \(h=2\), smooth and \(|\mathcal{O}_{C}(1)|=g_{5}^{3}\) is non-special. Since \(|K_{X}(-1)|=|u_{X}^{*}\mathcal{O}_{C}(1)+B|\), for any \(p\in C\), we have
\[|K_{X}(-1)+u_{X}^{*}(p)| =|u_{X}^{*}(\mathcal{O}_{C}(1))+B+u_{X}^{*}(p)|=|u_{X}^{*}( \mathcal{O}_{C}(1)\otimes\mathcal{O}_{C}(p))+B|\] \[=|u_{X}^{*}(g_{6}^{4})+B|=g_{13}^{s},\ s\geq 4\]
hence
\[|\mathcal{O}_{X}(1)-u_{X}^{*}(p)| =|K_{X}-K_{X}(-1)-u_{X}^{*}(p)|=|K_{X}-(K_{X}(-1)+u_{X}^{*}(p))|\] \[=|K_{X}-g_{13}^{s}|=g_{13}^{s},\]
and therefore \(|\mathcal{O}_{X}(1)|\) is compounded, a contradiction.
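For clarity, the degree bookkeeping in the last two displays: \(\deg K_{X}(-1)=2g-2-15=11\), so both \(|K_{X}(-1)+u_{X}^{*}(p)|\) and \(|\mathcal{O}_{X}(1)-u_{X}^{*}(p)|\) have degree \(13\), and
\[h^{0}(\mathcal{O}_{X}(1)-u_{X}^{*}(p))\geq 5=h^{0}(\mathcal{O}_{X}(1))-1\quad\text{for every }p\in C;\]
thus each degree two fibre \(u_{X}^{*}(p)\) imposes only one condition on \(|\mathcal{O}_{X}(1)|\), so the morphism induced by \(|\mathcal{O}_{X}(1)|\) cannot separate the two points of any fibre.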
**Remark 2.9**.: Assume \(b=\deg B>0\) and \(\deg(u_{X})=1\), i.e. assume \(\deg(u_{X}(X))=11-\deg B\leq 10\). Since \(u_{X}(X)=C\) has geometric genus \(g=14\) and \(\pi_{1}(11-b,3)<g\), \(C\) is contained in a unique quadric surface ([14, Th. 3.13]).
## 3 Dual curves contained in a quadric surface \(Q\subset\mathbb{P}^{3}\)
We assume that for \(X\in\mathcal{H}_{15,14,5}\) the dual curve \(u_{X}(X)=C\subset\mathbb{P}^{3}\) lies on a smooth quadric surface \(Q\). Without loss of generality we may assume that \(C\in|\mathcal{O}_{Q}(a,11-b-a)|\) with \(a\leq 11-b-a\), i.e. \(2a\leq 11-b\). Proposition 2.7 implies \(a\geq 4\) and hence \(a=4\) if \(b=2\) and \(a\in\{4,5\}\) if \(b\in\{0,1\}\). Note that
\[\dim|\mathcal{O}_{Q}(a,11-b-a)|=(a+1)(12-b-a)-1\]
and
\[p_{a}(C)=(a-1)(10-b-a).\]
By Remark 2.2, in each case \(a\in\{4,5\}\), we get an irreducible family whose general element is a nodal curve. We may also assume that the nodes are general in \(Q\). Such a family has dimension
\[\dim |\mathcal{O}_{Q}(a,11-b-a)|-(p_{a}(C)-g)+\dim\mathbb{P}(H^{0}( \mathbb{P}^{3},\mathcal{O}(2))+b-\dim\operatorname{Aut}(\mathbb{P}^{3})\] \[=(a+1)(12-b-a)-1-((a-1)(10-b-a)-14)+9+b-15\] \[=-b+29<29=\lambda(d,g,r)=3g-3+\rho(d,g,r);\ (d,g,r)=(11,14,3),\]
if \(b>0\) and hence the family of such \(X\)'s does not constitute a full component; Remark 2.1(iii).
Therefore we have \(b=0\). Let \(C\in|\mathcal{O}_{Q}(a,11-a)|\) with \(\delta=p_{a}(C)-g\) nodes. Let \(\tilde{C}\stackrel{\pi}{\longrightarrow}C\subset Q\) be the normalization of the nodal curve \(C\). In the following we check whether \(\mathcal{O}_{\tilde{C}}(1)=\pi^{*}(\mathcal{O}_{\mathbb{P}^{3}}(1))\) has very ample residual series \(|K_{\tilde{C}}(-1)|\); if this is the case, we may then conclude that the family of nodal curves \(C\subset Q\subset\mathbb{P}^{3}\) under consideration indeed comes from a family of smooth curves \(X\subset\mathbb{P}^{5}\).
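For later use in Example 3.1, we record the values of the two displayed formulas at \(b=0\):
\[a=5:\ \dim|\mathcal{O}_{Q}(5,6)|=41,\ p_{a}(C)=20,\ \delta=p_{a}(C)-14=6;\qquad a=4:\ \dim|\mathcal{O}_{Q}(4,7)|=39,\ p_{a}(C)=18,\ \delta=4.\]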
Before proceeding we recall some standard notations concerning linear systems and divisors on a blown up projective plane. Let \(\mathbb{P}^{2}_{s}\) be the rational surface \(\mathbb{P}^{2}\) blown up at \(s\) general points. Let \(e_{i}\) be the class of the exceptional divisor \(E_{i}\) and \(l\) be the class of a line \(L\) in \(\mathbb{P}^{2}\). For integers \(b_{1}\geq b_{2}\geq\cdots\geq b_{s}\), let \((a;b_{1},\cdots,b_{i},\cdots,b_{s})\) denote class of the linear system \(|aL-\sum b_{i}E_{i}|\) on \(\mathbb{P}^{2}_{s}\). By abuse of notation we use the expression \((a;b_{1},\cdots,b_{i},\cdots,b_{s})\) for the divisor \(aL-\sum b_{i}E_{i}\) and \(|(a;b_{1},\cdots,b_{i},\cdots,b_{s})|\) for the linear system \(|aL-\sum b_{i}E_{i}|\). We use the convention
\[(a;b_{1}^{s_{1}},\cdots,b_{j}^{s_{j}},\cdots,b_{t}^{s_{t}}),\ \sum s_{j}=s\]
when \(b_{j}\) appears \(s_{j}\) times consecutively in the linear system \(|aL-\sum b_{i}E_{i}|\).
**Example 3.1**.:
1. \(C\in|\mathcal{O}_{Q}(5,6)|\): \(C\) is a nodal curve with \(\delta=p_{a}(C)-g=6\) nodes. Choose a node \(q_{0}\in C\) and blow up \(Q\) at \(q_{0}\) - which we denote by \(Q_{q_{0}}\) - and then blow up successively at the remaining five nodes \(q_{3},\cdots,q_{7}\) to get \(Q_{q_{0},q_{3},\cdots q_{7}}\). Note that \(Q_{q_{0}}\cong\mathbb{P}^{2}_{2}\) the two point blow up of the projective plane \(\mathbb{P}^{2}\) and hence \(Q_{q_{0},q_{3},\cdots q_{7}}\cong\mathbb{P}^{2}_{7}\). Under the identification \(Q_{q_{0}}\cong\mathbb{P}^{2}_{2}\), the exceptional divisor of the blow up \(Q_{q_{0}}\to Q\) is the proper transformation of the line through two points in \(\mathbb{P}^{2}\) which are the images of the two rulings of \(Q\) under the projection \(Q-\{q_{0}\}\rightarrow\mathbb{P}^{2}\).
Let \(\tilde{C}\in|(d;b_{1},\cdots,b_{7})|\in\mathrm{Pic}(\mathbb{P}^{2}_{7})\), where \(\tilde{C}\) is the smooth curve after resolving all the nodes of \(C\). Since \(q_{0}\in C\subset Q\) and the remaining \(5\) points \(q_{3},\cdots,q_{7}\) are all double points of \(C\), we have
\[(l-e_{1}-e_{2})\cdot\tilde{C}=d-b_{1}-b_{2}=2,\ b_{j}=e_{j}\cdot\tilde{C}=2, \ j=3,\cdots,7.\]
Note that
\[\deg C=(2l-e_{1}-e_{2})\cdot\tilde{C}=2d-b_{1}-b_{2}=11\]
\[\tilde{C}\cdot(l-e_{1})=a=5,\ \tilde{C}\cdot(l-e_{2})=11-a=6\]
and therefore \(\tilde{C}\in|(9;4,3,2^{5})|\) from which it follows
\[|K_{\tilde{C}}(-1)| =|K_{\mathbb{P}^{2}_{7}}+\tilde{C}-(2l-e_{1}-e_{2})|_{|\tilde{C}}\] \[=|-(3;1^{7})+(9;4,3,2^{5})-(2;1^{2},0^{5})|=|(4;2,1^{6})|_{| \tilde{C}}.\]
We note that the restriction map
\[\mathcal{M}:=|K_{\mathbb{P}^{2}_{7}}+\tilde{C}-(2l-e_{1}-e_{2})|\longrightarrow |K_{\mathbb{P}^{2}_{7}}+\tilde{C}-(2l-e_{1}-e_{2})|_{|\tilde{C}}\]
is surjective; by Kodaira vanishing theorem, \[h^{1}(\mathbb{P}^{2}_{7},\mathcal{I}_{\tilde{C}}\otimes\mathcal{M})=h^{1}(\mathbb{ P}^{2}_{7},-(5;2^{2},1^{5}))=0\] since \(|(5;2^{2},1^{5})|\) is (very) ample. Hence \(|K_{\tilde{C}}(-1)|\) is completely cut out by the very ample linear system \(|(4;2,1^{6})|\) on \(\mathbb{P}^{2}_{7}\) (by [9]) and the very ampleness of \(|K_{\tilde{C}}(-1)|\) follows.
2. \(C\in|\mathcal{O}_{Q}(4,7)|\colon C\) is nodal with \(\delta=p_{a}(C)-g=4\) nodes. We carry out a similar computation as in (i). Choose a node \(q_{0}\in C\) and blow up \(Q\) at \(q_{0}\) - which we again denote by \(Q_{q_{0}}\) - and then blow up successively at the remaining three nodes \(q_{3},\cdots,q_{5}\) to get \(Q_{q_{0},q_{3},\cdots q_{5}}\). Let \(\tilde{C}\in|(d;b_{1},\cdots,b_{5})|\in\mathrm{Pic}(\mathbb{P}^{2}_{5})\), where \(\tilde{C}\) is the smooth curve after resolving all the \(\delta\) nodes of \(C\). Since \(q_{0}\in C\subset Q\) and the remaining 3 points \(q_{3},\cdots,q_{5}\) are double points, we again have \[(l-e_{1}-e_{2})\cdot\tilde{C}=d-b_{1}-b_{2}=2\text{ and }b_{j}=e_{j}\cdot\tilde{C}=2,\ j=3,\cdots,5.\] Note that \[\deg C=(2l-e_{1}-e_{2})\cdot\tilde{C}=2d-b_{1}-b_{2}=11,\qquad\tilde{C}\cdot(l-e_{1})=4,\ \tilde{C}\cdot(l-e_{2})=7\] and therefore \(\tilde{C}\in|(9;5,2^{4})|\). We set \[\mathcal{M}:=|K_{\mathbb{P}^{2}_{5}}+\tilde{C}-(2l-e_{1}-e_{2})|=|-(3;1^{5})+(9;5,2^{4})-(2;1^{2},0^{3})|=|(4;3,0,1^{3})|\] and consider the exact sequence \[0\to\mathcal{O}(-\tilde{C}+\mathcal{M})\to\mathcal{O}(\mathcal{M})\to\mathcal{O}(\mathcal{M})_{|\tilde{C}}\to 0.\] Note that \(\mathcal{L}:=\mathcal{O}(-\tilde{C}+\mathcal{M})=\mathcal{O}(-(5;2^{2},1^{3}))\) and \(\mathcal{L}^{-1}=\mathcal{O}((5;2^{2},1^{3}))\) is ample (indeed very ample), and hence \[h^{1}(\mathbb{P}^{2}_{5},\mathcal{O}(-\tilde{C}+\mathcal{M}))=0\] by the Kodaira vanishing theorem. By the surjectivity of the restriction map \[H^{0}(\mathbb{P}^{2}_{5},\mathcal{O}_{\mathbb{P}^{2}_{5}}(K_{\mathbb{P}^{2}_{5}}+\tilde{C}-(2;1^{2},0^{3})))\to H^{0}(\tilde{C},K_{\tilde{C}}(-1)),\] \(|K_{\tilde{C}}(-1)|\) is completely cut out on \(\tilde{C}\) by \(|K_{\mathbb{P}^{2}_{5}}+\tilde{C}-(2l-e_{1}-e_{2})|\). Note that the linear system \(\mathcal{M}=|(4;3,0,1^{3})|\) contracts the exceptional divisor \(e_{2}\), whereas \(e_{2}\cdot\tilde{C}=2\). Hence the morphism induced by \(\mathcal{M}\) on \(\mathbb{P}^{2}_{5}\) and then restricted to \(\tilde{C}\) produces a singularity on the image curve. In sum, we conclude that the normalization \(\tilde{C}\) of \(C\in|\mathcal{O}_{Q}(4,7)|\) does not have a smooth counterpart in \(\mathbb{P}^{5}\) (or a very ample residual series \(|K_{\tilde{C}}(-1)|\)) and hence does not contribute to a component of \(\mathcal{H}_{15,14,5}\).
**Remark 3.2**.: (i) If the dual curve of a smooth \(X\in\mathcal{H}_{15,14,5}\) lies on a quadric cone, such curves are in the boundary of the component corresponding to the family of curves \(C\in|\mathcal{O}_{Q}(5,6)|\) of geometric genus \(g=14\) on a smooth quadric surface \(Q\subset\mathbb{P}^{3}\); recall that curves on a quadric cone are specializations of curves on a smooth quadric by [17]; see also [6, Introduction].
(ii) We take \(|L|=|\mathcal{O}_{Q}(5,6)|\) in Remark 2.2. We have
\[\dim\Sigma_{14}=\dim|\mathcal{O}_{Q}(5,6)|+14-p_{a}(C)=35,\]
hence the irreducible family of smooth curves in \(\mathbb{P}^{5}\) of degree \(15\) and genus \(g=14\) consisting of image curves of the morphism \(\widetilde{C}\hookrightarrow\mathbb{P}^{5}\) induced by \(|K_{\widetilde{C}}(-1)|\) has dimension
\[\dim\Sigma_{14}-\dim\operatorname{Aut}(Q)+\dim\operatorname{Aut}(\mathbb{P}^ {5})=64,\]
which may well constitute a component of \(\mathcal{H}_{15,14,5}\).
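Spelling out the two counts in (ii): \(\dim|\mathcal{O}_{Q}(5,6)|=6\cdot 7-1=41\) and \(p_{a}(C)=20\), so \(\dim\Sigma_{14}=41+14-20=35\); then, since \(\dim\operatorname{Aut}(Q)=6\) and \(\dim\operatorname{Aut}(\mathbb{P}^{5})=35\),
\[35-6+35=64,\]
matching the lower bound of Remark 2.1(1).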
## 4 The final step of the proof of Theorem 1.1
In this section we finish the proof of Theorem 1.1. Now we consider the case in which \(h^{0}(\mathbb{P}^{3},\mathcal{I}_{C}(2))=0\). By Remark 2.9, \(b=0\) and \(\deg(C)=11\). Recall that \(C\) has geometric genus \(g=14\) and hence \(p_{a}(C)\geq 14\).
(a) Let \(C\subset\mathbb{P}^{3}\) be a non-degenerate curve not contained in a cubic surface. Since \(\deg(C)>3^{2}\), by [13] a general hyperplane section \(H\cap C=\Delta\) of \(C\) is not contained in a plane cubic. One then easily computes
\[h^{1}(\mathcal{I}_{\Delta}(1))=8,h^{1}(\mathcal{I}_{\Delta}(2))=5,h^{1}( \mathcal{I}_{\Delta}(3))=1,h^{1}(\mathcal{I}_{\Delta}(t))=0\text{ for all }t\geq 4.\]
By [14, Cor. 3.2] one knows
\[p_{a}(C)\leq\sum_{n\geq 1}\big(d-h_{\Delta}(n)\big)=\sum_{n\geq 1}h^{1}(H,\mathcal{I}_{\Delta}(n))\]
where \(h_{\Delta}(n)\) is the Hilbert function of \(\Delta\). Hence \(p_{a}(C)=14\) and therefore \(C\) is smooth. Furthermore \(C\) is arithmetically Cohen-Macaulay (ACM for short) by [14, Remark 3.1.1]. In particular \(h^{0}(\mathcal{I}_{C}(4))=h^{0}(\mathbb{P}^{3},\mathcal{O}(4))-h^{0}(C, \mathcal{O}_{C}(4))=4\).
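Explicitly, the bound evaluates as
\[p_{a}(C)\leq\sum_{n\geq 1}h^{1}(H,\mathcal{I}_{\Delta}(n))=8+5+1+0+\cdots=14,\]
while \(p_{a}(C)\geq g=14\) forces equality, whence \(C\) is smooth; likewise \(h^{0}(C,\mathcal{O}_{C}(4))=44-14+1=31\) since \(\deg\mathcal{O}_{C}(4)=44>2g-2\), giving \(h^{0}(\mathcal{I}_{C}(4))=35-31=4\).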
By [26, Prop. 3.1], \(C\) is directly linked to an ACM curve \(D\) of degree \(5\) and \(p_{a}(D)=2\) by a complete intersection of two general quartics containing \(C\) (i.e. by a general pencil of quartics containing \(C\), which may be regarded as a general member of the Grassmannian \(\mathbb{G}(1,\mathbb{P}(H^{0}(\mathbb{P}^{3},\mathcal{I}_{C}(4))))\)).
Conversely let \(\mathcal{E}\) be the family consisting of all ACM curves \(D\subset\mathbb{P}^{3}\) of degree \(5\) and \(p_{a}(D)=2\). It is known that \(\mathcal{E}\) is irreducible of dimension \(4\cdot 5=20\) and a general element of \(\mathcal{E}\) is smooth. For every \(D\in\mathcal{E}\),
\[h^{0}(\mathbb{P}^{3},\mathcal{I}_{D}(4))=h^{0}(\mathbb{P}^{3},\mathcal{O}(4))-h^{0}(D,\mathcal{O}_{D}(4))=35-(20-2+1)=16\]
and \(D\) is directly linked to an ACM curve \(C\) with \(p_{a}(C)=14\) and degree \(11\).
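As a consistency check, recall the standard formulas for curves directly linked by a complete intersection of surfaces of degrees \(s\) and \(t\): \(\deg C+\deg D=st\) and \(p_{a}(C)-p_{a}(D)=\frac{1}{2}(s+t-4)(\deg C-\deg D)\). With \(s=t=4\) these read
\[11+5=16=4\cdot 4\qquad\text{and}\qquad 14-2=12=\tfrac{1}{2}\cdot 4\cdot(11-5),\]
in agreement with the numbers above.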
Therefore we may consider the locus
\[\Sigma\subset\mathbb{G}(1,\mathbb{P}(H^{0}(\mathbb{P}^{3},\mathcal{O}(4))))= \mathbb{G}(1,34)\]
of pencils of quartic surfaces whose base locus consists of a curve \(C\) of degree \(d=11\) and genus \(g=14\) and a quintic \(D\) of genus \(h=2\) where \(C\) and \(D\) are
directly linked via a complete intersection of quartics corresponding to the pencil, together with two obvious maps
\[\begin{array}{ccl}
\mathbb{G}(1,34)\supset\Sigma & \stackrel{\pi_{C}}{\longrightarrow} & \Gamma\subset\mathcal{H}_{11,14,3}\\
\big\downarrow{\pi_{D}} & & \\
\mathcal{E} & &
\end{array}\]
We have \((H+E)\cdot E=1\) and hence \(\mathcal{O}_{S}(H+E)_{|E}\) is a degree \(1\) line bundle on \(E\cong\mathbb{P}^{1}\). Thus \(\mathcal{O}_{E}(H+E)\) is very ample. Since \(h^{1}(\mathcal{O}_{S}(H))=0\), we get that \(Z\) imposes \(2\) independent conditions on \(|H+E|\) and hence on \(|H+E|_{|C}\). Now assume \(Z\cap E\neq\emptyset\) and \(Z\nsubseteq E\). Write \(Z\cap E=\{p\}\). The point \(p\) is not a base point of \(\mathcal{O}_{C}(H+E)\), because \(|H+E|_{|E}\) has no base point. Since \(H\) is globally generated, there is \(M\in|\mathcal{O}_{S}(H)|\) such that \(Z\nsubseteq M\). Since \(Z\nsubseteq E\), \(Z\) imposes \(2\) independent conditions on \(|\mathcal{O}_{C}(H+E)|\).
(b) We assume that \(C\) is contained in an irreducible cubic surface \(S\). Since \(\deg(C)=11>9\), such a cubic surface is unique. Since \(h^{0}(\mathbb{P}^{3},\mathcal{I}_{C}(2))=0\), we get \(p_{a}(C)\leq\pi_{1}(11,3)=15\). Thus either \(C\) is smooth or it has a unique singular point, which is either an ordinary node or an ordinary cusp. There are several possibilities for the cubic surface \(S\) containing \(C\).
1. \(S\) is a cubic ruled surface which is projection of a rational normal surface scroll \(S^{\prime}\subset\mathbb{P}^{4}\) from a point \(p\notin S^{\prime}\).
2. \(S\) is a projection of a cone \(S^{\prime\prime}\subset\mathbb{P}^{4}\) over a twisted cubic curve in \(\mathbb{P}^{3}\) from \(p\notin S^{\prime\prime}\), i.e. \(S\) is a cone over a singular plane cubic which has a double line.
3. \(S\) is a cone over a non-singular plane curve.
4. \(S\) is a non-singular cubic surface.
5. \(S\) is a singular and normal surface with isolated singularities.
We remark that the first two cases (i) and (ii) may be eliminated from our consideration simply because the morphism \(u_{X}:X\to S\subset\mathbb{P}^{3}\) with \(C\) as its image is induced by a complete linear system; note that the morphism \(u_{X}\) lifts to a morphism \(X\to S^{\prime}\) (or \(S^{\prime\prime}\)) \(\subset\mathbb{P}^{4}\) followed by an external projection into \(\mathbb{P}^{3}\), which cannot be induced by a complete linear series.
In the third case (iii) we need a dimension count as follows. Fix a smooth cubic curve \(E\subset\mathbb{P}^{2}\) and a triple covering \(X\xrightarrow{\pi}E\) such that
\[|K_{X}(-1)|=|\pi^{*}(\mathcal{O}_{E}(1))+B|=g_{11}^{3},B\in X_{2}.\]
Choice of \(E\) depends on \(\dim\mathcal{M}_{1}+\dim W_{3}^{2}(E)=2\) parameters. Choice of a triple covering \(X\) over a fixed \(E\) depends on \(2g-3\) parameters by Riemann's moduli count; [3, Theorem 8.23, p828]. Choice of degree two effective divisors \(B\in X_{2}\) depends on at most one parameter; note that \(|\pi^{*}(\mathcal{O}_{E}(1))+B|=g_{11}^{2}\) for general \(B\in X_{2}\). However these numbers do not add up to the minimal possible dimension \(\lambda(15,14,5)=29\) which is necessary to constitute a component.
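Indeed, with \(g=14\) the count reads
\[\underbrace{2}_{\text{choice of }(E,\,g_{3}^{2})}+\underbrace{2g-3=25}_{\text{triple covers of a fixed }E}+\underbrace{1}_{\text{choice of }B}=28<29=\lambda(15,14,5).\]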
In the case (iv), we first assume \(C=u_{X}(X)\) is smooth. By [12, Prop. B1] the family of smooth curves \(C\subset S\) of degree \(d=11\) has dimension at most \(d+g+18=43<44\) and therefore does not constitute a full component. If the image curve \(u_{X}(X)=C\) is singular (and hence \(p_{a}(C)=15\)), such \(C\) forms a family of positive codimension inside a family generically consisting of smooth curves of degree \(11\) and of genus \(g=p_{a}(C)=15\), which forms a family of dimension \(11+p_{a}(C)+18=44\); cf. Remark 2.4.
In the last case (v), note that \(C=u_{X}(X)\) can be either singular or smooth. However, in both cases, the family consisting of curves on normal cubic surfaces with isolated singularities has dimension strictly less than \(11+p_{a}(C)+18\leq 44\) by [6] and hence does not constitute a full component.
**Conclusion.** We now have exhausted all the possibilities for the dual curve \(C\subset\mathbb{P}^{3}\), therefore we conclude that there are only two irreducible components of \(\mathcal{H}_{15,14,5}\);
(i) one component generically consisting of curves whose dual curve \(C\) is a nodal curve in \(|\mathcal{O}_{Q}(5,6)|\) and
(ii) the other component generically consisting of image curves of the morphism \(C\hookrightarrow\mathbb{P}^{5}\) induced by \(|K_{C}(-1)|\) where \(C\subset\mathbb{P}^{3}\) is directly linked to a quintic of genus \(h=2\).
Since both families have the same dimension \(64\), one is not in the boundary of the other.
**Example 4.2**.: There is a singular curve \(C\subset\mathbb{P}^{3}\) with \((d,g)=(11,14)\) lying on a smooth cubic surface such that \(|K_{\widetilde{C}}(-1)|\) is very ample, where \(\widetilde{C}\longrightarrow C\) is the normalization. Therefore the image curve \(X\) of \(\widetilde{C}\) under \(\widetilde{C}\stackrel{{|K_{\widetilde{C}}(-1)|}}{{\longrightarrow}}X\subset\mathbb{P}^{5}\) is a curve with the right degree and genus \((d,g)=(15,14)\). However such curves do not constitute a full component, as we remarked in the last stage of the proof of the main theorem.
Let \(S\subset\mathbb{P}^{3}\) be a smooth cubic surface. Take \(C\in|(10;4,3^{5})|\); \(p_{a}(C)=15\) and assume that \(C\) has one node and no further singularities. By blowing up the only node of \(C\), the proper transformation of \(C\) is \(\widetilde{C}\in|(10;4,3^{5},2)|\) and
\[|K_{\widetilde{C}}(-1)|=|\widetilde{C}+K_{\mathbb{P}^{2}_{7}}-H|_{|\widetilde{C}}=|(10;4,3^{5},2)-(3;1^{7})-(3;1^{6},0)|_{|\widetilde{C}}=|(4;2,1^{6})|_{|\widetilde{C}}\]
Note that \(\dim|(4;2,1^{6})|=14-3-6=5\) and \(|(4;2,1^{6})|\) is very ample on \(\mathbb{P}^{2}_{7}\) by [9]. Therefore \(\widetilde{C}\hookrightarrow X\subset\mathbb{P}^{5}\) is an embedding with smooth image curve \(X\) of degree \(d=(10;4,3^{5},2)\cdot(4;2,1^{6})=15\) and genus \(g=p_{a}(\widetilde{C})=p_{a}(C)-1=14\). The dimension of the family of very ample \(g^{5}_{15}\)'s arising this way is
\[\dim|(10;4,3^{5})|-1+\dim\mathbb{P}(H^{0}(\mathbb{P}^{3},\mathcal{O}(3)))-\dim\operatorname{Aut}(\mathbb{P}^{3})=24+19-15=28<\lambda(15,14,5)=29,\]
and therefore this family does not constitute a full component.
By using Sakai's method [25], an effective numerical criterion for the gonality of a plane curve with prescribed singularities, one checks that \(\tilde{C}\) is 6-gonal; we leave the verification to the reader. We also note that the curve \(X\subset\mathbb{P}^{5}\) lies on a surface of degree six since \((4;2,1^{6})^{2}=6\).
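The two intersection numbers used above, written out on \(\mathbb{P}^{2}_{7}\):
\[(10;4,3^{5},2)\cdot(4;2,1^{6})=40-8-15-2=15,\qquad(4;2,1^{6})^{2}=16-4-6=6.\]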
## 5 Gonality of a general element of a component of \(\mathcal{H}_{15,14,5}\)
Let \(\Gamma_{1}\) (resp. \(\Gamma_{2}\)) be the irreducible component of \(\mathcal{H}_{15,14,5}\) whose general element has dual curve not contained (resp. contained) in a quadric surface.
**Proposition 5.1**.: _A general \(X\in\Gamma_{1}\) is 7-gonal._
Proof.: The dual curve \(C\) of \(X\) is a smooth degree 11 ACM curve \(C\subset\mathbb{P}^{3}\) whose homogeneous ideal is generated by forms of degree 4. Thus there is no line \(J\subset\mathbb{P}^{3}\) such that \(\deg(J\cap C)\geq 5\). There are lines \(J\) such that \(\deg(J\cap C)=4\) ([15, Proposition 3.1]). Thus \(C\) is 7-gonal ([15, Theorem 1.3]).
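Concretely, projection from a \(4\)-secant line \(J\) induces a base point free pencil of degree \(\deg(C)-\deg(J\cap C)=11-4=7\) on \(C\); since \(C\) has no \(5\)-secant lines, no multisecant projection yields a smaller degree, and by [15, Theorem 1.3] the gonality is indeed attained by such a projection.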
**Proposition 5.2**.: _No smooth element \(X\in\mathcal{H}_{15,14,5}\) is 4-gonal._
Proof.: Recall that \(\deg B\leq 1\), where \(B\) is the base locus of \(|K_{X}(-1)|\). We also recall that \(u_{X}\) is birational onto its image by Lemma 2.8.
We now assume the existence of a 4-gonal \(X\in\mathcal{H}_{15,14,5}\) with a unique \(g_{4}^{1}\).
1. \(\deg B=1\): \(u_{X}\) is birationally very ample and by Remark 2.9, \(C\) is contained in a quadric \(Q\subset\mathbb{P}^{3}\). Assume that \(Q\) is smooth. If \(C\in|\mathcal{O}_{Q}(5,5)|\), \(X\) has a base point free \(g_{5}^{1}\), implying that \(g\leq(4-1)(5-1)=12\) by the Castelnuovo-Severi inequality, a contradiction. If \(C\in|\mathcal{O}_{Q}(4,6)|\), \(p_{a}(C)=15\), hence \(C\) has a double point which is either a node or a cusp. Let \(q\in C\subset Q\) be the singular point and blow up \(Q\) at \(q\) to get \(Q_{q}\cong\mathbb{P}^{2}_{2}\). Let \(\widetilde{C}\subset\mathbb{P}^{2}_{2}\) be the proper transformation of \(C\) and set \(\widetilde{C}\in|(d;b_{1},b_{2})|\). We carry out a similar computation as before. The exceptional divisor of the blow up is \(l-e_{1}-e_{2}\). Since \(q\) is a double point, \(\widetilde{C}\cdot(l-e_{1}-e_{2})=d-b_{1}-b_{2}=2\). Also \(\widetilde{C}\cdot(2l-e_{1}-e_{2})=2d-b_{1}-b_{2}=\deg C=10\), \(\widetilde{C}\cdot(l-e_{1})=d-b_{1}=4\), \(\widetilde{C}\cdot(l-e_{2})=d-b_{2}=6\) and hence we have \(\widetilde{C}\in|(8;4,2)|\). We set \[\mathcal{M}:=|K_{\mathbb{P}^{2}_{2}}+\tilde{C}-(2l-e_{1}-e_{2})|=|-(3;1^{2})+(8;4,2)-(2;1^{2})|=|(3;2,0)|\] and consider the exact sequence \[0\to\mathcal{O}(-\tilde{C}+\mathcal{M})\to\mathcal{O}(\mathcal{M})\to\mathcal{O}(\mathcal{M})_{|\tilde{C}}\to 0.\] Note that \(\mathcal{L}:=\mathcal{O}(-\tilde{C}+\mathcal{M})=\mathcal{O}(-(5;2^{2}))\) and \(\mathcal{L}^{-1}=\mathcal{O}((5;2^{2}))\) is ample (indeed very ample) and hence \[h^{1}(\mathbb{P}^{2}_{2},\mathcal{O}(-\tilde{C}+\mathcal{M}))=0\] by the Kodaira vanishing theorem. By the surjectivity of the restriction map \[H^{0}(\mathbb{P}^{2}_{2},\mathcal{O}_{\mathbb{P}^{2}_{2}}(K_{\mathbb{P}^{2}_{2}}+\tilde{C}-(2;1^{2})))\to H^{0}(\tilde{C},K_{\tilde{C}}(-1)),\] \(|K_{\tilde{C}}(-1)|\) is completely cut out on \(\tilde{C}\) by \(|K_{\mathbb{P}^{2}_{2}}+\tilde{C}-(2l-e_{1}-e_{2})|\). The morphism \(Q_{q}\cong\mathbb{P}^{2}_{2}\overset{|(3;2,0)|}{\longrightarrow}\mathbb{P}^{6}\) contracts \(e_{2}\), whereas \(\widetilde{C}\cdot e_{2}=2\). Hence \(|K_{\widetilde{C}}(-1)|\) is not very ample, and neither is the projection given by \(|K_{\widetilde{C}}(-1)-B|\) from the base point \(B\). Note that \[|\mathcal{O}_{X}(1)|=|K_{X}-K_{X}(-1)|=|K_{\widetilde{C}}-(\mathcal{O}_{\widetilde{C}}(1)+B)|=|K_{\widetilde{C}}(-1)-B|\] via the identification \(X\cong\widetilde{C}\), contradicting the very ampleness of \(\mathcal{O}_{X}(1)\).
Assume \(C\) lies on a quadric cone \(Q\) with vertex \(P\). Using the notation in Remark 2.2, let \(\mathbb{F}_{2}\longrightarrow Q\subset\mathbb{P}^{3}\) be the morphism induced by \(|h+2f|\) and let \(\widetilde{C}\) be the proper transformation of \(C\) under this desingularization. Setting \(\widetilde{C}\in|ah+bf|\), we have \(\widetilde{C}\cdot(h+2f)=(ah+bf)\cdot(h+2f)=b=10\) and \(0\leq\widetilde{C}\cdot h=-2a+b\leq m\), where \(m\) is the multiplicity of \(C\) at the vertex \(P\). Hence we have \[(a,b,m)=\begin{cases}(5,10,0)\\ (4,10,2).\end{cases}\] In the first case \((a,b,m)=(5,10,0)\), the ruling of the cone cuts out a base point free \(g_{5}^{1}\), which is impossible in the presence of a base point free \(g_{4}^{1}\) by the Castelnuovo-Severi inequality. In the case \((a,b,m)=(4,10,2)\), \(p_{a}(\widetilde{C})=15\), \(\widetilde{C}\in|4h+10f|\), and hence \(\widetilde{C}\) is singular. Let \(f_{0}\) be the unique fibre containing the unique singular point \(p_{0}\in\widetilde{C}\). Blow up \(\mathbb{F}_{2}\) at \(p_{0}\) and let \(e\) be the exceptional divisor of the blow up \(\mathbb{F}_{2,1}\stackrel{\pi}{\rightarrow}\mathbb{F}_{2}\). Let \(\tilde{f}_{0}\) (resp. \(\widetilde{\widetilde{C}}\)) be the proper transformation of \(f_{0}\) (resp. \(\widetilde{C}\)). By abuse of notation, we denote \(\pi^{*}(h)\) and \(\pi^{*}(f)\) by \(h\) and \(f\). We have \[\widetilde{\widetilde{C}}\in|4h+10f-2e|\] \[\mathcal{M}:=|\widetilde{\widetilde{C}}+K_{\mathbb{F}_{2,1}}-(h+2f)|=|(4h+10f-2e)+(-2h-4f+e)-(h+2f)|=|h+4f-e|.\] Since \(\mathcal{O}_{\mathbb{F}_{2,1}}(\mathcal{M}-\widetilde{\widetilde{C}})=\mathcal{O}_{\mathbb{F}_{2,1}}(-(3h+6f-e))\) and \(f\cdot(\mathcal{M}-\widetilde{\widetilde{C}})<0\), we see that \(h^{0}(\mathbb{F}_{2,1},\mathcal{O}(\mathcal{M}-\widetilde{\widetilde{C}}))=0\), implying that the restriction map \[\rho:H^{0}(\mathbb{F}_{2,1},\mathcal{O}(\mathcal{M}))\longrightarrow H^{0}(\widetilde{\widetilde{C}},\mathcal{O}(\mathcal{M})\otimes\mathcal{O}_{\widetilde{\widetilde{C}}})\] is injective. Suppose \(\rho\) is not surjective: \[\mathrm{Im}(\rho)\subsetneq H^{0}(\widetilde{\widetilde{C}},\mathcal{M}\otimes\mathcal{O}_{\widetilde{\widetilde{C}}}).\] By the projection formula, \(\pi_{*}\pi^{*}\mathcal{O}_{\mathbb{F}_{2}}(h+4f)=\mathcal{O}_{\mathbb{F}_{2}}(h+4f)\), so \[h^{0}(\mathbb{F}_{2,1},\pi^{*}\mathcal{O}_{\mathbb{F}_{2}}(h+4f))=h^{0}(\mathbb{F}_{2},\pi_{*}\pi^{*}\mathcal{O}_{\mathbb{F}_{2}}(h+4f))=h^{0}(\mathbb{F}_{2},\mathcal{O}(h+4f))=8\] (cf. [5, Ex.2, p.53] for the last equality above) and hence \[h^{0}(\mathbb{F}_{2,1},\mathcal{O}(\mathcal{M}))=h^{0}(\mathbb{F}_{2,1},\mathcal{O}(h+4f-e))=h^{0}(\mathbb{F}_{2},\mathcal{O}(h+4f))-1=7.\] Then by the injectivity and the non-surjectivity of \(\rho\), \[r=\dim\mathbb{P}(H^{0}(\widetilde{\widetilde{C}},\mathcal{O}(\mathcal{M})\otimes\mathcal{O}_{\widetilde{\widetilde{C}}}))>\dim\mathbb{P}(H^{0}(\mathbb{F}_{2,1},\mathcal{O}(\mathcal{M})))=\dim\mathbb{P}(\mathrm{Im}(\rho))=6.\]
Since \(|\mathrm{Im}(\rho)|\subsetneq\mathbb{P}(H^{0}(\widetilde{\widetilde{C}},\mathcal{O}(\mathcal{M})\otimes\mathcal{O}_{\widetilde{\widetilde{C}}}))\) induces a morphism birational onto its image, the complete linear system \(\mathbb{P}(H^{0}(\widetilde{\widetilde{C}},\mathcal{O}(\mathcal{M})\otimes\mathcal{O}_{\widetilde{\widetilde{C}}}))\) is still birationally very ample, which contradicts the Castelnuovo upper bound for the arithmetic genus of degree \(\widetilde{\widetilde{C}}\cdot\mathcal{M}=(4h+10f-2e)\cdot(h+4f-e)=16\) curves in \(\mathbb{P}^{r\geq 7}\); \(\pi(16,7)=12<g=14\). Therefore we have \[\mathrm{Im}(\rho)=H^{0}(\widetilde{\widetilde{C}},\mathcal{O}(\mathcal{M})\otimes\mathcal{O}_{\widetilde{\widetilde{C}}})\] and the restriction map \(\rho\) is an isomorphism. Note that \(\tilde{f}_{0}^{2}=-1\), \(\tilde{f}_{0}\cdot e=1\), \(h\cdot\tilde{f}_{0}=1\), \(\widetilde{\widetilde{C}}\cdot\tilde{f}_{0}=2\), \(\mathcal{M}\cdot\tilde{f}_{0}=(h+4f-e)\cdot\tilde{f}_{0}=0\). Hence the morphism \(\psi\) given by \(\mathcal{M}=|\widetilde{\widetilde{C}}+K_{\mathbb{F}_{2,1}}-(h+2f)|\) contracts the \((-1)\) curve \(\tilde{f}_{0}\) and the image \(\psi(\widetilde{\widetilde{C}})\subset\mathbb{P}^{6}\) acquires a singularity. Recall that \(K_{X}(-1)=\mathcal{O}_{\widetilde{\widetilde{C}}}(1)+B\). Hence \[\mathbb{P}(H^{0}(\widetilde{\widetilde{C}},\mathcal{O}(\mathcal{M})\otimes\mathcal{O}_{\widetilde{\widetilde{C}}}))=|K_{\widetilde{\widetilde{C}}}(-1)|=|K_{\widetilde{\widetilde{C}}}-(K_{X}(-1)-B)|=|\mathcal{O}_{X}(1)+B|.\] Since \(\mathbb{P}(H^{0}(\widetilde{\widetilde{C}},\mathcal{O}(\mathcal{M})\otimes\mathcal{O}_{\widetilde{\widetilde{C}}}))=|\mathcal{O}_{X}(1)+B|\) is not very ample, \(\mathcal{O}_{X}(1)\) is not very ample, a contradiction.
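Incidentally, the value \(h^{0}(\mathbb{F}_{2},\mathcal{O}(h+4f))=8\) used above can also be read off from the pushforward along the ruling \(p:\mathbb{F}_{2}\to\mathbb{P}^{1}\), namely \(p_{*}\mathcal{O}_{\mathbb{F}_{2}}(ah+bf)\cong\bigoplus_{i=0}^{a}\mathcal{O}_{\mathbb{P}^{1}}(b-ie)\):
\[h^{0}(\mathcal{O}_{\mathbb{F}_{2}}(h+4f))=h^{0}(\mathcal{O}_{\mathbb{P}^{1}}(4))+h^{0}(\mathcal{O}_{\mathbb{P}^{1}}(2))=5+3=8.\]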
2. \(\deg B=0\) and \(C\) is not contained in a quadric surface. (a) Suppose \(K_{X}(-1)=g_{11}^{3}\) is very ample. Recall that a complete linear series \(g_{d}^{r}\) on a \(4\)-gonal curve \(C\) is of type \(1\) (type \(2\), resp.) if \(g_{d}^{r}\) is composed of \(g_{4}^{1}\) (if the residual series \(|K_{C}-g_{d}^{r}|\) is composed of \(g_{4}^{1}\), resp.) according to Coppens-Martens [7]. In the current situation, the linear series \(K_{X}(-1)\) is neither of type \(1\) nor of type \(2\) and hence by [7, Th. 1.9], \(K_{X}(-1)=g_{11}^{3}\) is of the form \(|2g_{4}^{1}+F|\) with \(\deg(F)=3\), i.e. there is \(m\in\{0,2\}\) such that \(g_{11}^{3}=|\frac{m+r-1}{2}g_{4}^{1}+F|\) and \(h^{0}(mg_{4}^{1}+F)=m+2\). Since we already excluded the case \(C\in|\mathcal{O}_{Q}(4,7)|\) in Example 3.1 (ii), \(m\neq 0\) and therefore \(g_{11}^{3}=|2g_{4}^{1}+F|\). Write \(F=p+F^{\prime}\) with \(\deg(F^{\prime})=2\). Since \(g_{11}^{3}=|2g_{4}^{1}+F|=|2g_{4}^{1}+F^{\prime}+p|\) is very ample, we get \(h^{0}(2g_{4}^{1}+p)=2\), a contradiction. (b) Recall \(p_{a}(C)\leq 15\) by the assumption that \(C\) is not contained in a quadric. Now assume \(p_{a}(C)=15\). We exclude the case \(g_{11}^{3}=|2g_{4}^{1}+F|\). Since \(p_{a}(C)=15\) and \(g=14\), \(C\) can have only one node or only one simple cusp (a double point). On the other hand, since \(g_{11}^{3}=|2g_{4}^{1}+F|\), \(\deg F=3\), \(F\) collapses under the morphism given by \(|2g_{4}^{1}+F|\) onto a singular point of \(C\) of multiplicity three, which certainly cannot be a node or a simple cusp.
3. Assume \(\deg B=0\) and \(C\) contained in a smooth quadric \(Q\). Since we are assuming the existence of a \(g_{4}^{1}\), \(C\in|\mathcal{O}_{Q}(5,6)|\) is impossible by the Castelnuovo-Severi inequality. In the case \(C\in|\mathcal{O}_{Q}(4,7)|\) we have \(p_{a}(C)=18\) and \(C\) may have rather bad singularities, not just nodes or cusps. Treating all the possible combinations of singularities on \(C\) would be rather cumbersome. Instead, our strategy here is to show that a general curve in the Severi variety of curves of geometric genus \(g=14\) on the smooth quadric \(Q\) in the linear system \(\mathcal{L}=|\mathcal{O}_{Q}(4,7)|\) does not have very ample \(|K_{\widetilde{C}}(-1)|\), where \(\widetilde{C}\) is the normalization of \(C\subset Q\subset\mathbb{P}^{3}\). Since being very ample is an open condition (Remark 2.6), this will show that there does not exist a smooth \(4\)-gonal curve \(X\in\mathcal{H}_{15,14,5}\) whose dual curve \(C\) belongs to \(|\mathcal{O}_{Q}(4,7)|\), whatever the singularities of \(C\) are. On the other hand, we already showed that if \(C\in|\mathcal{O}_{Q}(4,7)|\) is general with four nodes, \(\mathcal{O}_{X}(1)=K_{\widetilde{C}}(-1)\) is not very ample in Example 3.1(ii).
4. Assume \(\deg B=0\) and \(C\) is contained in a quadric cone \(Q\). We consider the desingularization \(\mathbb{F}_{2}\stackrel{{\pi}}{{\longrightarrow}}Q\subset\mathbb{P}^{3}\) given by \(|h+2f|\). Let \(\widetilde{C}\subset\mathbb{F}_{2}\) be the strict transform of \(C\) under \(\pi\). Set \(\widetilde{C}\in|ah+bf|\); we have \(\widetilde{C}\cdot(h+2f)=(ah+bf)\cdot(h+2f)=b=11\) and \(0\leq\widetilde{C}\cdot h=-2a+b\leq m\) where \(m\) is the multiplicity of \(C\) at the vertex \(P\). Hence we have \[(a,b,m)=\begin{cases}(5,11,1)\\ (4,11,3).\end{cases}\] The case \((5,11,1)\) is out of our consideration; the ruling of the cone cuts out a base point free \(g_{5}^{1}\), which contradicts the existence of a \(g_{4}^{1}\) by the Castelnuovo-Severi inequality. If \(\widetilde{C}\in|4h+11f|\), by adjunction we have \(p_{a}(\widetilde{C})=18\). Again we adopt the same strategy (as in the case \(C\in|\mathcal{O}_{Q}(4,7)|\) on a smooth quadric) to show that a general element in the Severi variety of curves of geometric genus \(g=14\) on \(\mathbb{F}_{2}\) in \(\mathcal{L}=|4h+11f|\) does not have very ample \(|K_{\widetilde{\widetilde{C}}}(-1)|\), where \(\widetilde{\widetilde{C}}\) is the normalization of \(\widetilde{C}\subset\mathbb{F}_{2}\). The following is parallel to the case \(\deg B=1\), \((a,b,m)=(4,10,2)\) and \(C\) on a quadric cone, which we already considered. However we provide some computations for the convenience of the reader.
We assume that \(\widetilde{C}\subset\mathbb{F}_{2}\) has four nodes in general position as its only singularities. Let \(e_{i}\,(i=1,\cdots,4)\) be the exceptional divisors and let \(f_{i}\,(i=1,\cdots,4)\) be the fibers containing the four nodes of \(\widetilde{C}\). After resolving all four nodes we get a smooth curve \(\widetilde{\widetilde{C}}\subset\mathbb{F}_{2,4}\) on the Hirzebruch surface \(\mathbb{F}_{2}\) blown up at four points. We have
\[\widetilde{\widetilde{C}}\in|4h+11f-\sum 2e_{i}|\] \[\mathcal{M}: =|\widetilde{\widetilde{C}}+K_{\mathbb{F}_{2,4}}-(h+2f)|\] \[=|(4h+11f-\sum 2e_{i})+(-2h-4f+\sum e_{i})-(h+2f)|\] \[=|h+5f-\sum e_{i}|\] Since \(\mathcal{O}_{\mathbb{F}_{2,4}}(\mathcal{M}-\widetilde{\widetilde{C}})=\mathcal{O}_{\mathbb{F}_{2,4}}(-(3h+6f-\sum e_{i}))\) and \(f\cdot(\mathcal{M}-\widetilde{\widetilde{C}})<0\), we see that \(h^{0}(\mathbb{F}_{2,4},\mathcal{O}(\mathcal{M}-\widetilde{\widetilde{C}}))=0\) implying that the restriction map
\[\rho:H^{0}(\mathbb{F}_{2,4},\mathcal{O}(\mathcal{M}))\longrightarrow H^{0}(\widetilde{\widetilde{C}},\mathcal{O}(\mathcal{M})\otimes\mathcal{O}_{\widetilde{\widetilde{C}}})\]
is injective. Suppose
\[\mathrm{Im}(\rho)\neq H^{0}(\widetilde{\widetilde{C}},\mathcal{M}\otimes \mathcal{O}_{\widetilde{\widetilde{C}}}).\]
Then
\[\dim\mathbb{P}(H^{0}(\widetilde{\widetilde{C}},\mathcal{O}(\mathcal{M})\otimes\mathcal{O}_{\widetilde{\widetilde{C}}}))=r\geq 6.\]
Since \(|\mathrm{Im}(\rho)|\subsetneq\mathbb{P}(H^{0}(\widetilde{\widetilde{C}},\mathcal{O}(\mathcal{M})\otimes\mathcal{O}_{\widetilde{\widetilde{C}}}))\) induces a morphism birational onto its image, the complete linear system \(\mathbb{P}(H^{0}(\widetilde{\widetilde{C}},\mathcal{O}(\mathcal{M})\otimes\mathcal{O}_{\widetilde{\widetilde{C}}}))\) is still birationally very ample, which contradicts the Castelnuovo upper bound for the arithmetic genus of degree \(15\) curves in \(\mathbb{P}^{r\geq 6}\); \(\pi(15,6)=13<g=14\). Therefore we have
\[\mathrm{Im}(\rho)=H^{0}(\widetilde{\widetilde{C}},\mathcal{O}(\mathcal{M})\otimes\mathcal{O}_{\widetilde{\widetilde{C}}}).\]
Denoting by \(\tilde{f}_{i}\) the proper transform of \(f_{i}\) under the blow up,
\[\tilde{f}_{i}^{2}=-1,\quad\tilde{f}_{i}\cdot e_{i}=1,\quad h\cdot\tilde{f}_{i}=1,\quad\widetilde{\widetilde{C}}\cdot\tilde{f}_{i}=2,\quad(h+5f-\sum e_{i})\cdot\tilde{f}_{i}=0.\]
Hence the morphism \(\psi\) given by \(\mathcal{M}=|\widetilde{\widetilde{C}}+K_{\mathbb{F}_{2,4}}-(h+2f)|\) contracts the \((-1)\)-curves \(\tilde{f}_{i}\) and the image curve \(\psi(\widetilde{\widetilde{C}})\subset\mathbb{P}^{5}\) acquires singularities. Since the complete linear system \(\mathrm{Im}(\rho)\) maps \(\widetilde{\widetilde{C}}\) onto \(\psi(\widetilde{\widetilde{C}})\subset\mathbb{P}^{5}\) with at least \(4\) singular points, \(\mathcal{M}\otimes\mathcal{O}_{\widetilde{\widetilde{C}}}=|K_{\widetilde{\widetilde{C}}}(-1)|\) is not very ample.
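We remark that the Castelnuovo bounds \(\pi(16,7)=12\) and \(\pi(15,6)=13\) invoked above are instances of the classical formula: writing \(m=\lfloor\frac{d-1}{r-1}\rfloor\) and \(\varepsilon=d-1-m(r-1)\), \[\pi(d,r)=\binom{m}{2}(r-1)+m\varepsilon,\] so that \((d,r)=(16,7)\) gives \(m=2\), \(\varepsilon=3\) and \(\pi=6+6=12\), while \((d,r)=(15,6)\) gives \(m=2\), \(\varepsilon=4\) and \(\pi=5+8=13\).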
## 6 Birationality of the moduli map
Recall that \(\Gamma_{1}\) (resp. \(\Gamma_{2}\)) is the irreducible component of \(\mathcal{H}_{15,14,5}\) whose general element has a dual curve not contained (resp. contained) in a quadric surface.
Let \(\mu_{i}:\Gamma_{i}\longrightarrow\mathcal{M}_{14}\), \(i=1,2\), denote the natural moduli map. We first need the following superabundance lemma, the easiest of its kind, concerning configurations of points with respect to a certain linear system on a smooth quadric \(Q\subset\mathbb{P}^{3}\); its verification is rather elementary and we omit it.
**Lemma 6.1**.: _Fix an integer \(e\) such that \(0\leq e\leq 6\) and a set \(A\subset Q\) such that \(\#A=e\) and \(h^{1}(\mathcal{I}_{A}(3,4))\neq 0\). Then either \(e=6\) and there is \(L_{1}\in|\mathcal{O}_{Q}(1,0)|\) such that \(A\subset L_{1}\) or \(e\in\{5,6\}\) and there is \(L_{2}\in|\mathcal{O}_{Q}(0,1)|\) such that \(\#(A\cap L_{2})\geq 5\) and in the latter case there is \(A^{\prime}\subseteq A\) such that \(\#A^{\prime}=5\) and \(h^{1}(\mathcal{I}_{A^{\prime}}(3,4))>0\)._
**Lemma 6.2**.: _Fix an integer \(a\) such that \(0\leq a\leq 6\). Let \(G\subset Q\) be a general subset of \(Q\) with cardinality \(a\). Then \(G\) is the set-theoretic base locus of \(|\mathcal{I}_{G}(2,2)|\)._
Proof.: It is sufficient to check the case \(a=6\). Assume the existence of \(p\in Q\setminus G\) such that \(|\mathcal{I}_{G}(2,2)|=|\mathcal{I}_{G\cup\{p\}}(2,2)|\). Since \(G\) is general, no two points of \(G\) lie on a line in \(Q\). For any \(E\subset G\) such that \(\#E=3\) the non-empty linear system \(|\mathcal{I}_{E}(1,1)|\) has a unique element \(C_{E}\), which is smooth.
The generality of \(G\) implies that \(h^{0}(Q,\mathcal{I}_{B}(1,1))=0\) for all \(B\subset G\) such that \(\#B=4\) and hence \(G\cap C_{E}=E.\) Likewise, we let \(C_{G\setminus E}\) be the unique conic in \(|\mathcal{I}_{G\setminus E}(1,1)|.\) Since \(G\subset C_{E}\cup C_{G\setminus E},\) \(C_{E}\cup C_{G\setminus E}\in|\mathcal{I}_{G}(2,2)|=|\mathcal{I}_{G\cup\{p\}}(2,2)|\) and therefore \(p\in C_{E}\cup C_{G\setminus E}.\) Assume for instance \(p\in C_{E}.\) On the smooth curve \(C_{E}\cong\mathbb{P}^{1},\) \(\deg(\mathcal{O}_{C_{E}}(2,2))=4\) and hence \(\dim|\mathcal{O}_{C_{E}}(2,2)|=4.\) Furthermore the pencil \(|\mathcal{I}_{E,C_{E}}(2,2)|\subset|\mathcal{O}_{C_{E}}(2,2)|\) on \(C_{E}\) has the base locus \(E.\) Consider the exact sequence
\[0\xrightarrow{}\mathcal{I}_{G\setminus E}(1,1)\xrightarrow{}\mathcal{I}_{G \cup\{p\}}(2,2)\xrightarrow{}\mathcal{I}_{E\cup\{p\},C_{E}}(2,2)\xrightarrow {}0 \tag{2}\]
Since \(p\in Q\setminus G\) is not in the base locus \(E\) of the pencil \(|\mathcal{I}_{E,C_{E}}(2,2)|,\) we have \(h^{1}(C_{E},\mathcal{I}_{E\cup\{p\},C_{E}}(2,2))=0.\) Since \(G\setminus E\) is general and \(\#(G\setminus E)=3,\) we have \(h^{1}(Q,\mathcal{I}_{G\setminus E}(1,1))=0.\) Therefore it follows from (2) that \(h^{1}(Q,\mathcal{I}_{G\cup\{p\}}(2,2))=0,\) contradicting the assumption that \(p\) is in the base locus of \(|\mathcal{I}_{G}(2,2)|.\)
**Lemma 6.3**.: _Fix an integer \(a\) such that \(0\leq a\leq 6\) and take a general \(G\subset Q\) such that \(\#G=a.\) Fix a set \(A\subset Q\) such that \(\#A=6\), \(G\cap A=\emptyset\), \(h^{1}(Q,\mathcal{I}_{G\cup A}(3,4))>0\) and \(h^{1}(Q,\mathcal{I}_{G\cup A^{\prime}}(3,4))=0\) for all \(A^{\prime}\subsetneq A.\) Then there is \(L\in|\mathcal{O}_{Q}(1,0)|\) such that \(A\subset L\)._
Proof.: Since \(h^{1}(\mathcal{I}_{G\cup A^{\prime}}(3,4))=0\) for all \(A^{\prime}\subsetneq A,\) there is no \(J\in|\mathcal{O}_{Q}(0,1)|\) such that \(\#(A\cap J)\geq 5.\) Thus the case \(a=0\) is true by Lemma 6.1. We may therefore assume \(a>0\) and that the lemma is true for the integer \(a-1.\) Hence either there exists \(L\in|\mathcal{O}_{Q}(1,0)|\) such that \(A\subset L\) or \(h^{1}(\mathcal{I}_{G^{\prime}\cup A}(3,4))=0\) for all \(G^{\prime}\subsetneq G.\) Thus from now on we assume \(h^{1}(\mathcal{I}_{G^{\prime}\cup A}(3,4))=0\) for all \(G^{\prime}\subsetneq G.\) Fix \(A^{\prime}\subset A\) such that \(\#A^{\prime}=5.\) Set \(\{p\}:=A\setminus A^{\prime}.\) Since \(\#A^{\prime}=5,\) \(|\mathcal{I}_{A^{\prime}}(1,2)|\neq\emptyset.\) Take a general \(T\in|\mathcal{I}_{A^{\prime}}(1,2)|.\) Consider the residual exact sequence of \(T\):
\[0\xrightarrow{}\mathcal{I}_{G\cup A\setminus(G\cup A)\cap T}(2,2) \xrightarrow{}\mathcal{I}_{G\cup A}(3,4)\xrightarrow{}\mathcal{I}_{T\cap( G\cup A),T}(3,4)\xrightarrow{}0 \tag{3}\]
(a) First assume \(p\notin T.\) In this case we have \(G\cup A\setminus(G\cup A)\cap T\subseteq G\cup\{p\}\) and \(T\cap(G\cup A^{\prime})=T\cap(G\cup A)\subseteq G\cup A^{\prime}.\) Since \(h^{1}(\mathcal{I}_{G\cup A^{\prime}}(3,4))=0\) by assumption, we have \(h^{1}(T,\mathcal{I}_{T\cap(G\cup A),T}(3,4))=h^{1}(T,\mathcal{I}_{T\cap(G\cup A^{\prime}),T}(3,4))=0\) from the exact sequence (3) adapted to the set \(G\cup A^{\prime}.\) Hence, since \(h^{1}(Q,\mathcal{I}_{G\cup A}(3,4))>0\) by assumption, the long exact sequence on cohomology of (3) implies
\[h^{1}(Q,\mathcal{I}_{\{p\}\cup(G\setminus G\cap T)}(2,2)) =h^{1}(Q,\mathcal{I}_{G\cup A\setminus((G\cup A)\cap T)}(2,2))\] \[=h^{1}(Q,\mathcal{I}_{\{p\}\cup(G\cup A^{\prime})\setminus((G \cup A)\cap T)}(2,2))>0.\]
and hence \(h^{1}(Q,\mathcal{I}_{G\cup\{p\}}(2,2))>0.\) Since \(G\) is general, \(h^{1}(Q,\mathcal{I}_{G}(2,2))=0.\) Thus \(p\notin G\) is in the base locus of \(|\mathcal{I}_{G}(2,2)|,\) contradicting Lemma 6.2.
(b) Now assume \(p\in T.\) In this case, \(A\subset T\) and \(G\cup A\setminus(G\cup A)\cap T\subseteq G.\) Since \(G\) is general in \(Q\) and \(\#G=a\leq h^{0}(\mathcal{O}_{Q}(2,2))=9,\) we have \(h^{1}(Q,\mathcal{I}_{G}(2,2))=0\) and hence \(h^{1}(Q,\mathcal{I}_{G\cup A\setminus(G\cup A)\cap T}(2,2))=0.\) Thus from (3)
and the assumption \(h^{1}(Q,\mathcal{I}_{G\cup A}(3,4))>0\) we have \(h^{1}(T,\mathcal{I}_{T\cap(G\cup A),T}(3,4))>0.\) From the standard exact sequence of the restriction map,
\[0\rightarrow\mathcal{I}_{G\cup A}(3,4)\rightarrow\mathcal{I}_{(G\cup A)\cap T }(3,4)\rightarrow\mathcal{I}_{(G\cup A)\cap T}(3,4)\otimes\mathcal{O}_{T}\to 0\]
one has \(h^{1}(Q,\mathcal{I}_{T\cap(G\cup A)}(3,4))>0\) since \(h^{1}(T,\mathcal{I}_{T\cap(G\cup A),T}(3,4))>0.\) By the assumption \(h^{1}(\mathcal{I}_{G^{\prime}\cup A}(3,4))=0\) for all \(G^{\prime}\subsetneq G\) and by \(h^{1}(Q,\mathcal{I}_{T\cap(G\cup A)}(3,4))>0,\) we have \(T\cap G=G\) and \(T\cap(G\cup A)=G\cup A\subset T,\) hence \(G\cup A\) is in the base locus \(\mathcal{B}\) of \(|\mathcal{I}_{A^{\prime}}(1,2)|.\) Therefore it follows that
\[0<h^{0}(Q,\mathcal{I}_{A^{\prime}}(1,2))\leq h^{0}(Q,\mathcal{I}_{G\cup A}(1, 2))\leq h^{0}(Q,\mathcal{I}_{G}(1,2))\leq h^{0}(\mathcal{O}_{Q}(1,2))-a.\]
For \(\#G=a=6,\) this is an obvious absurdity, since \(h^{0}(\mathcal{O}_{Q}(1,2))=6.\)
Now assume \(\#G=a=5.\) In this case \(|\mathcal{I}_{G}(1,2)|\) has a unique element, a twisted cubic \(C\cong\mathbb{P}^{1}.\) Note that \(\deg\mathcal{I}_{G\cup A,C}(3,4)\geq\deg\mathcal{O}_{C}(3,4)-\#(G\cup A)=10-11=-1\) and hence \(h^{1}(C,\mathcal{I}_{G\cup A,C}(3,4))=0.\) Since \(h^{1}(Q,\mathcal{O}_{Q}(2,2))=0,\) the residual exact sequence (3) adapted to \(C\subset Q\) gives a contradiction to the assumption \(h^{1}(Q,\mathcal{I}_{G\cup A}(3,4))>0.\)
(b1) Assume \(a=4.\) Fix \(A^{\prime\prime}\subset A\) such that \(\#A^{\prime\prime}=4\) and take a general \(T_{1}\in|\mathcal{I}_{A^{\prime\prime}}(1,2)|.\) By step (a) applied to all \(A^{\prime}\subset A\) with \(\#A^{\prime}=5,\) we may assume either \(A\subset T_{1}\) or \(A\cap T_{1}=A^{\prime\prime}.\)
* First assume \(A\subset T_{1}\) and hence \(G\cup A\setminus((G\cup A)\cap T_{1})\subseteq G.\) Since \(G\) is general, \(h^{1}(Q,\mathcal{I}_{G}(2,2))=0\) and hence \(h^{1}(Q,\mathcal{I}_{G\cup A\setminus T_{1}\cap(G\cup A)}(2,2))=0.\) Considering the long exact sequence on cohomology of (3) with \(T_{1}\) instead of \(T\), we have \(h^{1}(T_{1},\mathcal{I}_{(G\cup A)\cap T_{1},T_{1}}(3,4))>0.\) On the other hand, from the long exact sequence on cohomology of the standard restriction map \[0\rightarrow\mathcal{I}_{G\cup A}(3,4)\rightarrow\mathcal{I}_{T_{1}\cap(G\cup A)}(3,4)\rightarrow\mathcal{I}_{T_{1}\cap(G\cup A)}(3,4)\otimes\mathcal{O}_{T_{1}}\to 0\] we have \(h^{1}(Q,\mathcal{I}_{T_{1}\cap(G\cup A)}(3,4))>0.\) Since by assumption \(h^{1}(Q,\mathcal{I}_{G^{\prime}\cup A}(3,4))=0\) for all \(G^{\prime}\subsetneq G,\) we have \(T_{1}\cap(G\cup A)=G\cup A\) and hence \(T_{1}\supset G\cup A.\) Since \(T_{1}\) is general in \(|\mathcal{I}_{A^{\prime\prime}}(1,2)|\), \(G\subset T_{1}\), \(\#G=\#A^{\prime\prime}\) and \(G\) is general, we get \[|\mathcal{I}_{A^{\prime\prime}}(1,2)|=|\mathcal{I}_{G\cup A}(1,2)|=|\mathcal{I}_{G}(1,2)|\] and that each element of \(|\mathcal{I}_{G}(1,2)|\) contains \(A.\) Since \(\#G=4\) and \(G\) is general, the base locus of the pencil \(|\mathcal{I}_{G}(1,2)|\) is the intersection of \(2\) general \(C,C_{1}\in|\mathcal{O}_{Q}(1,2)|.\) Since \(\#(C\cap C_{1})=4<10=\#(G\cup A),\) we get a contradiction.
* Now assume \(A\cap T_{1}=A^{\prime\prime}.\) Since \(A^{\prime\prime}\subsetneq A,\) we have \(h^{1}(Q,\mathcal{I}_{G\cup A^{\prime\prime}}(3,4))=0.\) Thus \(h^{1}(T_{1},\mathcal{I}_{T_{1}\cap(G\cup A^{\prime\prime}),T_{1}}(3,4))=0.\) Since \(T_{1}\cap(G\cup A)=T_{1}\cap(G\cup A^{\prime\prime}),\) the long cohomology exact sequence of (3) for \(T_{1}\) gives \(h^{1}(Q,\mathcal{I}_{G\cup A\setminus(G\cup A)\cap T_{1}}(2,2))=h^{1}(Q,\mathcal{I}_{(G\setminus G\cap T_{1})\cup(A\setminus A^{\prime\prime})}(2,2))>0\) and hence \(h^{1}(Q,\mathcal{I}_{G\cup(A\setminus A^{\prime\prime})}(2,2))>0.\) Take a general \(D\in|\mathcal{I}_{A\setminus A^{\prime\prime}}(1,1)|\) and consider the exact sequence \[0\rightarrow\mathcal{I}_{G\setminus D\cap G}(1,1)\rightarrow\mathcal{I}_{G\cup(A\setminus A^{\prime\prime})}(2,2)\rightarrow\mathcal{I}_{D\cap(G\cup(A\setminus A^{\prime\prime})),D}(2,2)\to 0\] (4) Since \(\#(G\setminus G\cap D)\leq 4\) and \(G\) is general, \(h^{1}(Q,\mathcal{I}_{G\setminus D\cap G}(1,1))=0.\) Since \(h^{1}(Q,\mathcal{I}_{G\cup(A\setminus A^{\prime\prime})}(2,2))>0,\) the long cohomology exact sequence
of (4) yields \(h^{1}(D,\mathcal{I}_{D\cap(G\cup(A\setminus A^{\prime\prime})),D}(2,2))>0\). Note that \(D\) is either a smooth rational curve or \(D=L_{1}\cup L_{2}\) with \(L_{1}\in|\mathcal{O}_{Q}(1,0)|\) and \(L_{2}\in|\mathcal{O}_{Q}(0,1)|\). Since \(\#G=4\), \(h^{0}(Q,\mathcal{O}_{Q}(1,1))=4\) and \(G\) is general, we have \(h^{0}(Q,\mathcal{I}_{G}(1,1))=0\). Thus \(G\nsubseteq D\) and hence \(\#(D\cap(G\cup(A\setminus A^{\prime\prime})))\leq 5\). Assume that \(D\) is a smooth rational curve. Since \(\#(D\cap(G\cup(A\setminus A^{\prime\prime})))\leq 5\) and by \[\deg\mathcal{I}_{D\cap(G\cup(A\setminus A^{\prime\prime})),D}(2,2)\geq-5+\deg(\mathcal{O}_{D}(2,2))=-1\] we get \(h^{1}(D,\mathcal{I}_{D\cap(G\cup(A\setminus A^{\prime\prime})),D}(2,2))=0\), a contradiction. Now assume \(D=L_{1}\cup L_{2}\). Since \(G\) is general and \(h^{0}(\mathcal{O}_{Q}(1,0))=h^{0}(\mathcal{O}_{Q}(0,1))=2\), we have \(\#(L_{i}\cap G)\leq 1\) for all \(i\), hence \(\#(D\cap G)\leq 2\), and \(\#(D\cap G)=1\) if \(L_{1}\cap L_{2}\in G\). Recall that \(h^{1}(D,\mathcal{I}_{D\cap(G\cup(A\setminus A^{\prime\prime})),D}(2,2))>0\) and hence \(h^{1}(Q,\mathcal{I}_{D\cap(G\cup(A\setminus A^{\prime\prime}))}(2,2))>0\). Set \(W:=D\cap(G\cup(A\setminus A^{\prime\prime}))\). Fix \(i\in\{1,2\}\) such that \(\#(W\cap L_{i})\geq\#(W\cap L_{3-i})\). Consider the residual exact sequence of \(L_{i}\): \[0\rightarrow\mathcal{I}_{W\setminus W\cap L_{i}}(2,2)(-L_{i})\rightarrow\mathcal{I}_{W}(2,2)\rightarrow\mathcal{I}_{L_{i}\cap W,L_{i}}(2,2)\to 0\] (5) Since \(\#(G\cap L_{i})\leq 1\) and \(\#(A\setminus A^{\prime\prime})=2\), we have \(\#(W\cap L_{i})\leq 3\). Since \(\deg(\mathcal{O}_{L_{i}}(2,2))=2\), we have \(h^{1}(L_{i},\mathcal{I}_{L_{i}\cap W,L_{i}}(2,2))=0\). Thus from the long cohomology exact sequence of (5), \(h^{1}(Q,\mathcal{I}_{W\setminus W\cap L_{i}}(2,2)(-L_{i}))>0\). Since \(\#W\leq 5\) and \(\#(L_{i}\cap W)\geq\#(L_{3-i}\cap W)\), we have \(\#(W\setminus W\cap L_{i})\leq 2\). Thus \(h^{1}(Q,\mathcal{I}_{W\setminus W\cap L_{i}}(2,2)(-L_{i}))=0\), a contradiction. (b2) Assume \(1\leq a\leq 3\). Fix \(A_{1}\subset A\) such that \(\#A_{1}=3\). Take a general \(D_{1}\in|\mathcal{I}_{A_{1}}(1,1)|\). We use (4) with \(D_{1}\) instead of \(D\). If \(h^{1}(Q,\mathcal{I}_{G\cup A\setminus(D_{1}\cap(G\cup A))}(2,2))=0\), we conclude as above. Thus we may assume \(h^{1}(D_{1},\mathcal{I}_{D_{1}\cap(G\cup A),D_{1}}(2,2))>0\). We conclude as in the last part of step (b1).
**Lemma 6.4**.: _For all integers \(0\leq e\leq 6\), the normalization \(Y\) of a general nodal \(D\in|\mathcal{O}_{Q}(5,6)|\) with exactly \(e\) nodes has a unique base-point free \(g_{6}^{1}\), the one induced by \(|\mathcal{O}_{Q}(1,0)|\), and a unique \(g_{5}^{1}\), the one induced by \(|\mathcal{O}_{Q}(0,1)|\)._
Proof.: The curve \(D\) has geometric genus \(g=20-e\). We first prove that \(Y\) has a unique \(g_{5}^{1}\). First of all \(Y\) has no \(g_{4}^{1}\), because any element of \(|\mathcal{O}_{\mathbb{P}^{1}\times\mathbb{P}^{1}}(5,4)|\) has arithmetic genus \(12<20-e\). Assume that \(Y\) has another base point free \(g_{5}^{1}\). Since \(5\) is prime, these two base point free \(g_{5}^{1}\)'s induce a morphism \(\pi:Y\rightarrow\mathbb{P}^{1}\times\mathbb{P}^{1}\) birational onto its image \(\pi(Y)\in|\mathcal{O}_{\mathbb{P}^{1}\times\mathbb{P}^{1}}(5,5)|\). Since \(\pi(Y)\) has arithmetic genus \(q=16\), we have \(4\leq e\leq 6\).
The set of all possible such \(\pi(Y)\)'s on a fixed \(Q\) depends on at most
\[\dim|\mathcal{O}_{\mathbb{P}^{1}\times\mathbb{P}^{1}}(5,5)|-(q-g)=35-(16-(20-e) )=39-e\]
parameters by Remark 2.2. Since the set of all such \(D\) is irreducible of dimension \(41-e>39-e\), a curve with two \(g_{5}^{1}\)'s is not general in \(\widetilde{\Sigma}_{g}\), i.e. it does not constitute a component.
Now we prove that a general \(Y\) has a unique base point free \(g_{6}^{1}\), the one induced by \(|\mathcal{O}_{Q}(1,0)|\). Set \(E:=\operatorname{Sing}(D)\). Since \(D\) is general, \(E\) is a general subset of \(Q\) with cardinality \(e\); Remark 2.2. Let \(f:Y\to D\) be
the normalization map. Fix a possibly incomplete base point free pencil \(\mathcal{L}\in\operatorname{Pic}^{6}(Y)\). We need to prove that \(\mathcal{L}\cong f^{*}(\mathcal{O}_{D}(1,0))\). Since \(Y\) has a unique \(g^{1}_{5}\), \(h^{0}(\mathcal{L})=2\). For any \(G\subseteq E\) let \(f_{G}:Y_{G}\to D\) be the partial normalization of \(D\) in which we only normalize the points of \(G\). Hence \(Y_{G}\) is nodal with exactly \(e-\#G\) nodes. Let \(a_{G}:Y\to Y_{G}\) denote the normalization map. Take a minimal \(G\subseteq E\) such that there is a degree \(6\) line bundle \(\mathcal{R}\) on \(Y_{G}\) such that \(h^{0}(\mathcal{R})\geq 2\) and \(a^{*}_{G}(\mathcal{R})=\mathcal{L}\). Since \(h^{0}(\mathcal{L})=2\) and \(\mathcal{L}\) is base point free, \(h^{0}(\mathcal{R})=2\) and \(\mathcal{R}\) is base point free. Fix a general \(B\in|\mathcal{R}|\) and set \(A:=f_{G}(B)\subset D\). Since \(\mathcal{R}\) is base point free, \(B\) is formed by \(6\) distinct points of \(Y_{G}\) and \(G\cap B=\emptyset\). Since \(h^{0}(\mathcal{R})=2\), Riemann-Roch on \(Y_{G}\) gives that \(B\) imposes exactly \(5\) independent conditions on \(|\omega_{Y_{G}}|\). Since the canonical series \(|\omega_{Y_{G}}|\) is completely cut out on \(D\) by \(|\omega_{Q}+D|=|\mathcal{O}_{Q}(-2,-2)+\mathcal{O}_{Q}(5,6)|=|\mathcal{O}_{Q}(3,4)|\), we get \(h^{1}(Q,\mathcal{I}_{G\cup A}(3,4))>0\). Since \(h^{0}(\mathcal{R})=2\) and \(\mathcal{R}\) is base point free, \(h^{1}(Q,\mathcal{I}_{G\cup A^{\prime}}(3,4))=0\) for all \(A^{\prime}\subsetneq A\). Thus Lemma 6.3 concludes the proof.
**Proposition 6.5**.: _The generic fiber of the map \(\mu_{2}:\Gamma_{2}\to\mathcal{M}_{14}\) is formed by a unique orbit under the group \(\operatorname{Aut}(\mathbb{P}^{5})\), i.e. a general \(X\in\Gamma_{2}\) has a unique \(g^{5}_{15}\)._
Proof.: A non-empty open subset \(W\) of \(\Gamma_{2}\) is formed by the normalizations of all nodal \(C\in|\mathcal{O}_{Q}(5,6)|\) in which the set \(E\) of the nodes is "general" in the sense that for all \(a,b\in\mathbb{N}\) and each \(S\subset E\) we have \(h^{0}(Q,\mathcal{I}_{S}(a,b))=\max\{0,(a+1)(b+1)-\#S\}\). Fix a general \(X\in\Gamma_{2}\). In particular \(X\in W\). Assume that \(X\) is isomorphic to another element \(X_{1}\in\Gamma_{2}\). To prove the proposition we need to prove that \(X\) and \(X_{1}\) are projectively equivalent. For a general \(X\) the curve \(X_{1}\) is general in \(W\). Let \(C_{1}\) be the dual curve of \(X_{1}\). It is sufficient to prove that \(C\) and \(C_{1}\) are projectively equivalent. Since \(C_{1}\) is a general nodal element of \(|\mathcal{O}_{Q}(5,6)|\) of geometric genus \(14\) and \(X\cong X_{1}\), the case \(e=6\) of Lemma 6.4 applied to \(C_{1}\) and \(C\) gives that both \(C_{1}\) and \(C\) are obtained from the unique pair of base point free \(g^{1}_{5}\) and \(g^{1}_{6}\) on \(X\cong X_{1}\), so that \(\mathcal{O}_{C_{1}}(1)\cong\mathcal{O}_{C}(1)=|g^{1}_{5}+g^{1}_{6}|\); hence \(C\) and \(C_{1}\) are projectively equivalent.
**Remark 6.6**.: By Proposition 5.2 and Proposition 2.7, every smooth \(X\in\Gamma_{2}\) is \(5\)-gonal.
**Proposition 6.7**.: _Fix a general \(X\in\Gamma_{1}\). Then_
_(a) \(X\) has a unique very ample \(g^{5}_{15}\), \(\mathcal{O}_{X}(1)\), and a unique very ample \(g^{3}_{11}\), \(K_{X}(-1)\)._
_(b) The general fiber of the morphism \(\mu_{1}:\Gamma_{1}\longrightarrow\mathcal{M}_{14}\) has a unique orbit under the action of the group \(\operatorname{Aut}(\mathbb{P}^{5})\)._
Proof.: Note that part (b) is just a translation of the uniqueness of the very ample \(g^{5}_{15}\) claimed in part (a).
We proved that the set of all \(X_{1}\in\Gamma_{1}\) whose dual is not smooth has lower dimension. In particular we may assume that the dual curve \(C\) of a general \(X\in\Gamma_{1}\) is smooth. We may also assume that \(C\) is not contained in a cubic surface ([12, Prop. B1]), that \(C\) is ACM, that \(C\) is \(7\)-gonal, and that for each \(g^{1}_{7}\), \(|R|\), on \(C\) there is a \(4\)-secant line \(J\) of \(C\) such that \(|R|\) is induced by the pencil of planes through \(J\) (Proposition 5.1).
To prove part (a), by duality it is sufficient to prove that the dual \(C\) of a general \(X\in\Gamma_{1}\) has a unique very ample \(g_{11}^{3}=|\mathcal{O}_{C}(1)|\). Take any very ample \(|M|=g_{11}^{3}\) on \(C\). Call \(C_{1}\subset\mathbb{P}^{3}\) the image of \(C\) by this \(|M|\), i.e. call \(\varphi:C\to C_{1}\subset\mathbb{P}^{3}\) the embedding associated to this \(|M|\). By assumption \(\varphi\) is an isomorphism of genus \(14\) smooth curves. Note that \(M=\varphi^{*}(\mathcal{O}_{C_{1}}(1))\). Two line bundles on \(C\) are isomorphic if they are associated to the same effective divisor of \(C\). Thus to prove that \(|M|=|\mathcal{O}_{C}(1)|\), i.e. to prove that \(M\cong\mathcal{O}_{C}(1)\), it is sufficient to prove the existence of \(A\in|M|\) with \(A\in|\mathcal{O}_{C}(1)|\), i.e. it suffices to find a plane \(H\subset\mathbb{P}^{3}\) such that \(\varphi^{-1}(H\cap C_{1})\in|\mathcal{O}_{C}(1)|\). The generality of \(X\) and [12, Prop. B1] imply that \(C_{1}\) is not contained in a cubic surface. Thus \(C_{1}\) is ACM and lies in the irreducible component \(\Delta\) of the set of all ACM space curves consisting of those which are linked to curves of genus \(2\) and degree \(5\). Since \(X\) is general in \(\Gamma_{1}\), \(C_{1}\) is general in \(\Delta\) and hence its gonality is computed by \(4\)-secant lines, i.e. \(C_{1}\) has gonality \(7\) and for each base point free \(g_{7}^{1}=|R_{1}|\) on \(C_{1}\) there is a \(4\)-secant line \(J_{1}\) such that for all \(E_{1}\in|R_{1}|\) there is a plane \(H\supset J_{1}\) such that \(H\cap C_{1}=(J_{1}\cap C_{1})+E_{1}\). Fix \(E_{1}\in|R_{1}|\) and set \(E:=\varphi^{-1}(E_{1})\). Since \(\varphi\) is an isomorphism, \(|R|:=|E|\) is also a \(g_{7}^{1}\). Thus there is a \(4\)-secant line \(J\) of \(C\) such that \(|R|\) is induced by the intersection with \(C\) of all planes containing \(J\). Take a plane \(H\supset J\) such that \(H\cap C=(C\cap J)+E\); then \(H\cap C\in|\mathcal{O}_{C}(1)|\). Note that \(C\cap J\) is the base locus of \(|\mathcal{O}_{C}(1)(-E)|\). Since \(E=\varphi^{-1}(E_{1})\) and \(J_{1}\cap C_{1}\) is the base locus of \(|\mathcal{O}_{C_{1}}(1)(-E_{1})|\), we have \(\varphi^{-1}(J_{1}\cap C_{1})=J\cap C\). Thus
\[\varphi^{-1}((J_{1}\cap C_{1})+E_{1})=(J\cap C)+E\in|\mathcal{O}_{C}(1)|\]
and we have \(\varphi^{*}(\mathcal{O}_{C_{1}}(1))\cong\mathcal{O}_{C}(1)\).
## 7 An epilogue, a small remark on the irreducibility of \(\mathcal{H}_{g+1,g,5}\)
So far we have dealt with a special case (\(g=14\)) of the Hilbert scheme \(\mathcal{H}_{g+1,g,5}\) and shown its reducibility. For \(g\leq 13\), the following facts are already known.
**Remark 7.1**.:
1. \(\mathcal{H}_{g+1,g,5}\neq\emptyset\) if and only if \(g\geq 12\).
2. For \(g=12\), \(\mathcal{H}_{g+1,g,5}\) consists of curves of maximal genus and is reducible; see [14, Corollary 3.12, page 92] and [20, proof of Theorem 2.4].
3. For \(g=13\), \(\mathcal{H}_{g+1,g,5}^{\mathcal{L}}\) is irreducible by [19, Theorem 3.4]. Since \(\pi(14,13,6)<g=13\), we have \(\mathcal{H}_{g+1,g,5}^{\mathcal{L}}=\mathcal{H}_{g+1,g,5}\), and it is irreducible.
The case \(g=14\) treated in this paper is the first non-trivial case in this context. For higher genus \(g\geq 15\), virtually nothing is known about the irreducibility of \(\mathcal{H}_{g+1,g,5}\). However, the main result of this paper suggests that the irreducibility of \(\mathcal{H}_{d,g,5}\), which Severi conjectured in the range \(d\geq g+5\), may fail for \(d\) not too far below \(g+5\).
## Declarations
**Conflict of interest** The authors have no conflict of interest.
|
2302.11024 | Gradient Flows for Sampling: Mean-Field Models, Gaussian Approximations
and Affine Invariance | Sampling a probability distribution with an unknown normalization constant is
a fundamental problem in computational science and engineering. This task may
be cast as an optimization problem over all probability measures, and an
initial distribution can be evolved to the desired minimizer dynamically via
gradient flows. Mean-field models, whose law is governed by the gradient flow
in the space of probability measures, may also be identified; particle
approximations of these mean-field models form the basis of algorithms. The
gradient flow approach is also the basis of algorithms for variational
inference, in which the optimization is performed over a parameterized family
of probability distributions such as Gaussians, and the underlying gradient
flow is restricted to the parameterized family.
By choosing different energy functionals and metrics for the gradient flow,
different algorithms with different convergence properties arise. In this
paper, we concentrate on the Kullback-Leibler divergence after showing that, up
to scaling, it has the unique property that the gradient flows resulting from
this choice of energy do not depend on the normalization constant. For the
metrics, we focus on variants of the Fisher-Rao, Wasserstein, and Stein
metrics; we introduce the affine invariance property for gradient flows, and
their corresponding mean-field models, determine whether a given metric leads
to affine invariance, and modify it to make it affine invariant if it does not.
We study the resulting gradient flows in both probability density space and
Gaussian space. The flow in the Gaussian space may be understood as a Gaussian
approximation of the flow. We demonstrate that the Gaussian approximation based
on the metric and through moment closure coincide, establish connections
between them, and study their long-time convergence properties showing the
advantages of affine invariance. | Yifan Chen, Daniel Zhengyu Huang, Jiaoyang Huang, Sebastian Reich, Andrew M. Stuart | 2023-02-21T21:44:08Z | http://arxiv.org/abs/2302.11024v7 | # Gradient Flows for Sampling: Mean-Field Models, Gaussian Approximations and Affine Invariance+
###### Abstract
Sampling a probability distribution with an unknown normalization constant is a fundamental problem in computational science and engineering. This task may be cast as an optimization problem over all probability measures, and an initial distribution can be evolved to the desired minimizer (the target distribution) dynamically via gradient flows. Mean-field models, whose law is governed by the gradient flow in the space of probability measures, may also be identified; particle approximations of these mean-field models form the basis of algorithms. The gradient flow approach is also the basis of algorithms for variational inference, in which the optimization is performed over a parameterized family of probability distributions such as Gaussians, and the underlying gradient flow is restricted to the parameterized family.
By choosing different energy functionals and metrics for the gradient flow, different algorithms with different convergence properties arise. In this paper, we concentrate on the Kullback-Leibler divergence as the energy functional after showing that, up to scaling, it has the _unique_ property (among all \(f\)-divergences) that the gradient flows resulting from this choice of energy do not depend on the normalization constant of the target distribution. For the metrics, we focus on variants of the Fisher-Rao, Wasserstein, and Stein metrics; we introduce the affine invariance property for gradient flows, and their corresponding mean-field models, determine whether a given metric leads to affine invariance, and modify it to make it affine invariant if it does not.
We study the resulting gradient flows in both the space of all probability density functions and in the subset of all Gaussian densities. The flow in the Gaussian space may be understood as a Gaussian approximation of the flow in the density space. We demonstrate that, under mild assumptions, the Gaussian approximation based on the metric and through moment closure coincide; the moment closure approach is more convenient for calculations. We establish connections between these approximate gradient flows, discuss their relation to natural gradient methods in parametric variational inference, and study their long-time convergence properties showing, for some classes of problems and metrics, the advantages of affine invariance. Furthermore, numerical experiments are included which demonstrate that affine invariant gradient flows have desirable convergence properties for a wide range of highly anisotropic target distributions.
Bayesian inference, sampling, gradient flow, mean-field dynamics, Gaussian approximation, variational inference, affine invariance.
68Q25, 68R10, 68U05
## 1 Introduction
### Context
This paper is concerned with the problem of sampling a probability distribution (the target) known up to normalization. This problem is fundamental in many applications arising in computational science and engineering and is widely studied in the applied mathematics, machine learning and statistics communities. A particular application is Bayesian inference for large-scale inverse problems; such problems are ubiquitous, arising in applications from climate science [66, 119, 64, 92], through numerous problems in engineering [137, 38, 21] to machine learning [112, 101, 28, 32]. These applications have fueled the need for efficient and scalable algorithms which employ noisy data to learn about unknown parameters \(\theta\) appearing in models and perform uncertainty quantification for predictions then made by those models.
Mathematically, the objective is to sample the target probability distribution with
density \(\rho_{\mathrm{post}}(\cdot)\), for the parameter \(\theta\in\mathbb{R}^{N_{\theta}}\), given by
\[\rho_{\mathrm{post}}(\theta)\propto\exp(-\Phi_{R}(\theta)), \tag{1}\]
where \(\Phi_{R}:\mathbb{R}^{N_{\theta}}\to\mathbb{R}_{+}\) is a known function. We use the notation \(\rho_{\mathrm{post}}\) because of the potential application to Bayesian inference; however we do not explicitly use the Bayesian structure in this paper, and our analysis applies to arbitrary target distributions.
We study the use of gradient flows in the space of probability distributions in order to create algorithms to sample the target distribution. By studying gradient flows with respect to different metrics, and by studying mean-field based particle models and Gaussian approximations, both related to these underlying gradient flows, we provide a unifying approach to the construction of a wide family of algorithms. The choice of metric plays a key role in the behavior of the resulting methods and we highlight the importance of affine invariance in this regard. In Subsection 1.2 we provide a literature review pertinent to our contributions; the contributions we make are described in Subsection 1.3 and in Subsection 1.4 we describe the organization of the remainder of the paper.
### Literature Review
In this subsection, we describe the research landscape in which our work sits. We start by discussing the general background, and describing our work in this context, and then we give more detailed literature reviews relating to the topics of gradient flows, mean-field models, Gaussian approximations, and affine invariance.
#### 1.2.1 Background
Numerous approaches to the sampling problem have been proposed in the literature. One way of classifying them is into: a) methods which deform a _given_ source measure (for example the prior in Bayesian inference) into the target measure, in a fixed finite number of steps or in a finite continuous time interval; and b) methods which transform _any_ initial measure into the target measure after an infinite number of steps, or at time infinity in continuous time. Continuous time formulations are used for insight into the algorithms; discrete time must be used in practice. Typical methods in category a) are sequential Monte Carlo (SMC) approaches [44], with (typically not optimal) transport being the underpinning continuous time concept [128]; typical methods in category b) are Markov chain Monte Carlo (MCMC) approaches [18], with stochastic differential equations (SDEs) which are ergodic with respect to the target, such as Langevin equations [107], being the underpinning continuous time concept. Making practical algorithms out of these ideas, for large scale problems in science and engineering, often requires further reduction of the space in which solutions are sought, for example by variational inference [15] or by ensemble Kalman approximation [20].
In this paper we focus primarily on methods in category b) and describe a general methodology for the derivation of a wide class of sampling algorithms; however the methods can be interpreted as being partially motivated by dynamics of transports arising in the methods of type a). Specifically, we focus on methods created by studying the gradient flow, in various metrics, induced by an energy which measures divergence of the current estimate of the target from the true target. With this perspective, we provide a unifying viewpoint on a number of sampling methods appearing in the literature and a methodology for deriving new methods. We focus on the Fisher-Rao, Wasserstein, and Stein metrics, and variants thereof. Creating useful algorithms out of this picture requires further simplifications; we study mean-field
models, which lead to particle methods, and methods based on confining the gradient descent of the energy to the space of Gaussians. In both settings we precisely define the concept of being affine invariant; roughly speaking this concept requires that any invertible affine transformation of \(\theta\) makes no difference to the gradient flow. We include numerical experiments which demonstrate the advantage of affine invariant methods for anisotropic targets. Because the analysis is cleaner we work in continuous time; but time-discretization is employed to make implementable algorithms. Furthermore we emphasize that our statements about the existence of gradient flows are purely formal. For the rigorous underpinnings of gradient flows see [3]; and for a recent extension of this rigorous analysis to a sub-class of gradient flows with respect to an affine invariant metric see [19].
#### 1.2.2 Gradient Flows
There is existing literature on the use of gradient flows in the probability density space, employing a variety of different metric tensors, to minimize an energy defined as the Kullback-Leibler (KL) divergence between the current density and the target distribution. Particle realizations of these flows then lead to sampling schemes. For example, the Wasserstein gradient flow [69, 106] and Stein variational gradient flow [90, 89] have led to sampling algorithms based on Langevin dynamics and Stein variational gradient descent respectively; in [95], the Fisher-Rao gradient flow, using kernel-based density approximations, has been proposed for sampling. Furthermore, the paper [94] proposed the Wasserstein-Fisher-Rao gradient flow to sample multi-modal distributions. In [54, 55], the Kalman-Wasserstein metric was introduced and gradient flows with respect to this metric were advocated. Interpolation between the Wasserstein metric and Stein metric was studied in [62]. Accelerated gradient flows in the probability space have been studied in [135]. A recent overview of the use of gradient flows in optimization and sampling can be found in [127].
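As a concrete illustration of how such a gradient flow yields an implementable sampler, the following is a minimal sketch of the unadjusted Langevin algorithm, i.e. the Euler-Maruyama discretization of the Langevin SDE \(\mathrm{d}\theta=\nabla_{\theta}\log\rho_{\mathrm{post}}(\theta)\,\mathrm{d}t+\sqrt{2}\,\mathrm{d}W\), whose law evolves by the Wasserstein gradient flow of the KL divergence. The anisotropic Gaussian target, step size, and particle number below are our illustrative choices, not taken from the cited references.

```python
import numpy as np

# Illustrative anisotropic Gaussian target: rho_post ∝ exp(-Phi_R) with
# Phi_R(theta) = 0.5 * theta^T C_inv theta; the normalization constant is never needed.
C_inv = np.diag([1.0, 100.0])

def grad_log_rho_post(theta):
    return -theta @ C_inv  # rows are gradients, since C_inv is symmetric

rng = np.random.default_rng(0)
J, h, n_steps = 1000, 1e-3, 5000  # particles, step size, iterations
theta = rng.standard_normal((J, 2))

for _ in range(n_steps):
    # Euler-Maruyama step for d(theta) = grad log rho_post(theta) dt + sqrt(2) dW
    theta += h * grad_log_rho_post(theta) + np.sqrt(2 * h) * rng.standard_normal(theta.shape)

print(np.cov(theta.T))  # approx diag(1, 0.01), up to O(h) discretization bias
```

Note that the anisotropy of the target directly constrains the stable step size here (roughly \(h\lesssim 2/100\) for this explicit discretization); this is exactly the kind of issue that the affine invariant methods discussed below are designed to alleviate.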
The Wasserstein gradient flow was identified in the seminal work [69]. The authors showed that the Fokker-Planck equation is the Wasserstein gradient flow of the KL divergence of the current density estimate from the target. Since then, Wasserstein gradient flow has played a significant role in optimal transport [116], sampling [30, 81], machine learning [34, 115], partial differential equations [106, 22] and many other areas. The Fisher-Rao metric was introduced by C.R. Rao [111] via the Fisher information matrix. The original definition is in parametric density spaces, and the corresponding Fisher-Rao gradient flow in the parameter space leads to natural gradient descent [1]. The Fisher-Rao metric in infinite dimensional probability spaces was discussed in [51, 123]. The concept underpins information geometry [2, 6]. The gradient flow of the KL divergence under the Fisher-Rao metric is induced by a mean-field model of birth-death type. The birth-death process has been used in sequential Monte Carlo samplers to reduce the variance of particle weights [40] and to accelerate Langevin sampling [94, 95]. The discovery of the Stein metric [89] follows the introduction of the Stein variational gradient descent algorithm [90]. The study of the Stein gradient flow [89, 93, 45] sheds light on the analysis and improvements of the algorithm [41, 134, 135].
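For comparison with the Langevin sketch above, the following is a similarly minimal sketch of the Stein variational gradient descent update of [90]; the RBF kernel, its bandwidth, the step size and the double-well target are our illustrative choices.

```python
import numpy as np

def grad_log_rho_post(theta):       # illustrative double-well target in 1D:
    return -(theta**3 - theta)      # log rho_post = theta^2/2 - theta^4/4 + const

def svgd_step(theta, eps=0.05, ell=0.5):
    """One SVGD step: theta_i += eps/J * sum_j [k(x_j, x_i) grad_j log rho_post + grad_{x_j} k(x_j, x_i)]."""
    diff = theta[:, None] - theta[None, :]   # diff[j, i] = theta_j - theta_i
    K = np.exp(-diff**2 / (2 * ell**2))      # RBF kernel k(theta_j, theta_i)
    grad_K = -diff / ell**2 * K              # derivative of k in its first argument
    phi = (K * grad_log_rho_post(theta)[:, None] + grad_K).mean(axis=0)
    return theta + eps * phi

rng = np.random.default_rng(0)
theta = rng.standard_normal(200)
for _ in range(2000):
    theta = svgd_step(theta)
# the particles now concentrate near the two modes at theta = +1 and theta = -1
```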
#### 1.2.3 Mean-Field Models
It is natural to ask which evolution equations in state space \(\mathbb{R}^{N_{\theta}}\) give rise to a given gradient flow in the space of probability measures. Continuous time linear Markov processes with continuous sample paths are limited to Ito SDEs (or equivalent models written in terms of Stratonovich or other stochastic integrals) [107]. It is thus natural to seek mean-field models in the form of Ito SDEs which depend on their own density, which is therefore governed by a nonlinear Fokker-Planck equation. Examples of particle models giving rise to linear and nonlinear
Fokker-Planck equations with gradient structure include Langevin dynamics [69, 106] and Stein variational gradient descent [90, 89] for sampling the Wasserstein gradient flow and the Stein variational gradient flow respectively. It is also of potential interest to go beyond Ito SDEs and include Levy (jump) processes [13, 4], as well as birth-death models [74]. Finally, we mention mean-field models for ensemble Kalman type algorithms [20]; these typically do not have a law which is a gradient flow in the space of probability measures, except in the linear-Gaussian setting. These mean-field models combine gradient flow, Gaussian approximations, and mean-field equations [42, 16, 20].
In practice, mean-field models are approximated by interacting particle systems [67], in which integration against the density is replaced by integration against the empirical measure of the particle system. At the level of the nonlinear Markov process for the density on \(\mathbb{R}^{N_{\theta}}\) defined by the mean-field model, this corresponds to approximation by a linear Markov process for the density on \(\mathbb{R}^{JN_{\theta}}\), where \(J\) is the number of particles; the concepts of exchangeability and propagation of chaos may be used to relate the two Markov processes. See [100, 125, 26] and the references therein.
#### 1.2.4 Gaussian Approximations
There is substantial work on the use of gradient flows in the space of Gaussian, or other parametric density spaces, to minimize the Kullback-Leibler (KL) divergence [130, 15]. These methods, in the Gaussian setting, aim to solve the problem
\[(m^{\star},C^{\star})=\operatorname*{arg\,min}_{m,C}\ \operatorname{KL}[ \mathcal{N}(m,C)\|\rho_{\mathrm{post}}(\theta)]. \tag{2}\]
Again, but now restricted to variations in the space of Gaussian densities, different metric tensors lead to different gradient flows to identify \((m^{\star},C^{\star}).\) Recently, the work [81] proved the global convergence of the Wasserstein natural gradient descent algorithm when the posterior is log-concave. Other work on the use of Gaussian variational inference methods includes the papers [105, 110, 75, 87, 52, 138].
In addition to their role in parametric variational inference, Gaussian approximations have been widely deployed in various generalizations of Kalman filtering [72, 122, 71, 131, 48]. For Bayesian inverse problems, iterative ensemble Kalman samplers have been proposed which are in category a) defined in subsection 1.2.1[47, 31, 131]. The paper [63] introduced an ensemble Kalman methodology falling in category b), defined in subsection 1.2.1, based on a novel mean-field dynamical system that depends on its own filtering distribution. For all these algorithms based on a Gaussian ansatz, the accuracy depends on some measure of being close to Gaussian. Regarding the use of Gaussian approximations in Kalman inversion we highlight, in addition to the approximate Bayesian methods already cited, the use of ensemble Kalman methods for optimization: see [65, 24, 77, 64, 136]. Kalman filtering has also been used in combination with variational inference [79]. The relation between iterative Kalman filtering and Gauss-Newton or Levenberg Marquardt algorithms are studied in [11, 10, 64, 25], and leads to ensemble Kalman based optimization methods which are affine invariant.
#### 1.2.5 Affine Invariance
The idea of affine invariance was introduced for MCMC methods in [58, 49], motivated by the empirical success of the Nelder-Mead simplex algorithm [102] in optimization. Sampling methods with the affine invariance property can be effective for highly anisotropic distributions; this is because they behave identically in all coordinate systems related through an affine transformation; in particular, they can be understood by studying the best possible coordinate system, which reduces anisotropy to the maximum extent possible within the class of affine transformations. The numerical studies presented in [58] demonstrate that affine-invariant MCMC methods offer significant performance improvements over standard MCMC methods. This idea has been further developed to enhance sampling algorithms in more general contexts. Preconditioning strategies for Langevin dynamics to achieve affine invariance were discussed in [84]; in [54], the Kalman-Wasserstein metric was introduced and gradient flows in this metric were advocated, and in [55] the methodology was shown to achieve affine invariance. Moreover, the authors in [54, 55, 108] used the empirical covariance of an interacting particle approximation of the mean-field limit, leading to a family of derivative-free sampling approaches in continuous time. Similarly, the work [91] employed the empirical covariance to precondition second order Langevin dynamics. Affine invariant samplers can also be combined with the pCN (preconditioned Crank-Nicolson) MCMC method [36] to boost the performance of MCMC in function space [37, 46]. Another family of affine-invariant sampling algorithms is based on Newton or Gauss-Newton methods, since the use of the Hessian matrix as the preconditioner in Newton's method induces the affine invariance property. Such methods include stochastic Newton MCMC [99] and the Newton flow with different metrics [41, 134].
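The mechanism behind such constructions can be checked in a few lines: preconditioning the drift by the empirical covariance of the ensemble makes the drift transform covariantly under invertible affine maps, which is the key ingredient of the affine invariant methods of [54, 55, 91]. The following minimal numerical sketch verifies this identity; the Gaussian target precision, the ensemble, and the affine map are arbitrary illustrative choices, and the snippet shows only the drift, not a complete algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
C_inv = np.array([[2.0, 0.3], [0.3, 1.0]])   # illustrative Gaussian target precision

def grad_log_rho(theta):                     # log-gradient in the original coordinates
    return -theta @ C_inv

theta = rng.standard_normal((500, 2))        # an arbitrary ensemble
A = np.array([[3.0, 1.0], [0.0, 0.5]])
b = np.array([1.0, -2.0])
phi = theta @ A.T + b                        # affine change of variables phi = A theta + b

# The pushforward target has precision A^{-T} C_inv A^{-1}; its log-gradient at phi:
Ainv = np.linalg.inv(A)
grad_phi = -(phi - b) @ Ainv.T @ C_inv @ Ainv

drift_theta = grad_log_rho(theta) @ np.cov(theta.T)  # empirical-covariance-preconditioned drift
drift_phi = grad_phi @ np.cov(phi.T)

# Covariance: the two preconditioned drifts agree up to the map A itself
print(np.allclose(drift_phi, drift_theta @ A.T))     # True
```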
### Our Contributions
The primary contributions of the work are as follows:
* we highlight a general methodology for the design of algorithms to sample a target probability distribution known up to normalization, unifying and generalizing an emerging scattered literature;
* the methodology is based on the introduction of gradient flows of the KL divergence between the time-dependent density, which is the solution of the gradient flow, and the target density;
* we justify the choice of the KL divergence as the energy functional by showing that, among all \(f\)-divergences, it is the unique choice (up to scaling) for which the resulting gradient flow is independent of the normalization constant of the target distribution;
* the notion of gradient flow requires a metric and we employ the Fisher-Rao, Wasserstein and Stein metrics to provide concrete instantiations of the methodology;
* to design implementable algorithms from the gradient flows we discuss the use of particle approximations of mean-field models, whose law is governed by the gradient flow, and restriction of the gradient flow to a parameterized Gaussian family, which we show to be equivalent to a moment closure approach;
* we define the concept of affine invariant metrics, demonstrate links to affine invariant mean-field models and variational methods restricted to the set of Gaussians, and describe numerical results highlighting the benefits of affine invariant methods;
* we prove results concerning the long time behavior of the underlying gradient flows, in both the full and Gaussian density spaces, further highlighting the benefits of affine invariance in some cases.
Our code is accessible online:
[https://github.com/Zhengyu-Huang/InverseProblems.jl](https://github.com/Zhengyu-Huang/InverseProblems.jl)
### Organization
The remainder of the paper is organized as follows. In Section 2, we introduce energy functionals in the density space. In Section 3, we review the basics of gradient flows in the space of all probability density functions;
we make links to mean-field models and we propose a definition of affine invariant metrics, leading to affine invariant gradient flows and mean-field models. Examples of affine invariant Fisher-Rao, Wasserstein and Stein gradient flows are discussed and their convergence properties are studied theoretically; some of these results highlight the benefits of affine invariance. In Section 4, we review the basics of Gaussian approximate gradient flows. We define different Gaussian approximate gradient flows under the aforementioned metrics, computing the dynamics governing the evolution of the mean and covariance, studying their convergence properties, and again identifying the effects of affine invariance. A by-product of our computations is to show that the evolution equations for mean and covariance can be computed by simple use of moment closure. In Section 5, numerical experiments are provided to empirically confirm the theory and in particular to demonstrate the effectiveness of the affine invariance property in designing algorithms for certain classes of problems. We make concluding remarks in Section 6. Four appendices contain details of the proofs of the results stated in the main body of the paper.
## 2 Energy Functional
Consider the space of all strictly positive probability density functions, defined by1
Footnote 1: Here, and in what follows, we omit the domain \(\mathbb{R}^{N_{\theta}}\) of the integral, unless including it is needed for purposes of disambiguation. Furthermore, extension to non-negative densities is possible and is relevant in some applications; but we work in the simpler setting of strict positivity throughout this paper.
\[\mathcal{P}=\Big{\{}\rho\in L_{1}(\mathbb{R}^{N_{\theta}}):\int\rho\mathrm{d} \theta=1,\,\rho>0\Big{\}}. \tag{1}\]
Now consider any \(\rho\in\mathcal{P}\) which is absolutely continuous with respect to \(\rho_{\mathrm{post}}\), noting that we may then define the energy \(\mathcal{E}(\rho)\) by
\[\mathcal{E}(\rho)=\mathrm{KL}[\rho\|\rho_{\mathrm{post}}]=\int\rho\log\!\Big{(} \frac{\rho}{\rho_{\mathrm{post}}}\Big{)}\,\mathrm{d}\theta; \tag{2}\]
we can extend \(\mathcal{E}(\cdot)\) to the whole of \(\mathcal{P}\) by setting it to \(\infty\) when \(\rho\) is not absolutely continuous with respect to \(\rho_{\mathrm{post}}\). The minimization problem
\[\min_{\rho\in\mathcal{P}}\mathrm{KL}[\rho\|\rho_{\mathrm{post}}] \tag{3}\]
has as unique global minimizer \(\rho=\rho_{\mathrm{post}}\). This suggests the derivation of algorithms to identify \(\rho_{\mathrm{post}}\) based on minimization of \(\mathcal{E}(\rho)\) over \(\mathcal{P}\); we refer to this as nonparametric variational inference.
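That \(\rho=\rho_{\mathrm{post}}\) is indeed the unique global minimizer follows from Jensen's inequality applied to the strictly convex function \(-\log\): \[\mathrm{KL}[\rho\|\rho_{\mathrm{post}}]=\int\rho\Big{(}-\log\frac{\rho_{\mathrm{post}}}{\rho}\Big{)}\mathrm{d}\theta\geq-\log\int\rho\,\frac{\rho_{\mathrm{post}}}{\rho}\,\mathrm{d}\theta=-\log 1=0,\] with equality if and only if \(\rho_{\mathrm{post}}/\rho\) is constant, i.e. \(\rho=\rho_{\mathrm{post}}\).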
When minimizing \(\mathcal{E}(\rho)\) the first variation plays a central role. For the specific choice of KL divergence as the energy functional, the first variation is given by
\[\frac{\delta\mathcal{E}}{\delta\rho}=\log\rho-\log\rho_{\mathrm{post}}-\int( \log\rho-\log\rho_{\mathrm{post}})\mathrm{d}\theta, \tag{4}\]
where we have used the fact that \((\rho\log\rho)^{\prime}=\log\rho+1\), and we impose \(\int\frac{\delta\mathcal{E}}{\delta\rho}(\theta)\mathrm{d}\theta=0\). From the formula (4) we see that, for the KL divergence, \(\frac{\delta\mathcal{E}}{\delta\rho}\) remains unchanged if we scale \(\rho_{\mathrm{post}}\) by any positive constant \(c>0\), i.e. if we change \(\rho_{\mathrm{post}}\) to \(c\rho_{\mathrm{post}}\). This property eliminates the need to know the normalization constant of \(\rho_{\mathrm{post}}\) in order to calculate the first variation. It is common in Bayesian inference for the normalization to be unknown and indeed the fact that MCMC methods do not need
the normalization constant is central to their widespread use; it is desirable that the methodology presented here has the same property.
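As a minimal finite-grid sketch of this invariance (the grid, the densities, and the zero-mean convention on the grid are our illustrative choices, with a grid sum as a crude stand-in for the integral):

```python
import numpy as np

theta = np.linspace(-5.0, 5.0, 1001)           # crude 1D discretization of parameter space
d_theta = theta[1] - theta[0]
rho = np.exp(-0.5 * (theta - 0.5)**2)
rho /= rho.sum() * d_theta                     # normalize rho on the grid
rho_post = np.exp(-theta**4 / 4)               # unnormalized target density

def first_variation(rho, rho_post):
    v = np.log(rho) - np.log(rho_post)
    return v - v.mean()                        # enforce the zero-mean normalization, cf. (4)

c = 7.3                                        # arbitrary rescaling of the target
same = np.allclose(first_variation(rho, rho_post),
                   first_variation(rho, c * rho_post))
print(same)  # True: the constant log(c) is removed by the zero-mean normalization
```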
The following Proposition 2.1 shows that this property of the KL divergence is special: among all \(f\)-divergences with continuously differentiable \(f\) defined on the positive reals it is the only one to have this property. Here the \(f\)-divergence between two continuous density functions \(\rho\) and \(\rho_{\mathrm{post}}\), positive everywhere, is defined as
\[D_{f}[\rho\|\rho_{\mathrm{post}}]=\int\rho_{\mathrm{post}}f\Big{(}\frac{\rho} {\rho_{\mathrm{post}}}\Big{)}\mathrm{d}\theta.\]
For convex \(f\) with \(f(1)=0\), Jensen's inequality implies that \(D_{f}[\rho\|\rho_{\mathrm{post}}]\geq 0\). The KL divergence used in (2) corresponds to the choice \(f(x)=x\log x\). In what follows we view this \(f\)-divergence as a function of the probability measure \(\rho\), parameterized by \(\rho_{\mathrm{post}}\); in particular we observe that this parameter-dependent function of the probability density \(\rho\) makes sense if \(\rho_{\mathrm{post}}\) is simply a positive function: it does not need to be a probability density; we may thus scale \(\rho_{\mathrm{post}}\) by any positive real.
**Proposition 2.1**: _Assume that \(f:(0,\infty)\to\mathbb{R}\) is continuously differentiable and \(f(1)=0\). Then the KL divergence is the only \(f\)-divergence (up to scalar factors) whose first variation with respect to \(\rho\) is invariant with respect to \(\rho_{\mathrm{post}}\mapsto c\rho_{\mathrm{post}}\), for any \(c\in(0,\infty)\) and for any \(\rho_{\mathrm{post}}\in\mathcal{P}\)._
The proof of the proposition can be found in Appendix A.1.
_Remark 2.2_: As a consequence of Proposition 2.1, the gradient flows defined by the energy (2) do not depend on the normalization constant of the posterior, as we will see in the next section. Hence the numerical implementation is more straightforward in comparison with the use of other divergences or metrics to define the energy \(\mathcal{E}(\rho)\). This justifies the choice of KL divergence as energy functional for sampling, and our developments in most of this paper are specific to the energy (2). However, other energy functionals can be, and are, used for constructing gradient flows; for example, the chi-squared divergence [33, 88]:
\[\chi^{2}(\rho\|\rho_{\mathrm{post}})=\int\rho_{\mathrm{post}}\Big{(}\frac{\rho }{\rho_{\mathrm{post}}}-1\Big{)}^{2}\mathrm{d}\theta=\int\frac{\rho^{2}}{\rho_ {\mathrm{post}}}\mathrm{d}\theta-1. \tag{5}\]
The normalization constant can appear explicitly in the gradient flow equation for general energy functionals. Additional structures need to be explored to simulate such flows. For example, when the energy functional is the chi-squared divergence, in [33], kernelization is used to avoid the normalization constant in the Wasserstein gradient flow. Moreover, in [88] where a modification of the Fisher-Rao metric is used, ensemble methods with birth-death type dynamics are adopted to derive numerical methods; the normalization constant can be absorbed into the birth-death rate. \(\Diamond\)
In the context of algorithms, it is also of interest to consider minimization of \(\mathcal{E}(\cdot)\) given by (2) over parameterized manifolds in \(\mathcal{P}\): parametric variational inference. To illustrate this, we consider the manifold of Gaussian densities2\(\mathcal{P}^{G}\subset\mathcal{P}\)
Footnote 2: The extension to \(C\succeq 0\) may be relevant for some applications but we work in the simpler, strictly positive covariance, setting here.
\[\mathcal{P}^{G}:=\Big{\{}\rho_{a}:\rho_{a}(\theta)=\frac{\exp \bigl{(}-\frac{1}{2}(\theta-m)^{T}C^{-1}(\theta-m)\bigr{)}}{\sqrt{|2\pi C|}} \text{ with }a=(m,C)\in\mathcal{A}\Big{\}}, \tag{6a}\] \[\mathcal{A}=(m\in\mathbb{R}^{N_{\theta}},\,C\succ 0\in\mathbb{R}^{N_{ \theta}\times N_{\theta}}); \tag{6b}\]
here \(|\cdot|\) denotes the determinant when the argument is a matrix. This definition leads to Gaussian variational inference:
\[\min_{\rho\in\mathcal{P}^{G}}\mathrm{KL}[\rho\|\rho_{\mathrm{post}}]. \tag{7}\]
Any minimizer \(\rho_{a_{\star}}=\mathcal{N}(m_{\star},C_{\star})\) satisfies [80]3
Footnote 3: We use \(\nabla_{\theta}\nabla_{\theta}f(\theta)\) to denote the Hessian matrix associated with scalar field \(f(\theta)\). In doing so we follow the convention in the continuum mechanics literature, noticing that the Hessian operator is formed as the composition of the gradient acting on a scalar field, followed by the gradient acting on a vector field [57]. The notation \(\nabla_{\theta}^{2}f(\theta)\) is used by some authors to denote the Hessian; we avoid this because of potential confusion with its useage in some fields to denote the Laplacian (trace of the Hessian).
\[\mathbb{E}_{\rho_{a_{\star}}}\big{[}\nabla_{\theta}\log\rho_{\mathrm{post}}( \theta)\big{]}=0\quad\text{and}\quad C_{\star}^{-1}=-\mathbb{E}_{\rho_{a_{ \star}}}\big{[}\nabla_{\theta}\nabla_{\theta}\log\rho_{\mathrm{post}}(\theta) \big{]}. \tag{8}\]
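To make (8) concrete, the following is a minimal sketch of one natural way to solve these first-order conditions by a damped Monte Carlo fixed-point iteration; the scheme and the one-dimensional target \(\Phi_{R}(\theta)=\theta^{4}/4+\theta^{2}/2\) are our illustrative choices, not an algorithm from this paper. For this target, (8) can be solved in closed form: \(m_{\star}=0\) and \(C_{\star}\) solves \(C^{-1}=3C+1\), giving \(C_{\star}=(\sqrt{13}-1)/6\approx 0.434\).

```python
import numpy as np

def grad_log_rho_post(t):        # Phi_R = t^4/4 + t^2/2; normalization constant unneeded
    return -(t**3 + t)

def hess_log_rho_post(t):
    return -(3 * t**2 + 1)

rng = np.random.default_rng(0)
m, C = 1.0, 1.0                  # initial Gaussian N(m, C)
for _ in range(200):
    t = m + np.sqrt(C) * rng.standard_normal(10_000)  # samples from the current Gaussian
    m += 0.5 * C * grad_log_rho_post(t).mean()        # damped step towards E[grad log rho_post] = 0
    C = -1.0 / hess_log_rho_post(t).mean()            # C^{-1} = -E[Hessian], the second condition in (8)

print(m, C)  # approx 0.0 and (sqrt(13) - 1)/6 ≈ 0.434
```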
## 3 Gradient Flow
In this section, we start by introducing the concept of metric, the gradient flow of energy (2) that it induces, the related mean-field dynamics in the state space \(\mathbb{R}^{N_{\theta}}\), and the concept of affine invariance, all in Subsection 3.1. Then, in subsequent subsections, we introduce the Fisher-Rao (Subsection 3.2), Wasserstein (Subsection 3.3) and Stein gradient flows (Subsection 3.4), together with affine invariant modifications when relevant. In fact the Fisher-Rao metric has a stronger invariance property: it is invariant under any diffeomorphism of the parameter space. Finally, we discuss the convergence properties of these gradient flows in Subsection 3.5.
### Basics of Gradient Flows
In this subsection, we introduce gradient flows in the probability space and affine invariance in this context. Our focus is on formal calculations to derive these flows. We do not focus on the rigorous analytical underpinnings of gradient flows in a metric space; the reader interested in further details should consult [3].
#### 3.1.1 Metric
For simplicity, the probability space we consider in this paper is the manifold \(\mathcal{P}\) of smooth strictly positive densities given by (1). Then, at any \(\rho\in\mathcal{P}\), the tangent space of \(\mathcal{P}\) is
\[T_{\rho}\mathcal{P}:=\Big{\{}\sigma\in C^{\infty}(\mathbb{R}^{N_{\theta}}): \int\sigma\mathrm{d}\theta=0\Big{\}}. \tag{3.1}\]
We can identify the cotangent space \(T_{\rho}^{*}\mathcal{P}\simeq T_{\rho}\mathcal{P}\). This isomorphism relates the tangent vector \(\sigma\in T_{\rho}\mathcal{P}\) to the covector in \(T_{\rho}^{*}\mathcal{P}\) via the identification \(\langle\sigma,\cdot\rangle\), where \(\langle\sigma_{1},\sigma_{2}\rangle=\int\sigma_{1}\sigma_{2}\) denotes the inner product in \(T_{\rho}\mathcal{P}\). Conversely, any cotangent vector in \(T_{\rho}^{*}\mathcal{P}\) can be represented in \(T_{\rho}\mathcal{P}\) by the Riesz representation theorem.
Given a metric tensor at \(\rho\), defined by \(M(\rho):T_{\rho}\mathcal{P}\to T_{\rho}^{*}\mathcal{P}\), we define Riemannian metric \(g_{\rho}:T_{\rho}\mathcal{P}\times T_{\rho}\mathcal{P}\to\mathbb{R}\) via \(g_{\rho}(\sigma_{1},\sigma_{2})=\langle M(\rho)\sigma_{1},\sigma_{2}\rangle\). The inverse of \(M(\rho)\) is sometimes referred to as the Onsager operator.
_Remark 3.1_.: In the definition of \(M(\rho)\), we identify \(M(\rho)\sigma_{1}\in T_{\rho}^{*}\mathcal{P}\) with its representer in \(T_{\rho}\mathcal{P}\), and thus \(\langle M(\rho)\sigma_{1},\sigma_{2}\rangle\) is well-defined, given that \(\langle\cdot,\cdot\rangle\) is the \(L^{2}\) inner product in the tangent space. In fact, for ease of notation, in this paper we sometimes use the same notation for a dual element and its representer in the primal space. One may also understand \(\langle\cdot,\cdot\rangle\) in \(\langle M(\rho)\sigma_{1},\sigma_{2}\rangle\) as the primal-dual pairing between \(T_{\rho}^{*}\mathcal{P}\) and \(T_{\rho}\mathcal{P}\); however we employ the \(L^{2}\) inner product interpretation throughout this paper for convenience. \(\Diamond\)
The geodesic distance \(\mathcal{D}:\mathcal{P}\times\mathcal{P}\to\mathbb{R}^{+}\) under metric \(g\) is defined by the formula
\[\mathcal{D}(\rho_{A},\rho_{B})^{2}=\inf_{\rho_{t}}\Bigl{\{}\int_{0}^{1}g_{\rho_{ t}}(\partial_{t}\rho_{t},\partial_{t}\rho_{t})\mathrm{d}t:\,\rho_{0}=\rho_{A}, \rho_{1}=\rho_{B}\Bigr{\}}. \tag{3.2}\]
The distance \(\mathcal{D}\) defines a metric on probability measures; however, to avoid confusion with the Riemannian metric \(g\), in this paper we always refer to \(\mathcal{D}\) as a distance.
We also recall that the geodesic distance \(\mathcal{D}\) has the following property [43]:
\[\lim_{\epsilon\to 0}\frac{1}{\epsilon^{2}}\mathcal{D}(\rho+\epsilon\sigma, \rho)^{2}=g_{\rho}(\sigma,\sigma)=\langle M(\rho)\sigma,\sigma\rangle. \tag{3.3}\]
#### 3.1.2 Flow Equation
Recall that the dual element of the first variation of \(\mathcal{E}(\rho)\), denoted \(\frac{\delta\mathcal{E}}{\delta\rho}\in T_{\rho}^{*}\mathcal{P}\), is defined by
\[\Bigl{\langle}\frac{\delta\mathcal{E}}{\delta\rho},\sigma\Bigr{\rangle}=\lim _{\epsilon\to 0}\frac{\mathcal{E}(\rho+\epsilon\sigma)-\mathcal{E}(\rho)}{ \epsilon},\]
for any \(\sigma\in T_{\rho}\mathcal{P}\). In the preceding, we identify \(\frac{\delta\mathcal{E}}{\delta\rho}\in T_{\rho}^{*}\mathcal{P}\) via its representer in \(T_{\rho}\mathcal{P}\) so that the inner product notation is well-defined and consistent; see Remark 3.1. The gradient of \(\mathcal{E}\) under the Riemannian metric, denoted by \(\nabla\mathcal{E}\), is defined via the condition
\[\forall\sigma\in T_{\rho}\mathcal{P}\qquad g_{\rho}(\nabla\mathcal{E},\sigma) =\Bigl{\langle}\frac{\delta\mathcal{E}}{\delta\rho},\sigma\Bigr{\rangle}.\]
Using the metric tensor, we can write \(\nabla\mathcal{E}(\rho)=M(\rho)^{-1}\frac{\delta\mathcal{E}}{\delta\rho}\).
The gradient flow of \(\mathcal{E}\) with respect to this metric is thus defined by
\[\frac{\partial\rho_{t}}{\partial t}=-\nabla\mathcal{E}(\rho_{t})=-M(\rho_{t}) ^{-1}\frac{\delta\mathcal{E}}{\delta\rho}\Big{|}_{\rho=\rho_{t}}, \tag{3.4}\]
in which the right hand side is an element in \(T_{\rho_{t}}\mathcal{P}\).
_Remark 3.2_.: The gradient flow can also be interpreted from the proximal perspective. Given the metric \(g\) and the geodesic distance function under this metric, \(\mathcal{D}\), the proximal point method uses the following iteration
\[\mathfrak{r}_{n+1}=\operatorname*{arg\,min}_{\rho\in\mathcal{P}}\Bigl{(} \mathcal{E}(\rho)+\frac{1}{2\Delta t}\mathcal{D}(\rho,\mathfrak{r}_{n})^{2} \Bigr{)} \tag{3.5}\]
to minimize the energy functional \(\mathcal{E}\) in density space \(\mathcal{P}\). When \(\Delta t\) is small it is natural to seek \(\mathfrak{r}_{n+1}=\mathfrak{r}_{n}+\Delta t\sigma_{n}\) and note that, invoking the approximation implied by (3.3),
\[\sigma_{n}\approx\operatorname*{arg\,min}_{\sigma\in T_{\mathfrak{r}_{n}} \mathcal{P}}\Bigl{(}\mathcal{E}(\mathfrak{r}_{n}+\Delta t\sigma)+\frac{1}{2} \Delta t\langle\sigma,M(\mathfrak{r}_{n})\sigma\rangle\Bigr{)}.\]
To leading order in \(\Delta t\), this expression is minimized by choosing
\[\sigma_{n}\approx-M(\mathfrak{r}_{n})^{-1}\frac{\delta\mathcal{E}}{\delta \rho}\Big{|}_{\rho=\mathfrak{r}_{n}}.\]
Letting \(\mathfrak{r}_{n}\approx\rho_{n\Delta t}\), the formal continuous time limit of the proximal algorithm leads to the corresponding gradient flow (3.4) [69].
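The passage from the proximal iteration (3.5) to the flow (3.4) can be seen in the simplest finite-dimensional setting. The sketch below is a Euclidean analogue only, not the probability-space iteration itself: for \(E(x)=\frac{1}{2}ax^{2}\) with the Euclidean distance, the proximal step has the closed form of implicit Euler, and the iterates converge to the gradient-flow solution \(x_{0}e^{-at}\) as \(\Delta t\to 0\).

```python
import numpy as np

# Euclidean analogue of the proximal iteration: for E(x) = 0.5*a*x^2 the
# step x_{n+1} = argmin_x ( E(x) + |x - x_n|^2 / (2*dt) ) has the closed
# form x_{n+1} = x_n / (1 + a*dt), i.e. implicit Euler for dx/dt = -a x.
a, x0, T = 2.0, 1.0, 1.0
for dt in [0.5, 0.1, 0.01, 0.001]:
    x = x0
    for _ in range(int(T / dt)):
        x = x / (1.0 + a * dt)            # closed-form proximal step
    print(dt, x, np.exp(-a * T))          # iterate approaches exp(-a T)
```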
For the specific choice (2.2) of KL divergence functional as the energy functional, the first variation is given by (2.4). Thus the resulting gradient flow has the form
\[\frac{\partial\rho_{t}}{\partial t}=-M(\rho_{t})^{-1}\bigl{(}\log\rho_{t}-\log \rho_{\mathrm{post}}-\int(\log\rho_{t}-\log\rho_{\mathrm{post}})\mathrm{d}\theta \bigr{)}. \tag{3.6}\]
Most of the paper focuses on this gradient flow: on specific choices of metric, on mean-field models whose law is governed by this gradient flow, and on restrictions of this gradient flow to variations in the manifold of Gaussians.
#### 3.1.3 Affine Invariance
We now introduce the concept of affine invariance. Roughly speaking, affine invariant gradient flows are invariant under any invertible affine transformation of the underlying variables; as a consequence, the convergence rate is independent of the affine transformation. It is thus natural to expect that algorithms with this property have an advantage for sampling highly anisotropic posteriors.
Let \(\varphi:\theta\to\tilde{\theta}\) denote a diffeomorphism in \(\mathbb{R}^{N_{\theta}}\). When \(\varphi(\theta)=A\theta+b\), \(A\in\mathbb{R}^{N_{\theta}\times N_{\theta}},b\in\mathbb{R}^{N_{\theta}}\) and \(A\) is invertible, then the diffeomorphism is an affine transformation.
**Definition 3.3**: We define the _pushforward operation \(\#\)_ for various objects as follows:
* for density \(\rho\), we write \(\tilde{\rho}=\varphi\#\rho\), which satisfies \(\tilde{\rho}(\tilde{\theta})=\rho(\varphi^{-1}(\tilde{\theta}))|\nabla_{\tilde {\theta}}\varphi^{-1}(\tilde{\theta})|\);
* for tangent vector \(\sigma\in T_{\rho}\mathcal{P}\), we have \(\tilde{\sigma}=\varphi\#\sigma\in T_{\tilde{\rho}}\mathcal{P}\) which satisfies \[\tilde{\sigma}(\tilde{\theta})=\sigma(\varphi^{-1}(\tilde{\theta}))|\nabla_{ \tilde{\theta}}\varphi^{-1}(\tilde{\theta})|;\]
* for functional \(\mathcal{E}\) on \(\mathcal{P}\), we define \(\tilde{\mathcal{E}}=\varphi\#\mathcal{E}\) via \(\tilde{\mathcal{E}}(\tilde{\rho})=\mathcal{E}(\varphi^{-1}\#\tilde{\rho})\).
In the above definition, the forms of \(\tilde{\rho}\) and \(\tilde{\sigma}\) can be derived by the standard change-of-variable formula [129].
Then we can define affine invariant gradient flow, metric, and mean-field dynamics.
**Definition 3.4** (Affine Invariant Gradient Flow): Fix a Riemannian metric \(g\) and the gradient operation \(\nabla\) with respect to this metric. Consider the gradient flow
\[\frac{\partial\rho_{t}}{\partial t}=-\nabla\mathcal{E}(\rho_{t})\]
and the affine transformation \(\tilde{\theta}=\varphi(\theta)=A\theta+b\). Let \(\tilde{\rho}_{t}:=\varphi\#\rho_{t}\) denote the distribution of \(\tilde{\theta}\) at time \(t\) and set \(\tilde{\mathcal{E}}=\varphi\#\mathcal{E}\). The _gradient flow is affine invariant_ if
\[\frac{\partial\tilde{\rho}_{t}}{\partial t}=-\nabla\tilde{\mathcal{E}}(\tilde {\rho}_{t}).\]
The key idea in the preceding definition is that, after the change of variables, the dynamics of \(\tilde{\rho}_{t}\) is itself a gradient flow, in the same metric as the gradient flow in the original variables.
**Definition 3.5** (Affine Invariant Metric): Define the _pull-back operator_ on the Riemannian metric \(g\) by
\[(\varphi^{\#}g)_{\rho}(\sigma_{1},\sigma_{2})=g_{\varphi\#\rho}(\varphi\# \sigma_{1},\varphi\#\sigma_{2}),\]
for any \(\rho\in\mathcal{P}\) and \(\sigma_{1},\sigma_{2}\in T_{\rho}\mathcal{P}\). We say that Riemannian metric \(g\) is affine invariant if \(\varphi^{\#}g=g\) for any affine transformation \(\varphi\).
The affine invariance of gradient flows is closely related to that of the Riemannian metric:
**Proposition 3.6**: _The following two conditions are equivalent:_
1. _the gradient flow under Riemannian metric_ \(g\) _is affine invariant for any_ \(\mathcal{E}\)_;_
2. _the Riemannian metric_ \(g\) _is affine invariant._
We provide a proof for this proposition in Appendix B.2. Given this, it suffices to focus on the affine invariance of the Riemannian metrics that we consider in this paper; furthermore, we may modify them where needed to make them affine invariant.
_Remark 3.7_.: In Proposition 3.6, we consider the affine invariance property to hold for any \(\mathcal{E}\); the metric is _independent_ of \(\mathcal{E}\). However, it is possible to choose a metric that depends on the energy functional \(\mathcal{E}\). An example of this is Newton's method, where the Riemannian metric is given by the Hessian of the energy functional, assuming it is positive definite; see the discussion of Newton's flow on probability space in [134]. \(\Diamond\)

_Remark 3.8_.: Recall that our motivation for introducing the affine invariance property is that algorithms with this property will, in settings where an affine transformation removes anisotropy, have favorable performance when sampling highly anisotropic posteriors. Our definition of affine invariance is tied to the energy functional \(\mathcal{E}\) without direct reference to \(\rho_{\mathrm{post}}\). Given \(\mathcal{E}\), an affine invariant gradient flow has the same convergence properties when the energy functional changes to \(\mathcal{E}(\varphi^{-1}\#\rho)\), where \(\varphi\) is an invertible affine transformation. To connect the transformation of the energy functional to that of \(\rho_{\mathrm{post}}\), we note that the KL divergence satisfies the property
\[\mathcal{E}(\varphi^{-1}\#\rho)=\mathrm{KL}[\varphi^{-1}\#\rho\|\rho_{\mathrm{ post}}]=\mathrm{KL}[\rho\|\varphi\#\rho_{\mathrm{post}}]. \tag{3.7}\]
Therefore, affine invariant gradient flows of the KL divergence have the same convergence properties when \(\rho_{\mathrm{post}}\) changes to \(\varphi\#\rho_{\mathrm{post}}\), for any invertible affine transformation \(\varphi\). This suggests that the flow will have favorable behaviour for sampling highly anisotropic posteriors provided that, under at least one affine transformation, the anisotropy is removed. \(\Diamond\)
#### 3.1.4 Mean-Field Dynamics
Approximating the dynamics implied by (3.6) is often a substantial task. One approach is to identify a mean-field stochastic dynamical system, with state space \(\mathbb{R}^{N_{\theta}}\), defined so that its law is given by (3.6). For example, we may introduce the Itô SDE
\[\mathrm{d}\theta_{t}=f(\theta_{t};\rho_{t},\rho_{\mathrm{post}})\mathrm{d}t+h( \theta_{t};\rho_{t},\rho_{\mathrm{post}})\mathrm{d}W_{t}, \tag{3.8}\]
where \(W_{t}\in\mathbb{R}^{N_{\theta}}\) is a standard Brownian motion. Because the drift \(f:\mathbb{R}^{N_{\theta}}\times\mathcal{P}\times\mathcal{P}\to\mathbb{R}^{N_{\theta}}\) and diffusion coefficient \(h:\mathbb{R}^{N_{\theta}}\times\mathcal{P}\times\mathcal{P}\to\mathbb{R}^{N_{\theta}\times N_{\theta}}\) are evaluated at \(\rho_{t}\), the density of \(\theta_{t}\) itself, this is a mean-field model.
The density is governed by a nonlinear Fokker-Planck equation
\[\frac{\partial\rho_{t}}{\partial t}=-\nabla_{\theta}\cdot(\rho_{t}f)+\frac{1} {2}\nabla_{\theta}\cdot\big{(}\nabla_{\theta}\cdot(hh^{T}\rho_{t})\big{)}. \tag{3.9}\]
By choice of \(f,h\) it may be possible to ensure that (3.9) coincides with (3.6). Then an interacting particle system can be used to approximate (3.8), generating an empirical measure which approximates \(\rho_{t}\).
As the affine invariance property is important for gradient flows, we also need to study this property for mean-field dynamics that are used to approximate these flows.
**Definition 3.9** (Affine Invariant Mean-Field Dynamics): _Consider the mean-field dynamics (3.8) and the affine transformation \(\tilde{\theta}=\varphi(\theta)=A\theta+b\). The mean-field dynamics is affine invariant, when_
\[Af(\theta;\rho,\rho_{\rm post}) = f(\varphi(\theta);\varphi\#\rho,\varphi\#\rho_{\rm post}), \tag{3.10a}\] \[Ah(\theta;\rho,\rho_{\rm post}) = h(\varphi(\theta);\varphi\#\rho,\varphi\#\rho_{\rm post}), \tag{3.10b}\]
_for any affine transformation \(\varphi\). This implies that \(\tilde{\theta}_{t}=\varphi(\theta_{t})\) satisfies an SDE of the same form as (3.8):_
\[{\rm d}\tilde{\theta}_{t}=f(\tilde{\theta}_{t};\tilde{\rho}_{t},\tilde{\rho}_ {\rm post}){\rm d}t+h(\tilde{\theta}_{t};\tilde{\rho}_{t},\tilde{\rho}_{\rm post }){\rm d}W_{t}, \tag{3.11}\]
_where \(\tilde{\rho}_{t}=\varphi\#\rho_{t}\) and \(\tilde{\rho}_{\rm post}=\varphi\#\rho_{\rm post}\) by definition._
If we use this definition, then mean-field dynamics of affine invariant gradient flows need not be affine invariant, since there may be different \(f,h\) giving rise to the same flow (equivalence classes). For the affine invariance of the corresponding mean-field dynamics, we have the following proposition, noting that the condition on the energy is satisfied for (2.2) by (3.7).
**Proposition 3.10**: _Consider the energy functional \(\mathcal{E}(\rho;\rho_{\rm post})\), making explicit the dependence on \(\rho_{\rm post}\), and assume that \(\mathcal{E}(\varphi^{-1}\#\rho;\rho_{\rm post})=\mathcal{E}(\rho;\varphi\# \rho_{\rm post})\) holds. Then, corresponding to any affine invariant gradient flow of \(\mathcal{E}\), there is a mean-field dynamics of the form (3.8) which is affine invariant._
The proof of this proposition may be found in Appendix B.3.
As a consequence, Proposition 3.10 unifies the affine invariance property of the gradient flow in probability space and the corresponding mean-field dynamics. We note, however, that the mean-field dynamics is not unique and we only prove the existence of one choice (amongst many) which is affine invariant. In our later discussions, we will give some specific construction of the mean-field dynamics for several gradient flows, and show that they are indeed affine invariant.
**Remark 3.11**: _The condition assumed in Proposition 3.10 indicates that the push-forward of the functional \(\mathcal{E}\) (see Definition 3.3) satisfies_
\[\tilde{\mathcal{E}}(\tilde{\rho})=\tilde{\mathcal{E}}(\tilde{\rho};\rho_{\rm post })=\mathcal{E}(\varphi^{-1}\#\tilde{\rho};\rho_{\rm post})=\mathcal{E}(\tilde {\rho};\varphi\#\rho_{\rm post})=\mathcal{E}(\tilde{\rho};\tilde{\rho}_{\rm post }). \tag{3.12}\]
_Thus, this condition allows us to connect the affine invariance defined via the transformation of the energy functional with that defined via the transformation of the target posterior distribution, as explained in Remark 3.8. Beyond the KL divergence (see (3.7)), the condition is also satisfied by various widely used energy functionals, such as the Hellinger distance and the chi-squared divergence. \(\Diamond\)_
### Fisher-Rao Gradient Flow
#### 3.2.1 Metric
The Fisher-Rao Riemannian metric is
\[g_{\rho}^{\rm FR}(\sigma_{1},\sigma_{2})=\int\frac{\sigma_{1}\sigma_{2}}{\rho} {\rm d}\theta.\]
Writing tangent vectors on a multiplicative scale, by setting \(\sigma=\rho\psi_{\sigma}\), we see that this metric may be written as
\[g_{\rho}^{\rm FR}(\sigma_{1},\sigma_{2})=\int\psi_{\sigma_{1}}\psi_{\sigma_{2} }\rho{\rm d}\theta,\]
and hence that in the \(\psi_{\sigma}\) variable the metric is described via the \(L^{2}_{\rho}\) inner-product.
The Fisher-Rao metric tensor \(M^{\mathrm{FR}}(\rho)\) associated to \(g^{\mathrm{FR}}_{\rho}\) satisfies
\[M^{\mathrm{FR}}(\rho)\sigma=\psi_{\sigma}-\int\psi_{\sigma}\mathrm{ d}\theta,\quad\forall\ \sigma\in T_{\rho}\mathcal{P} \tag{3.13a}\] \[M^{\mathrm{FR}}(\rho)^{-1}\psi=\rho(\psi-\mathbb{E}_{\rho}[\psi]), \quad\forall\ \psi\in T^{*}_{\rho}\mathcal{P}. \tag{3.13b}\]
The corresponding geodesic distance \(\mathcal{D}^{\mathrm{FR}}:\mathcal{P}\times\mathcal{P}\to\mathbb{R}^{+}\) is
\[\mathcal{D}^{\mathrm{FR}}(\rho_{A},\rho_{B})^{2}=\inf_{\rho_{t}}\Bigl{\{}\int_ {0}^{1}\mathrm{d}t\int\frac{|\partial_{t}\rho_{t}|^{2}}{\rho_{t}}\mathrm{d} \theta:\,\rho_{0}=\rho_{A},\rho_{1}=\rho_{B}\Bigr{\}}. \tag{3.14}\]
If we do not restrict the distributions \(\rho_{t}\) to lie in the probability space (i.e., we allow them to have any positive mass), then by using the relation
\[\frac{|\partial_{t}\rho_{t}|^{2}}{\rho_{t}}=4\Bigl{|}\frac{\mathrm{d}}{ \mathrm{d}t}\sqrt{\rho_{t}}\Bigr{|}^{2}\]
and the Cauchy-Schwarz inequality, we can solve the optimization problem in (3.14) explicitly. The optimal objective value will be \(4\int|\sqrt{\rho_{A}}-\sqrt{\rho_{B}}|^{2}\mathrm{d}\theta.\) This is (up to a constant scaling) the Hellinger distance [56].
On the other hand, if we constrain \(\rho_{t}\) to be on the probability space, then the geodesic distance will be (up to a constant scaling) the spherical Hellinger distance:
\[\mathcal{D}^{\mathrm{FR}}(\rho_{A},\rho_{B})^{2}=4\arccos^{2}\left(\int\sqrt{ \rho_{A}}\sqrt{\rho_{B}}\mathrm{d}\theta\right).\]
For further discussion, see [61, 82, 95]. In view of this relation, Fisher-Rao gradient flows are sometimes referred to as spherical Hellinger gradient flows in the literature [88, 95].
#### 3.2.2 Flow Equation
From (2.4) and (3.13b) we see that the Fisher-Rao gradient flow of the KL divergence is
\[\frac{\partial\rho_{t}}{\partial t}= -M^{\mathrm{FR}}(\rho_{t})^{-1}\frac{\delta\mathcal{E}}{\delta \rho}\Bigr{|}_{\rho=\rho_{t}}, \tag{3.15}\] \[= \rho_{t}\bigl{(}\log\rho_{\mathrm{post}}-\log\rho_{t}\bigr{)}- \rho_{t}\mathbb{E}_{\rho_{t}}[\log\rho_{\mathrm{post}}-\log\rho_{t}].\]
_Remark 3.12_.: The gradient flow (3.15) in probability space has the form typical of a mean-field model for a birth-death process: it is possible to create and kill particles to sample this process. However, the support of the empirical distribution under this algorithm never increases during the evolution. To address this issue, the work [94] added Langevin diffusion to the birth-death process, resulting in what they term the Wasserstein-Fisher-Rao gradient flow. Alternatively, the authors in [88] utilized a Markov chain kernel and MCMC to sample the birth-death dynamics arising from the Fisher-Rao gradient flow, using the chi-squared divergence [86] instead of (2.2). \(\Diamond\)
_Remark 3.13_.: When the target distribution (1.1) arises from a Bayesian inverse problem it may be written in the form
\[\rho_{\mathrm{post}}(\theta)\propto\exp(-\Phi(\theta))\rho_{0}(\theta); \tag{3.16}\]
here the function \(\Phi:\mathbb{R}^{N_{\theta}}\to\mathbb{R}_{+}\) is the negative log-likelihood and \(\rho_{0}\) is the prior. In this context it is interesting to consider
\[\mathcal{E}(\rho)=\int\rho(\theta)\,\Phi(\theta)\,\mathrm{d}\theta \tag{3.17}\]
with associated Fisher-Rao gradient flow
\[\frac{\partial\rho_{t}}{\partial t}=-\rho_{t}\left(\Phi-\mathbb{E}_{\rho_{t}}[ \Phi]\right). \tag{3.18}\]
It may be shown that the density \(\rho_{t}\) is explicitly given by
\[\rho_{t}(\theta)=\frac{\exp(-t\Phi(\theta))\rho_{0}(\theta)}{\mathbb{E}_{\rho _{0}}[\exp(-t\Phi)]}. \tag{3.19}\]
Hence we recover (3.16) at \(t=1\). This observation is at the heart of homotopy-based approaches to Bayesian inference [40], leading to methods based on particle filters; the link to an evolution equation for \(\rho_{t}\) is employed and made explicit in various other approaches to filtering [39, 113]. See [35, 20] for overviews. Such Fisher-Rao gradient flow structure for Bayes updates has also been identified in the context of filtering in [83, 61, 60].
We also note that, by letting \(t\to\infty\) one finds that
\[\lim_{t\to\infty}\rho_{t}=\delta_{\theta^{*}} \tag{3.20}\]
where \(\theta^{*}\) denotes the (assumed unique) minimizer of \(\Phi\), in the support of \(\rho_{0}\), and \(\delta_{\theta^{*}}\) denotes the Dirac delta function centred at \(\theta^{*}\). \(\Diamond\)
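The explicit solution (3.19) can be realized by self-normalized importance sampling: draw from the prior and reweight by \(\exp(-t\Phi)\). The sketch below is illustrative only (the quadratic \(\Phi\), the prior, and the sample size are arbitrary choices, and the weights degenerate as \(t\) grows); it shows the weighted mean matching the posterior mean at \(t=1\) and drifting toward the minimizer of \(\Phi\) thereafter, consistent with (3.20).

```python
import numpy as np

# rho_t in (3.19) realized as a tempered reweighting of prior samples.
rng = np.random.default_rng(1)
Phi = lambda th: 0.5 * (th - 2.0) ** 2        # negative log-likelihood, argmin = 2
prior = rng.normal(0.0, 3.0, size=100_000)    # rho_0 = N(0, 9)

for t in [0.0, 1.0, 10.0, 100.0]:
    w = np.exp(-t * Phi(prior))
    w /= w.sum()                               # self-normalized weights
    print(t, np.sum(w * prior))                # mean: 0 -> 1.8 (t=1) -> near 2
```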
#### 3.2.3 Affine Invariance
The Fisher-Rao metric is affine invariant. One may understand this property through the affine invariance property of Newton's method when the energy functional is the KL divergence. To see this note that, from (2.4), the Hessian of \(\mathcal{E}\) given by (2.2) has the form
\[\frac{\delta^{2}\mathcal{E}(\rho)}{\delta\rho^{2}}=\frac{\delta^{2}\mathrm{KL} [\rho\|\rho_{\mathrm{post}}]}{\delta\rho^{2}}=\frac{1}{\rho}=M^{\mathrm{FR}}( \rho). \tag{3.21}\]
Therefore, the Fisher-Rao gradient flow of the KL divergence behaves like Newton's method, which is affine invariant. In fact the Fisher-Rao metric is invariant under _any diffeomorphism_ of the parameter space, not just invertible affine transformations. Indeed, it is the only metric, up to constant scaling, that satisfies this strong invariance property [23, 5, 9].
#### 3.2.4 Mean-Field Dynamics
The Fisher-Rao gradient flow (3.15) in \(\rho_{t}\) can be realized as the law of a mean-field ordinary differential equation in \(\theta_{t}\)
\[\frac{\mathrm{d}\theta_{t}}{\mathrm{d}t}=f(\theta_{t};\rho_{t},\rho_{\mathrm{ post}}). \tag{3.22}\]
Writing the nonlinear Liouville equation associated with this model and equating it to (3.15) shows that drift \(f\) satisfies
\[-\nabla_{\theta}\cdot(\rho_{t}f)=\rho_{t}\bigl{(}\log\rho_{\mathrm{post}}- \log\rho_{t}\bigr{)}-\rho_{t}\mathbb{E}_{\rho_{t}}[\log\rho_{\mathrm{post}}- \log\rho_{t}]. \tag{3.23}\]
Note that \(f\) is not uniquely determined by (3.23). Writing \(f\) as a gradient of a potential, with respect to \(\theta\), shows that the potential satisfies a linear elliptic PDE, and under some conditions this will have a unique solution; but there will be other choices of \(f\) which are not a pure gradient, leading to nonuniqueness.
By Proposition 3.10, for affine invariant gradient flows, there exist mean-field dynamics (i.e., via choosing certain \(f\) in (3.23)) that are affine invariant. Here, we construct a specific class of \(f\) that leads to affine invariant mean-field dynamics for the Fisher-Rao gradient flow.
First, we introduce a matrix valued function \(P:\mathbb{R}^{N_{\theta}}\times\mathcal{P}\to\mathbb{R}^{N_{\theta}\times N_{ \theta}}_{\succ 0}\), where the output space is the cone of positive-definite symmetric matrices; we refer to matrices such as \(P\) as _preconditioners_ throughout this paper. Then, the following proposition shows that the choice \(f=P(\theta,\rho_{t})\nabla\phi\) leads to affine invariance of the dynamics, under a certain condition on \(P\). The proof can be found in Appendix B.4.
**Proposition 3.14**: _Consider the invertible affine transformation \(\tilde{\theta}=\varphi(\theta)=A\theta+b\) and correspondingly \(\tilde{\rho}=\varphi\#\rho\). Assume that the preconditioning matrix satisfies_
\[P(\tilde{\theta},\tilde{\rho})=AP(\theta,\rho)A^{T}. \tag{3.24}\]
_Assume, furthermore, that the solution \(\phi(\theta;\rho,\rho_{\rm post})\) of the equation_
\[-\nabla_{\theta}\cdot(\rho P\nabla_{\theta}\phi)=\rho\big{(}\log\rho_{\rm post }-\log\rho\big{)}-\rho\mathbb{E}_{\rho}[\log\rho_{\rm post}-\log\rho] \tag{3.25}\]
_exists, is unique (up to constants) and belongs to \(C^{2}(\mathbb{R}^{N_{\theta}})\), for any \(\rho\in\mathcal{P}\). Then, the corresponding mean-field equation (3.22) with \(f=P\nabla_{\theta}\phi\) is affine invariant._
_Remark 3.15_.: More generally, given any alternative functional \(\mathcal{E}\), such as (3.17), one can define affine invariant mean-field ordinary differential equations of the form (3.22) with drift \(f=P\nabla_{\theta}\phi\) and potential \(\phi\) satisfying the equation
\[-\nabla_{\theta}\cdot(\rho P\nabla_{\theta}\phi)=\rho\Big{(}\frac{\delta \mathcal{E}}{\delta\rho}-\mathbb{E}_{\rho}\Big{[}\frac{\delta\mathcal{E}}{ \delta\rho}\Big{]}\Big{)}. \tag{3.26}\]
\(\Diamond\)
In addition to the above choice of mean field models, birth-death type mean field dynamics have also been used to simulate Fisher-Rao gradient flows for sampling; see [94, 95].
### Wasserstein Gradient Flow
#### 3.3.1 Metric
Generalizing the relationship between \(\sigma\) and \(\psi_{\sigma}\) introduced in the Fisher-Rao context, we define \(\psi_{\sigma}\) to be solution of the PDE
\[-\nabla_{\theta}\cdot(\rho\nabla_{\theta}\psi_{\sigma})=\sigma. \tag{3.27}\]
This definition requires specification of function spaces to ensure unique invertibility of the divergence form elliptic operator. One then defines the Wasserstein metric tensor \(M^{\rm W}(\rho)\) and its inverse by
\[M^{\rm W}(\rho)\sigma=\psi_{\sigma}\quad\forall\ \sigma\in T_{ \rho}\mathcal{P}, \tag{3.28a}\] \[M^{\rm W}(\rho)^{-1}\psi=-\nabla_{\theta}\cdot(\rho\nabla_{ \theta}\psi),\quad\forall\ \psi\in T^{*}_{\rho}\mathcal{P}. \tag{3.28b}\]
Elementary manipulations show that the corresponding Riemannian metric is given by
\[g^{\rm W}_{\rho}(\sigma_{1},\sigma_{2}) =\langle\sigma_{1},M^{\rm W}(\rho)\sigma_{2}\rangle \tag{3.29a}\] \[=\langle M^{\rm W}(\rho)^{-1}\psi_{\sigma_{1}},\psi_{\sigma_{2}}\rangle \tag{3.29b}\] \[=\int\rho(\theta)\nabla_{\theta}\psi_{\sigma_{1}}(\theta)^{T} \nabla_{\theta}\psi_{\sigma_{2}}(\theta){\rm d}\theta. \tag{3.29c}\]
Here \(g_{\rho}^{\rm W}\) is positive-definite and hence a valid metric. It is termed the Wasserstein Riemannian metric throughout this paper.
_Remark 3.16_.: The Wasserstein Riemannian metric has a transport interpretation. To understand this, fix \(\sigma\in T_{\rho}\mathcal{P}\) and consider the family of _velocity fields_ \(v\) related to \(\sigma\) via the constraint \(\sigma=-\nabla_{\theta}\cdot(\rho v)\). Then define \(v_{\sigma}=\arg\min_{v}\int\rho|v|^{2}\), in which the minimization is over all \(v\) satisfying the constraint. A formal Lagrange multiplier argument can be used to deduce that \(v_{\sigma}=\nabla_{\theta}\psi_{\sigma}\) for some \(\psi_{\sigma}\). This motivates the relationship appearing in (3.27), as well as the form of the Wasserstein Riemannian metric appearing in (3.29), which may then be viewed as measuring the kinetic energy \(\int\rho|v_{\sigma}|^{2}{\rm d}\theta\). We emphasize that, for ease of understanding, our discussion of the Riemannian structure of the Wasserstein metric is purely formal; for rigorous treatments, the reader may consult [3]. \(\Diamond\)
To further develop the preceding discussion, consider the Liouville equation for the dynamical system in \(\mathbb{R}^{N_{\theta}}\) driven by the vector field \(v_{\sigma}:=\nabla_{\theta}\psi_{\sigma}\). Let \(\rho_{A},\rho_{B}\) be two elements of \(\mathcal{P}\) and let \(\rho_{t}\) be a path in time governed by this Liouville equation, satisfying the boundary conditions \(\rho_{0}=\rho_{A},\rho_{1}=\rho_{B}\). Then
\[\frac{\partial\rho_{t}}{\partial t}+\nabla_{\theta}\cdot(\rho_{t}\nabla_{ \theta}\psi_{t})=0,\quad\rho_{0}=\rho_{A},\ \rho_{1}=\rho_{B}. \tag{3.30}\]
With this equation, we can write the geodesic distance \(\mathcal{D}^{\rm W}:\mathcal{P}\times\mathcal{P}\rightarrow\mathbb{R}^{+}\) as:
\[\mathcal{D}^{\rm W}(\rho_{A},\rho_{B})^{2} =\inf_{\rho_{t}}\Bigl{\{}\int_{0}^{1}g_{\rho_{t}}^{\rm W}(\partial _{t}\rho_{t},\partial_{t}\rho_{t}){\rm d}t:\,\rho_{0}=\rho_{A},\rho_{1}=\rho_ {B}\Bigr{\}} \tag{3.31}\] \[=\inf_{\psi_{t}\in\mathsf{L}}\Bigl{\{}\int_{0}^{1}{\rm d}t\int \rho_{t}|\nabla_{\theta}\psi_{t}|^{2}{\rm d}\theta\Bigr{\}},\]
where \(\mathsf{L}\) is the set of time-dependent potentials \(\psi_{t}\) such that equation (3.30) holds. This is the celebrated Benamou-Brenier formula for the 2-Wasserstein distance [12].
#### 3.3.2 Flow Equation
The Wasserstein gradient flow of the KL divergence is
\[\frac{\partial\rho_{t}}{\partial t} =-M^{\rm W}(\rho_{t})^{-1}\frac{\delta\mathcal{E}}{\delta\rho} \Bigr{|}_{\rho=\rho_{t}} \tag{3.32}\] \[=\nabla_{\theta}\cdot\bigl{(}\rho_{t}(\nabla_{\theta}\log\rho_{t }-\nabla_{\theta}\log\rho_{\rm post})\bigr{)}\] \[=-\nabla_{\theta}\cdot(\rho_{t}\nabla_{\theta}\log\rho_{\rm post })+\Delta_{\theta}\rho_{t}.\]
This is simply the Fokker-Planck equation for the Langevin dynamics
\[{\rm d}\theta_{t}=\nabla_{\theta}\log\rho_{\rm post}(\theta_{t}){\rm d}t+\sqrt{2}{ \rm d}W_{t}, \tag{3.33}\]
where \(W_{t}\in\mathbb{R}^{N_{\theta}}\) is a standard Brownian motion. This is a trivial mean-field model of the form (3.8), in the sense that there is no dependence on the density \(\rho_{t}\) associated with the law of \(\theta_{t}\).
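An Euler-Maruyama discretization of (3.33) gives perhaps the simplest sampler arising in this setting. The sketch below is illustrative only: the one-dimensional target, step size, and run length are arbitrary choices, and the small step-size bias of the scheme is ignored.

```python
import numpy as np

# Euler-Maruyama discretization of the Langevin dynamics (3.33).
rng = np.random.default_rng(2)
grad_log_post = lambda th: -th                 # target N(0, 1)
dt, n_steps = 1e-2, 20_000
theta = np.zeros(1_000)                        # ensemble of independent chains
for _ in range(n_steps):
    theta += grad_log_post(theta) * dt \
             + np.sqrt(2 * dt) * rng.standard_normal(theta.shape)
print(theta.mean(), theta.var())               # approximately 0 and 1
```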
_Remark 3.17_.: We note that elliptic equations defining certain potentials arise in the context of both the Fisher-Rao and the Wasserstein metrics. However, while (3.27) appears only in the definition of the Wasserstein metric, solving (3.26) is required for obtaining the mean-field equations (3.22) in the Fisher-Rao setting. Returning to the cost functional (3.17), we find that the associated Wasserstein gradient mean-field dynamics simply reduces to gradient descent
\[{\rm d}\theta_{t}=-\nabla_{\theta}\Phi(\theta_{t}){\rm d}t \tag{3.34}\]
while the associated Fisher-Rao mean-field equations are more complex and linked to Bayesian inference as discussed earlier in Remark 3.13. \(\Diamond\)
#### 3.3.3 Affine Invariance
The Wasserstein Riemannian metric (3.29) is not affine invariant. Hence, in this subsection, we introduce an affine invariant modification to the Wasserstein metric. To this end, we consider a preconditioner \(P:\mathbb{R}^{N_{\theta}}\times\mathcal{P}\to\mathbb{R}^{N_{\theta}\times N_{ \theta}}_{\succ 0}\), where the output space is the cone of positive-definite symmetric matrices.
We generalize (3.27) and let \(\psi_{\sigma}\) solve the PDE
\[-\nabla_{\theta}\cdot(\rho P(\theta,\rho)\nabla_{\theta}\psi_{\sigma})=\sigma, \tag{3.35}\]
again noting that specification of function spaces is needed to ensure unique invertibility of the divergence form elliptic operator (see Proposition 3.14 where similar considerations arise). We may then generalize the metric tensor in (3.28) to obtain \(M^{\mathrm{AIW}}(\rho)\) and inverse given by
\[M^{\mathrm{AIW}}(\rho)\sigma=\psi_{\sigma},\quad\forall\ \sigma \in T_{\rho}\mathcal{P}, \tag{3.36a}\] \[M^{\mathrm{AIW}}(\rho)^{-1}\psi=-\nabla_{\theta}\cdot(\rho P( \theta,\rho)\nabla_{\theta}\psi),\quad\forall\ \psi\in T^{*}_{\rho}\mathcal{P}. \tag{3.36b}\]
Manipulations similar to those used in (3.29), but with \(M^{\mathrm{AIW}}(\rho)\), show that
\[g^{\mathrm{AIW}}_{\rho}(\sigma_{1},\sigma_{2}) =\langle\sigma_{1},M^{\mathrm{AIW}}(\rho)\sigma_{2}\rangle\] \[=\langle M^{\mathrm{AIW}}(\rho)^{-1}\psi_{\sigma_{1}},\psi_{ \sigma_{2}}\rangle\] \[=\int\rho(\theta)\nabla_{\theta}\psi_{\sigma_{1}}(\theta)^{T}P( \theta,\rho)\nabla_{\theta}\psi_{\sigma_{2}}(\theta)\mathrm{d}\theta.\]
It follows that \(g^{\mathrm{AIW}}_{\rho}\) is positive-definite and hence a valid metric tensor. We have the following proposition to guarantee this metric tensor is affine invariant:
**Proposition 3.18**: _Under the assumption on \(P\) given in Proposition 3.14, leading to (3.24), the metric corresponding to \(M^{\mathrm{AIW}}\) is affine invariant. Consequently, the associated gradient flow of the KL divergence, namely_
\[\frac{\partial\rho_{t}(\theta)}{\partial t}=\nabla_{\theta}\cdot\Big{(}\rho_ {t}P(\theta,\rho_{t})(\nabla_{\theta}\log\rho_{t}-\nabla_{\theta}\log\rho_{ \mathrm{post}})\Big{)}, \tag{3.37}\]
_is affine invariant._
The proof of this proposition is provided in Appendix B.5. Henceforth we refer to \(M^{\mathrm{AIW}}\) satisfying the condition of the preceding proposition as an affine invariant Wasserstein metric tensor.
#### 3.3.4 Mean-Field Dynamics
As discussed in relation to the topic of affine invariance in Subsection 3.1.4, mean-field models with a given law are not unique. In the specific context of the Wasserstein gradient flow this suggests looking beyond (3.33) for a mean-field model with governing law given by (3.32). This can be achieved as follows [124]. Fix arbitrary \(h:\mathbb{R}^{N_{\theta}}\times\mathcal{P}\to\mathbb{R}^{N_{\theta}\times N_{ \theta}}\), define \(D(\theta,\rho)=\frac{1}{2}h(\theta,\rho)h(\theta,\rho)^{T}\) and choose \(d(\theta,\rho)=\nabla_{\theta}\cdot D(\theta,\rho)\). Then, for any \(h\), consider the SDE
\[\mathrm{d}\theta_{t}=\Big{(}\nabla_{\theta}\log\rho_{\mathrm{post}}(\theta_{t })+\big{(}D(\theta_{t},\rho_{t})-I\big{)}\nabla_{\theta}\log\rho_{t}(\theta_{ t})-d(\theta_{t},\rho_{t})\Big{)}\mathrm{d}t+h(\theta_{t},\rho_{t})\mathrm{d}W_{t}. \tag{3.38}\]
When \(h=\sqrt{2}I\) we recover (3.33). When this condition does not hold, so that \(D(\theta,\rho_{t})\neq I\), the equation requires knowledge of the score function \(\nabla_{\theta}\log\rho_{t}(\theta_{t})\); particle methods to approximate (3.38) will then require estimates of the score, and various approaches have been adopted in the literature [97, 133, 120, 17]. See also [121] and references therein for discussion of score estimation. Notably, by choosing \(h=0\) in (3.38), one can obtain a deterministic particle system, which may be preferred in practical implementations. Alternatively, in [62], interpolation between the Wasserstein metric and the Stein metric was studied to derive deterministic particle approximations of the Wasserstein gradient flow.
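As a concrete, heavily hedged illustration of the deterministic \(h=0\) case, the sketch below evolves particles by the drift \(\nabla_{\theta}\log\rho_{\mathrm{post}}-\nabla_{\theta}\log\rho_{t}\), with the score \(\nabla_{\theta}\log\rho_{t}\) replaced by that of a Gaussian kernel density estimate over the particles; this is just one of the score-estimation strategies cited above, and the bandwidth is an ad hoc choice that leaves a visible bias.

```python
import numpy as np

# Deterministic (h = 0) particle system for (3.38): the score of rho_t is
# approximated by the score of a Gaussian kernel density estimate.
rng = np.random.default_rng(3)
grad_log_post = lambda x: -(x - 1.0)           # target N(1, 1)
bw = 0.3                                        # KDE bandwidth (crude choice)

def kde_score(x):
    diff = x[None, :] - x[:, None]              # entry (i, j) is x_j - x_i
    K = np.exp(-0.5 * (diff / bw) ** 2)
    return (K * diff).sum(axis=1) / (K.sum(axis=1) * bw ** 2)

x = rng.normal(-3.0, 0.5, size=500)             # particles far from equilibrium
dt = 0.05
for _ in range(2_000):
    x += dt * (grad_log_post(x) - kde_score(x))
print(x.mean(), x.var())                        # roughly 1 and 1, up to KDE bias
```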
We now apply similar considerations to the preconditioned Wasserstein gradient flow (3.37). Employing the same choices of \(d\) and \(D\) from \(h\) as in the unpreconditioned case we obtain the following mean-field evolution equation:
\[\mathrm{d}\theta_{t}=P(\theta_{t},\rho_{t})\nabla_{\theta}\log \rho_{\mathrm{post}}(\theta_{t})\mathrm{d}t \tag{3.39}\] \[\qquad\qquad\qquad\qquad+\Big{(}\big{(}D(\theta_{t},\rho_{t})-P( \theta_{t},\rho_{t})\big{)}\nabla_{\theta}\log\rho_{t}(\theta_{t})-d(\theta_{ t},\rho_{t})\Big{)}\mathrm{d}t\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+h(\theta_{t}, \rho_{t})\mathrm{d}W_{t}.\]
For this specific mean-field equation (3.39), we can also establish affine invariance; see the following proposition and its proof in Appendix B.6.
**Proposition 3.19**: _The mean-field equation (3.39) is affine invariant under the assumption on the preconditioner \(P\) given in Proposition 3.14, leading to (3.24), and the assumptions on \(h\) given in (3.10b)._
In particular, let \(C(\rho)\) denote the covariance matrix of \(\rho\). If we take \(P(\theta,\rho)=C(\rho)\) then we recover the affine invariant Kalman-Wasserstein metric introduced in [54, 55]. Furthermore, making the choice \(h(\theta,\rho)=\sqrt{2C(\rho)}\) leads to the following affine invariant overdamped Langevin equation, also introduced in [54, 55]:
\[\mathrm{d}\theta_{t}=C(\rho_{t})\nabla_{\theta}\log\rho_{\mathrm{post}}( \theta_{t})\mathrm{d}t+\sqrt{2C(\rho_{t})}\mathrm{d}W_{t}. \tag{3.40}\]
Comparison with (3.33) demonstrates that it is a preconditioned version of the standard overdamped Langevin equation.
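A natural particle approximation of (3.40) replaces \(C(\rho_{t})\) by the ensemble covariance. The sketch below does this for an anisotropic Gaussian target; it is illustrative only, and finite-ensemble correction terms, discussed in [55], are omitted.

```python
import numpy as np

# Interacting-particle discretization of the affine invariant Langevin
# equation (3.40), with C(rho_t) replaced by the ensemble covariance.
rng = np.random.default_rng(4)
A = np.diag([1.0, 25.0])                   # precision of the Gaussian target
grad_log_post = lambda th: -th @ A         # row-wise gradient of log N(0, A^{-1})

theta = rng.standard_normal((2_000, 2))    # particle ensemble
dt = 0.01
for _ in range(5_000):
    C = np.cov(theta.T)                    # ensemble covariance (2 x 2)
    L = np.linalg.cholesky(2.0 * dt * C)   # noise factor: L @ L.T = 2*dt*C
    theta += dt * grad_log_post(theta) @ C + rng.standard_normal(theta.shape) @ L.T
print(np.cov(theta.T))                     # approximately A^{-1} = diag(1, 0.04)
```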
### Stein Gradient Flow
#### 3.4.1 Metric
Generalizing (3.27) we let \(\psi_{\sigma}\) solve the integro-partial differential equation
\[-\nabla_{\theta}\cdot\Big{(}\rho(\theta)\int\kappa(\theta,\theta^{\prime}, \rho)\rho(\theta^{\prime})\nabla_{\theta^{\prime}}\psi_{\sigma}(\theta^{ \prime})\mathrm{d}\theta^{\prime}\Big{)}=\sigma(\theta). \tag{3.41}\]
As before, a function space setting must be specified to ensure that this equation is uniquely solvable. Now define the Stein metric tensor \(M^{\mathrm{S}}(\rho)\), and its inverse, as follows:
\[M^{\mathrm{S}}(\rho)\sigma=\psi_{\sigma},\quad\forall\ \sigma \in T_{\rho}\mathcal{P}, \tag{3.42a}\] \[M^{\mathrm{S}}(\rho)^{-1}\psi=-\nabla_{\theta}\cdot\Big{(}\rho (\theta)\int\kappa(\theta,\theta^{\prime},\rho)\rho(\theta^{\prime})\nabla_{ \theta^{\prime}}\psi(\theta^{\prime})\mathrm{d}\theta^{\prime}\Big{)},\quad \forall\ \psi\in T_{\rho}^{*}\mathcal{P}. \tag{3.42b}\]
Computations analogous to those shown in (3.29) show that the Stein Riemannian metric implied by metric tensor \(M^{\mathrm{S}}\) is given by
\[g^{\mathrm{S}}_{\rho}(\sigma_{1},\sigma_{2}) =\langle\sigma_{1},M^{\mathrm{S}}(\rho)\sigma_{2}\rangle \tag{3.43a}\] \[=\int\int\kappa(\theta,\theta^{\prime},\rho)\rho(\theta)\nabla_{ \theta}\psi_{\sigma_{1}}(\theta)^{T}\nabla_{\theta^{\prime}}\psi_{\sigma_{2}} (\theta^{\prime})\rho(\theta^{\prime})\mathrm{d}\theta\mathrm{d}\theta^{ \prime}. \tag{3.43b}\]
_Remark 3.20_.: As in the Wasserstein setting, the Stein Riemannian metric [89] also has a transport interpretation. The Stein metric identifies, for each \(\sigma\in T_{\rho}\mathcal{P}\), the set of velocity fields \(v\) satisfying the constraint \(\sigma=-\nabla_{\theta}\cdot(\rho v)\). Then \(v_{\sigma}=\arg\min_{v}\|v\|_{\mathcal{H}_{\kappa}}^{2}\), with minimization over all \(v\) satisfying the constraint, and where \(\mathcal{H}_{\kappa}\) is a Reproducing Kernel Hilbert Space (RKHS) with kernel \(\kappa\). A formal Lagrange multiplier argument shows that
\[v_{\sigma}=\int\kappa(\theta,\theta^{\prime},\rho)\rho(\theta^{\prime})\nabla_{ \theta^{\prime}}\psi_{\sigma}(\theta^{\prime})\mathrm{d}\theta^{\prime}\]
for some \(\psi_{\sigma}.\) The Stein metric measures this transport change via the RKHS norm \(\|v_{\sigma}\|_{\mathcal{H}_{\kappa}}^{2},\) leading to the interpretation that the Stein Riemannian metric can be written in the form
\[g_{\rho}^{\mathrm{S}}(\sigma_{1},\sigma_{2})=\langle v_{\sigma_{1}},v_{\sigma _{2}}\rangle_{\mathcal{H}_{\kappa}}.\]
\(\Diamond\)
Analogously to (3.30), for any \(\rho_{A},\rho_{B}\in\mathcal{P}\), we may write a path connecting these end points, defined by
\[\frac{\partial\rho_{t}}{\partial t}+\nabla_{\theta}\cdot\Big{(}\rho_{t}\int \kappa(\theta,\theta^{\prime},\rho_{t})\rho_{t}(\theta^{\prime})\nabla_{ \theta^{\prime}}\psi_{t}(\theta^{\prime})\mathrm{d}\theta^{\prime}\Big{)}=0, \quad\rho_{0}=\rho_{A},\ \rho_{1}=\rho_{B}. \tag{3.44}\]
The corresponding geodesic distance \(\mathcal{D}^{\mathrm{S}}:\mathcal{P}\times\mathcal{P}\rightarrow\mathbb{R}^{+}\) is
\[\mathcal{D}^{\mathrm{S}}(\rho_{A},\rho_{B})^{2} =\inf_{\rho_{t}}\Bigl{\{}\int_{0}^{1}g_{\rho_{t}}^{\mathrm{S}}( \partial_{t}\rho_{t},\partial_{t}\rho_{t})\mathrm{d}t:\,\rho_{0}=\rho_{A},\rho _{1}=\rho_{B}\Bigr{\}}\] \[=\inf_{\psi_{t}\in\mathsf{L}}\Bigl{\{}\int_{0}^{1}\mathrm{d}t\int \int\kappa(\theta,\theta^{\prime},\rho_{t})\rho_{t}(\theta)\nabla_{\theta} \psi_{t}(\theta)\cdot\nabla_{\theta^{\prime}}\psi_{t}(\theta^{\prime})\rho_{t} (\theta^{\prime})\mathrm{d}\theta\mathrm{d}\theta^{\prime}\Bigr{\}}, \tag{3.45}\]
where \(\mathsf{L}\) is the set of time-dependent potentials \(\psi_{t}\) such that equation (3.44) holds.
#### 3.4.2 Flow Equation
The Stein variational gradient flow is
\[\frac{\partial\rho_{t}(\theta)}{\partial t} =-\Bigl{(}M^{\mathrm{S}}(\rho_{t})^{-1}\frac{\delta\mathcal{E}}{ \delta\rho}\Big{|}_{\rho=\rho_{t}}\Bigr{)}(\theta)\] \[=\nabla_{\theta}\cdot\Big{(}\rho_{t}(\theta)\int\kappa(\theta, \theta^{\prime},\rho_{t})\rho_{t}(\theta^{\prime})\nabla_{\theta^{\prime}} \bigl{(}\log\rho_{t}(\theta^{\prime})-\log\rho_{\mathrm{post}}(\theta^{\prime}) \bigr{)}\mathrm{d}\theta^{\prime}\Big{)}. \tag{3.46}\]
#### 3.4.3 Affine Invariance
The Stein metric (3.42) is not affine invariant. To address this, in this subsection we introduce an affine invariant modification. The generalization is similar to that undertaken to obtain an affine invariant version of the Wasserstein metric and so we will make the presentation brief. We define
\[M^{\mathrm{AIS}}(\rho):T_{\rho}\mathcal{P}\to T_{\rho}^{*}\mathcal{P},\]
so that for any \(\psi\in T_{\rho}^{*}\mathcal{P},\) it holds that
\[M^{\mathrm{AIS}}(\rho)^{-1}\psi=-\nabla_{\theta}\cdot\Big{(}\rho(\theta)\int \kappa(\theta,\theta^{\prime},\rho)\rho(\theta^{\prime})P(\theta,\theta^{ \prime},\rho)\nabla_{\theta^{\prime}}\psi(\theta^{\prime})\mathrm{d}\theta^{ \prime}\Big{)}. \tag{3.47}\]
Here \(\kappa:\mathbb{R}^{N_{\theta}}\times\mathbb{R}^{N_{\theta}}\times\mathcal{P}\rightarrow\mathbb{R}\) is a positive definite kernel and we factorize the preconditioner \(P:\mathbb{R}^{N_{\theta}}\times\mathbb{R}^{N_{\theta}}\times\mathcal{P} \rightarrow\mathbb{R}^{N_{\theta}\times N_{\theta}}\), writing it in the form \(P(\theta,\theta^{\prime},\rho)=L(\theta,\rho)L(\theta^{\prime},\rho)^{T}\). With this in hand it follows that
\[\langle\psi,M^{\mathrm{AIS}}(\rho)^{-1}\psi\rangle\] \[= \int\int\kappa(\theta,\theta^{\prime},\rho)\rho(\theta)\left(L( \theta,\rho)^{T}\nabla_{\theta}\psi(\theta)\right)^{T}\left(L(\theta^{\prime}, \rho)^{T}\nabla_{\theta^{\prime}}\psi(\theta^{\prime})\right)\rho(\theta^{ \prime})\mathrm{d}\theta\mathrm{d}\theta^{\prime}\geq 0\]
and the resulting metric is well-defined. We have the following proposition to guarantee this metric tensor is affine invariant:
**Proposition 3.21**: _Consider the invertible affine transformation \(\tilde{\theta}=\varphi(\theta)=A\theta+b\) and correspondingly \(\tilde{\rho}=\varphi\#\rho\); moreover \(\tilde{\theta}^{\prime}=\varphi(\theta^{\prime})\). Assume that the preconditioning matrix satisfies_
\[\kappa(\tilde{\theta},\tilde{\theta}^{\prime},\tilde{\rho})P(\tilde{\theta}, \tilde{\theta}^{\prime},\tilde{\rho})=\kappa(\theta,\theta^{\prime},\rho)AP( \theta,\theta^{\prime},\rho)A^{T}.\]
_Then the metric corresponding to \(M^{\mathrm{AIS}}\) is affine invariant. Consequently, the associated gradient flow of the KL divergence, namely_
\[\frac{\partial\rho_{t}(\theta)}{\partial t}=\nabla_{\theta}\cdot\Big{(}\rho_{t}(\theta)\int\kappa(\theta,\theta^{\prime},\rho_{t})\rho_{t}(\theta^{\prime})P(\theta,\theta^{\prime},\rho_{t})\nabla_{\theta^{\prime}}\big{(}\log\rho_{t}(\theta^{\prime})-\log\rho_{\mathrm{post}}(\theta^{\prime})\big{)}\mathrm{d}\theta^{\prime}\Big{)} \tag{3.48}\]
_is affine invariant._
The proof of this proposition is in Appendix B.7. Henceforth we refer to \(M^{\mathrm{AIS}}\) satisfying the condition of the preceding proposition as an affine invariant Stein metric tensor. As an example, we can obtain an affine invariant Stein metric by making the choices \(P=C(\rho)\) and \(\kappa(\theta,\theta^{\prime},\rho)\propto\exp\!\left\{-\frac{1}{2}(\theta- \theta^{\prime})^{T}C(\rho)^{-1}(\theta-\theta^{\prime})\right\}\); this set-up is considered in our numerical experiments; see Section 5.
#### 3.4.4 Mean-Field Dynamics
The Stein gradient flow (3.46) has the following as a mean-field counterpart [90, 89] in \(\theta_{t}\) with the same law \(\rho_{t}\):
\[\frac{\mathrm{d}\theta_{t}}{\mathrm{d}t} =\int\kappa(\theta_{t},\theta^{\prime},\rho_{t})\rho_{t}(\theta^{ \prime})\nabla_{\theta^{\prime}}\big{(}\log\rho_{\mathrm{post}}(\theta^{\prime })-\log\rho_{t}(\theta^{\prime})\big{)}\mathrm{d}\theta^{\prime} \tag{3.49}\] \[=\int\kappa(\theta_{t},\theta^{\prime},\rho_{t})\rho_{t}(\theta^{ \prime})\nabla_{\theta^{\prime}}\log\rho_{\mathrm{post}}(\theta^{\prime})+ \rho_{t}(\theta^{\prime})\nabla_{\theta^{\prime}}\kappa(\theta_{t},\theta^{ \prime},\rho_{t})\mathrm{d}\theta^{\prime}.\]
Here, the second equality is obtained using integration by parts; it facilitates an expression which avoids the score (gradient of log density function of \(\rho_{t}\)). This is useful because, when implementing particle methods, the resulting integral can then be approximated directly by Monte Carlo methods.
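Discretizing (3.49) over particles, with the integral replaced by an ensemble average and a fixed radial basis function kernel, yields the familiar Stein variational gradient descent update. The sketch below is illustrative only; the kernel, bandwidth, step size, and one-dimensional target are arbitrary choices, and the kernel here does not depend on \(\rho_{t}\).

```python
import numpy as np

# Particle discretization of the score-free form of (3.49) (Stein
# variational gradient descent) with a fixed RBF kernel.
rng = np.random.default_rng(5)
grad_log_post = lambda x: -(x - 2.0)        # target N(2, 1)
bw = 0.5                                     # RBF bandwidth (ad hoc)

x = rng.normal(-2.0, 0.5, size=300)          # particles, initially off-target
dt = 0.1
for _ in range(1_000):
    diff = x[:, None] - x[None, :]           # entry (i, j) is x_i - x_j
    K = np.exp(-0.5 * (diff / bw) ** 2)      # kappa(x_i, x_j)
    grad_K = diff / bw ** 2 * K              # derivative of kappa in its second slot
    phi = (K * grad_log_post(x)[None, :] + grad_K).mean(axis=1)
    x += dt * phi                            # particle update
print(x.mean(), x.var())                     # mean near 2; variance approximate
```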
Similarly, for the preconditioned Stein gradient flow (3.48), we can construct the following mean-field equation:
\[\frac{\mathrm{d}\theta_{t}}{\mathrm{d}t} =\int\!\Big{(}\kappa(\theta_{t},\theta^{\prime},\rho_{t})\rho_{t} (\theta^{\prime})P(\theta_{t},\theta^{\prime},\rho_{t})\nabla_{\theta^{\prime }}\log\rho_{\mathrm{post}}(\theta^{\prime}) \tag{3.50}\] \[\qquad\qquad\qquad\qquad+\nabla_{\theta^{\prime}}\cdot(\kappa( \theta_{t},\theta^{\prime},\rho_{t})P(\theta_{t},\theta^{\prime},\rho_{t})) \rho_{t}(\theta^{\prime})\Big{)}\mathrm{d}\theta^{\prime}.\]
The mean-field equation (3.50) is affine invariant; see the following proposition and its proof in Appendix B.8.
**Proposition 3.22**: _The mean-field equation (3.50) is affine invariant under the assumption on the preconditioner in Proposition 3.21._
### Large-Time Asymptotic Convergence
In the three preceding subsections we studied gradient flows, under various different metrics, of the energy \(\mathcal{E}\) given in (2.2). In this subsection, we study the convergence of these gradient flows, both surveying known results and adding new ones. In short, the convergence of the Fisher-Rao gradient flow occurs at rate \(\mathcal{O}(\exp(-t))\) and is hence problem independent; this reflects the invariance of the metric under any diffeomorphism. In contrast, the proven results for Wasserstein and Stein gradient flows have convergence rates that depend on the problem, even after being modified to be affine invariant. We note, however, that when \(\rho_{\mathrm{post}}\) is Gaussian, the affine invariant Wasserstein gradient flows also achieve \(\mathcal{O}(\exp(-t))\) [54, 55]. Numerical results illustrating and complementing the analysis in this section may be found in Section 5.
#### 3.5.1 Fisher-Rao Gradient Flow
We have the following proposition concerning large-time convergence of the gradient flow:
**Proposition 3.23**: _Let \(\rho_{t}\) solve the Fisher-Rao gradient flow (3.15). Assume also that there exist constants \(K,B>0\) such that the initial density \(\rho_{0}\) satisfies_
\[e^{-K(1+|\theta|^{2})}\leq\frac{\rho_{0}(\theta)}{\rho_{\mathrm{post}}(\theta )}\leq e^{K(1+|\theta|^{2})}, \tag{3.51}\]
_and both \(\rho_{0},\rho_{\mathrm{post}}\) have bounded second moment_
\[\int|\theta|^{2}\rho_{0}(\theta)\mathrm{d}\theta\leq B,\quad\int|\theta|^{2} \rho_{\mathrm{post}}(\theta)\mathrm{d}\theta\leq B. \tag{3.52}\]
_Then, for any \(t\geq\log\bigl{(}(1+B)K\bigr{)}\),_
\[\mathrm{KL}[\rho_{t}\|\rho_{\mathrm{post}}]\leq(2+B+eB)Ke^{-t}. \tag{3.53}\]
It is notable that the exponential convergence rate is _independent_ of the properties of the target distribution \(\rho_{\mathrm{post}}\); this reflects invariance of the flow under any diffeomorphism. The proof of this proposition is in Appendix C.1. Similar propositions are in [94, Theorem 3.3] and [95, Theorem 2.3]; our results relax the assumptions required on the initial condition.
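The problem-independent rate is easy to observe in a discrete-state analogue of (3.15), where the flow becomes an ODE on the probability simplex. The sketch below (forward Euler with a small step; the random 50-state target is an arbitrary choice) shows the KL divergence decaying roughly like \(e^{-t}\), as predicted by Proposition 3.23.

```python
import numpy as np

# Discrete-state analogue of the Fisher-Rao flow (3.15) on the simplex.
rng = np.random.default_rng(6)
post = rng.random(50); post /= post.sum()   # arbitrary target distribution
rho = rng.random(50); rho /= rho.sum()      # arbitrary initial distribution

dt = 1e-3
for step in range(1, 10_001):
    r = np.log(post) - np.log(rho)
    rho += dt * rho * (r - np.dot(rho, r))  # Euler step of the flow
    rho /= rho.sum()                        # guard against Euler mass drift
    if step % 2_000 == 0:
        print(step * dt, np.dot(rho, np.log(rho / post)))  # KL ~ exp(-t)
```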
#### 3.5.2 Wasserstein Gradient Flow
The convergence of the Wasserstein gradient flow (3.32) is widely studied [129]. A variety of different conditions on \(\rho_{\mathrm{post}}\) lead to exponential convergence of the Wasserstein gradient flow to \(\rho_{\mathrm{post}}\) with convergence rate \(e^{-2\alpha t}\) [8]. They include that \(\rho_{\mathrm{post}}\) is \(\alpha\)-strongly logconcave (Definition 3.24) [7], or that \(\rho_{\mathrm{post}}\) satisfies the log-Sobolev inequality [59] or Poincaré inequality [109] with constant \(1/\alpha\). We have the following proposition concerning the convergence of the affine-invariant Wasserstein gradient flow (3.37):
**Definition 3.24**: _The distribution \(\rho_{\mathrm{post}}(\theta)\) is called \(\alpha\)-strongly logconcave, if_
\[-\nabla_{\theta}\nabla_{\theta}\log\rho_{\mathrm{post}}(\theta)\succeq\alpha I. \tag{3.54}\]
**Proposition 3.25**: _Assume \(\rho_{\mathrm{post}}(\theta)\) is \(\alpha\)-strongly logconcave and there exists \(\lambda>0\) such that \(P(\theta,\rho)\succeq\lambda I\) along the flow. Then the solution \(\rho_{t}\) of the affine-invariant Wasserstein gradient flow (3.37) satisfies_
\[\frac{1}{2}\|\rho_{t}-\rho_{\mathrm{post}}\|_{1}^{2}\leq\mathrm{KL}[\rho_{0}\| \rho_{\mathrm{post}}]e^{-2\alpha\lambda t},\]
_where \(\|\cdot\|_{1}\) denotes the \(L_{1}\) norm._
The proof of the proposition is in Appendix C.2. It is a generalization of [54, Proposition 3.1] which concerns the specific preconditioner \(P_{t}:=P(\theta_{t},\rho_{t})\) chosen to equal \(C_{t}\), the covariance at time \(t\). A key point to appreciate is that, in contrast to the exponential rates reported for Fisher-Rao gradient descent, the exponential rates reported here depend on the problem. When \(\rho_{\mathrm{post}}\) is Gaussian, the affine invariant Wasserstein gradient flows, however, provably achieves convergence rate \(\mathcal{O}(\exp(-t))\)[54, 55]; it would be of interest to identify classes of non-Gaussian problems where this rate is also achievable for the affine invariant Wasserstein gradient flow.
The result shows that a key determining factor in the exponential rates is lower-bounding the preconditioner. Regarding the choice of the covariance as preconditioner we have the following lower bounds:
**Lemma 3.26**: _Let \(\rho_{t}\) solve the Wasserstein gradient flow (3.37) with preconditioner \(P_{t}=C_{t}\). Assume that there exist \(\lambda_{0,\min},\beta>0\) such that \(C_{0}\succeq\lambda_{0,\min}I\) and \(-\nabla_{\theta}\nabla_{\theta}\log\rho_{\mathrm{post}}\preceq\beta I\). Then we have:_
* _if_ \(N_{\theta}=1\)_, then_ \(C_{t}\geq\min\{\frac{1}{\beta},\lambda_{0,\min}\}\)_;_
* _if_ \(N_{\theta}>1\) _and_ \(\operatorname{tr}C_{t}\leq K\)_, then_ \(C_{t}\succeq\min\{\frac{1}{\beta^{2}K},\lambda_{0,\min}\}I\)_._
The proof of this lemma is in Appendix C.3. A similar result, providing a non-degeneracy guarantee for the empirical covariance in particle approximations, can be found in [55, Proposition 4.4].
#### 3.5.3 Stein Gradient Flow
For the Stein gradient flow (3.46), the solution \(\rho_{t}\) converges weakly to \(\rho_{\mathrm{post}}\) as \(t\to\infty\) under certain assumptions [93, Theorem 2.8], [78, Proposition 2]; the exponential rates are problem-dependent, similar to those for Wasserstein gradient flows in the preceding subsection, and in contrast to those for the Fisher-Rao gradient flow, which give a universal rate across wide problem classes. Quantitative rates, and functional inequalities necessary for exponential convergence near the equilibrium in terms of the decay of the KL divergence, are discussed in [45]. However, the speed of convergence for initial distributions far from equilibrium remains an open and challenging problem.
## 4 Gaussian Approximate Gradient Flow
In this section, we revisit the gradient flows of the energy (2.2) under the Fisher-Rao, Wasserstein and Stein metrics. We confine variations to the manifold of Gaussian densities \(\mathcal{P}^{G}\) defined in (2.6), in contrast to the previous Section 3, in which we consider variations in the whole of \(\mathcal{P}\) defined in (2.1). The corresponding Gaussian approximate gradient flows underpin Gaussian variational inference, which aims to identify the minimizers of (1.2). We first introduce the basics of metrics and gradient flow in the Gaussian density space, identify the ways that Gaussian approximations can be made and develop the concept of affine invariance for them in Subsection 4.1. Then we introduce the Gaussian approximate Fisher-Rao gradient flow in Subsection 4.2, Gaussian approximate Wasserstein gradient flow in Subsection 4.3 and Gaussian approximate Stein gradient flow in Subsection 4.4; in all cases we also discuss affine invariance and introduce affine invariant modifications where appropriate. We find that different affine invariant metrics lead to very similar gradient flows; and in particular to flows with very similar large time behaviour. We discuss the large time convergence properties of these Gaussian approximate gradient flows in Subsection 4.5.
### Basics of Gaussian Approximate Gradient Flows
In this subsection, we introduce gradient flows in the Gaussian density space; we follow the structure of Subsection 3.1. We study the problem from the perspective of the metric in Subsection 4.1.1, the perspective of the flow equations in Subsection 4.1.2, the perspective of affine invariance in Subsection 4.1.3, and the perspective of mean-field equations in Subsection 4.1.4. For Gaussian evolutions the mean-field models are evolution equations for the state, defined by an affine (in the state) tangent vector field; the affine map is defined by mean-field expectations with respect to the Gaussian with the mean and covariance of the state.
#### 4.1.1 Metric
Recall the manifold of Gaussian densities in (2.6), which has dimension \(N_{a}\). We assume we are given a metric \(g_{\rho}\) and metric tensor \(M(\rho)\), depending on \(\rho\in\mathcal{P}\), and we now wish to find corresponding objects defined for parametric variations within the family of Gaussian densities \(\mathcal{P}^{G}\).\({}^{4}\) To this end we introduce \(\rho_{a}\), with \(a\in\mathbb{R}^{N_{a}}\), denoting the parametric family. We aim to find a reduced metric \(\mathfrak{g}_{a}\) and metric tensor \(\mathfrak{M}(a)\) in the parameter space \(\mathbb{R}^{N_{a}}\) rather than in \(\mathcal{P}\).
Footnote 4: In fact our development is readily generalized to the determination of the corresponding objects for any parametrically dependent manifold of densities, not just Gaussians.
Noting that
\[\lim_{\epsilon\to 0}\frac{\rho_{a+\epsilon\sigma}-\rho_{a}}{\epsilon}=\nabla_{a} \rho_{a}\cdot\sigma, \tag{4.1}\]
we see that any element in the tangent space \(T_{\rho_{a}}\mathcal{P}^{G}\) can be identified with a vector \(\sigma\in\mathbb{R}^{N_{a}}\). We denote the Riemannian metric restricted to \(\mathcal{P}^{G}\) at \(\rho_{a}\) by \(\mathfrak{g}_{a}\). Then
\[\mathfrak{g}_{a}(\sigma_{1},\sigma_{2}):=g_{\rho_{a}}(\nabla_{a}\rho_{a} \cdot\sigma_{1},\nabla_{a}\rho_{a}\cdot\sigma_{2})=\langle\mathfrak{M}(a) \sigma_{1},\sigma_{2}\rangle_{\mathbb{R}^{N_{a}}}, \tag{4.2}\]
where \(\sigma_{1},\sigma_{2}\in T_{\rho_{a}}(\mathcal{P}^{G})\), and the induced metric tensor is given by
\[\mathfrak{M}(a):=\int\nabla_{a}\rho_{a}(\theta)\big{(}M(\rho_{a})\nabla_{a} \rho_{a}^{T}\big{)}(\theta)\mathrm{d}\theta. \tag{4.3}\]
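When \(g\) is the Fisher-Rao metric, the induced tensor (4.3) reduces to the classical Fisher information matrix of the parametric family. The sketch below checks this by finite differences for a one-dimensional Gaussian with \(a=(m,c)\), \(c\) the variance, for which the matrix is \(\mathrm{diag}(1/c,\,1/(2c^{2}))\); it uses the standard second-order expansion \(\mathrm{KL}[\rho_{a}\|\rho_{a+\epsilon}]\approx\frac{1}{2}\epsilon^{T}\mathfrak{M}(a)\epsilon\). The numerical values are arbitrary.

```python
import numpy as np

# Finite-difference check that the Fisher-Rao induced tensor (4.3) is the
# Fisher information matrix, for a 1D Gaussian with a = (m, c), c = variance.
def kl_gauss(m1, c1, m2, c2):
    return 0.5 * (np.log(c2 / c1) + c1 / c2 + (m1 - m2) ** 2 / c2 - 1.0)

m, c, eps = 0.7, 2.0, 1e-4
print(2 * kl_gauss(m, c, m + eps, c) / eps ** 2, 1 / c)             # mean block
print(2 * kl_gauss(m, c, m, c + eps) / eps ** 2, 1 / (2 * c ** 2))  # variance block
```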
#### 4.1.2 Flow Equation
Given (4.3) it is intuitive that the gradient flow in the parameter space implied by the gradient flow in the manifold of Gaussians is given by
\[\frac{\partial a_{t}}{\partial t}=-\mathfrak{M}(a_{t})^{-1}\left.\frac{ \partial\mathcal{E}(\rho_{a})}{\partial a}\right|_{a=a_{t}}. \tag{4.4}\]
We refer to (4.4) as the Gaussian approximate gradient flow; it is formulated as an evolution equation in the parameter space. It is also possible to write an evolution equation for \(\rho_{a_{t}}\) in the space of Gaussian probability densities \(\mathcal{P}^{G}\).
Our goal now is to show that (4.4) may be derived from any one of the following proximal, Riemannian, and moment closure perspectives. In particular, these perspectives justify viewing the gradient flow in the space of Gaussian densities \(\mathcal{P}^{G}\) as a _Gaussian approximation_ of the gradient flow on the whole probability space. Such an approximation can be interpreted either by constraining the minimization underlying the proximal perspective, by the projection of the flow field based on the Riemannian metric, or through a moment closure reduction of probability densities. The latter moment closure approach is particularly expedient for determining the form of equation (4.4).
_Proximal Perspective._ Given the metric \(g\) and the corresponding distance function \(\mathcal{D}\), the proximal point method (3.5) can be restricted to the space of Gaussian densities, leading to the iteration
\[\mathfrak{r}_{n+1}=\operatorname*{arg\,min}_{\rho\in\mathcal{P}^{G}}\Bigl{(} \mathcal{E}(\rho)+\frac{1}{2\Delta t}\mathcal{D}(\rho,\mathfrak{r}_{n})^{2} \Bigr{)} \tag{4.5}\]
to minimize the energy functional \(\mathcal{E}\) in Gaussian density function space \(\mathcal{P}^{G}\). Since elements in \(\mathcal{P}^{G}\) are uniquely defined via a point \(a\in\mathbb{R}^{N_{a}}\), the map \(\mathfrak{r}_{n}\mapsto\mathfrak{r}_{n+1}\) implicitly defines a map \(a_{n}\mapsto a_{n+1}.\) Thus we write \(\mathfrak{r}_{n}=\rho_{a_{n}}\) and determine the update equation for \(a_{n}\). When \(\Delta t\) is small it is natural to seek \(a_{n+1}=a_{n}+\Delta t\sigma_{n}\) and note that, invoking the approximations implied by (4.1) and (3.3),
\[\sigma_{n} \approx\operatorname*{arg\,min}_{\sigma\in\mathbb{R}^{N_{a}}} \Bigl{(}\mathcal{E}(\rho_{a_{n}}+\Delta t\nabla_{a}\rho_{a_{n}}\cdot\sigma)+ \frac{1}{2}\Delta t\langle\nabla_{a}\rho_{a_{n}}\cdot\sigma,M(\rho_{a_{n}}) \nabla_{a}\rho_{a_{n}}\cdot\sigma\rangle\Bigr{)},\] \[\approx\operatorname*{arg\,min}_{\sigma\in\mathbb{R}^{N_{a}}} \Bigl{(}\mathcal{E}(\rho_{a_{n}}+\Delta t\nabla_{a}\rho_{a_{n}}\cdot\sigma)+ \frac{1}{2}\Delta t\langle\sigma,\mathfrak{M}(a_{n})\sigma\rangle_{\mathbb{R }^{N_{a}}}\Bigr{)}.\]
To leading order in \(\Delta t\), this expression is minimized by choosing
\[\sigma_{n}=-\mathfrak{M}(a_{n})^{-1}\Bigl{\langle}\frac{\delta\mathcal{E}}{ \delta\rho}\Bigr{|}_{\rho=\rho_{a_{n}}},\nabla_{a}\rho_{a_{n}}\Bigr{\rangle}=- \mathfrak{M}(a_{n})^{-1}\frac{\partial\mathcal{E}(\rho_{a})}{\partial a} \Bigr{|}_{a=a_{n}}.\]
Letting \(a_{n}\approx a_{n\Delta t}\) shows that the formal continuous time limit of the proximal algorithm leads to the corresponding gradient flow (4.4).
_Riemannian Perspective._ We start by defining the projection \(P^{G}:T_{\rho}\mathcal{P}\to T_{\rho}\mathcal{P}^{G}\) as follows: for any \(\psi\in T_{\rho}\mathcal{P}\) we define \(P^{G}\) by requiring that
\[g_{\rho}(\psi,\sigma)=g_{\rho}(P^{G}\psi,\sigma),\quad\forall\sigma\in T_{ \rho}\mathcal{P}^{G}. \tag{4.6}\]
Now consider the gradient flow
\[\frac{\partial\rho_{t}}{\partial t} =\sigma_{t}\in T_{\rho_{t}}\mathcal{P}, \tag{4.7a}\] \[\sigma_{t} =-M(\rho_{t})^{-1}\frac{\delta\mathcal{E}}{\delta\rho}\Bigr{|}_{ \rho=\rho_{t}} \tag{4.7b}\]
designed to decrease the functional \(\mathcal{E}\) under the metric \(g\). We note that, by virtue of (2.4), \(\sigma_{t}=\sigma_{t}(\theta,\rho_{t})\).
We may now consider restriction of the gradient flow to variations in the manifold of Gaussian densities, leading to an equation for \(\rho_{a_{t}}\in\mathcal{P}^{G}\subseteq\mathcal{P}\), defined through the corresponding gradient flow
\[\frac{\partial\rho_{a_{t}}}{\partial t}=P^{G}\sigma_{t}\in T_{\rho_{a_{t}}} \mathcal{P}^{G}. \tag{4.8}\]
The proof of the following proposition may be found in Appendix D.2.
**Proposition 4.1**: _The flow of the parameter \(a_{t}\) implied by the evolution equation (4.8) for the density \(\rho_{a_{t}}\) is the Gaussian approximate gradient flow (4.4)._
_Moment Closure Perspective._ For any gradient flow (4.7) designed to decrease the functional \(\mathcal{E}\) under the metric \(g\), we consider the following moment closure approach to obtain a Gaussian approximation. First, we write evolution equations for the mean and covariance under (4.7) noting that they satisfy the following identities:
\[\frac{\mathrm{d}m_{t}}{\mathrm{d}t} =\frac{\mathrm{d}}{\mathrm{d}t}\int\rho_{t}(\theta)\theta\mathrm{ d}\theta=\int\sigma_{t}(\theta,\rho_{t})\theta\mathrm{d}\theta, \tag{4.9}\] \[\frac{\mathrm{d}C_{t}}{\mathrm{d}t} =\frac{\mathrm{d}}{\mathrm{d}t}\int\rho_{t}(\theta)(\theta-m_{t}) (\theta-m_{t})^{T}\mathrm{d}\theta=\int\sigma_{t}(\theta,\rho_{t})(\theta-m_{t })(\theta-m_{t})^{T}\mathrm{d}\theta.\]
This is not, in general, a closed system for the mean and covariance; this is because \(\rho_{t}\) is not, in general, determined by only first and second moments. To close the system, we replace \(\sigma_{t}(\theta,\rho_{t})\) by \(\sigma_{t}(\theta,\rho_{a_{t}})\), where \(\rho_{a_{t}}=\mathcal{N}(m_{t},C_{t})\). We obtain the following closed system for the evolution of \((m_{t},C_{t})\):
\[\frac{\mathrm{d}m_{t}}{\mathrm{d}t} =\int\sigma_{t}(\theta,\rho_{a_{t}})\theta\mathrm{d}\theta, \tag{4.10}\] \[\frac{\mathrm{d}C_{t}}{\mathrm{d}t} =\int\sigma_{t}(\theta,\rho_{a_{t}})(\theta-m_{t})(\theta-m_{t})^ {T}\mathrm{d}\theta.\]
The proof of the following proposition, which shows that this moment closure approach delivers the mean and covariance evolution equation of the Gaussian approximate gradient flow (4.4), may be found in Appendix D.2.
**Proposition 4.2**: _Suppose that the cotangent space in the Gaussian submanifold corresponding to the metric \(g\) is a collection of linear and quadratic functions of \(\theta\):_
\[T^{*}_{\rho_{a}}\mathcal{P}^{G}:= M(\rho_{a})T_{\rho_{a}}\mathcal{P}^{G}=\mathrm{span}\{ \theta_{i},\theta_{i}\theta_{j},1\leq i,j\leq N_{\theta}\}. \tag{4.11}\]
_Here \(M(\rho_{a})\) is the metric tensor and \(T_{\rho_{a}}\mathcal{P}^{G}\) is the tangent space. Then the mean and covariance evolution equations (4.10) are equivalent to the Gaussian approximate gradient flow (4.4)._
Furthermore, Appendix D.2 also contains the proof of the following Lemma 4.3, indicating that several of the metrics considered later in this Section 4 do indeed satisfy the assumption (4.11) sufficient for Proposition 4.2 to hold.
**Lemma 4.3**: _Assumption (4.11) holds for the Fisher-Rao metric, the affine invariant Wasserstein metric with preconditioner \(P\) independent of \(\theta\), and the affine invariant Stein metric with preconditioner \(P\) independent of \(\theta\) and with a bilinear kernel \(\kappa(\theta,\theta^{\prime},\rho)=(\theta-m)^{T}A(\rho)(\theta^{\prime}-m)+b(\rho)\) (\(b\neq 0\), and \(A\) nonsingular). \({}_{\Box}\)_
_Remark 4.4_: The moment closure perspective was used in [118] as a heuristic approach to state estimation in the context of the unscented Kalman filter. A connection between the heuristics and gradient flow on the Bures-Wasserstein space of Gaussian distributions was established in [81]. The latter is equivalent to the Gaussian approximate gradient flow under the Wasserstein metric, also called the Gaussian approximate Wasserstein gradient flow in this paper; see also the discussion in Section 4.3.1. \({}_{\Box}\)
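Since the closed system (4.10) involves only Gaussian expectations, it can be integrated numerically without further structure. Below is a minimal Monte Carlo sketch (all function names are ours), assuming the flow field is supplied as the ratio \(\sigma_{t}(\theta,\rho_{a})/\rho_{a}(\theta)\), so that the integrals in (4.10) become expectations under \(\mathcal{N}(m,C)\); the demo field corresponds to the Fisher-Rao choice derived in Subsection 4.2.2 below.

```python
import numpy as np

rng = np.random.default_rng(0)

def moment_closure_step(m, C, f, dt, n=20000):
    """One Euler step of the closed moment system (4.10).

    f(theta, m, C) must return sigma_t(theta, rho_a) / rho_a(theta), so the
    integrals in (4.10) become plain expectations under N(m, C), which are
    estimated here by Monte Carlo.
    """
    theta = rng.multivariate_normal(m, C, size=n)   # samples from rho_a
    w = f(theta, m, C)                              # shape (n,)
    dm = (w[:, None] * theta).mean(axis=0)
    d = theta - m
    dC = np.einsum("i,ij,ik->jk", w, d, d) / n
    return m + dt * dm, C + dt * dC

# Hypothetical demo: the Fisher-Rao field, for which sigma/rho = g - E[g] with
# g = log rho_post - log rho_a, applied to a 2d Gaussian target N(m_star, C_star).
m_star, C_star = np.array([1.0, -1.0]), np.diag([1.0, 0.25])

def f_fisher_rao(theta, m, C):
    Cs_inv, C_inv = np.linalg.inv(C_star), np.linalg.inv(C)
    ds, d = theta - m_star, theta - m
    g = -0.5 * np.einsum("ij,jk,ik->i", ds, Cs_inv, ds) \
        + 0.5 * np.einsum("ij,jk,ik->i", d, C_inv, d)   # log-ratio up to constants
    return g - g.mean()                                 # constants drop out here

m, C = np.zeros(2), np.eye(2)
for _ in range(200):
    m, C = moment_closure_step(m, C, f_fisher_rao, dt=0.05)
print(m, C)   # close to m_star, C_star up to Monte Carlo error
```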
#### 4.1.3 Affine Invariance.
We now study the affine invariance concept in the setting of Gaussian approximate gradient flows. Let \(\varphi:\theta\to\tilde{\theta}\) denote an invertible affine transformation in \(\mathbb{R}^{N_{\theta}}\), where \(\varphi(\theta)=A\theta+b\) with \(A\in\mathbb{R}^{N_{\theta}\times N_{\theta}}\), \(b\in\mathbb{R}^{N_{\theta}}\), and \(A\) invertible.
We define the push forward operator for various objects.
* For a parametrically-defined density \(\rho_{a}\), we write \(\rho_{\tilde{a}}=\varphi\#\rho_{a}\), so that \(\rho_{\tilde{a}}(\tilde{\theta})=\rho_{a}(\varphi^{-1}(\tilde{\theta}))|A^{-1}|\). Specifically, for Gaussian density space where \(a=(m,C)\), we have an invertible affine transformation in \(\mathbb{R}^{N_{a}}\), such that \(\tilde{a}=A^{G}a+b^{G}\), where \(A^{G}\) and \(b^{G}\) depend only on \(A\) and \(b\) and are defined by the identities \(\tilde{m}=Am+b\) and \(\tilde{C}=ACA^{T}\).
* For a tangent vector \(\sigma\in\mathbb{R}^{N_{a}}\) corresponding to \(\nabla_{a}\rho_{a}(\theta)\cdot\sigma\) in \(T_{\rho_{a}}\mathcal{P}^{G}\), we have \(\tilde{\sigma}=A^{G}\sigma\in\mathbb{R}^{N_{a}}\) corresponding to \(\nabla_{\tilde{a}}\rho_{\tilde{a}}(\tilde{\theta})\cdot\tilde{\sigma}\) in \(T_{\rho_{\tilde{a}}}\mathcal{P}^{G}\), and note that this satisfies \(\nabla_{\tilde{a}}\rho_{\tilde{a}}(\tilde{\theta})\cdot\tilde{\sigma}=\varphi \#(\nabla_{a}\rho_{a}(\theta)\cdot\sigma)=\nabla_{a}\rho_{a}(\varphi^{-1}( \tilde{\theta}))\cdot\sigma|A^{-1}|\).
* For a functional \(\mathcal{E}\) on \(\mathcal{P}^{G}\), we define \(\tilde{\mathcal{E}}=\varphi\#\mathcal{E}\) via \(\tilde{\mathcal{E}}(\rho_{\tilde{a}})=\mathcal{E}(\varphi^{-1}\#\rho_{\tilde{ a}})\).
With the above, we can make precise the definition of affine invariance for the Gaussian approximate gradient flows. The definition is similar to Definition 3.4.
**Definition 4.5** (Affine Invariant Gaussian Approximate Gradient Flow).: _The Gaussian approximate gradient flow (4.4) is called affine invariant if, under any invertible affine transformation \(\tilde{a}_{t}=\varphi(a_{t})\), the dynamics of \(\tilde{a}_{t}\) is itself a gradient flow of \(\tilde{\mathcal{E}}\), in the sense that_
\[\frac{\partial\tilde{a}_{t}}{\partial t}=-\mathfrak{M}(\tilde{a}_{t})^{-1} \frac{\partial\tilde{\mathcal{E}}(\rho_{\tilde{a}})}{\partial\tilde{a}}\Big{|} _{\tilde{a}=\tilde{a}_{t}}. \tag{4.12}\]
Naturally, if the gradient flow in probability space is affine invariant, then the Gaussian approximate flow has the same property; see the following proposition.
**Proposition 4.6**.: _For any affine invariant metric \(g\) defined via Definition 3.5, the Gaussian approximate gradient flow under the corresponding metric \(\mathfrak{g}_{a}\) is affine invariant for any \(\mathcal{E}\)._
We provide a proof for this proposition in Appendix D.3.
#### 4.1.4 Mean-Field Dynamics
The Gaussian approximate gradient flow can also be realized as a mean-field ordinary differential equation. Recall that for Gaussian evolutions the mean-field models are evolution equations for mean and covariance defined via mean-field expectations with respect to the Gaussian with this mean and covariance. We have the following lemma.
**Lemma 4.7**.: _Consider the mean-field equation_
\[\frac{\mathrm{d}\theta_{t}}{\mathrm{d}t}=\mathsf{A}(\rho_{t},\rho_{\mathrm{ post}})(\theta_{t}-m_{t})+\mathsf{b}(\rho_{t},\rho_{\mathrm{post}}), \tag{4.13}\]
_where \(\mathsf{A}:\mathcal{P}\times\mathcal{P}\to\mathbb{R}^{N_{\theta}\times N_{\theta}}\) and \(\mathsf{b}:\mathcal{P}\times\mathcal{P}\to\mathbb{R}^{N_{\theta}}\), \(\rho_{t}\) is the law of \(\theta_{t}\) and \(m_{t}\) is the mean under \(\rho_{t}\). If the law of \(\theta_{0}\) is a Gaussian distribution, then \(\theta_{t}\) solving (4.13) is also Gaussian distributed for any \(t>0\); thus we can write \(\rho_{t}=\rho_{a_{t}}\) where \(a_{t}=(m_{t},C_{t})\) denotes the mean and covariance of the distribution. The evolution of \(m_{t}\) and \(C_{t}\) is given by_
\[\begin{split}\frac{\mathrm{d}m_{t}}{\mathrm{d}t}&= \mathsf{b}(\rho_{a_{t}},\rho_{\mathrm{post}}),\\ \frac{\mathrm{d}C_{t}}{\mathrm{d}t}&=\mathsf{A}(\rho_{ a_{t}},\rho_{\mathrm{post}})C_{t}+C_{t}\mathsf{A}(\rho_{a_{t}},\rho_{\mathrm{post}})^{T}. \end{split} \tag{4.14}\]
We provide a proof of this lemma in Appendix D.4. The lemma allows us to identify the corresponding mean-field dynamics (4.13) of the Gaussian approximate gradient flow (4.12). Furthermore, the evolution equation (4.14) for the mean and covariance is driven by a vector field defined through expectations under the Gaussian with this mean and covariance. We will elaborate on this identification in detail for specific metric tensors in later subsections. Regarding the affine invariance property of the mean-field equation, we have the following proposition:
**Proposition 4.8**: _Suppose the mean and covariance evolution equation (4.14) is affine invariant, in the sense that under the invertible affine transformation \(\tilde{\theta}=\varphi(\theta)=A\theta+b\) and correspondingly \(\tilde{m}_{t}=Am_{t}+b,\tilde{C}_{t}=AC_{t}A^{T},\tilde{\rho}_{a_{t}}=\varphi\#\rho_{a_{t}},\tilde{\rho}_{\rm post}=\varphi\#\rho_{\rm post}\), it holds that_
\[\frac{{\rm d}\tilde{m}_{t}}{{\rm d}t} ={\sf b}(\tilde{\rho}_{a_{t}},\tilde{\rho}_{\rm post}), \tag{4.15}\] \[\frac{{\rm d}\tilde{C}_{t}}{{\rm d}t} ={\sf A}(\tilde{\rho}_{a_{t}},\tilde{\rho}_{\rm post})\tilde{C}_{t }+\tilde{C}_{t}{\sf A}(\tilde{\rho}_{a_{t}},\tilde{\rho}_{\rm post})^{T}.\]
_Then, the corresponding mean-field equation (4.13) is also affine invariant._
We provide a proof of this proposition in Appendix D.5.
### Gaussian Approximate Fisher-Rao Gradient Flow
#### 4.2.1 Metric
In the Gaussian density space, where \(\rho\) is parameterized by \(a=[m,C]\in\mathbb{R}^{N_{a}}\), the induced Fisher-Rao metric tensor \(\mathfrak{M}(a)\in\mathbb{R}^{N_{a}\times N_{a}}\) has entries
\[\mathfrak{M}(a)_{jk}=\int\frac{\partial\log\rho_{a}(\theta)}{\partial a_{j}} \frac{\partial\log\rho_{a}(\theta)}{\partial a_{k}}\rho_{a}(\theta){\rm d}\theta. \tag{4.16}\]
This is also the Fisher information matrix, which has explicit formula in the Gaussian space:
\[\mathfrak{M}(a)_{jk}=\frac{\partial m}{\partial a_{j}}^{T}C^{-1}\frac{ \partial m}{\partial a_{k}}+\frac{1}{2}\text{tr}\Big{(}C^{-1}\frac{\partial C }{\partial a_{j}}C^{-1}\frac{\partial C}{\partial a_{k}}\Big{)}. \tag{4.17}\]
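As a concrete illustration, formula (4.17) can be evaluated directly once a parameterization of \(a=(m,C)\) is fixed. A minimal sketch, assuming \(a\) stacks the mean with the upper-triangular entries of \(C\) (so that \(\partial m/\partial a_{j}\) and \(\partial C/\partial a_{j}\) are constant basis elements; function names are ours):

```python
import numpy as np

def fisher_rao_metric(m, C):
    """Induced Fisher-Rao metric tensor (4.17) for a = (m, upper-tri entries of C).

    With this parameterization, dm/da_j and dC/da_j are constant basis elements,
    so each entry of (4.17) is evaluated directly.
    """
    N = m.size
    C_inv = np.linalg.inv(C)
    dms, dCs = [], []
    for j in range(N):                                  # mean coordinates
        e = np.zeros(N); e[j] = 1.0
        dms.append(e); dCs.append(np.zeros((N, N)))
    for j in range(N):                                  # covariance coordinates
        for k in range(j, N):
            E = np.zeros((N, N)); E[j, k] = E[k, j] = 1.0
            dms.append(np.zeros(N)); dCs.append(E)
    Na = len(dms)
    M = np.zeros((Na, Na))
    for j in range(Na):
        for k in range(Na):
            M[j, k] = dms[j] @ C_inv @ dms[k] \
                      + 0.5 * np.trace(C_inv @ dCs[j] @ C_inv @ dCs[k])
    return M

print(fisher_rao_metric(np.zeros(2), np.eye(2)))   # 5x5 block-diagonal matrix
```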
#### 4.2.2 Flow Equation
The moment closure approach in Subsection 4.1.2 delivers the evolution of the mean and covariance by virtue of Lemma 4.3. Applying the moment closure approach to (3.15) leads to the following equations:
\[\frac{{\rm d}m_{t}}{dt} =\int\theta\left(\rho_{a_{t}}\big{(}\log\rho_{\rm post}-\log\rho_ {a_{t}}\big{)}-\rho_{a_{t}}\mathbb{E}_{\rho_{a_{t}}}[\log\rho_{\rm post}-\log \rho_{a_{t}}]\right){\rm d}\theta\] \[=\text{Cov}_{\rho_{a_{t}}}[\theta,\,\log\rho_{\rm post}-\log\rho_ {a_{t}}]\] \[=C_{t}\mathbb{E}_{\rho_{a_{t}}}[\nabla_{\theta}(\log\rho_{\rm post }-\log\rho_{a_{t}})],\] \[\frac{{\rm d}C_{t}}{dt} =\mathbb{E}_{\rho_{a_{t}}}[(\theta-m_{t})(\theta-m_{t})^{T}(\log \rho_{\rm post}-\log\rho_{a_{t}}-\mathbb{E}_{\rho_{a_{t}}}[\log\rho_{\rm post} -\log\rho_{a_{t}}])]\] \[=C_{t}\mathbb{E}_{\rho_{a_{t}}}[\nabla_{\theta}\nabla_{\theta}( \log\rho_{\rm post}-\log\rho_{a_{t}})]C_{t},\]
where \(\rho_{a_{t}}\sim\mathcal{N}(m_{t},C_{t})\), and we have used Stein's lemma (Lemma D.1) in the above derivation. Furthermore, noting that \(\mathbb{E}_{\rho_{a_{t}}}[\nabla\log\rho_{a_{t}}]=0\), we obtain
\[\frac{{\rm d}m_{t}}{dt} =C_{t}\mathbb{E}_{\rho_{a_{t}}}[\nabla_{\theta}\log\rho_{\rm post }], \tag{4.18}\] \[\frac{{\rm d}C_{t}}{dt} =C_{t}+C_{t}\mathbb{E}_{\rho_{a_{t}}}[\nabla_{\theta}\nabla_{ \theta}\log\rho_{\rm post}]C_{t}.\]
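For a Gaussian target the expectations in (4.18) are available in closed form, which gives a quick numerical check that the flow converges to the posterior mean and covariance; a minimal sketch (forward Euler; the step size is an illustrative choice of ours):

```python
import numpy as np

# For a Gaussian target N(m_star, C_star) the expectations in (4.18) are exact:
#   E[grad log rho_post] = -C_star^{-1} (m - m_star),
#   E[hess log rho_post] = -C_star^{-1},
# so the flow can be integrated directly.
m_star = np.array([2.0, -1.0])
C_star = np.array([[1.0, 0.3], [0.3, 0.5]])
Cs_inv = np.linalg.inv(C_star)

m, C, dt = np.zeros(2), 4.0 * np.eye(2), 1e-3
for _ in range(20000):                      # integrate to t = 20
    grad, hess = -Cs_inv @ (m - m_star), -Cs_inv
    m = m + dt * (C @ grad)                 # dm/dt = C E[grad log rho_post]
    C = C + dt * (C + C @ hess @ C)         # dC/dt = C + C E[hess] C
print(np.allclose(m, m_star), np.allclose(C, C_star))   # True True
```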
_Remark 4.9_.: Returning to Remark 3.13 and assuming a quadratic negative log likelihood
\[\Phi(\theta)=\frac{1}{2}(H\theta-y)^{\mathrm{T}}R^{-1}(H\theta-y) \tag{4.19}\]
and a Gaussian prior distribution, the functional (3.17) leads to the following Fisher-Rao gradient flow equations
\[\frac{\mathrm{d}m_{t}}{dt} =-C_{t}H^{\mathrm{T}}R^{-1}(Hm_{t}-y), \tag{4.20}\] \[\frac{\mathrm{d}C_{t}}{dt} =-C_{t}H^{\mathrm{T}}R^{-1}HC_{t},\]
for the mean \(m_{t}\) and covariance matrix \(C_{t}\). These define the well-known Kalman-Bucy filter [73] from linear state estimation; see the text [68] for further details. Their extension to general log-likelihood functions \(\Phi\) under Gaussian approximation is discussed in [108].
Equation (4.18) also corresponds to the gradient flow under the finite dimensional Fisher-Rao metric in the parameter space [105, 76]; in this context it goes by the nomenclature _natural gradient flow_[1, 98, 139]. The connection between Fisher-Rao natural gradient methods and Kalman filters has been studied in [103, 104]. \(\Diamond\)
#### 4.2.3 Affine Invariance
Equation (4.18) is affine invariant.
#### 4.2.4 Mean-Field Dynamics
Using Lemma 4.7 we can read off a choice of the pair \(\mathsf{A},\mathsf{b}\) defining the mean-field equation for the Gaussian approximate Fisher-Rao gradient flow (4.18); we obtain
\[\frac{\mathrm{d}\theta_{t}}{\mathrm{d}t}=\frac{1}{2}\Big{[}I+C_{t}\mathbb{E}_ {\rho_{a_{t}}}[\nabla_{\theta}\nabla_{\theta}\log\rho_{\mathrm{post}}]\Big{]} (\theta_{t}-m_{t})+C_{t}\mathbb{E}_{\rho_{a_{t}}}[\nabla_{\theta}\log\rho_{ \mathrm{post}}]. \tag{4.21}\]
Following Proposition 4.8, the mean-field equation (4.21) is also affine invariant.
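A particle realization of (4.21) replaces \(m_{t}\), \(C_{t}\), and the expectations by ensemble averages; the following is a minimal sketch for a Gaussian target, where the gradient and Hessian expectations are exact (ensemble size and step size are illustrative choices of ours):

```python
import numpy as np

# A particle realization of (4.21): statistics and expectations are replaced by
# ensemble averages; the target is Gaussian so grad/hess expectations are exact.
rng = np.random.default_rng(1)
m_star, C_star = np.array([2.0, -1.0]), np.diag([1.0, 0.1])
Cs_inv = np.linalg.inv(C_star)

J, dt = 500, 5e-3
theta = 3.0 * rng.standard_normal((J, 2))          # initial ensemble
for _ in range(4000):                              # integrate to t = 20
    m, C = theta.mean(axis=0), np.cov(theta.T)
    E_grad = -Cs_inv @ (m - m_star)                # E[grad log rho_post]
    E_hess = -Cs_inv                               # E[hess log rho_post]
    drift = 0.5 * (np.eye(2) + C @ E_hess)         # bracket in (4.21)
    theta = theta + dt * ((theta - m) @ drift.T + C @ E_grad)

print(theta.mean(axis=0), np.cov(theta.T))         # near m_star, C_star
```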
### Gaussian Approximate Wasserstein Gradient Flow
#### 4.3.1 Metric
Recall the preconditioner \(P:\mathbb{R}^{N_{\theta}}\times\mathcal{P}\to\mathbb{R}^{N_{\theta}\times N_{ \theta}}_{>0}\), where the output space is the cone of positive-definite symmetric matrices. In the Gaussian density space, where \(\rho\) is parameterized by \(a\in\mathbb{R}^{N_{a}}\), the preconditioned Wasserstein metric tensor \(\mathfrak{M}(a)\in\mathbb{R}^{N_{a}\times N_{a}}\) has entries
\[\mathfrak{M}(a)_{jk}=\int\psi_{j}(\theta)\frac{\partial\rho_{a}(\theta)}{ \partial a_{k}}\mathrm{d}\theta\quad\text{where}\quad-\nabla_{\theta}\cdot \Big{(}\rho_{a}(\theta)P(\theta,\rho_{a})\nabla_{\theta}\psi_{j}(\theta)\Big{)} =\frac{\partial\rho_{a}}{\partial a_{j}}(\theta). \tag{4.22}\]
When \(P(\theta,\rho_{a})\) is the identity operator, the metric tensor \(\mathfrak{M}(a)\) has an explicit formula [30, 126, 96, 14, 85], and the corresponding Gaussian density space is called the Bures-Wasserstein space [129].
#### 4.3.2 Flow Equation
Again we can use the moment closure approach from Subsection 4.1.2, which is shown to apply here in Lemma 4.3. By applying the moment closure approach to (3.32), we get the mean and covariance evolution equations for the
Gaussian approximate Wasserstein gradient flow with \(\theta\)-independent \(P\) as follows:
\[\frac{\mathrm{d}m_{t}}{\mathrm{d}t} =\int\Bigl{[}\rho_{a_{t}}P(\rho_{a_{t}})\nabla_{\theta}(\log\rho_{ \mathrm{post}}-\log\rho_{a_{t}})\Bigr{]}\mathrm{d}\theta=P(\rho_{a_{t}})\mathbb{ E}_{\rho_{a_{t}}}[\nabla_{\theta}\log\rho_{\mathrm{post}}], \tag{4.23}\] \[\frac{\mathrm{d}C_{t}}{\mathrm{d}t} =\int\nabla_{\theta}\cdot\Bigl{[}\rho_{a_{t}}P(\rho_{a_{t}}) \nabla_{\theta}(\log\rho_{a_{t}}-\log\rho_{\mathrm{post}})\Bigr{]}(\theta-m_{t} )(\theta-m_{t})^{T}\mathrm{d}\theta\] \[=2P(\rho_{a_{t}})+P(\rho_{a_{t}})\mathbb{E}_{\rho_{a_{t}}}[ \nabla_{\theta}\nabla_{\theta}\log\rho_{\mathrm{post}}]C_{t}+C_{t}\mathbb{E}_{ \rho_{a_{t}}}[\nabla_{\theta}\nabla_{\theta}\log\rho_{\mathrm{post}}]P(\rho_{a _{t}}),\]
where \(\rho_{a_{t}}\sim\mathcal{N}(m_{t},C_{t})\), and we have used integration by parts and Stein's lemma (Lemma D.1), and the fact \(\mathbb{E}_{\rho_{a_{t}}}[\nabla\log\rho_{a_{t}}]=0\) in the above derivation.
When we set \(P(\rho)\equiv I\), the evolution equation becomes
\[\frac{\mathrm{d}m_{t}}{\mathrm{d}t} =\mathbb{E}_{\rho_{a_{t}}}\bigl{[}\nabla_{\theta}\log\rho_{ \mathrm{post}}(\theta_{t})\bigr{]}, \tag{4.24}\] \[\frac{\mathrm{d}C_{t}}{\mathrm{d}t} =2I+\mathbb{E}_{\rho_{a_{t}}}\bigl{[}\nabla_{\theta}\nabla_{ \theta}\log\rho_{\mathrm{post}}(\theta_{t})\bigr{]}C_{t}+C_{t}\mathbb{E}_{\rho _{a_{t}}}\bigl{[}\nabla_{\theta}\nabla_{\theta}\log\rho_{\mathrm{post}}(\theta_ {t})\bigr{]}.\]
This corresponds to the gradient flow under the constrained Wasserstein metric in Gaussian density space [118, 71, 81].
#### 4.3.3 Affine Invariance
We now allow the preconditioner to depend on \(\rho_{a_{t}}\) and set \(P(\rho_{a_{t}})=C_{t}\). This choice satisfies the affine invariance condition of Proposition 3.18. The resulting evolution equations for the corresponding mean and covariance are
\[\frac{\mathrm{d}m_{t}}{\mathrm{d}t} =C_{t}\mathbb{E}_{\rho_{a_{t}}}\bigl{[}\nabla_{\theta}\log\rho_{ \mathrm{post}}(\theta_{t})\bigr{]}, \tag{4.25}\] \[\frac{\mathrm{d}C_{t}}{\mathrm{d}t} =2C_{t}+2C_{t}\mathbb{E}_{\rho_{a_{t}}}\bigl{[}\nabla_{\theta} \nabla_{\theta}\log\rho_{\mathrm{post}}(\theta_{t})\bigr{]}C_{t}.\]
Equation (4.25) is similar to the Gaussian approximate Fisher-Rao gradient flow (4.18), but with scaling factor \(2\) in the covariance evolution.
#### 4.3.4 Mean-Field Dynamics
Employing Lemma 4.7 we can again identify a pair \(\mathsf{A},\mathsf{b}\) leading to a mean-field equation for the Gaussian approximate Wasserstein gradient flow (4.23) with \(\theta\)-independent \(P\):
\[\frac{\mathrm{d}\theta_{t}}{\mathrm{d}t}=\Bigl{[}C_{t}^{-1}+P(\rho_{a_{t}}) \mathbb{E}_{\rho_{a_{t}}}[\nabla_{\theta}\nabla_{\theta}\log\rho_{\mathrm{ post}}]\Bigr{]}(\theta_{t}-m_{t})+P(\rho_{a_{t}})\mathbb{E}_{\rho_{a_{t}}}[ \nabla_{\theta}\log\rho_{\mathrm{post}}]. \tag{4.26}\]
From Proposition 4.8, its corresponding mean-field equation (4.26) with \(P(\rho_{a_{t}})=C_{t}\) is also affine invariant.
### Gaussian Approximate Stein Gradient Flow
#### 4.4.1 Metric
We work in the general preconditioned setting, as in the Wasserstein case. In the Gaussian density space, where \(\rho\) is parameterized by \(a\in\mathbb{R}^{N_{a}}\), the Stein metric tensor \(\mathfrak{M}(a)\in\mathbb{R}^{N_{a}\times N_{a}}\) is
\[\mathfrak{M}(a)_{jk}=\int\psi_{j}(\theta)\frac{\partial\rho_{a}( \theta)}{\partial a_{k}}\mathrm{d}\theta,\quad\text{where} \tag{4.27}\] \[-\nabla_{\theta}\cdot\Bigl{(}\rho_{a}(\theta)\int\kappa(\theta, \theta^{\prime},\rho_{a})\rho_{a}(\theta^{\prime})P(\theta,\theta^{\prime}, \rho_{a}(\theta),\rho_{a}(\theta^{\prime}))\nabla_{\theta}\psi_{j}(\theta^{ \prime})\mathrm{d}\theta^{\prime}\Bigr{)}=\frac{\partial\rho_{a}}{\partial a _{j}}(\theta).\]
#### 4.4.2 Flow Equation
We consider the setting in which \(P=P(\rho_{a})\) is independent of \(\theta\), and we choose the bilinear kernel
\[\kappa(\theta,\theta^{\prime},\rho)=(\theta-m)^{T}A(\rho)(\theta^{\prime}-m)+b( \rho). \tag{4.28}\]
where \(A:\mathcal{P}\rightarrow\mathbb{R}^{N_{\theta}\times N_{\theta}},\)\(b:\mathcal{P}\rightarrow\mathbb{R}\) and \(m\) is the mean under \(\rho.\)
In the following let \(P_{t}:=P(\rho_{a_{t}})\), and evaluate \(A\) and \(b\) at \(\rho=\rho_{a_{t}}\), writing the resulting time-dependent matrix- and scalar-valued functions as \(A_{t}\) and \(b_{t}\), where \(A_{\cdot}:\mathbb{R}\rightarrow\mathbb{R}^{N_{\theta}\times N_{\theta}}\) and \(b_{\cdot}:\mathbb{R}\rightarrow\mathbb{R}\); let \(m_{t}\) denote the mean under \(\rho_{a_{t}}\), so that \(m_{\cdot}:\mathbb{R}\rightarrow\mathbb{R}^{N_{\theta}}\). We apply the moment closure approach from Subsection 4.1.2 to (3.46). The mean and covariance evolution equations of the preconditioned Stein gradient flow with bilinear kernel (4.28) are
\[\frac{\mathrm{d}m_{t}}{\mathrm{d}t}=-\int\Bigl{(}\rho_{a_{t}}( \theta)\int\kappa(\theta,\theta^{\prime},\rho_{a_{t}})\rho_{a_{t}}(\theta^{ \prime})P_{t}\nabla_{\theta^{\prime}}\bigl{(}\log\rho_{a_{t}}(\theta^{\prime} )-\log\rho_{\mathrm{post}}(\theta^{\prime})\bigr{)}\mathrm{d}\theta^{\prime} \Bigr{)}\mathrm{d}\theta, \tag{4.29}\] \[\frac{\mathrm{d}C_{t}}{\mathrm{d}t}=-\int\Bigl{[}\Bigl{(}\rho_{a_{t}}(\theta)\int\kappa(\theta,\theta^{\prime},\rho_{a_{t}})\rho_{a_{t}}(\theta^{\prime})P_{t}\nabla_{\theta^{\prime}}\bigl{(}\log\rho_{a_{t}}(\theta^{\prime})-\log\rho_{\mathrm{post}}(\theta^{\prime})\bigr{)}\mathrm{d}\theta^{\prime}\Bigr{)}(\theta-m_{t})^{T}\] \[\qquad\qquad+(\theta-m_{t})\Bigl{(}\rho_{a_{t}}(\theta)\int\kappa(\theta, \theta^{\prime},\rho_{a_{t}})\rho_{a_{t}}(\theta^{\prime})P_{t}\nabla_{ \theta^{\prime}}\bigl{(}\log\rho_{a_{t}}(\theta^{\prime})-\log\rho_{\mathrm{ post}}(\theta^{\prime})\bigr{)}\mathrm{d}\theta^{\prime}\Bigr{)}^{T}\Bigr{]} \mathrm{d}\theta,
where \(\rho_{a_{t}}\sim\mathcal{N}(m_{t},C_{t})\), and we have used integration by parts in the above derivation. Imposing the form of the bilinear kernel (4.28), using Stein's lemma (Lemma D.1) and the fact \(\mathbb{E}_{\rho_{a_{t}}}[\nabla\log\rho_{a_{t}}]=0\), we obtain
\[\frac{\mathrm{d}m_{t}}{\mathrm{d}t}= b_{t}P_{t}\mathbb{E}_{\rho_{a_{t}}}[\nabla_{\theta}\log\rho_{ \mathrm{post}}], \tag{4.30}\] \[\frac{\mathrm{d}C_{t}}{\mathrm{d}t}= P_{t}A_{t}C_{t}+C_{t}A_{t}P_{t}+P_{t}\mathbb{E}_{\rho_{a_{t}}}[\nabla_{\theta}\nabla_{ \theta}\log\rho_{\mathrm{post}}]C_{t}A_{t}C_{t}+C_{t}A_{t}C_{t}\mathbb{E}_{\rho_ {a_{t}}}[\nabla_{\theta}\nabla_{\theta}\log\rho_{\mathrm{post}}]P_{t}.\]
_Remark 4.10_.: Different choices of the preconditioner \(P\) and the bilinear kernel \(\kappa\) allow us to recover different Gaussian variational inference methods appearing in the literature. Choosing the preconditioner \(P_{t}=I\) and the bilinear kernel (4.28) with \(A_{t}=C_{t}^{-1}\) and \(b_{t}=1\) recovers the Gaussian approximate Wasserstein gradient flow (4.24). Setting the preconditioner \(P_{t}=I\) and the bilinear kernel (4.28) with \(A_{t}=I\) and \(b_{t}=1\) recovers the Gaussian sampling approach introduced in [53]:
\[\frac{\mathrm{d}m_{t}}{\mathrm{d}t}= \mathbb{E}_{\rho_{a_{t}}}[\nabla_{\theta}\log\rho_{\mathrm{post}}], \tag{4.31}\] \[\frac{\mathrm{d}C_{t}}{\mathrm{d}t}= 2C_{t}+\mathbb{E}_{\rho_{a_{t}}}[\nabla_{\theta}\nabla_{\theta} \log\rho_{\mathrm{post}}]C_{t}^{2}+C_{t}^{2}\mathbb{E}_{\rho_{a_{t}}}[\nabla_{ \theta}\nabla_{\theta}\log\rho_{\mathrm{post}}].\]
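The first reduction in Remark 4.10 can also be checked mechanically: with \(P_{t}=I\), \(A_{t}=C_{t}^{-1}\), \(b_{t}=1\), the covariance right-hand side of (4.30) collapses to that of (4.24). A minimal numerical sanity check of this algebra, with a random symmetric stand-in for \(\mathbb{E}_{\rho_{a_{t}}}[\nabla_{\theta}\nabla_{\theta}\log\rho_{\mathrm{post}}]\) and a random SPD \(C\):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 3
B = rng.standard_normal((N, N)); E = B + B.T          # stand-in for E[hess log rho_post]
L = rng.standard_normal((N, N)); C = L @ L.T + N * np.eye(N)  # random SPD C_t
P, A = np.eye(N), np.linalg.inv(C)                    # P_t = I, A_t = C_t^{-1}

stein = P @ A @ C + C @ A @ P + P @ E @ C @ A @ C + C @ A @ C @ E @ P   # (4.30)
bures = 2 * np.eye(N) + E @ C + C @ E                 # right-hand side of (4.24)
print(np.allclose(stein, bures))                      # True
```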
#### 4.4.3 Affine Invariance
Recalling that \(\rho_{a_{t}}\sim\mathcal{N}(m_{t},C_{t})\), setting \(P_{t}=P(\rho_{a_{t}})=C_{t}\), and choosing the bilinear kernel (4.28) with \(A_{t}=\frac{1}{2}C_{t}^{-1}\) and \(b_{t}=1\), which satisfies the affine invariance condition of Proposition 3.21, leads to the Gaussian approximate Fisher-Rao gradient flow (4.18).
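For concreteness, substituting these choices into (4.30) gives

\[\frac{\mathrm{d}m_{t}}{\mathrm{d}t}=C_{t}\,\mathbb{E}_{\rho_{a_{t}}}[\nabla_{\theta}\log\rho_{\mathrm{post}}],\qquad\frac{\mathrm{d}C_{t}}{\mathrm{d}t}=\tfrac{1}{2}C_{t}+\tfrac{1}{2}C_{t}+\tfrac{1}{2}C_{t}\mathbb{E}_{\rho_{a_{t}}}[\nabla_{\theta}\nabla_{\theta}\log\rho_{\mathrm{post}}]C_{t}+\tfrac{1}{2}C_{t}\mathbb{E}_{\rho_{a_{t}}}[\nabla_{\theta}\nabla_{\theta}\log\rho_{\mathrm{post}}]C_{t}=C_{t}+C_{t}\mathbb{E}_{\rho_{a_{t}}}[\nabla_{\theta}\nabla_{\theta}\log\rho_{\mathrm{post}}]C_{t},\]

which is exactly (4.18).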
#### 4.4.4 Mean-Field Dynamics
Using Lemma 4.7 we can deduce that the Gaussian approximate Stein gradient flow (4.29) with \(\theta\)-independent \(P\) has the following mean-field equation:
\[\frac{\mathrm{d}\theta_{t}}{\mathrm{d}t}=\Big{[}P_{t}A_{t}+P_{t}\mathbb{E}_{ \rho_{a_{t}}}[\nabla_{\theta}\nabla_{\theta}\log\rho_{\mathrm{post}}]C_{t}A_{t }\Big{]}(\theta_{t}-m_{t})+b_{t}P_{t}\mathbb{E}_{\rho_{a_{t}}}[\nabla_{\theta }\log\rho_{\mathrm{post}}].\]
Here \(P_{t}=P(\rho_{a_{t}})\). From Proposition 4.8, we know that this corresponding mean-field equation is also affine invariant.
### Convergence to Steady State
Recall that the objective of Gaussian variational inference is to solve the minimization problem (2.7). Furthermore, all critical points satisfy (2.8). Regular gradient descent, in the metric defined via the Euclidean inner-product in \(\mathbb{R}^{N_{a}}\) (i.e., setting \(\mathfrak{M}(a_{t})=I\) in (4.4)), will give rise to the dynamical system
\[\frac{\mathrm{d}m_{t}}{\mathrm{d}t} =\mathbb{E}_{\rho_{a_{t}}}\big{[}\nabla_{\theta}\log\rho_{ \mathrm{post}}(\theta_{t})\big{]}, \tag{4.32}\] \[\frac{\mathrm{d}C_{t}}{\mathrm{d}t} =\frac{1}{2}C_{t}^{-1}+\frac{1}{2}\mathbb{E}_{\rho_{a_{t}}} \big{[}\nabla_{\theta}\nabla_{\theta}\log\rho_{\mathrm{post}}(\theta_{t}) \big{]}.\]
Note that steady states of this dynamical system necessarily satisfy (2.8). In the preceding subsections we have derived a number of different gradient flows in the manifold of Gaussian densities, including the Gaussian approximate Fisher-Rao gradient flow (4.18) and the Gaussian approximate Wasserstein gradient flow (4.24); note that both of these dynamical systems also necessarily satisfy (2.8) in steady state. The convergence properties of (4.25), obtained from the affine-invariant Wasserstein gradient flow, are similar to those of the Gaussian approximate Fisher-Rao gradient flow (4.18); we omit a detailed discussion to avoid redundancy. In this subsection, we survey and study the convergence of these aforementioned Gaussian approximate gradient flows in three settings: the Gaussian posterior case, the logconcave posterior case, and the general posterior case.
#### 4.5.1 Gaussian Posterior Case
Assume the posterior distribution (1.1) is Gaussian so that \(\rho_{\mathrm{post}}(\theta)\sim\mathcal{N}(m_{\star},C_{\star})\) where
\[\Phi_{R}(\theta)=\frac{1}{2}(\theta-m_{\star})^{T}C_{\star}^{-1}(\theta-m_{\star}). \tag{4.33}\]
**Proposition 4.11**: _Consider the posterior distribution (1.1) under assumption (4.33) so that the posterior is Gaussian. Then the Gaussian approximate Fisher-Rao gradient flow (4.18) has the analytical solution_
\[m_{t}=m_{\star}+e^{-t}\Big{(}(1-e^{-t})C_{\star}^{-1}+e^{-t}C_{0 }^{-1}\Big{)}^{-1}C_{0}^{-1}\big{(}m_{0}-m_{\star}\big{)}, \tag{4.34a}\] \[C_{t}^{-1}=C_{\star}^{-1}+e^{-t}\big{(}C_{0}^{-1}-C_{\star}^{-1 }\big{)}. \tag{4.34b}\]
The proof is in Appendix E.1. We remark that both mean and covariance converge exponentially fast to \(m_{\star}\) and \(C_{\star}\) with convergence rate \(\mathcal{O}(e^{-t})\). This rate is independent of \(C_{\star}\).
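The closed form (4.34) can be checked against a direct Euler integration of (4.18); a minimal sketch with a randomly drawn Gaussian problem (tolerances reflect Euler discretization error; names are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 2
def spd():
    L = rng.standard_normal((N, N)); return L @ L.T + np.eye(N)
m0, m_star = rng.standard_normal(N), rng.standard_normal(N)
C0, C_star = spd(), spd()
C0_inv, Cs_inv = np.linalg.inv(C0), np.linalg.inv(C_star)

T, dt = 3.0, 1e-4
m, C = m0.copy(), C0.copy()
for _ in range(int(T / dt)):                     # Euler integration of (4.18)
    m = m + dt * (C @ (-Cs_inv @ (m - m_star)))
    C = C + dt * (C - C @ Cs_inv @ C)

e = np.exp(-T)
C_exact = np.linalg.inv(Cs_inv + e * (C0_inv - Cs_inv))             # (4.34b)
m_exact = m_star + e * np.linalg.solve((1 - e) * Cs_inv + e * C0_inv,
                                       C0_inv @ (m0 - m_star))      # (4.34a)
print(np.allclose(m, m_exact, atol=1e-2), np.allclose(C, C_exact, atol=1e-2))
```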
For the Gaussian approximate gradient flow (4.32) and the Gaussian approximate Wasserstein gradient flow (4.24), if the norm of \(C_{\star}\) is large, their convergence rate is much slower than the Gaussian approximate Fisher-Rao gradient flow. Indeed, we have the following convergence result:
**Proposition 4.12**: _Consider the posterior distribution (1.1) under assumption (4.33) so that the posterior is Gaussian; this posterior is the unique minimizer of the Gaussian variational inference problem (2.7). Denote the largest eigenvalue of \(C_{\star}\) by \(\lambda_{\star,\max}\). For gradient flows with initialization \(C_{0}=\lambda_{0}I\), the following hold:_
1. _for the Gaussian approximate gradient flow (_4.32_):_ \[\|m_{t}-m_{\star}\|_{2}=\mathcal{O}(e^{-t/\lambda_{\star,\max}}),\|C_{t}-C_{ \star}\|_{2}=\mathcal{O}(e^{-t/(2\lambda_{\star,\max}^{2})});\]
2. _for the Gaussian approximate Fisher-Rao gradient flow (_4.18_):_ \[\|m_{t}-m_{\star}\|_{2}=\mathcal{O}(e^{-t}),\|C_{t}-C_{\star}\|_{2}=\mathcal{O }(e^{-t});\]
3. _for the Gaussian approximate Wasserstein gradient flow (_4.24_):_ \[\|m_{t}-m_{\star}\|_{2}=\mathcal{O}(e^{-t/\lambda_{\star,\max}}),\|C_{t}-C_{ \star}\|_{2}=\mathcal{O}(e^{-2t/\lambda_{\star,\max}}),\]
_where the implicit constants depend on \(m_{\star}\), \(C_{\star}\) and \(\lambda_{0}\)._
The proof is in Appendix E.2.
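The rate separation in Proposition 4.12 is easy to observe numerically. A minimal sketch (step size illustrative) with the ill-conditioned target \(C_{\star}=\mathrm{diag}(1,100)\), so that \(\lambda_{\star,\max}=100\): after integrating to \(t=20\), the Fisher-Rao errors are near zero, while the other two flows retain \(O(1)\) mean errors and large covariance errors.

```python
import numpy as np

m_star, C_star = np.zeros(2), np.diag([1.0, 100.0])
Cs_inv = np.linalg.inv(C_star)

def run(flow, t_end=20.0, dt=1e-3):
    m, C = np.ones(2), np.eye(2)
    for _ in range(int(t_end / dt)):
        g, H = -Cs_inv @ (m - m_star), -Cs_inv    # exact Gaussian expectations
        if flow == "euclidean":                   # (4.32)
            dm, dC = g, 0.5 * np.linalg.inv(C) + 0.5 * H
        elif flow == "fisher-rao":                # (4.18)
            dm, dC = C @ g, C + C @ H @ C
        else:                                     # (4.24), P = I
            dm, dC = g, 2 * np.eye(2) + H @ C + C @ H
        m, C = m + dt * dm, C + dt * dC
    return np.linalg.norm(m - m_star), np.linalg.norm(C - C_star)

for flow in ["euclidean", "fisher-rao", "wasserstein"]:
    print(flow, run(flow))
```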
#### 4.5.2 Logconcave Posterior Case
In this subsection, we consider the case that the posterior distribution \(\rho_{\rm post}(\theta)\) given by (1.1) is strongly log-concave.
**Proposition 4.13**: _Assume that the posterior distribution \(\rho_{\rm post}(\theta)\) is \(\alpha\)-strongly logconcave (Definition 3.24) and that \(-\nabla_{\theta}\nabla_{\theta}\log\rho_{\rm post}\preceq\beta I\). Assume further that the initial covariance matrix satisfies \(\lambda_{0,\min}I\preceq C_{0}\preceq\lambda_{0,\max}I\). Then for the dynamics (4.32), the Gaussian approximate Fisher-Rao gradient flow (4.18), and the Gaussian approximate Wasserstein gradient flow (4.24), we have_
\[\mathrm{KL}\Big{[}\rho_{a_{t}}(\theta)\Big{\|}\rho_{\rm post}(\theta)\Big{]} \leq e^{-Kt}\mathrm{KL}\Big{[}\rho_{a_{0}}(\theta)\Big{\|}\rho_{\rm post}( \theta)\Big{]}+(1-e^{-Kt})\mathrm{KL}\Big{[}\rho_{a_{\star}}\Big{\|}\rho_{\rm post }(\theta)\Big{]}, \tag{4.35}\]
_where \(\rho_{a_{0}}\sim\mathcal{N}(m_{0},C_{0})\) is the initial condition and \(\rho_{a_{\star}}\sim\mathcal{N}(m_{\star},C_{\star})\) is the unique global minimizer of (2.7). The rate constant \(K\) depends on \(\alpha,\beta,\lambda_{0,\min},\lambda_{0,\max}\). Specifically, we have_
* \(K=2\alpha/\max\{1,4/\alpha,4\lambda_{0,\max}\}\) _for the Gaussian approximate gradient flow (_4.32_);_
* \(K=\alpha\min\{1/\beta,\lambda_{0,\min}\}\) _for the Gaussian approximate Fisher-Rao gradient flow (_4.18_);_
* \(K=2\alpha\) _for the Gaussian approximate Wasserstein gradient flow (_4.24_)._
_In addition, we also have that \(\rho_{a_{t}}\) converges to \(\rho_{a_{\star}}\) exponentially fast in terms of the Wasserstein metric:_
\[W_{2}^{2}(\rho_{a_{t}},\rho_{a_{\star}})\leq\frac{2e^{-Kt}}{\alpha}\left( \mathrm{KL}[\rho_{a_{0}}\|\rho_{\rm post}(\theta)]-\mathrm{KL}[\rho_{a_{\star }}\|\rho_{\rm post}(\theta)]\right). \tag{4.36}\]
The proof is in Appendix E.3, and is inspired by the work [81].
In the above result, the rate constants \(K\) for the Gaussian approximate gradient flow (4.32) and the Gaussian approximate Fisher-Rao gradient flow (4.18) depend on the eigenvalues of the initial covariance matrix. We believe this is an artifact of our proof techniques, and that the dependence may be removed with a better proof strategy. On the other hand, if we can initialize \(C_{0}\) such that \(\frac{1}{\beta}I\preceq C_{0}\preceq\frac{1}{\alpha}I\), such dependence can be directly eliminated in the above bounds.
Also, we observe that the bound on the rate constant \(K\) for the Gaussian approximate Fisher-Rao gradient flow depends on \(\beta/\alpha\). In some cases, an affine transformation may be applied to reduce \(\beta/\alpha\), since the Gaussian approximate Fisher-Rao gradient flow (4.18) is affine invariant. As an example, in the case of Gaussian posteriors, the rate constant can be reduced to \(1\), as in Proposition 4.11.
On the other hand, we have the following proposition about the lower bound of the local convergence rate of the Gaussian approximate Fisher-Rao gradient flow (4.18).
**Proposition 4.15**: _Assume the posterior distribution \(\rho_{\mathrm{post}}(\theta)\) is \(\alpha\)-strongly log-concave (Definition 3.24) and that \(-\nabla_{\theta}\nabla_{\theta}\log\rho_{\mathrm{post}}\preceq\beta I\). Denote the unique minimizer of the Gaussian variational inference problem (2.7) by \(\rho_{a_{\star}}:=\mathcal{N}(m_{\star},C_{\star})\). For \(N_{\theta}=1\), let \(\lambda_{\star,\max}<0\) denote the largest eigenvalue of the linearized Jacobian matrix of the Gaussian approximate Fisher-Rao gradient flow (4.18) around \(m_{\star}\) and \(C_{\star}\); this number determines the local convergence rate of the Gaussian approximate Fisher-Rao gradient flow (4.18). Then we have_
\[-\lambda_{\star,\max}\geq\frac{1}{\big{(}7+\frac{4}{\sqrt{\pi}}\big{)}\big{(} 1+\log(\frac{\beta}{\alpha})\big{)}}.\]
_Moreover, the bound is sharp: it is possible to construct a sequence of triplets \(\rho_{\mathrm{post},n}\), \(\alpha_{n}\) and \(\beta_{n}\), where \(\lim_{n\to\infty}\frac{\beta_{n}}{\alpha_{n}}=\infty\), such that, if we let \(\lambda_{\star,\max,n}\) denote the corresponding largest eigenvalues of the linearized Jacobian matrix for the \(n\)-th triple, then, it holds that_
\[-\lambda_{\star,\max,n}=\mathcal{O}\left(1/\log\frac{\beta_{n}}{\alpha_{n}} \right).\]
The proof can be found in Appendix E.4.
#### 4.5.3 General Posterior Case
The previous sections consider Gaussian and then logconcave posteriors; for all three Gaussian approximate gradient flows we demonstrate exponential convergence, and for the Fisher-Rao based methodology we have some invariance of the rates of convergence with respect to conditioning of the problem. In this subsection, however, we construct counterexamples showing that, for general posteriors, the convergence of all three Gaussian approximate gradient flows to a stationary point can be arbitrarily slow.
**Proposition 4.16**: _For any \(K>0\) there exists a target \(\rho_{\mathrm{post}}\) such that, for the three Gaussian approximate gradient flows (4.18), (4.24) and (4.32), the convergence to their stationary points can be as slow as \(\mathcal{O}(t^{-\frac{2}{K}})\)._
The proof is in Appendix E.5.
## 5 Numerical Experiments
In this section, we perform numerical experiments to study the behaviour of the aforementioned gradient flows for sampling, which complements our theoretical study. We observe the following:
* In the probability density space, affine invariant gradient flows outperform their non-affine invariant counterparts, for the Gaussian posterior case (Figure 1), the logconcave posterior case (Figure 3), and the general posterior case (Figure 6).
* In the restricted Gaussian density space, affine invariant gradient flows outperform their non-affine invariant counterparts, for the Gaussian posterior
case (Figure 2), the logconcave posterior case (Figure 4), and the general posterior case (Figure 8).
* For general non-Gaussian posteriors, the convergence rates of all gradient flows deteriorate when the posterior becomes more anisotropic (Figure 6); consequently accurately estimating the summary statistics of these posteriors is challenging.
* The convergence curves from the use of affine invariant Wasserstein gradient flow, implemented with Langevin dynamics, oscillate slightly due to the added noise; those obtained from affine invariant Stein gradient flow, implemented by Stein variational gradient descent, are smooth (See Figures 1, 3, and 6). However, the added noise helps for sampling non-Gaussian, highly anisotropic posteriors in comparison with the affine invariant Stein variational gradient descent (See Figures 5 and 6).
In the following subsections, we first introduce all test problems in Subsection 5.1 and the setup for the numerical methods in Subsection 5.2. Then we present numerical results for the Gaussian posterior case in Subsection 5.3, the logconcave posterior case in Subsection 5.4, and the general posterior case in Subsection 5.5.
### Overview of Test Problems
We focus our experiments on three two-dimensional posteriors. In defining them we use the notation \(\theta=[\theta^{(1)},\theta^{(2)}]^{T}\).
1. _Gaussian Posterior._ \[\Phi_{R}(\theta)=\frac{1}{2}\theta^{T}\begin{bmatrix}1&0\\ 0&\lambda\end{bmatrix}\theta\quad\text{with}\quad\lambda=0.01,\,0.1,\,1.\] We initialize the gradient flows from \[\theta_{0}\sim\mathcal{N}\Big{(}\begin{bmatrix}10\\ 10\end{bmatrix},\begin{bmatrix}\frac{1}{2}&0\\ 0&2\end{bmatrix}\Big{)}.\]
2. _Logconcave Posterior._ \[\Phi_{R}(\theta)=\frac{(\sqrt{\lambda}\theta^{(1)}-\theta^{(2)})^{2}}{20}+ \frac{(\theta^{(2)})^{4}}{20}\quad\text{with}\quad\lambda=0.01,\,0.1,\,1.\] We initialize the gradient flows from \(\theta_{0}\sim\mathcal{N}\Big{(}\begin{bmatrix}10\\ 10\end{bmatrix},\begin{bmatrix}4&0\\ 0&4\end{bmatrix}\Big{)}\).
3. _General Posterior._ \[\Phi_{R}(\theta)=\frac{\lambda(\theta^{(2)}-(\theta^{(1)})^{2})^{2}}{20}+ \frac{(1-\theta^{(1)})^{2}}{20}\quad\text{with}\quad\lambda=0.01,\,0.1,\,1.\] This example is known as the Rosenbrock function [58]. We initialize the gradient flows from \[\theta_{0}\sim\mathcal{N}\Big{(}\begin{bmatrix}0\\ 0\end{bmatrix},\begin{bmatrix}4&0\\ 0&4\end{bmatrix}\Big{)}.\]
The summary statistics that we use to compare the resulting solution with the ground truth are the expectation \(\mathbb{E}[\theta]\), the covariance \(\text{Cov}[\theta]\), and \(\mathbb{E}[\cos(\omega^{T}\theta+b)]\); in the latter case we randomly draw \(\omega\sim\mathcal{N}(0,I)\) and \(b\sim\text{Uniform}(0,2\pi)\) and report the average MSE over 20 random draws of \(\omega\) and \(b\). The ground truths of these summary statistics are evaluated by integrating \(\rho_{\text{post}}\) numerically (See Appendix F for details).
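For reference, the three potentials and the summary statistics are straightforward to code; a minimal sketch (vectorized over particle ensembles; function names are ours):

```python
import numpy as np

# The three test potentials (theta has shape (..., 2)) and the summary
# statistics computed from a particle ensemble.
def phi_gaussian(theta, lam):
    return 0.5 * (theta[..., 0] ** 2 + lam * theta[..., 1] ** 2)

def phi_logconcave(theta, lam):
    t1, t2 = theta[..., 0], theta[..., 1]
    return (np.sqrt(lam) * t1 - t2) ** 2 / 20 + t2 ** 4 / 20

def phi_rosenbrock(theta, lam):
    t1, t2 = theta[..., 0], theta[..., 1]
    return lam * (t2 - t1 ** 2) ** 2 / 20 + (1 - t1) ** 2 / 20

def summary_stats(particles, omega, b):
    """E[theta], Cov[theta], and E[cos(omega^T theta + b)] from an ensemble."""
    return (particles.mean(axis=0),
            np.cov(particles.T),
            np.cos(particles @ omega + b).mean())

rng = np.random.default_rng(4)
omega, b = rng.standard_normal(2), rng.uniform(0, 2 * np.pi)
print(summary_stats(rng.standard_normal((1000, 2)), omega, b))
```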
### Numerical Method Setup
The gradient flows in the nonparametric density space studied in Section 3 are implemented by interacting particle systems with \(J=1000\) particles. Since we are not aware of a straightforward way of implementing the Fisher-Rao gradient flow that does not suffer from the immobility of the support, we do not consider it here. We compare the following nonparametric gradient flows (GFs) in our experiments.
* Wasserstein GF: The Wasserstein gradient flow with \(P=I\), which is implemented as stochastic particle dynamics (3.38) with \(\sigma_{t}=\sqrt{2}I\).
* Affine-invariant Wasserstein GF: The affine-invariant Wasserstein gradient flow with \(P=C_{t}\), which is implemented as stochastic particle dynamics (3.40).
* Stein GF: The Stein gradient flow with \[P=I,\quad\kappa(\theta,\theta^{\prime},\rho)=(1+4\log(J+1)/N_{\theta})^{N_{ \theta}/2}\exp(-\frac{1}{h}\|\theta-\theta^{\prime}\|^{2}),\] which is implemented as deterministic particle dynamics (3.49). Here \(h=\mathrm{med}^{2}/\log(J+1)\) and \(\mathrm{med}^{2}\) is the squared median of the pairwise Euclidean distance between the current particles, following [90].
* Affine-invariant Stein GF: The affine-invariant Stein gradient flow with \[P=C,\quad\kappa(\theta,\theta^{\prime},\rho)=(1+2)^{N_{\theta}/2}\exp(-\frac{ 1}{2}(\theta-\theta^{\prime})^{T}C^{-1}(\theta-\theta^{\prime})),\] which is implemented as deterministic particle dynamics (3.50).
Here the scaling constants in the definition of kernels are chosen such that
\[\int\int\kappa(\theta,\theta^{\prime},\rho)\mathcal{N}(\theta,m,C)\mathcal{N} (\theta^{\prime},m,C)\mathrm{d}\theta\mathrm{d}\theta^{\prime}=1. \tag{5.1}\]
This choice makes the Stein gradient flows comparable with the Wasserstein gradient flows in terms of implementation cost per step. Since we cannot analytically compute the integral (5.1) for the kernel of Stein GF, we estimate its scaling constant by replacing \(\mathrm{med}^{2}I\) with \(N_{\theta}C\).
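The normalization (5.1) for the affine-invariant Stein kernel can be verified by Monte Carlo, using the fact that \(\theta-\theta^{\prime}\sim\mathcal{N}(0,2C)\) under independent draws, so the expectation of the unscaled kernel is \(\det(3I)^{-1/2}=3^{-N_{\theta}/2}\); a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(5)
N, n = 2, 200000
L = rng.standard_normal((N, N)); C = L @ L.T + np.eye(N)   # random SPD C
C_inv = np.linalg.inv(C)
m = rng.standard_normal(N)

x = rng.multivariate_normal(m, C, size=n)    # theta  ~ N(m, C)
y = rng.multivariate_normal(m, C, size=n)    # theta' ~ N(m, C), independent
d = x - y                                    # so d ~ N(0, 2C)
k = 3.0 ** (N / 2) * np.exp(-0.5 * np.einsum("ij,jk,ik->i", d, C_inv, d))
print(k.mean())                              # close to 1, verifying (5.1)
```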
For the gradient flows in the Gaussian density space we consider the three mean and covariance dynamics given in equations (4.18), (4.24) and (4.32). The expectations in the evolution equations are calculated by the unscented transform [70] with \(J=2N_{\theta}+1=5\) quadrature points. Therefore, the Gaussian approximation has a considerable speedup in comparison with the previously mentioned particle-based sampling approaches, where \(J=1000\).
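A minimal sketch of an unscented-transform quadrature with \(J=2N_{\theta}+1\) sigma points follows; the particular weight/scaling choice below is one standard variant and is an assumption on our part, not necessarily the configuration of [70]:

```python
import numpy as np

def unscented_expectation(f, m, C):
    """Approximate E_{N(m,C)}[f(theta)] with 2N+1 sigma points (kappa = 0)."""
    N = m.size
    S = np.linalg.cholesky(N * C)                 # scaled matrix square root
    pts = [m] + [m + S[:, i] for i in range(N)] + [m - S[:, i] for i in range(N)]
    w = np.full(2 * N + 1, 1.0 / (2 * N))
    w[0] = 0.0                                    # center weight for kappa = 0
    return sum(wi * f(p) for wi, p in zip(w, pts))

m, C = np.array([1.0, 2.0]), np.diag([0.5, 2.0])
print(unscented_expectation(lambda t: t, m, C))                       # = m, exact
print(unscented_expectation(lambda t: np.outer(t - m, t - m), m, C))  # = C, exact
```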
### Gaussian Posterior Case
The convergence of different gradient flows, according to the three summary statistics, is presented in Figure 1 (nonparametric density space) and in Figure 2 (Gaussian density space). In both nonparametric and Gaussian density spaces, the imposition of the affine invariance property makes the convergence rate independent of the anisotropy \(\lambda\) and accelerates the sampling for the badly scaled Gaussian (\(\lambda=0.01\)). However, all these gradient flows in the nonparametric density space do not converge to within machine precision because of the limited number of particles. The convergence rates of the Gaussian approximate gradient flows match well with the predictions of Proposition 4.12.
### Logconcave Posterior Case
The convergence of different gradient flows, according to the three summary statistics, is presented in Figure 3 (nonparametric
density space) and in Figure 4 (Gaussian density space). In both nonparametric and Gaussian density spaces, the imposition of the affine invariance property makes the convergence rate independent of the anisotropy \(\lambda\) and accelerates the sampling in the highly anisotropic case (\(\lambda=0.01\)). We observe that the corresponding Gaussian approximate gradient flows can reach lower errors for this case with the present numerical method setup defined in Subsection 5.2. We also observe that the convergence rate of the Gaussian approximate Fisher-Rao gradient flow does not deteriorate with increased anisotropy constant \(\lambda\); this indicates that the convergence rate in Proposition 4.13, for this gradient flow, may not be tight.
### General Posterior Case
We note that the Rosenbrock function is a non-convex function. Although its minimizer is at \([1,1]\), the expectation and covariance of the posterior density function are (See Appendix F)
\[\mathbb{E}[\theta]=\begin{bmatrix}1\\ 11\end{bmatrix}\qquad\text{Cov}[\theta]=\begin{bmatrix}10&20\\ 20&\frac{10}{\lambda}+240\end{bmatrix}.\]
The particles obtained by different nonparametric gradient flows at \(t=15\) are depicted in Figure 5, and their convergence according to the three summary statistics is depicted in Figure 6. Estimated posterior densities (3 standard deviations)
obtained by different Gaussian approximate gradient flows are presented in Figure 7, and their convergence according to the three summary statistics is depicted in Figure 8. For small \(\lambda\) (e.g., \(\lambda=0.01\)), \(\theta^{(2)}\) is the stretch direction, and therefore the imposition of the affine invariance property makes the convergence faster. However, when \(\lambda\) increases, the posterior density concentrates on a manifold with significant curvature (See Figure 5). Although the particle positions match well with the density contours, the convergence of different gradient flows significantly deteriorates; the imposition of affine invariance does not relieve the situation. Furthermore, the Gaussian approximation cannot represent the posterior distribution at all well because the posterior is far from Gaussian.
## 6 Conclusions
### Summary
In this work, we have studied various gradient flows in both nonparametric and Gaussian density spaces for sampling distributions with unknown normalization constants, focusing on the affine invariance property. We introduce the concept of an affine invariant metric and use it to develop general affine invariant Wasserstein, Stein, and Fisher-Rao gradient flows in both nonparametric and Gaussian density spaces. We provide a theoretical analysis of these gradient flows and demonstrate
that affine-invariance effectively improves the efficiency of sampling log-concave distributions. Numerically, we demonstrate that the affine-invariance property can accelerate the convergence of gradient flows for highly anisotropic distributions.
Nevertheless, these strategies may still not perform well for general posterior distributions, in particular for multimodal distributions or distributions that concentrate on manifolds with significant curvature, such as the Rosenbrock function used here. In a companion work [29], we are exploring the direction of approximating the Fisher-Rao gradient flow using Gaussian mixtures; this has the potential to capture multiple dominant modes efficiently. We are also interested in exploring other invariance properties and approximations that could deal with these more complex distributions. In addition, for high dimensional problems, it is of interest to develop a systematic study of model reduction of these gradient flows; see for example the projected Stein and Wasserstein gradient flows studied in [27, 132].
Finally we highlight that, for Bayesian inverse problems [117] the methods developed here do not exploit the structure of the forward problem. This is in contrast to ensemble Kalman based methods which have demonstrable performance advantages for some problems in this class [20]; notably these ensemble methods have derivative free implementations which are favorable in large scale inverse problems. Developing
the analysis of these methods, following the approach in this paper, constitutes an interesting direction for future study.
### Open Technical Problems
Several open technical problems remain unsolved in the work we have presented; we list them here.
1. We study lower bounds on \(C_{t}\) in Lemma 3.2 for the affine invariant Wasserstein gradient flow with \(P_{t}=C_{t}\). We obtain strong results only in dimension \(N_{\theta}=1\). When \(N_{\theta}>1\), determining whether, and if so when, \(C_{t}\) can become singular is an interesting question for further study, in particular under a logconcavity assumption such as Definition 3.24.
2. For the affine invariant Stein gradient flow, study of convergence and convergence rates for general kernel functions is of interest.
3. For the Gaussian approximate Fisher-Rao gradient flow (4.18), in particular under the logconcavity assumption Definition 3.24, we prove a global convergence rate of at least \[e^{-\alpha\min\{\frac{1}{\beta},\lambda_{0,\min}\}t}\] in Proposition 4.13. However, the numerical study showed far superior behavior on one problem class. A sharp bound is of interest. We study this
problem from the perspective of local convergence rate in Proposition 4.15 for \(N_{\theta}=1\). Studying this problem for \(N_{\theta}>1\) is of future interest.
4. For the Gaussian approximate Fisher-Rao gradient flow (4.18), under a logconcavity assumption such as Definition 3.24, we prove that \(C_{t}\) is bounded below and above in Proposition 4.13. Furthermore, studying whether \(C_{t}\) can become singular or unbounded in the general posterior case is of future interest.
**Acknowledgments** YC acknowledges the support from the Air Force Office of Scientific Research under MURI award number FA9550-20-1-0358 (Machine Learning and Physics-Based Modeling and Simulation). DZH and AMS are supported by the generosity of Eric and Wendy Schmidt by recommendation of the Schmidt Futures program; AMS is also supported by the Office of Naval Research (ONR) through grant N00014-17-1-2079 and by a Department of Defense Vannevar Bush Faculty Fellowship. SR is supported by Deutsche Forschungsgemeinschaft (DFG) - Project-ID 318763901 - SFB1294. JH is supported by NSF grant DMS-2054835.
Figure 5: General posterior case: particles obtained by different gradient flows at \(t=15\). Grey lines represent the contour of the true posterior.

## References
2308.04314 | Cooperative Multi-agent Bandits: Distributed Algorithms with Optimal Individual Regret and Constant Communication Costs | Recently, there has been extensive study of cooperative multi-agent multi-armed bandits where a set of distributed agents cooperatively play the same multi-armed bandit game. The goal is to develop bandit algorithms with the optimal group and individual regrets and low communication between agents. The prior work tackled this problem using two paradigms: leader-follower and fully distributed algorithms. Prior algorithms in both paradigms achieve the optimal group regret. The leader-follower algorithms achieve constant communication costs but fail to achieve optimal individual regrets. The state-of-the-art fully distributed algorithms achieve optimal individual regrets but fail to achieve constant communication costs. This paper presents a simple yet effective communication policy and integrates it into a learning algorithm for cooperative bandits. Our algorithm achieves the best of both paradigms: optimal individual regret and constant communication costs. | Lin Yang, Xuchuang Wang, Mohammad Hajiesmaili, Lijun Zhang, John C. S. Lui, Don Towsley | 2023-08-08T15:02:50Z | http://arxiv.org/abs/2308.04314v1 | # Cooperative Multi-agent Bandits: Distributed Algorithms with Optimal Individual Regret and Constant Communication Costs
###### Abstract
Recently, there has been extensive study of cooperative multi-agent multi-armed bandits where a set of distributed agents cooperatively play the same multi-armed bandit game. The goal is to develop bandit algorithms with the optimal group and individual regrets and low communication between agents. The prior work tackled this problem using two paradigms: leader-follower and fully distributed algorithms. Prior algorithms in both paradigms achieve the optimal group regret. The leader-follower algorithms achieve constant communication costs but fail to achieve optimal individual regrets. The state-of-the-art fully distributed algorithms achieve optimal individual regrets but fail to achieve constant communication costs. This paper presents a simple yet effective communication policy and integrates it into a learning algorithm for cooperative bandits. Our algorithm achieves the best of both paradigms: optimal individual regret and constant communication costs.
## 1 Introduction
Recently there has been a surge of interest in various online learning problems in distributed settings, where a set of agents perform individual learning algorithms to complete a common task and can cooperate with each other to improve the performance of the learning process. Distributed online learning is naturally motivated by a broad range of applications where computational resources are geographically distributed, and a group of machines has to communicate with each other to complete a common task cooperatively. Examples include inference engines in a software-defined network, servers in a data center, and drones in a swarm. In distributed online learning settings, agents take actions over time and receive sequential samples associated with the selected actions. While the agents can cooperate to speed up the learning process, it comes at the expense of communication overhead in sharing sequential samples with others. Hence, distributed online learning problems involve a natural trade-off between learning performance and communication overheads.
This paper focuses on studying Cooperative Multi-Agent Multi-Armed Bandit (CMA2B) problems where multiple agents tackle the same instance of a bandit problem. In the standard setting of CMA2B, a set of \(M\) independent agents existing over the entire time horizon pull an arm at each time
from a common set of \(K\) arms. Associated with arms are mutually independent sequences of i.i.d. \([0,1]\)-valued rewards with mean \(0\leq\mu(i)\leq 1\), for arm \(i\in[K]\). Each agent has full access to the set of arms: agents are allowed to pull and receive a reward from any arm in the common set without any reward degradation when pulling the same arm. The goal of each agent is to learn the best arm, with performance characterized by group regret and maximum individual regret according to different application scenarios. In addition to regret, another important metric is the communication overheads that the agents spend in cooperative learning.
The above CMA2B problem is a natural extension of the basic MAB problem [1, 5] in a cooperative multi-agent setting, with extensive recent literature [29, 15, 6, 7, 19, 8, 18, 22, 25, 12, 30, 10, 4, 9, 33, 21, 20, 36, 34, 35, 31]. In terms of solution design, the prior work could be categorized into two paradigms of leader-follower, where a leader agent coordinates the learning process, and fully distributed algorithms, where there is no central coordinator between agents.
In the leader-follower paradigm [27, 23, 28, 26, 32, 30, 3, 8, 10], a leader agent coordinates the learning process among all agents. The state-of-the-art result in this paradigm is the DPE2 algorithm, proposed in [30], which achieves the optimal group regret with a constant number of communication overheads\({}^{1}\). Yet, DPE2 (and all other leader-follower-based algorithms) relies on a structure where the leader solely pays the exploration costs and incurs almost all the regret in the system. Hence, by nature, this paradigm fails to achieve a good individual regret since all the regret is imposed on the leader agent. It is worth noting that in many practical applications, agents' individual regrets are crucial for a system's overall performance. For example, in a drone swarm, the failure/misbehavior of a single drone, e.g., one that crashes into other drones, can dramatically degrade the whole system's overall performance; or in network measurement, the slowest inference engine determines how fast the network parameters, e.g., traffic flows and channel bandwidths, are learned.
Footnote 1: Constant communication cost in this paper means that the cost is independent of the time horizon \(T\).
An alternative approach is to remove the leader as the central coordinator and design fully distributed cooperative algorithms. While there has been success in achieving the optimal group and individual regrets for fully distributed algorithms, they still fail to achieve low communication overhead, such as that of the leader-follower-based algorithms. Early works in this space, e.g., [6, 34, 35], adopted immediate broadcasting as their communication scheme, incurring a high communication cost of \(O(T)\). More recent works [22, 32, 9] improved the communication overhead of the cooperative algorithms to \(O(\log T)\) by optimizing the use of the communication budget. The state-of-the-art in this line of work is the UCB-TCOM algorithm [31], which achieves the optimal individual regret of \(O((K/M)\log T)\) with communication cost of \(O(KM\log\log T)\). Despite the above efforts, prior to this work, no existing algorithm, either based on leader-follower or fully distributed, achieves optimal group and individual regret with constant communication costs.
Besides the literature on distributed multi-agent bandits, there is a line of works on batched bandits [24, 13, 11, 16, 17] that relate to CMA2B. In batched bandits, the time horizon is separated into several batches, and the reward observations of pulling arms during each batch are only revealed at the end of the batch. This scheme is similar to the distributed bandits, where the observations of other agents after the last communication are only revealed at these agents' next communication. Therefore, the batched bandits algorithm can straightforwardly adapt to our multi-agent bandits setting. The current state-of-the-art batched algorithm requires \(O(\log\log T)\) batches to attain the near-optimal problem-dependent regret bound [16]. That is, directly transferring their algorithms to the distributed setting
\begin{table}
\begin{tabular}{|c|l l l|} \hline
**Algorithm** & **Fully distributed** & **Individual regret** & **Communication cost** \\ \hline DPE2 [30] & No (leader-follower) & \(O(K\log T)\) & \(O(K^{2}M^{2}\Delta^{-2})\) \\ ComEx [20] & Yes & \(O(K\log T)\) & \(O(KM\log T)\) \\ GosInE [9] & Yes & \(O((KM+2)\log T)\) & \(\Omega(\log T)\) \\ Dee\_UCB [36] & Yes & \(O((K/M)\log T)\) & \(O(MT)\) \\ UCB-TCOM [31] & Yes & \(O((K/M)\log T)\) & \(O(KM\log\log T)\) \\ \hline \hline DoE-bandit (**this work**) & Yes & \(O((K/M)\log T)\) & \(O(KM\log\Delta^{-1})\) \\ \hline \end{tabular}
\end{table}
Table 1: A comparison summary of prior literature and this work. Note that all algorithms in this table achieve the optimal group regret of \(O(K\log T)\). Hence, we only compare the individual regrets and communication costs of different algorithms.
leads to \(O(\log\log T)\) communication costs. In contrast, our work shows a constant communication cost is enough to guarantee the optimal individual and group regrets.
### Contributions
This paper presents DoE-bandit, the first fully distributed algorithm that guarantees the optimal group and maximum individual regrets with constant communication costs (see Theorem 1). Specifically, DoE-bandit achieves an \(O(\sum_{i:\Delta_{i}>0}\log T/\Delta_{i})\) group regret and an \(O((1/M)\sum_{i:\Delta_{i}>0}\log T/\Delta_{i})\) maximum individual regret, where \(\Delta_{i}\) is the gap of reward means between the optimal arm and the \(i\)-th one. Further, DoE-bandit achieves the constant communication complexity of \(O(KM\log(1/\Delta))\), where \(\Delta=\min_{i}\Delta_{i}\). A summary of our results and the most relevant prior work is given in Table 1.
To achieve the above results, DoE-bandit leverages a communication policy called Distributed Online Estimation (DoE), which is the main algorithmic contribution of this work. The key idea behind DoE is to determine the synchronization frequency for sharing agents' local empirical estimates so that the quality of estimates is maintained compared to the full cooperation policy. Full cooperation is equivalent to the existence of a _centralized estimator_ with the best possible estimate using all agents' samples. With limited cooperation, individual agents have access only to their locally observed samples, which could cause intrinsic deviations from the centralized estimator. DoE measures the deviations precisely and uses them as an indicator to trigger a communication round. That is, DoE may urge the agents to communicate with others to synchronize the estimates of the mean of arms once it realizes that the deviation is large. By controlling the deviation within a proper margin, DoE guarantees the optimal learning performance, i.e., the one achievable by the centralized estimator, for individual agents with low communication overheads. By plugging DoE into an elimination-based bandit algorithm, we derive DoE-bandit, which improves the state-of-the-art results for the CMA2B problem. Finally, using real datasets, we include experiments that demonstrate the improved performance of DoE-bandit compared to all benchmark algorithms listed in Table 1.
## 2 Problem Description
In the following, we introduce a basic multi-agent multi-armed bandit system model. We note that the communication policy developed in this paper is generic and could be applied to a broad range of cooperative online learning settings.
Consider a multi-agent stochastic bandit setting with a set \(\mathcal{M}=\{1,\ldots,M\}\) of independent agents existing over the entire time period, and a set \(\mathcal{K}=\{1,2,\ldots,K\}\) of arms. Associated with the arms are mutually independent sequences of i.i.d. \([0,1]\)-valued (e.g., Bernoulli) rewards with mean \(0\leq\mu(i)\leq 1\), for arm \(i\in\mathcal{K}\). Agent \(j\in\mathcal{M}\) has full access to the set of arms and is allowed to pull and receive a reward from any arm in \(\mathcal{K}\). Note that for ease of presentation, we focus on a basic model formulation where agents reside on a complete graph, incur no communication delays, and communication is lossless. However, the basic model and communication policy proposed in this paper could be extended to account for these practical considerations.
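As a concrete illustration of this reward model, the following is a minimal sketch in Python; the class name, interface, and use of NumPy are our own illustrative choices and not part of the paper.

```
import numpy as np

# Minimal sketch of the CMA2B reward model described above: K arms with
# [0,1]-valued (here Bernoulli) rewards, sampled by M agents per time slot.
class MultiAgentBernoulliBandit:
    def __init__(self, means, num_agents, seed=0):
        self.means = np.asarray(means)      # mu(i) for each arm i, in [0, 1]
        self.M = num_agents
        self.rng = np.random.default_rng(seed)

    def pull(self, arms):
        """arms[j] is the arm pulled by agent j; returns one reward per agent."""
        assert len(arms) == self.M
        return self.rng.binomial(1, self.means[np.asarray(arms)]).astype(float)

# Example: K = 3 arms, M = 2 agents, and both agents pull arm 0 in this slot.
env = MultiAgentBernoulliBandit(means=[0.9, 0.5, 0.4], num_agents=2)
rewards = env.pull([0, 0])
```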
In bandit learning, the goal of each agent \(j\) is to learn the best arm as fast as possible while minimizing the _pseudo-regret_ (called _regret_ for short in the rest of this paper). The expected regret of an agent \(j\) is formally defined as
\[\mathbb{E}\left[R_{T}^{j}\right]:=\mu(i^{*})T-\mathbb{E}\left[\sum\nolimits_{ t=1}^{T}x_{t}(I_{t}^{j})\right],\]
where \(i^{*}\) is the optimal arm, \(I_{t}^{j}\) is the action taken by agent \(j\) at round \(t\), and \(x_{t}(I_{t}^{j})\) is the realized reward. Also, the expectation is taken over the randomness of stochastic rewards and the algorithm's (agents') decisions. In a multi-agent setting, the total performance is measured by the total expected regret of all agents, defined as
\[\mathbb{E}\left[R_{T}\right]:=\sum\nolimits_{j\in\mathcal{M}}\mathbb{E}\left[R _{T}^{j}\right].\]
In addition to the group regret, which characterizes overall performance, the individual performance of each agent is also important. To capture this individual performance, we measure the maximum individual regret defined as follows,
\[\mathbb{E}\left[\bar{R}_{T}\right]:=\max_{j\in\mathcal{M}}\mathbb{E}\left[R_{T }^{j}\right].\]
Similar to other distributed learning problems, the multi-agent MAB setting encourages distributed agents to cooperate with each other by sharing information through _messages_, which include reward observations, reward averages, or arm indices. We assume any message can be communicated within a single time slot. The total number of messages communicated among these agents quantifies the communication complexity of an algorithm.
## 3 Algorithm
This section presents an algorithm that adds a Distributed Online Estimation (DoE) subroutine to each learning agent \(j\), enabling the agents to approximate the estimate of the optimal centralized algorithm that has access to all samples when estimating the parameter of a common i.i.d. process. We introduce the details of the DoE algorithm in Section 3.1 and then integrate it into a bandit algorithm in Section 3.2.
### Distributed Online Estimation Algorithm (DoE)
To facilitate the presentation of the high-level idea of DoE, let us focus on a simplified setting that involves only one arm \(i\) whose reward mean \(\mu(i)\) is unknown to the distributed agents that sample the process simultaneously in each slot. Since each agent possesses the same number of pulls, we denote by \(n_{t}(i)\) the number of samples available to each agent up to time \(t\). The idea of DoE is to synchronize the estimates of the distributed agents when the local estimates deviate substantially from the centralized one with all samples. By properly configuring DoE, the deviation of the individual agents' estimates is efficiently controlled while incurring low communication costs.
More specifically, during run time, DoE adopts a thresholding policy to decide whether to trigger a communication round in which agents exchange messages with each other to synchronize their estimates with all samples in the system. To decide whether to start a communication round, each agent maintains the so-called _Common Mean_ (CM), the mean over all system-wide available samples at the last communication round, and simply compares CM with the _Auxiliary Local Estimate_ (ALE, details in Section 3.1.1).
The value of CM is calculated by averaging all samples at the last communication round; hence, its value is updated only once per communication round and remains unchanged in the subsequent non-communication rounds. The value of CM at time \(t\) is denoted as \(\hat{\mu}_{\text{con},t}(i)\). At specific time slots, each agent checks whether the gap between CM and ALE is smaller than some threshold value. In DoE, all agents share a common threshold value, which can be time-varying with the number of available samples, \(n_{t}(i)\); thus, the threshold value at time \(t\) is denoted as \(G_{n_{t}(i)}\). If the gap between ALE and CM is larger than the threshold value, a new communication round is triggered to synchronize the estimates. In that case, the sum of new samples from the other agents is collected, a new common mean is calculated, and then the agent broadcasts the new CM to all others.
In DoE, the threshold value \(G_{n_{t}(i)}\) plays a key role in controlling estimate deviations and communication overheads. Intuitively, when the ALEs of the individual agents center around the common mean, the actual estimates of all agents center around CM as well, and no communication is needed. Otherwise, a communication round is triggered to synchronize the estimates of all agents. Hence, the threshold value determines how far the estimates may deviate from each other during the non-communication rounds; the smaller the threshold, the smaller the deviations, and the closer the estimates of each agent approach the mean over all samples. On the other hand, with smaller threshold values \(G_{n_{t}(i)}\), agents communicate more frequently with each other. Hence, the trade-off between estimation performance and communication overheads is governed by \(G_{n_{t}(i)}\).
In the following, we present the technical details of the DoE algorithm and first show how to construct the estimate interval for each agent by using local estimates.
#### 3.1.1 Constructing the Auxiliary Local Estimates (ALE)
At a non-communication round \(t\), an agent only has access to part of the other agents' samples. Below, we introduce how an agent builds up the Auxiliary Local Estimate despite the missing samples from others.
Note that \(n_{t}(i)\) is the number of samples that an agent has collected for arm \(i\) up to time slot \(t\). Let \(t_{\texttt{last}}\) denote the last communication round before \(t\), and \(X_{t}^{j}(i)\) be the sum of rewards from the \(n_{t}(i)\) samples of agent \(j\) at time slot \(t\) for arm \(i\). For agent \(j\), there are \(n_{t}(i)-n_{t_{\texttt{last}}}(i)\) missing samples from each other agent. In DoE, agent \(j\) uses its own local samples in the same time slots to compensate for the missing samples from other agents and construct ALE, denoted by \(\hat{\mu}^{j}_{\text{aux},t}(i)\). That is
\[\hat{\mu}^{j}_{\text{aux},t}(i)=\frac{1}{Mn_{t}(i)}\sum_{j^{\prime}=1}^{M}\left( X^{j^{\prime}}_{t_{\text{last}}}(i)+X^{j}_{t}(i)-X^{j}_{t_{\text{last}}}(i)\right) \tag{1}\]
where the term \(X^{j}_{t}(i)-X^{j}_{t_{\text{last}}}(i)\) serves as the compensation for the missing samples from agent \(j^{\prime}\) from \(t_{\text{last}}\) to \(t\). In DoE, ALE mimics the estimate of the centralized estimator that possesses all \(Mn_{t}(i)\) samples, and it serves as an index through which the agents decide when to communicate.
Note that ALE up-weights the agent's own recent samples to compensate for the missing ones, so it may involve a larger estimation error. Hence, in addition to ALE, each agent \(j\) calculates the local estimate \(\hat{\mu}^{j}_{t}(i)\), to be used in the bandit algorithm, via the following equation.
\[\hat{\mu}^{j}_{t}(i)=\frac{1}{Mn_{t_{\text{last}}}(i)+n_{t}(i)-n_{t_{\text{ last}}}(i)}\left(\sum_{j^{\prime}=1}^{M}X^{j^{\prime}}_{t_{\text{last}}}(i)+X^{j}_{t} (i)-X^{j}_{t_{\text{last}}}(i)\right) \tag{2}\]
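To make Eqs. (1) and (2) concrete, the following Python sketch transcribes both estimates for a single arm from the perspective of agent \(j\); all function and variable names are our own.

```
# Direct transcription of Eqs. (1) and (2) for one arm i and one agent j.
def aux_local_estimate(X_last, X_j, X_j_last, n, M):
    """Eq. (1): the ALE, compensating every agent's missing samples.

    X_last   : list of reward sums X^{j'}_{t_last}(i) over all M agents
    X_j      : this agent's current reward sum X^{j}_{t}(i)
    X_j_last : this agent's reward sum at the last communication round
    n        : number of samples per agent up to time t, i.e., n_t(i)
    """
    compensation = X_j - X_j_last  # stands in for each other agent's new samples
    return sum(x + compensation for x in X_last) / (M * n)

def local_estimate(X_last, X_j, X_j_last, n, n_last, M):
    """Eq. (2): the estimate actually fed to the bandit algorithm."""
    total = sum(X_last) + (X_j - X_j_last)
    return total / (M * n_last + n - n_last)
```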
#### 3.1.2 Communication Policy of DoE
Now with the definition of ALE, we present the communication policy of DoE; its pseudocode is summarized in Algorithm 1. To decide on a communication round, an agent \(j\) checks the values of \(\hat{\mu}^{j}_{\text{aux},t}(i)\) and \(\hat{\mu}_{\text{con},t}(i):=(\sum_{j=1}^{M}X^{j}_{t_{\text{last}}}(i))/(Mn_{t_{\text{last}}}(i))\) every time the specified threshold value, i.e., \(G_{n_{t}(i)}\), reduces to \(1/\beta\) (\(\beta>1\)) of its previous value (Lines 6, 7 in Algorithm 1). In DoE, \(\beta\) determines how frequently the algorithm checks these values. Once the deviation of the local estimate \(\hat{\mu}^{j}_{\text{aux},t}(i)\) from the common mean, \(\hat{\mu}_{\text{con},t}(i)\), is larger than the specified threshold value, \(G_{n_{t}(i)}\), agent \(j\) triggers a new communication round. In a communication round triggered by agent \(j\), the sum of the samples missing since the last communication round \(t_{\text{last}}\) is collected from each other agent to calculate a new common mean. Then, this new common mean is broadcast to all other agents.
```
1:Parameters:\(\beta>1\); \(G_{n},n=1,2,\dots\)
2:Variables:\(\hat{\mu}^{j}_{\text{aux}}(i)\gets 0\), \(n(i)\gets 0\), \(\hat{\mu}_{\text{con}}(i)\gets 0\), \(G_{\text{last}}\gets G_{1}\); \(X^{j^{\prime}}(i)\gets 0\), \(X^{j^{\prime}}_{\text{last}}(i)\gets 0\), \(\forall j^{\prime}\in\mathcal{M}\)
3:for each round \(t\) when the agent gets a new sample do
4:\(n(i)\gets n(i)+1\)
5: Update \(X^{j}(i)\) with the new sample
6:if\(\beta G_{n(i)}\leq G_{\text{last}}\)then
7:\(G_{\text{last}}\gets G_{n(i)}\)
8:if\(|\hat{\mu}^{j}_{\text{aux}}(i)-\hat{\mu}_{\text{con}}(i)|>G_{n(i)}\)then
9://Communicate to synchronize the estimates
10:\(\quad\) - Collect \(X^{j^{\prime}}(i)\) from other agents and calculate the new \(\hat{\mu}_{\text{con}}(i)\)
11:\(\quad\) - Broadcast the new \(\hat{\mu}_{\text{con}}(i)\) to other agents
12:\(\quad\) - \(X^{j^{\prime}}_{\text{last}}(i)\gets X^{j^{\prime}}(i)\) for all \(j^{\prime}\in\mathcal{M}\)
13:endif
14:endif
15: Update \(\hat{\mu}^{j}_{\text{aux}}(i)\) according to Eq. (1) and \(\hat{\mu}^{j}(i)\) according to Eq. (2)
16:endfor
```
**Algorithm 1** DoE: an algorithm for estimating the mean of arm \(i\) by agent \(j\), subscript \(t\) is dropped
Our analysis in Lemma 1 shows that DoE can provide a provable performance guarantee for the single-arm-estimation problem (in the form of confidence interval) with a tunable trade-off between the estimation quality and communication overheads. With a richer communication budget, the estimation performance of DoE approaches that of the optimal estimator with full access to the samples. Since DoE can provide an explicit confidence interval for the mean to be estimated, it is straightforward to plug DoE into bandit algorithms, as exemplified in the next section.
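To illustrate the policy, the following Python sketch mirrors the main loop of Algorithm 1 for one agent and one arm; `collect_from_peers` and `broadcast` are hypothetical stand-ins for the actual message passing, and the threshold follows the choice \(G_{n}=\alpha\text{CI}(Mn,\delta)\) used in Section 3.2.

```
import math

# Sketch of the DoE communication policy for one agent and one arm; the
# callbacks abstract away the message passing between agents.
class DoE:
    def __init__(self, alpha, beta, delta, M, G1=1.0):
        self.alpha, self.beta, self.delta, self.M = alpha, beta, delta, M
        self.G_last = G1     # threshold at the last check
        self.mu_con = 0.0    # common mean from the last communication round

    def threshold(self, n):
        # G_n = alpha * CI(M n, delta), with CI from Eq. (3)
        return self.alpha * math.sqrt(math.log(1.0 / self.delta) / (2 * self.M * n))

    def step(self, n, mu_aux, collect_from_peers, broadcast):
        G_n = self.threshold(n)
        if self.beta * G_n <= self.G_last:       # Line 6: check only sporadically
            self.G_last = G_n
            if abs(mu_aux - self.mu_con) > G_n:  # Line 8: deviation too large
                reward_sum, num_samples = collect_from_peers()  # Line 10
                self.mu_con = reward_sum / num_samples
                broadcast(self.mu_con)           # Line 11
                return True                      # a communication round happened
        return False
```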
### Integrating DoE to a Bandit Learning Algorithm
In this section, we present a distributed bandit algorithm named DoE-bandit that uses DoE as the underlying communication policy. We summarize the pseudocode of DoE-bandit in Algorithm 2.
DoE-bandit is based on active arm elimination, a classic approach to address the well-known tradeoff between exploration (acquiring new information) and exploitation (optimizing based on available information) in bandit problems. In this approach, the learner constructs a _candidate set_ of arms that are likely to be optimal, and exploration is allowed only among the arms in the candidate set. The algorithm repeatedly pulls the arms in the candidate set and dynamically eliminates the arms that are unlikely to be optimal.
To integrate DoE with the bandit algorithm, DoE-bandit runs multiple instances of DoE, each of which tackles the estimation of a single arm. To implement the DoE subroutine, each agent notifies the others once an arm is eliminated (Line 17 in Algorithm 2) and pulls arms in the candidate set in a round-robin manner (Line 7), so that all agents always pull the same arm in each time slot and DoE is able to keep track of the total number of samples in the system as \(Mn_{t}(i)\) for all agents. The above rules imply that all agents share a common candidate set, which is denoted by \(\mathcal{C}_{t}\).
_Constructing the candidate set._ To construct the candidate set, DoE-bandit determines an explicit confidence interval for the reward means of arms. Define CI\((n,\delta)\) as the radius of the confidence interval for the reward process with \(n\) samples and confidence level \(1-\delta\). If the reward process is \([0,1]\)-valued, we define
\[\text{CI}(n,\delta)=\sqrt{\frac{\log\delta^{-1}}{2n}}, \tag{3}\]
where \(\delta\) specifies the violation probability that the true mean lies outside the above confidence interval. As mentioned, the threshold value \(G_{n_{t}(i)}\) in DoE determines the deviation of the estimates of individual agents from the optimal one with all samples. Hence, in order to guarantee that the distributed agents achieve the same order of convergence rate as the optimal estimator, we set \(G_{n_{t}(i)}\) according to the confidence interval with the total of \(Mn_{t}(i)\) samples. By setting \(G_{n_{t}(i)}=\alpha\text{CI}(Mn_{t}(i),\delta)\) where \(\alpha>0\), DoE yields a confidence interval for the mean of arm \(i\) whose radius is \((2\alpha\beta+\beta)\text{CI}(Mn_{t}(i),\delta)\) (see Lemma 1 for the detailed derivation). With the above result, an arm \(i\) is eliminated by agent \(j\) from the candidate set \(\mathcal{C}_{t}\) at time \(t\) if there exists an arm \(i^{\prime}\in\mathcal{C}_{t}\) such that
\[\hat{\mu}_{t}^{j}(i)+(2\alpha\beta+\beta)\text{CI}(Mn_{t}(i),\delta)<\hat{\mu }_{t}^{j}(i^{\prime})-(2\alpha\beta+\beta)\text{CI}(Mn_{t}(i^{\prime}),\delta). \tag{4}\]
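A compact Python sketch of this elimination test, transcribing Eqs. (3) and (4); the dictionary-based bookkeeping is our own choice.

```
import math

def ci(n, delta):
    """Eq. (3): confidence radius for a [0,1]-valued process with n samples."""
    return math.sqrt(math.log(1.0 / delta) / (2 * n))

def surviving_arms(candidates, mu_hat, n, M, alpha, beta, delta):
    """Keep arm i unless some arm i' dominates it in the sense of Eq. (4)."""
    width = 2 * alpha * beta + beta
    ucb = {i: mu_hat[i] + width * ci(M * n[i], delta) for i in candidates}
    lcb = {i: mu_hat[i] - width * ci(M * n[i], delta) for i in candidates}
    best_lcb = max(lcb.values())
    return [i for i in candidates if ucb[i] >= best_lcb]
```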
## 4 Theoretical Results for Regret and Communication Cost
In this section, we summarize the theoretical results, through which we show that DoE-bandit can achieve the same order of regret as the optimal centralized algorithm while incurring only constant communication overheads.
### Main Results
The following lemma shows the performance of DoE with the upper bound of estimation error proportional to the radius of the confidence interval with system-wide samples. Then, we summarize the results for DoE-bandit in Theorem 1.
**Lemma 1**.: _Assume \(M\) agents independently sample an arm with an i.i.d. reward process with unknown mean \(\mu(i)\), and let \(n_{t}(i)\) be the number of samples available to each agent up to time slot \(t\). With \(\beta>1\) and \(G_{n_{t}(i)}=\alpha\text{CI}(Mn_{t}(i),\delta)\), for any \(t\), with probability \(1-\delta\), we have_
\[|\hat{\mu}_{t}^{j}(i)-\mu(i)|\leq(2\alpha\beta+\beta)\text{CI}(Mn_{t}(i),\delta).\]
**Theorem 1**.: _Let \(\text{CI}_{[0,1]}(n,\delta)\) in Eq. (3) with \(1\geq\delta>0\) be the radius of the confidence interval of a \([0,1]\)-valued i.i.d. process with \(n\) samples. Set \(\beta>1\) and \(G_{n}=\alpha\min\{1,\text{CI}_{[0,1]}(Mn,\delta)\}\), where \(\alpha>0\). DoE-bandit achieves the following performance. (i) (Group Regret)_
\[\mathbb{E}\left[R_{T}\right] = O\left(\sum_{i:\Delta_{i}>0}\frac{8(2\alpha+1)^{2}\beta^{2}\log \delta^{-1}}{\Delta_{i}}+\frac{KM^{3}\tau^{2}T\delta}{2}\right), \tag{5}\]
_where \(\tau:=\frac{8(2\alpha+1)^{2}\beta^{2}\log\delta^{-1}K}{\Delta^{2}}\). (ii) (Maximum individual regret)_
\[\mathbb{E}\left[\bar{R}_{T}\right] = O\left(\sum_{i:\Delta_{i}>0}\frac{8(2\alpha+1)^{2}\beta^{2}\log \delta^{-1}}{M\Delta_{i}}+\frac{KM^{2}\tau^{2}T\delta}{2}\right). \tag{6}\]
_(iii) (Communication costs) The expected number of messages sent by all agents running DoE-bandit satisfies the following upper bound._
\[\sum_{i:\Delta_{i}>0}6M\log_{\beta}\left(\frac{4(2\alpha\beta+\beta)}{\Delta_ {i}}\right)+\frac{KM^{3}\tau^{2}T\delta}{2}+M(K-1). \tag{7}\]
In what follows, we sketch the high-level idea of the proof of Theorem 1 for both the regret and the communication costs of DoE-bandit; a formal proof is given in Appendix A. In Section 3, we tailor the elimination-based strategy in DoE-bandit such that all agents pull arms in a synchronized manner. Hence, each agent can always track the number of pulls of any arm \(i\) as \(Mn_{t}(i)\). In this way, CMA2B decomposes into multiple distributed online estimation problems, each of which can be solved by DoE separately. In Lemma 1, we show that the DoE subroutine builds a confidence interval for the mean reward of an arm (involving an additional constant factor compared to the one in single-agent settings where all samples are always available). Then, we can prove the regret bound in Theorem 1 by applying the results of Lemma 1 to the standard analysis for a multi-armed bandit problem. Specifically, the expected number of pulls when the true mean of an arm lies outside of the characterized confidence interval is very small; thus, the probability that DoE-bandit eliminates the optimal arm is low. In other words, the major part of the regret of DoE-bandit is incurred during the elimination phase, when the candidate set contains more than one arm. Upper bounding the length of this elimination phase yields the regret results in Eqs. (5) and (6).
To prove the communication cost, we highlight the fact that agents communicate the mean of arm \(i\) only when the radius of the confidence interval provided by DoE is larger than \(\Delta_{i}/2\) (such that the investigated arm remains in the candidate set). Combining this with the rule of DoE that agents communicate only when the radius of the confidence interval reduces to \(1/\beta\) of its previous value, we prove the bound on the communication cost.
### Discussion
In the following, we discuss several remarks regarding the significance of our results.
Optimality. When \(\delta=O(1/T^{s})\), \(s\geq 2\), the second term in Eqs. (5) and (6) becomes constant. Hence, we recover an \(O(\sum_{i:\Delta_{i}>0}(1/\Delta_{i})\log T)\) group regret and an \(O(\sum_{i:\Delta_{i}>0}(1/\Delta_{i})\log T/M)\) individual regret for the distributed bandit problem, implying that the proposed algorithm attains both the (order-) optimal group and maximum individual regrets. In the meantime, the second term in Eq. (7) can likewise be dropped, and thus DoE-bandit incurs constant \(O(MK\log(1/\Delta))\) communication costs. This result substantially improves the state-of-the-art result for fully distributed bandit algorithms (see Table 1).
Influence of \(\alpha\) and \(\beta\). Eq. (7) shows how the parameters \(\alpha\) and \(\beta\) trade off estimation quality against communication overheads. Generally speaking, \(\beta\) specifies the frequency at which DoE checks the deviation of individual estimates, directly upper bounding the communication overheads of DoE-bandit. Hence, \(\beta\) has a larger influence on the communication overhead bound in Theorem 1 than \(\alpha\). On the other hand, \(\alpha\) specifies the radius of the estimate interval, i.e., the threshold for the estimate deviation that triggers an actual communication demand; thus, the influence of \(\alpha\) is more reflected in the empirical performance. In fact, the empirical communication complexity of DoE-bandit can be much better than the theoretical bound, since agents running DoE-bandit start a communication round in an on-demand manner, i.e., only when their estimates deviate substantially from each other. For example, if the investigated process is benign, our algorithm can empirically achieve much lower communication overheads than those works whose communication policies fail to adapt to dynamic environments.
Results for other i.i.d. processes. DoE-bandit triggers a communication round based on the variation of the threshold, with the communication overheads on a suboptimal arm \(i\) being approximately \(O(\log_{\beta}(G_{1}/G_{n_{T}(i)}))\). In DoE-bandit, the threshold value is set based on that of the confidence interval with all samples (up to a tunable parameter \(\alpha\)). For a Bernoulli process, the mean always lies in \([0,1]\); hence, we can set \(G_{1}=1\), which results in \(O(\log(1/G_{n_{T}(i)}))\) or \(O(\log(1/\Delta))\) communication overheads. We note that, with slight modifications, the DoE-bandit algorithm can tackle other i.i.d. processes with similar results. For an i.i.d. process with an unbounded mean, such as a Gaussian process, DoE-bandit may choose to start a communication round only when the size of the confidence interval shrinks to \(O(\sqrt{M})\). This does not degrade the regret results guaranteed in Theorem 1, since the algorithm only has to spend on average \(O(\log T)\) samples on shrinking the confidence intervals of all arms, with an increase of \(O(K\log T)\) regret. On the other hand, the communication overheads are only \(O(\log(\sqrt{M}/\Delta))\), since \(G_{1}\) can be set to \(O(\sqrt{M})\).
## 5 Numerical Results
In this section, we conduct numerical experiments to corroborate the performance of the DoE-bandit algorithm. We aim to highlight the advantage of DoE-bandit in group and individual regrets and in communication costs over state-of-the-art baselines.
Figure 1: DoE-bandit (this work) vs. baseline algorithms listed in Table 1
Experimental Setups and Baseline Algorithms. We consider a multi-agent bandit setting with \(K=100\) arms, \(M=50\) agents, and a horizon of \(T=30\)k rounds, where each arm is associated with a Bernoulli distribution whose mean is randomly drawn from the click-through rates in Ad-Clicks [2]. In the DoE-bandit algorithm, we set parameters \(\alpha=1,\beta=3\) and \(\delta=1/T^{2}\). We run \(50\) trials of each experiment and plot the means as lines and their standard deviations as shaded regions.
We compare the regret and communication costs of DoE-bandit with five baselines (ComEx[20], GosInE[9], Dec_UCB[36], DPE2[30], and UCB-TCOM[31]) outlined in Table 1. We note that some of the baseline algorithms are developed for a set of agents that are connected through an underlying graph topology. Hence, to make the comparison fair, we consider a complete graph for all algorithms so that any two agents can communicate.
Experimental Results. Figure 1 reports the comparison results. Figure 1(a) shows that DoE-bandit achieves the smallest communication costs among all algorithms. Note that DoE-bandit and DPE2 are the only two algorithms with constant communication costs, better than the others and matching the theoretical results in Table 1. Figure 1(b) reports the group regrets of the algorithms. The results show that DoE-bandit is not as good as DPE2, ComEx, and UCB-TCOM. This is because DoE-bandit is based on the arm-elimination policy while the others are UCB-like algorithms; it is known that, with the same order-wise regret performance, UCB algorithms are generally empirically better than elimination-based ones [14, §6]. Figure 1(c) reports the maximum individual regrets of the agents. UCB-like algorithms still perform better than the others. However, DPE2, the other algorithm with constant communication cost, suffers poor individual regret since it leverages a leader-follower structure to complete the learning task, and the leader agent incurs a high individual regret.
To further investigate the communication costs of DoE-bandit, in Figure 2 we report the communication costs of DoE-bandit in comparison to DPE2, the only alternative with constant communication costs, under a variety of parameter settings. We study the impact of three parameters on the communication costs of DoE-bandit and DPE2: (1) the reward gap \(\Delta\) between arms in Figure 2(a); (2) the number of agents \(M\) in Figure 2(b); and (3) the number of arms \(K\) in Figure 2(c). The y-axis (log scale) of these three figures shows the final cumulative communication costs at the end of the time horizon. In all figures, the communication cost of DoE-bandit is always better than DPE2's. Figure 2(a) shows that when \(\Delta\) decreases, the communication costs of DoE-bandit change only slightly while DPE2's increase. This is because the communication cost of DoE-bandit is \(O(KM\log\Delta^{-1})\), which is much better than DPE2's \(O(K^{2}M^{2}\Delta^{-2})\). Last, the communication costs of both DoE-bandit and DPE2 increase as the number of agents \(M\) (Figure 2(b)) or arms \(K\) (Figure 2(c)) increases. This corroborates the dependency of their communication cost upper bounds on \(K\) and \(M\).
## 6 Conclusions
This paper presented DoE-bandit, a fully distributed algorithm for the cooperative multi-agent multi-armed bandits problem. The proposed algorithm achieves the optimal group and individual regrets with constant communication overhead. The theoretical claims are verified by numerical experiments, which show that DoE-bandit outperforms prior algorithms.
Figure 2: Communications: DoE-bandit vs. DPE2
The core communication policy proposed in this paper could be further extended in multiple directions. First, to address the exploitation-exploration dilemma in bandit learning, DoE-bandit adopts an elimination-based strategy to determine the arms to be pulled. The elimination-based strategy is believed to be less practically efficient than alternatives such as the UCB (Upper Confidence Bound) and TS (Thompson Sampling) strategies; this phenomenon is also observed in the multi-agent multi-armed bandit setting (see our experimental results in Figure 1). Hence, it is meaningful to develop a UCB/TS-based algorithm that achieves better practical performance while guaranteeing the same optimal theoretical results claimed in this work. Second, one can extend the work to capture more practical concerns, such as an underlying communication topology for the agents, communication delays between agents, and lossy communication between agents. |
2301.08025 | Generalization through Diversity: Improving Unsupervised Environment
Design | Agent decision making using Reinforcement Learning (RL) heavily relies on
either a model or simulator of the environment (e.g., moving in an 8x8 maze
with three rooms, playing Chess on an 8x8 board). Due to this dependence, small
changes in the environment (e.g., positions of obstacles in the maze, size of
the board) can severely affect the effectiveness of the policy learned by the
agent. To that end, existing work has proposed training RL agents on an
adaptive curriculum of environments (generated automatically) to improve
performance on out-of-distribution (OOD) test scenarios. Specifically, existing
research has employed the potential for the agent to learn in an environment
(captured using Generalized Advantage Estimation, GAE) as the key factor to
select the next environment(s) to train the agent. However, such a mechanism
can select similar environments (with a high potential to learn) thereby making
agent training redundant on all but one of those environments. To that end, we
provide a principled approach to adaptively identify diverse environments based
on a novel distance measure relevant to environment design. We empirically
demonstrate the versatility and effectiveness of our method in comparison to
multiple leading approaches for unsupervised environment design on three
distinct benchmark problems used in literature. | Wenjun Li, Pradeep Varakantham, Dexun Li | 2023-01-19T11:55:47Z | http://arxiv.org/abs/2301.08025v2 | # Effective Diversity in Unsupervised Environment Design
###### Abstract
Agent decision making using Reinforcement Learning (RL) heavily relies on either a model or simulator of the environment (e.g., moving in an 8x8 maze with three rooms, playing Chess on an 8x8 board). Due to this dependence, small changes in the environment (e.g. positions of obstacles in the maze, size of the board) can severely affect the effectiveness of the policy learnt by the agent. To that end, existing work has proposed training RL agents on an adaptive curriculum of environments (generated automatically) to improve performance on out-of-distribution (OOD) test scenarios. Specifically, existing research has employed the potential for the agent to learn in an environment (captured using Generalized Advantage Estimation, GAE) as the key factor to select the next environment(s) to train the agent. However, such a mechanism can select similar environments (with a high potential to learn) thereby making agent training redundant on all but one of those environments. To that end, we provide a principled approach to adaptively identify diverse environments based on a novel distance measure relevant to environment design. We empirically demonstrate the versatility and effectiveness of our method in comparison to multiple leading approaches for unsupervised environment design on three distinct benchmark problems used in literature.
## 1 Introduction
Deep Reinforcement Learning (DRL) has had many successes in challenging tasks (e.g. Atari [18], AlphaGo [17], solving the Rubik's Cube [1], chip design [14]) in the last decade. However, DRL agents have been shown to be brittle and often fail to transfer well to environments only slightly different from those encountered during training [19, 12], e.g., changing the size of the maze, the background in a game, or the positions of obstacles. To achieve robust and generalizing policies, agents must be exposed to different types of environments that present different challenges [14]. To that end, existing research has proposed Unsupervised Environment Design (UED), which creates a curriculum (or distribution) of training scenarios that is adaptive with respect to the agent policy [13, 14, 15, 16, 17, 18]. Following the terminology in existing works, we will refer to a particular environment configuration as a _level_, e.g., an arrangement of blocks in the maze, the shape/height of the obstacles in front of the agent, the geometric configuration of racing tracks, etc.
_Related work on UED:_ Existing works on UED can be categorized along two threads. The first thread has focused on principled generation of levels and was pioneered by the Protagonist Antagonist Induced Regret Environment Design (PAIRED, [1]) algorithm. PAIRED introduced a multi-agent game between a level _generator_ (teacher), an _antagonist_ (expert student) and a _protagonist_ (normal student), and utilized the performance difference between _antagonist_ and _protagonist_ as a reward signal to guide the _generator_ to adaptively create challenging training levels. The second thread has abandoned the idea of principled level generation and instead championed: (a) randomly generating levels and (b) replaying previously considered levels to deal with catastrophic forgetting by the student agent. The algorithm is referred to as Prioritized Level Replay (PLR, [19]) and the levels are replayed based on learning potential, captured using Generalized Advantage Estimation (GAE, [12]). PLR has been empirically shown to be more scalable and is also able to achieve more robust and better "out-of-distribution" performance than PAIRED. In [19], the authors combined the randomized generator and the replay mechanism and proposed PLR\({}^{\perp}\). Empirically, PLR\({}^{\perp}\) achieves the state-of-the-art in the literature (as a method that demands no human expertise), and thus we adopt PLR\({}^{\perp}\) as our primary baseline in this paper. Another recent approach, ACCEL [20], builds on PLR and performs edits (i.e. modifications based on human expertise) on high-regret environments to learn better. Both these threads of research have significantly improved on the earlier state-of-the-art (domain randomization [2, 13] and adversarial learning [14, 15; Khirodkar _et al._ 2018]).
However, to generalize better, it is not sufficient to train the agent only on high regret/GAE levels. We can have multiple high-regret levels that are very "similar" to each other, and the agent does not learn much from being trained on similar levels. Thus, levels also have to be sufficiently "different", so that the agent can gain more perspective on the different challenges it may face.
To that end, in this paper, we introduce a diversity metric for UED, defined based on the distance between the occupancy distributions associated with agent trajectories from different levels. We then provide a principled method, referred to as _Diversity Induced Prioritized Level Replay_ (DIPLR), that selects levels with the diversity metric to provide better generalization performance than the state-of-the-art.
Related Work on Diversity in RL. There is existing literature on diversity measurement in the field of transfer RL, e.g. [20, 1, 14]. However, these works conduct an exhaustive comparison of all possible states between two underlying Markov Decision Processes (MDPs), which does not take the current agent policy into account and may suffer from a large computational cost. In contrast, we use the collected trajectories induced by the policy to approximate the occupancy distributions of the different levels. Such a method avoids massive computation and more robustly indicates the difference between two levels being explored by the same agent policy. A few works have explored the diversity between policies in DRL. 1) [1] adopted a distance on pairwise policies to modify the loss function and push the agent towards policies different from its prior ones. As an improved version of [1], [17] computes the diversity on a population of policies instead of pairwise distances, so that it avoids cyclic training behaviors and policy redundancy. 2) [12] proposed a trajectory diversity measurement based on _Jensen-Shannon Divergence_ (JSD) and included it in the loss function to maximize the difference between policies; the JSD computes the divergence based on the action distributions in trajectories. In contrast to these works, our method computes both pairwise and population-wise distances for the purpose of measuring the difference between levels. Furthermore, we provide a diversity-guided method to train a robust and well-generalizing agent in the UED framework.
**Contributions:**
Overall, our contributions are three-fold. First, we highlight the benefits of diversity in UED tasks and formally present the definition of distance between levels. Second, we employ the Wasserstein distance to quantitatively measure this distance and introduce the DIPLR algorithm. Finally, we empirically demonstrate the versatility and effectiveness of DIPLR in comparison to other leading approaches on benchmark problems. Notably, we also investigate the relationship between _diversity_ and _regret_ (i.e. _learning potential_) and conduct an ablation study to reveal their individual effectiveness. Surprisingly, we find that diversity alone works better than regret in the model, and they complement each other to achieve the best performance when combined.
## 2 Background
In this section, we describe the problem of UED and also the details of the leading approaches for solving UED.
### Unsupervised Environment Design, UED
The goal of UED is to train a student agent that performs well across a large set of different environments. To achieve this goal, there is a teacher agent in UED that provides a curriculum of environment parameter values to train the student agent to generalize well to unseen levels.
UED problem is formally described using an Underspecified Partially Observable Markov Decision Process (UPOMDP). It is defined using the following tuple:
\[\langle S,A,\Theta,I,O,T,R,\gamma\rangle\]
\(S\), \(A\) and \(O\) are the sets of states, actions and observations, respectively. \(R:S\rightarrow\mathbb{R}\) is the reward function and \(\gamma\) is the discount factor. The most important element in the tuple is \(\Theta\), the set of environment parameters. A particular parameter \(\theta\in\Theta\) (which can be a vector or sequence of values) defines a level and can impact the reward model, transition dynamics and observation function, i.e. \(R:S\times\Theta\rightarrow\mathbb{R}\), \(T:S\times A\times\Theta\to S\) and \(I:S\times\Theta\to O\). A UPOMDP is underspecified because we cannot train on all values of \(\theta\) (\(\in\Theta\)), as \(\Theta\) can be infinitely large. The goal of the student agent policy \(\pi\) in a UPOMDP is to maximize its discounted expected rewards for any given \(\theta\in\Theta\):
\[\max_{\pi}V^{\theta}(\pi)=\max_{\pi}\mathbb{E}_{\pi}V^{\theta}(\tau)=\max_{ \pi}\mathbb{E}_{\pi}\Big{[}\sum_{t=0}^{H}r_{t}^{\theta}\cdot\gamma^{t}\Big{]}\]
where \(r_{t}^{\theta}\) is the reward obtained by \(\pi\) in a level with environment parameter \(\theta\) at time step \(t\). Thus, the student agent needs to be trained on a series of \(\theta\) values that maximize its generalization capability on all possible levels from \(\Theta\). To that end, we employ the teacher agent. The goal of the teacher agent policy \(\Lambda\) is to generate a distribution over the next set of environment parameter values to train the student, i.e.,
\[\Lambda:\Pi\rightarrow\Delta(\Theta)\]
to achieve good generalization performance, where \(\Pi\) is the set of possible policies of the student.
### Approaches for Solving UED
In all approaches for solving UED, the student always optimizes a policy that maximizes its value on the given \(\theta\). The different approaches vary in the method adopted by the teacher agent. We now elaborate on the \(\Lambda\) employed by each approach.
Two fundamental approaches to UED are Domain Randomization (DR) [1, 13, 14] and Minimax [14, 15]. The teacher in DR simply randomizes the environment configurations regardless of the student's policy, i.e.,
\[\Lambda^{DR}(\pi)=\mathcal{U}(\Theta)\]
where \(\mathcal{U}\) is the uniform distribution over all possible \(\theta\) values. On the other hand, the teacher in Minimax adversarially generates challenging environments to minimize the rewards of the student's policy, i.e.,
\[\Lambda^{MM}(\pi)=\arg\min_{\theta}V^{\theta}(\pi)\]
The next set of approaches, related to PAIRED [Dennis _et al._ 2020; Jiang _et al._ 2021], rely on regret, which is approximately defined as the difference between the maximum and the mean return of the student's policy, to generate new levels:
\[reg^{\theta}(\pi)\approx\max_{\tau\sim\pi}V^{\theta}(\tau)-\mathbb{E}_{\tau \sim\pi}V^{\theta}(\tau)\]
Given this regret, the teacher selects levels that maximize regret:
\[\Lambda^{MR}(\pi)=\arg\max_{\theta}reg^{\theta}(\pi)\]
Since the original paper by Dennis _et al._ (2020), there have been multiple major improvements [Jiang _et al._ 2021; Parker-Holder _et al._ 2022]:
1. New levels are generated randomly instead of using an optimizer.
2. The level to be selected at a time step is decided based on an efficient approximation of regret, referred to as _positive value loss_ (a customized form of Generalized Advantage Estimation (GAE)); a sketch of this computation follows this list: \[gae^{\theta}(\pi)=\frac{1}{H}\sum_{t=0}^{H}\max\left(\sum_{k=t}^{H}(\gamma \lambda)^{k-t}\delta_{k},0\right)\] (1) where \(\gamma\) and \(\lambda\) are the GAE and MDP discount factors respectively, \(H\) is the time horizon and \(\delta_{k}\) is the TD-error at time step \(k\).
3. Levels are replayed with a certain probability to ensure there is no catastrophic forgetting.
4. Randomly edit generated levels through human-defined edits (Parker-Holder _et al._, 2022).
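As referenced in item 2, here is a short Python sketch of the positive value loss in Eq. (1); `td_errors` holds the TD-errors \(\delta_{k}\) of one episode, and the quadratic-time loop is kept for clarity.

```
import numpy as np

def positive_value_loss(td_errors, gamma, lam):
    """Eq. (1): average of the positively-clipped GAE tails."""
    H = len(td_errors)
    discounts = (gamma * lam) ** np.arange(H)
    score = 0.0
    for t in range(H):
        # sum_{k=t}^{H} (gamma * lam)^{k-t} * delta_k, clipped at zero
        tail = np.sum(discounts[: H - t] * td_errors[t:])
        score += max(tail, 0.0)
    return score / H
```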
We will use learning potential, regret and GAE, interchangeably to represent _positive value loss_ in the rest of the paper.
### Wasserstein Distance
We will propose a diversity measure that relies on the distance between occupancy distributions, for which we utilize the Wasserstein distance. The problem of optimal transport between two distributions was initially posed by Monge (1781) and generalized by Kantorovich (1942). The Wasserstein distance is preferred by the machine learning community among several commonly used divergence measures, e.g. Kullback-Leibler divergence, because it is 1) non-zero and continuously differentiable even when the supports of the two distributions are disjoint and 2) symmetric, i.e., \(\mathcal{W}(\mathcal{P},\mathcal{Q})\) is equal to \(\mathcal{W}(\mathcal{Q},\mathcal{P})\).
Wasserstein distance is the measurement between probability distributions defined on a metric space \(M(d,\mathcal{C})\) with a cost metric \(d:\mathcal{C}\times\mathcal{C}\mapsto\mathbb{R}_{+}\):
\[\mathcal{W}_{p}(\mathcal{P},\mathcal{Q}) = \left(\underset{\gamma\in\Pi(\mathcal{P},\mathcal{Q})}{inf} \mathbb{E}_{(x,y)\sim\gamma}[d(x,y)^{p}]\right)^{1/p} \tag{2}\]
where \(\Pi(\mathcal{P},\mathcal{Q})\) is the set of all possible joint probability distributions between \(\mathcal{P}\) and \(\mathcal{Q}\), and a joint distribution \(\gamma\in\Pi(\mathcal{P},\mathcal{Q})\) represents a density transport plan from point \(x\) to \(y\) so as to make \(x\) follow the same probability distribution as \(y\).
## 3 Approach: DIPLR
A key drawback of existing approaches for solving UED is the inherent assumption that all levels with high regret (or GAE) have high learning potential. However, if there are two levels1 which are very similar to each other, and they both have high regret, training the student agent on one of those levels makes training on the other unnecessary and redundant. Given that our ultimate goal is to let the student agent policy transfer well to a variety of few-shot or zero-shot levels, we utilize diversity. Diversity has gained traction in Deep Reinforcement Learning as a means to improve the generalization performance of models (albeit in a different problem setting than the one in this paper) (Parker-Holder _et al._, 2020; Hong _et al._, 2018; Lupu _et al._, 2021). More specifically, we provide
Footnote 1: We will use levels and environments interchangeably.
1. a scalable mechanism for quantitatively estimating similarity (or dissimilarity) between two given levels given the current student policy. This will then be indirectly used to generate diverse levels.
2. a teacher agent for UED that generates diverse levels and trains student agent on those diverse environments, so as to achieve strong generalization to "out-of-distribution" levels.
### Estimating Similarity between Levels
The objective here is to estimate the similarity between levels given the current student agent policy. One potential option is to encode each level using one of a variety of encoding methods (e.g., a Variational Autoencoder) or using the parameters associated with the level, and then compute the distance between the encodings of the levels. However, such an approach has multiple major issues:
* Encoding a level does not account for the sequential moves the student agent will make through the level, which is of critical importance as the similarity is with respect to student policy;
* There can be stochasticity in the level that is not captured by the parameters of the level. For instance, Bipedal-Walker, a complex yet well-parameterized UPOMDP introduced by Wang _et al._ (2019); Parker-Holder _et al._ (2022), has multiple free parameters controlling the terrain. Because of the stochasticity in the level generation process, two levels could be very different while having a near-zero distance between their environment parameter vectors.
* Distance between environment parameters requires normalization in each parameter dimension and is domain-specific.
Since we collect several trajectories within current levels when approximating the regret value, we can naturally get the
state-action distribution induced by the current policy. Therefore, we propose to evaluate _similarity on the different levels_ based on the _distance between occupancy distributions of the current student policy_. The hypothesis is that if the levels are similar, then the trajectories traversed by the student agent within the level will have similar state-action distribution. Equation (3) provides the expression for state-action distribution given a policy, \(\pi\) and level \(l_{\theta_{1}}\) (\(\theta_{1}\) represents the parameters corresponding to the level):
\[\rho^{\pi}_{l_{\theta_{1}}}(s,a)=(1-\gamma)\sum_{t=0}^{H}\gamma^{t}\,Pr\left(s_{t}=s,a_{t}=a\,|\,s_{0}\sim p_{0}(\cdot),\,s_{t}\sim p(\cdot|s_{t-1},a_{t-1},\theta_{1}),\,a_{t}\sim\pi(\cdot|s_{t})\right) \tag{3}\]
Similarly, we can derive \(\rho^{\pi}_{l_{\theta_{2}}}\) for level \(l_{\theta_{2}}\) with parameter \(\theta_{2}\). Typically, KL divergence is employed to measure the distance between state-action distributions.
\[-D_{KL}(\rho^{\pi}_{l_{\theta_{1}}}||\rho^{\pi}_{l_{\theta_{2}}})=\mathbb{E}_{(s,a)\sim\rho^{\pi}_{l_{\theta_{1}}}}\left[\log\frac{\rho^{\pi}_{l_{\theta_{2}}}}{\rho^{\pi}_{l_{\theta_{1}}}}\right]\]
However, KL divergence is not applicable in our setting: two different levels can induce occupancy distributions with disjoint supports, over which KL divergence is undefined. More importantly, KL divergence cannot be computed without explicit estimates of the occupancy distributions. Therefore, we employ the Wasserstein distance described in Equation (2), which can calculate the distance between two distributions from empirical samples.
We can write the Wasserstein distance between two occupancy distributions as Equation (4):
\[\mathcal{W}(\rho^{\pi}_{l_{\theta_{1}}},\rho^{\pi}_{l_{\theta_{2}}})=\left( \underset{\gamma\in\Pi(\rho^{\pi}_{l_{\theta_{1}}},\rho^{\pi}_{l_{\theta_{2}}} )}{inf}\mathbb{E}_{(\phi_{1},\phi_{2})\sim\gamma}[d(\phi_{1},\phi_{2})^{p}] \right)^{1/p} \tag{4}\]
where \(\phi\in(S,A)\) is a sample from the occupancy distribution. By Equation (4), we can collect state-action samples in trajectories to compute the empirical Wasserstein distance between two levels, i.e. \(\mathcal{D}(l_{\theta_{1}},l_{\theta_{2}})\approx\mathcal{W}(\rho^{\pi}_{l_{ \theta_{1}}},\rho^{\pi}_{l_{\theta_{2}}})\), is our empirical estimation of the Wasserstein distance between two levels.
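To make this estimate concrete, the sketch below computes the empirical distance between two levels from their state-action samples using the POT (Python Optimal Transport) package; the paper adopts the solver of [11], so treat this as an assumed, interchangeable implementation.

```
import numpy as np
import ot  # POT: Python Optimal Transport

def level_distance(samples_1, samples_2):
    """Empirical Wasserstein distance between occupancy samples.

    samples_k : (n_k, d) array of state-action vectors collected on level k.
    """
    a = ot.unif(len(samples_1))    # uniform weights over the samples
    b = ot.unif(len(samples_2))
    cost = ot.dist(samples_1, samples_2, metric='euclidean')  # d(phi_1, phi_2)
    return ot.emd2(a, b, cost)     # exact optimal transport cost (p = 1)

# Example with random stand-in trajectories of 64 state-action samples each.
d = level_distance(np.random.rand(64, 8), np.random.rand(64, 8))
```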
### Diversity guided Training Agent
Now that we have a scalable way to compute the distance between levels, \(\mathcal{D}(\cdot,\cdot)\), we utilize it in the teacher to generate a diverse curriculum for the student agent. We build on the PLR training agent [11], which only considered regret/GAE and staleness to prioritize levels for selection. Our training agent most crucially employs diversity to select levels.
Our training agent maintains a buffer of high-potential levels for training. At each iteration, we either: (a) generate a new level to be added to the buffer; or (b) sample a mini-batch of levels for the student agent to train on.
**Diversity when generating a new level:** For a new level \(l_{\theta}\), the distance from \(l_{\theta}\) to the levels \(L=\langle l_{\theta_{1}},l_{\theta_{2}},\cdots\rangle\) in the buffer is given by:
\[\mathcal{D}(l_{\theta},L)=\min_{k}\mathcal{D}(l_{\theta},l_{\theta_{k}}) \tag{5}\]
To increase diversity, we add to the buffer new levels that have the highest value of this distance (amongst all the randomly generated levels). We can also combine this distance measure with regret/GAE when deciding whether to include a level in the buffer, so that environments that are both different and challenging are added.
**Diversity in sampling levels from the buffer to train:** To decide which levels to train on, we maintain a priority (probability of being selected) for each level, computed based on its distance from the other levels in the buffer. The rank of a level \(l_{\theta_{i}}\), in terms of distance from the other levels in the buffer \(L\), is given by
\[h(\mathcal{D}_{i},\langle\mathcal{D}_{1},\mathcal{D}_{2},\cdots\rangle)\]
where \(\mathcal{D}_{i}=\mathcal{D}(l_{\theta_{i}},L\setminus l_{\theta_{i}})\). We then convert the rank into a probability using Equation (6),
\[P_{i} = \frac{1}{h(\mathcal{D}_{i},\mathcal{D})^{\beta}} \tag{6}\]
where \(h(\cdot,\cdot)\) is the rank transformation that can find the rank of \(\mathcal{D}_{i}\) in a set of values \(\mathcal{D}\), and \(\beta\) is a tunable temperature parameter.
**Combining Diversity with Regret/GAE**: Instead of solely considering the diversity aspect of the level buffer, we can also take learning potential into account by using regrets in Equation (1) so that we will have a level buffer that is not only filled up with diverse training levels but also challenging levels that continuously push the student. We could assign different weights to diversity and regret by letting the replay probability \(P_{replay}=\rho\cdot P_{D}+(1-\rho)\cdot P_{R}\), where \(P_{D}\) and \(P_{R}\) are the prioritization of diversity and regret respectively, and \(\rho\) is the tuning parameter.
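A minimal sketch of the rank-based prioritization in Eq. (6) and the mixture \(P_{replay}\) follows; the use of `scipy.stats.rankdata` and ordinal tie-breaking are our own implementation choices.

```
import numpy as np
from scipy.stats import rankdata

def replay_probabilities(distances, regrets, beta=1.0, rho=0.5):
    """distances[i] = D_i of level i to the rest of the buffer (Eq. (5));
    regrets[i] = positive value loss of level i (Eq. (1))."""
    def rank_prioritization(scores):
        ranks = rankdata(-np.asarray(scores), method='ordinal')  # rank 1 = largest
        weights = 1.0 / ranks ** beta
        return weights / weights.sum()
    p_d = rank_prioritization(distances)    # diversity prioritization P_D
    p_r = rank_prioritization(regrets)      # regret prioritization P_R
    return rho * p_d + (1 - rho) * p_r      # P_replay
```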
An overview of our proposed algorithm is shown in Figure 1. When the level generator creates a new level \(l_{\theta_{i}}\), we collect trajectories \(\tau_{i}\) on \(l_{\theta_{i}}\) and compute its distance \(\mathcal{D}_{i}\) to the trajectory buffer by Equation (5). If the replay probability of \(l_{\theta_{i}}\) is greater than that of any level in the level buffer, we insert \(l_{\theta_{i}}\) into the level buffer and remove the level with the minimum replay probability.
The complete procedure of DIPLR is presented in Algorithm 1. To accelerate the calculation process required by Wasserstein distance, we adopt the popular empirical Wasserstein distance solver \(D(\cdot,\cdot)\) from [11]. For simplicity, we use \(\tau\) to represent the state-action samples from trajectory \(\tau\) in the pseudocode. To better reveal
Figure 1: An Overview of DIPLR Algorithm
the relationship between _diversity_ and _learning potential_, we conduct an ablation study where we only adopt the diversity metric to pick levels to fill up the level buffer, i.e. set \(P_{replay}=1\cdot P_{D}+0\cdot P_{R}\).
## 4 Experiments
In this section, we compare DIPLR to the set of leading benchmarks for solving UED: Domain Randomization, Minimax, PAIRED, and PLR\({}^{\perp}\). We conduct extensive experiments and empirically demonstrate the effectiveness and generality of DIPLR on three popular yet highly distinct UPOMDP domains: Minigrid, Bipedal-Walker and Car-Racing. Minigrid is a partially-observable navigation problem under discrete control with sparse rewards, while Bipedal-Walker and Car-Racing are partially-observable walking/driving problems under continuous control with dense rewards (we provide more details in the Appendix). In each domain, we train the teacher and student agents with Proximal Policy Optimization (PPO, [14]), and we present the zero-shot out-of-distribution (OOD) test performance of all algorithms. Besides, to make the comparison more reliable and straightforward, we adopt the recently introduced standardized DRL evaluation metrics [1], with which we show the aggregate inter-quartile mean (IQM) and optimality gap plots. Specifically, IQM discards the bottom and top 25% of the runs and measures the performance on the middle 50% of the combined runs; it is thus robust to outlier scores and is a better indicator of overall performance. Optimality gap captures the amount by which the algorithm fails to meet the "best performance", i.e. it assumes that a score (e.g. =1.0) is a desirable target beyond which improvements are not very important. We present the plots after normalizing the performance with a min-max range of solved-rates/returns for better visualization.
We also provide a detailed ablation analysis to demonstrate the utility of diversity alone, by providing results with and without regret (or GAE). The version with diversity alone is referred to as DIPLR\({}^{-}\) and the one which uses both diversity and regret is referred to as DIPLR.
* Figure 3: PAIRED performs poorly, as the long-horizon (60 steps), sparse-reward task is highly challenging for a guided level generator. While Domain Randomization performs decently, Minimax performs worse than PAIRED.
* Figure 4: With respect to zero-shot transfer results, DIPLR is clearly better than \(\text{PLR}^{\perp}\) in every one of the testing scenarios. This clearly indicates the utility of diversity.
### Bipedal-Walker Domain
We also conduct a comprehensive comparison on the Bipedal-Walker domain, which is introduced in [20, 21] and is popular in the community as it is a well-parameterized environment with eight variables (including ground roughness, the number of stair steps, min/max range of pit gap width, min/max range of stump height, and min/max range of stair height) conditioning the terrains, and is thus straightforward to interpret. The teacher in Bipedal-Walker can specify the values of the eight environment parameters, but there will still be stochasticity in the generation of a particular level. As for the student, i.e. the walker agent, it must determine the torques applied to its joints and is constrained by partial observability, where it only knows its horizontal speed, vertical speed, angular speed, positions of joints, joints' angular speeds, etc.
Following the experiment settings in prior UED works, we train all the algorithms for 30k PPO updates (\(\sim\)1B steps), and then evaluate their generalization capability on six distinct test instances in the Bipedal-Walker domain, i.e. BipedalWalker, Hardcore, Stair, PitGap, Stump, and Roughness, as illustrated in Figure 5. Among these, \(\{BipedalWalker\}\) is the basic level that only evaluates whether the agent can walk on flat ground, each of \(\{Stair,PitGap,Stump,Roughness\}\) challenges the walker with a particular kind of obstacle, and \(\{Hardcore\}\) contains a combination of all types of challenging obstacles.
To understand the evolution of transfer performance, we evaluate the student every 100 student policy updates and plot the curves in Figure 7. As shown, our proposed method DIPLR outperforms all other benchmarks in each test environment. The ablation model DIPLR\({}^{-}\) also surpasses our primary benchmark PLR\({}^{\perp}\) in five test environments, indicating that the diversity metric alone contributes more to the ultimate generalization performance than regret in continuous domains. PAIRED suffers considerably from variance and cannot consistently create informative levels for the student agents in such a complex domain. This is because of the nature of the multi-agent framework adopted in PAIRED, where convergence highly depends on the ability of the expert student.
After training for 30k PPO updates, we collect trained
Figure 4: Zero-shot transfer performance in eight human-designed test environments. The plots are based on the median and interquartile range of solved rates across 10 independent experiments.
Figure 5: Six examples of test environments in Bipedal-Walker domain. (a) BipedalWalker, (b) Hardcore, (c) Stair, (d) PitGap, (e) Stump, (f) Roughness. Note that these may not be the exact test environment terrains but only extreme cases for demonstration.
Figure 6: Aggregate test performance over 10 runs in Bipedal-Walker domain.
models and conduct a more rigorous evaluation based on 10 test episodes in each test environment, and present the aggregate results after min-max normalization (with range=[0, 300] on Bipedal-Walker and [0, 150] on the other five test environments) in Figure 6. Notably, our method DIPLR dominates all the benchmarks in both IQM and optimality gap. Furthermore, DIPLR\({}^{-}\) performs better than \(\text{PLR}^{\perp}\).
### Car-Racing Domain
To validate that our method is scalable and versatile, we further implement DIPLR on the Car-Racing environment, which is introduced by [10] and has been used in existing UED papers. In this domain, the teacher needs to create challenging racing tracks by using Bezier curves (via 12 control points) and the student drives on the track under continuous control with dense reward. A zoom-in illustration of the track is shown in Figure 8 (a). After training the student for around 3k PPO updates (\(\sim\) 5.5M steps), we evaluate the student agent in four test scenarios, among which three are challenging F1-tracks existing in the real world and the other one is a basic vanilla track. Note that these tracks are absolutely OOD because they cannot be defined with just 12 control points, for example, the F1-Singapore track in Figure 8 (c) has more than 20 control points.
Figure 9 presents the complete training process of each algorithm on the four test environments, with an evaluation interval of 100 PPO updates. As Car-Racing is a relatively simple domain, our methods DIPLR and DIPLR\({}^{-}\) quickly converge to the optimum, while DR and PLR\({}^{\perp}\) achieve near-optimal generalization on the Vanilla and F1-Italy tracks.
The aggregate performance after min-max normalization (with range=[200,800]) of all methods is summarized in Figure 10. Although both the PLR\({}^{\perp}\) and DR agents learn well in this comparatively simple domain, DIPLR outperforms them on both IQM and optimality gap.
## 5 Conclusion
We provided a novel method DIPLR for unsupervised environment design (UED), where diversity in training environments is employed to improve the generalization capability of a student agent. By computing the distance between levels via the Wasserstein distance over occupancy distributions, we constantly expose the agent to challenging yet diverse scenarios and thus improve its generalization in zero-shot out-of-distribution test environments. In our experiments, we validated that DIPLR is capable of training robust and generalizing agents and significantly outperforms the best-performing baselines in three distinct UED domains. Moreover, we explored the relationship between _diversity_ and _learning potential_, and we discovered that diversity alone benefits the algorithm more than learning potential, and that they complement each other to achieve the best performance when combined.
Figure 8: Examples in Car-Racing domain. (a) A zoom-in snapshot of the car-racing track; (b) a track generated by domain randomization; (c) one of the test environments, F1-Singapore track
Figure 10: Aggregate test performance over 10 independent runs in Car-Racing domain.
Figure 7: Performance on test environments during the training period over five independent experiments (mean and standard error).
Figure 9: Zero-Shot Transfer Performance in Car-Racing domain |
2304.03976 | Classification of marked elliptic root systems with non-reduced affine
quotient | The class of root systems called elliptic root systems was introduced in
1985 by K. Saito for his studies on a normal surface singularity which
contains a regular elliptic curve in its minimal resolution. He also classified
such root systems when they admit a reduced affine quotient as a root system. In
this note, we provide the classification of elliptic root systems that admit a
non-reduced affine quotient, thus completing the classification of such root
systems. | A. Fialowski, K. Iohara, Y. Saito | 2023-04-08T10:08:01Z | http://arxiv.org/abs/2304.03976v2 | # Classification of marked elliptic root systems with non-reduced affine quotient
###### Abstract
The class of root systems called elliptic root systems was introduced in 1985 by K. Saito for his studies on a normal surface singularity which contains a regular elliptic curve in its minimal resolution. He also classified such root systems when they admit a reduced affine quotient as a root system. In this note, we provide the classification of elliptic root systems that admit a non-reduced affine quotient, thus completing the classification of such root systems.
**Résumé** The class of root systems, called elliptic root systems, was introduced by K. Saito in 1985 for his studies on a normal surface singularity whose minimal resolution contains a regular elliptic curve. He also classified such root systems in the case where they admit a reduced affine quotient as a root system. In this note, we give the classification of elliptic root systems that admit a non-reduced affine quotient, thereby completing the classification of elliptic root systems.
**MSC2020**: 17B22 (primary) 17B67, 08A35 (secondary)
## 1 Introduction
In his study on simply elliptic singularities (cf. [7]), K. Saito introduced the notion of **elliptic root systems**, which were called _2-extended affine root systems_, in 1985 [8]. Such root systems are defined in a real vector space \(F=\mathbb{R}^{l+2}\) (\(l>0\)) equipped with a positive semi-definite metric \(I\) whose radical \(\operatorname{rad}(I)\) is of dimension \(2\). Hence, it can be viewed as a \(2\)-dimensional generalization of affine root systems. However, this newly defined root system has an interesting property: it has a finite order Coxeter element in its Weyl group! This fact plays a crucial role for its study and has important geometric applications in singularity theory.
Let \(R\) be an elliptic root system and let \(G\) be a one-dimensional subspace of the radical \(\operatorname{rad}(I)\) of \(I\) such that the sublattice \(G\cap Q(R)\) of \(Q(R):=\mathbb{Z}R\) is full in \(G\). K. Saito has classified the pairs \((R,G)\) of an elliptic root system \(R\) with its marking \(G\), under the assumption that the quotient \(R/G\), which is the image of \(R\) in \(F/G\) via the canonical projection \(F\twoheadrightarrow F/G\), is a reduced affine root system. Hence, his classification heavily depends on the classification of affine root systems due to V. G. Kac [4], R. V. Moody [6] and I. G. Macdonald [5], where Kac and Moody considered affine root systems as the root system of affine Lie algebras. Notice that Macdonald classified affine root systems without discussing their relations with Lie algebras, hence including non-reduced affine root systems. Saito's classification doesn't imply the classification of reduced marked elliptic root systems \((R,G)\), since \(R/G\) can be non-reduced even if \(R\) itself is reduced.
In this note, we give the classification theorem of the marked elliptic root systems \((R,G)\) whose quotient \(R/G\) is a non-reduced affine root system, thus complete the classification of marked elliptic root systems.
**Acknowledgment**.: We want to thank the referee for pointing out the paper [3] which, together with [1], helped us find a missing case in the classification. Research of Y. S. is supported by JSPS KAKENHI Grant number JP20K03568.
## 2 Marked elliptic root systems
Let \(F\) be a real vector space and \(I:F\times F\to\mathbb{R}\) be a symmetric bilinear form whose signature is \((l_{+},l_{0},l_{-})\) for some non negative integers \(l_{+},l_{0},l_{-}\) such that not all of them are zero. As usual, for any non-isotropic vector \(\alpha\in F\), we set
\[\alpha^{\vee}=\frac{2}{I(\alpha,\alpha)}\alpha\in F,\]
and define an isometry \(w_{\alpha}\in O(F,I)\) by
\[w_{\alpha}(\lambda)=\lambda-I(\lambda,\alpha^{\vee})\alpha.\]
**Definition 2.1**.: _A non-empty discrete subset \(R\) of \(F\) is called a **generalized root system** if it satisfies_
1. _The lattice_ \(Q(R)\) _(called the_ **root lattice**_), spanned by the elements of_ \(R\)_, is full in_ \(F\)_, i.e.,_ \(\mathbb{R}\otimes_{\mathbb{Z}}Q(R)\cong F\)_._
2. _For any_ \(\alpha\in R\)_, one has_ \(I(\alpha,\alpha)\neq 0\)_._
3. _For any_ \(\alpha,\beta\in R\)_, one has_ \(I(\alpha^{\vee},\beta)\in\mathbb{Z}\)_._
4. _For any_ \(\alpha\in R\)_, one has_ \(w_{\alpha}(R)=R\)_._
5. _If there exist two subsets_ \(R_{1},R_{2}\) _of_ \(R\) _which are orthogonal to each other and satisfy_ \(R_{1}\cup R_{2}=R\)_, then either_ \(R_{1}\) _or_ \(R_{2}\) _is empty._
An **elliptic root system** \(R\) is a generalized root system belonging to \(F\) with a metric \(I\) whose signature is \((l,2,0)\). A vector subspace \(G\) of \(\operatorname{rad}(I)\) is called a **marking** if the sublattice \(G\cap Q(R)\) of \(Q(R)\) is full in \(G\). The pair \((R,G)\) of an elliptic root system with its marking is called a **marked elliptic root system**, _mERS_ for short. The image of \(R\) by the canonical projection \(F\twoheadrightarrow F/G\) is known to be an affine root system; it is denoted by \(R/G\) and is called the **quotient root system**.
By definition, the sublattice \(\operatorname{rad}_{\mathbb{Z}}(I):=Q(R)\cap\operatorname{rad}(I)\) of \(Q(R)\) is of rank \(2\). Hence, there exist two non-zero vectors \(a\) and \(b\) of \(F\) which generate the lattice \(\operatorname{rad}_{\mathbb{Z}}(I)=\mathbb{Z}a\oplus\mathbb{Z}b\). Hereafter, we always fix \(G=\mathbb{R}a\). In 1985, K. Saito [8] classified mERSs \((R,G)\) whose quotient \(R/G\) is a reduced affine root system. Here, a root system is said to be **reduced** if for any \(\alpha\in R\), neither \(2\alpha\) nor \(\frac{1}{2}\alpha\) belongs to \(R\).
## 3 Main results
In the rest of this note, we fix an ambient real vector space \(F=\mathbb{R}^{l+2}\) equipped with a metric \(I\) whose signature is \((l,2,0)\). For each root system \(R\), we denote the set of short (resp. middle length, long) roots by \(R_{s}\), \(R_{m}\) and \(R_{l}\), respectively.
### New root systems
First, we introduce some reduced mERSs \((R,G)\) with non-reduced affine quotient \(R/G\). Since there are only \(6\) such cases, we call each type of the marked root system by making reference to \(BC_{l}\).
#### 3.1.1 Reduced classical type
Each mERS \((R,G)\) with \(G=\mathbb{R}a\) is defined by
\[R=(R(X_{l})_{s}+\mathbb{Z}a)\cup(R(X_{l})_{m}+\mathbb{Z}a)\cup(R(X_{l})_{l}+(1+ 2\mathbb{Z})a),\]
where \(R(X_{l})\) is a non-reduced affine root system, and for each \(R\), it is given by the following table:
\begin{tabular}{|c||c|c|c|c|} \hline Type of \(R\) & \(BC_{l}^{(1,2)}\) & \(BC_{l}^{(4,2)}\) & \(BC_{l}^{(2,2)\sigma}(1)\) & \(BC_{l}^{(2,2)\sigma}(2)\) \\ \hline \(X_{l}\) & \(BCC_{l}\) & \(C^{\vee}BC_{l}\) & \(BB_{l}^{\vee}\) & \(C^{\vee}C_{l}\) \\ \hline \end{tabular} Here we recall that these \(4\) non-reduced affine root systems are defined as follows (cf. [5]):
\[R(BCC_{l})= R(BC_{l})+\mathbb{Z}b (l\geq 1),\] \[R(BB_{l}^{\vee})= (R(BC_{l})_{s}+\mathbb{Z}b)\cup(R(BC_{l})_{m}+\mathbb{Z}b)\cup(R (BC_{l})_{l}+2\mathbb{Z}b) (l\geq 2),\] \[R(C^{\vee}C_{l})= (R(BC_{l})_{s}+\mathbb{Z}b)\cup(R(BC_{l})_{m}+2\mathbb{Z}b)\cup(R (BC_{l})_{l}+2\mathbb{Z}b) (l\geq 1),\] \[R(C^{\vee}BC_{l})= (R(BC_{l})_{s}+\mathbb{Z}b)\cup(R(BC_{l})_{m}+2\mathbb{Z}b)\cup( R(BC_{l})_{l}+4\mathbb{Z}b) (l\geq 1),\]
and the root system of type \(BC_{l}\) is defined by
\[R(BC_{l})=\{\,\pm\varepsilon_{i}\,\}_{1\leq i\leq l}\cup\{\,\pm(\varepsilon_{ i}\pm\varepsilon_{j})\,|\,1\leq i<j\leq l\,\}\cup\{\,\pm 2\varepsilon_{i}\,\}_{1 \leq i\leq l},\]
where \(\{\varepsilon_{i}\}_{1\leq i\leq l}\) satisfy \(I(\varepsilon_{i},\varepsilon_{j})=\delta_{i,j}\), the Kronecker delta.
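As a small numerical illustration (an illustrative script of our own, not part of the original text), one can enumerate \(R(BC_{l})\) and verify that it is closed under the reflections \(w_{\alpha}\) of Definition 2.1:

```python
import itertools
import numpy as np

def bc_roots(l):
    """Enumerate the finite root system R(BC_l) in the orthonormal basis eps_1..eps_l."""
    e = np.eye(l)
    roots = [s * v for i in range(l) for v in (e[i], 2 * e[i]) for s in (1, -1)]
    roots += [s1 * e[i] + s2 * e[j]
              for i, j in itertools.combinations(range(l), 2)
              for s1 in (1, -1) for s2 in (1, -1)]
    return np.array(roots)

def reflect(lam, alpha):
    """w_alpha(lambda) = lambda - I(lambda, alpha^vee) * alpha."""
    return lam - 2 * np.dot(lam, alpha) / np.dot(alpha, alpha) * alpha

l = 3
R = bc_roots(l)
assert len(R) == 4 * l + 2 * l * (l - 1)   # 2l short, 2l long, 2l(l-1) middle roots
Rset = {tuple(r) for r in np.round(R, 9)}
# Condition 4 of Definition 2.1: each reflection w_alpha permutes R
assert all(tuple(np.round(reflect(b, a), 9)) in Rset for a in R for b in R)
```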
#### 3.1.2 Reduced \(*\)-type
There are two ERSs of \(*\)-type, and they are defined by
\[R(BC_{l}^{(1,1)*})= (R(BCC_{l})_{s}+\mathbb{Z}a)\cup(R(BCC_{l})_{m}+\mathbb{Z}a)\cup( R(BC_{l})_{l}+L_{1,1}),\] \[R(BC_{l}^{(4,4)*})= (R(BC_{l})_{s}+L_{1,1})\cup(R(C^{\vee}BC_{l})_{m}+2\mathbb{Z}a) \cup(R(C^{\vee}BC_{l})_{l}+4\mathbb{Z}a),\]
where we set
\[L_{1,1}=\{\,ma+nb\,|\,(m-1)(n-1)\equiv 0[2]\,\}.\]
Notice that they had been discovered by S. Azam [2] in 2002.
Second, we consider the non-reduced root systems. Since there are many such root systems, we call each type of the marked root systems by making reference to its affine quotient \(R(X_{l})=R/G\) which is one of the type \(BCC_{l},C^{\vee}BC_{l},BB_{l}^{\vee}\) and \(C^{\vee}C_{l}\).
#### 3.1.3 Non-reduced classical type
We introduce \(4\) types of the root systems: \(X_{l}^{(1)}\), \(X_{l}^{(2)}(1)\), \(X_{l}^{(2)}(2)\) and \(X_{l}^{(4)}\). They are defined by
\[R(X_{l}^{(1)})= R(X_{l})+\mathbb{Z}a,\] \[R(X_{l}^{(2)}(i))= (R(X_{l})_{s}+\mathbb{Z}a)\cup(R(X_{l})_{m}+i\mathbb{Z}a)\cup(R(X_ {l})_{l}+2\mathbb{Z}a)\qquad i=1,2,\] \[R(X_{l}^{(4)})= (R(X_{l})_{s}+\mathbb{Z}a)\cup(R(X_{l})_{m}+2\mathbb{Z}a)\cup(R( X_{l})_{l}+4\mathbb{Z}a).\]
Here, \(R(X_{l})\) is one of the \(4\) types: \(BCC_{l},C^{\vee}BC_{l},BB_{l}^{\vee}\) and \(C^{\vee}C_{l}\).
#### 3.1.4 Non-reduced \(*\)-type
In this case, there can be several \(*\)-types: \(*_{i}\), \(*_{i^{\prime}}\) (\(i=0,1\)), \(*_{s}\) and \(*_{l}\).
Let \(L_{i,j},L_{i,j}^{s_{1},s_{2}}\) (\(i,j=0,1\)) and \((s_{1},s_{2}\in\mathbb{Z}_{>0}\)) be the subsets of the lattice \(\operatorname{rad}_{\mathbb{Z}}(I)=\mathbb{Z}a\oplus\mathbb{Z}b\) defined as follows (\(L_{1,1}\) has already been introduced):
\[L_{i,j}= \{\,ma+nb\,|\,(m-i)(n-j)\equiv 0[2]\,\},\] \[L_{i,j}^{s_{1},s_{2}}= \{\,s_{2}ma+s_{1}nb\,|\,(m-i)(n-j)\equiv 0[2]\,\}.\]
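As an illustration of these parity conditions, membership of a point \(p\,a+q\,b\) in \(L_{i,j}^{s_{1},s_{2}}\) can be tested as follows (a helper of our own, written only for illustration):

```python
def in_lattice_subset(p, q, i, j, s1=1, s2=1):
    """Decide whether p*a + q*b lies in L_{i,j}^{s1,s2}
    = { s2*m*a + s1*n*b : (m - i)*(n - j) = 0 mod 2 }."""
    if p % s2 != 0 or q % s1 != 0:
        return False
    m, n = p // s2, q // s1
    return ((m - i) * (n - j)) % 2 == 0

# L_{1,1}: a and b belong to it, while 2a + 2b does not
assert in_lattice_subset(1, 0, 1, 1) and in_lattice_subset(0, 1, 1, 1)
assert not in_lattice_subset(2, 2, 1, 1)
```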
Let \(t_{1}\) be an integer defined by the following table:
\[\begin{array}{|c||c|c|c|}\hline X_{l}&BCC_{l}&C^{\vee}BC_{l}&C^{\vee}C_{l}\\ \hline t_{1}&1&4&2\\ \hline\end{array}\]
and \(t_{2}\) an integer such that \((t_{1},t_{2})\in\{1,2,4\}^{2}\setminus\{(1,4),(4,1)\}\).
The set \(R_{m}\) is always of the form
\[R_{m}=R(X_{l})_{m}+\min\{2,t_{2}\}\mathbb{Z}a.\]
and the sets \(R_{s}\) and \(R_{l}\) are described as follows:
\(*_{i}\)**-type (\(i=0,1\))** For each such pair \((t_{1},t_{2})\), the subsets \(R_{s}\) and \(R_{l}\) are given by
1. for \(i=0\):
   1. if \(t_{1},t_{2}\in\{1,2\}\) and \((t_{1},t_{2})\neq(2,2)\), \[R_{s}=R(X_{l})_{s}+\mathbb{Z}a\qquad\text{and}\qquad R_{l}=R(BC_{l})_{l}+L_{0,0}^{t_{1},t_{2}},\]
   2. if \(t_{1}=t_{2}=2\), \[R_{s}=R(BC_{l})_{s}+L_{0,0}\qquad\text{and}\qquad R_{l}=R(BC_{l})_{l}+L_{0,0}^{2,2},\]
   3. if \(t_{1},t_{2}\in\{2,4\}\) and \((t_{1},t_{2})\neq(2,2)\), \[R_{s}=R(BC_{l})_{s}+L_{0,0}\qquad\text{and}\qquad R_{l}=R(X_{l})_{l}+t_{2}\mathbb{Z}a,\]
2. for \(i=1\):
   1. if \(t_{1},t_{2}\in\{1,2\}\) and \((t_{1},t_{2})\neq(2,2)\), \[R_{s}=R(X_{l})_{s}+\mathbb{Z}a\qquad\text{and}\qquad R_{l}=R(BC_{l})_{l}+L_{1,1}^{t_{1},t_{2}},\]
   2. if \(t_{1}=t_{2}=2\), \[R_{s}=R(BC_{l})_{s}+L_{0,0}\qquad\text{and}\qquad R_{l}=R(BC_{l})_{l}+L_{1,1}^{2,2},\]
   3. if \(t_{1},t_{2}\in\{2,4\}\) and \((t_{1},t_{2})\neq(2,2)\), \[R_{s}=R(BC_{l})_{s}+L_{1,1}\qquad\text{and}\qquad R_{l}=R(X_{l})_{l}+t_{2}\mathbb{Z}a.\]
For \(i=0,1\), the nomenclature of each mERS given by the above formula is as follows:
\[\begin{array}{|c|c|c|c|c|c|c|}\hline(t_{1},t_{2})&(1,1)&(1,2)&(2,1)&(2,2)&(2, 4)&(4,2)&(4,4)\\ \hline\text{Type of }(R,G)&BCC_{l}^{(1)*_{i}}&BCC_{l}^{(2)*_{i}}&C^{\vee}C_{l}^{(1)*_{i}}&C ^{\vee}C_{l}^{(2)*_{i}}&C^{\vee}C_{l}^{(4)*_{i}}&C^{\vee}BC_{l}^{(2)*_{i}}&C ^{\vee}BC_{l}^{(4)*_{i}}\\ \hline\end{array}\]
Remark that the ERSs \(R(BCC_{l}^{(1)*_{1}})\) and \(R(C^{\vee}BC_{l}^{(4)*_{1}})\) are reduced and they are exactly the ERSs \(R(BC_{l}^{(1,1)*})\) and \(R(BC_{l}^{(4,4)*})\), respectively.
\(*_{i^{\prime}}\)**-type (\(i=0,1\))** One has \(t=t_{1}=t_{2}\in\{1,2,4\}\), and the subsets \(R_{s}\) and \(R_{l}\) are given by
1. for \(t=1\), \[R(BCC_{l}^{(1)*_{0^{\prime}}})_{s}=R(BCC_{l})_{s}+\mathbb{Z}a\qquad\text{and} \qquad R(BCC_{l}^{(1)*_{0^{\prime}}})_{l}=R(BC_{l})_{l}+L_{0,1},\]
2. for \(t=2\), \[R(C^{\vee}C_{l}^{(2)*_{1^{\prime}}})_{s}=R(BC_{l})_{s}+L_{0,0}\qquad\text{and} \qquad R(C^{\vee}C_{l}^{(2)*_{1^{\prime}}})_{l}=R(BC_{l})_{l}+L_{1,0}^{2,2}\]
3. for \(t=4\), \[R(C^{\vee}BC_{l}^{(4)*_{0^{\prime}}})_{s}=R(BC_{l})_{s}+L_{0,1}\qquad\text{and} \qquad R(C^{\vee}BC_{l}^{(4)*_{0^{\prime}}})_{l}=R(C^{\vee}BC_{l})_{l}+4 \mathbb{Z}a.\]
\(*_{\natural}\)**-type (\(\natural\in\{s,l\}\))** One has \(t_{1}=t_{2}=2\), and the subsets \(R_{s}\) and \(R_{l}\) are given by
1. for \(\natural=s\), \[R(C^{\vee}C_{l}^{(2)*_{s}})_{s}=R(BC_{l})_{s}+L_{0,0}\qquad\text{and}\qquad R (C^{\vee}C_{l}^{(2)*_{s}})_{l}=R(C^{\vee}C_{l})_{l}+2\mathbb{Z}a,\]
2. for \(\natural=l\), \[R(C^{\vee}C_{l}^{(2)*_{l}})_{s}=R(C^{\vee}C_{l})_{s}+\mathbb{Z}a\qquad\text{ and}\qquad R(C^{\vee}C_{l}^{(2)*_{l}})_{l}=R(BC_{l})_{l}+L_{0,0}^{2,2}.\]
Remark that the numbers \(t_{i}\) (\(i=1,2\)) are the so-called first (resp. second) **tier numbers** (cf. [8]).
It is clear that by switching \(a\) and \(b\), from the ERSs of type \(BC_{l}^{(1,2)}\), \(BC_{l}^{(4,2)}\), \(BC_{l}^{(2,2)\sigma}(1)\) and \(BC_{l}^{(2,2)\sigma}(2)\), we obtain the ERSs of type \(BC_{l}^{(2,1)}\), \(BC_{l}^{(2,4)}\), \(BC_{l}^{(2,2)}(1)\) and \(BC_{l}^{(2,2)}(2)\) respectively, discovered by K. Saito in [8]. The ERSs of type \(BC_{l}^{(1,1)*}\) and \(BC_{l}^{(4,4)*}\) have been considered by S. Azam in [2]. Hence, Saito's list together with these 2 ERSs due to Azam provides us with a complete list of reduced ERSs.
As for the non-reduced mERSs \((R,G)\), the classification is given as follows:
**Theorem 3.2**.: _Let \((R,G)\) be a non-reduced marked elliptic root system belonging to a real vector space with a symmetric bilinear form of signature \((l,2,0)\). Then, it is isomorphic to one of the following:_
1. \(R/G\) _of type_ \(BCC_{l}\)__\((l\geq 1)\)_:_ \[BCC_{l}^{(1)},\ BCC_{l}^{(1)*_{0}},\ BCC_{l}^{(1)*_{0^{\prime}}},\] \[BCC_{l}^{(2)}(1)\,(l>1),\ BCC_{l}^{(2)}(2),\ BCC_{l}^{(2)*_{p}} \,(p\in\{0,1\}),\] \[BCC_{l}^{(4)}.\]
2. \(R/G\) _of type_ \(C^{\vee}BC_{l}\)__\((l\geq 1)\)_:_ \[C^{\vee}BC_{l}^{(1)},\] \[C^{\vee}BC_{l}^{(2)}(1)\,(l>1),\ C^{\vee}BC_{l}^{(2)}(2),\ C^{ \vee}BC_{l}^{(2)*_{p}}\,(p\in\{0,1\}),\] \[C^{\vee}BC_{l}^{(4)},\ C^{\vee}BC_{l}^{(4)*_{0}},\ C^{\vee}BC_{l }^{(4)*_{0^{\prime}}}.\]
3. \(R/G\) _of type_ \(BB_{l}^{\vee}\)__\((l\geq 2)\)_:_ \[BB_{l}^{\vee}{}^{(1)},\ BB_{l}^{\vee}{}^{(2)}(1),\ BB_{l}^{\vee}{}^{(2)}(2),\ BB_{l}^{\vee}{}^{(4)},\] \[BB_{2}^{\vee}{}^{(2)*}.\]
4. \(R/G\) _of type_ \(C^{\vee}C_{l}\)__\((l\geq 1)\)_:_ \[C^{\vee}C_{l}^{(1)},\ C^{\vee}C_{l}^{(1)*_{p}}\,(p\in\{0,1\}),\] \[C^{\vee}C_{l}^{(2)}(1)\,(l>1),\ C^{\vee}C_{l}^{(2)}(2),\ C^{\vee}C_{l}^{(2)*_{s}},\ C^{\vee}C_{l}^{(2)*_{l}},\] \[C^{\vee}C_{l}^{(2)*_{0}},\ C^{\vee}C_{l}^{(2)*_{1}},\ C^{\vee}C_{l}^{(2)*_{0^{\prime}}},\ C^{\vee}C_{l}^{(2)*_{1^{\prime}}},\] \[C^{\vee}C_{l}^{(4)},\ C^{\vee}C_{l}^{(4)*_{p}}\,(p\in\{0,1\}).\]
Here, we briefly explain how to classify mERSs with non-reduced quotient. Let \((R,G)\) be such a mERS of rank \(l\geq 2\). (The rank 1 case can be handled by direct computations.) Set \(R_{+}=R_{m}\cup R_{l}\) and \(R_{-}=R_{m}\cup R_{s}\). It can be shown that the mERSs \((R_{\pm},G)\) are of \(C_{l}\)-type (resp. \(B_{l}\)-type). By the classification theorem (cf. [8]) of mERSs with reduced quotient, we have a list of possible mERSs \((R_{\pm},G)\). In particular, the only possible mERS \((R_{m},G)\) is of type \(D_{l}^{(1,1)}\), and \(R(D_{2})+L_{0,0}\) for \(l=2\) in addition. Here, \(D_{2}\) (resp. \(D_{3}\)) is viewed as \(A_{1}\times A_{1}\) (resp. \(A_{3}\)). Realizing \(R_{\pm}\) in \(F\) so that \(R_{+}\cap R_{-}=R_{m}=R(D_{l}^{(1,1)})\), it suffices to _glue_ them to obtain \(R\). Namely, rotating one of \(R_{\pm}\), say \(R_{+}\), by an automorphism \(\varphi\in\mathrm{Aut}(R_{m})\subset GL(F)\), it is enough to verify that \(R_{-}\cup\varphi(R_{+})\)
becomes a generalized root system. It turns out that isomorphism classes of such root systems can be parametrized by the double coset
\[\mathrm{Aut}(R_{+})\backslash\mathrm{Aut}(R_{m})/\mathrm{Aut}(R_{-}).\]
In [3], the authors described \(BC_{l}\)-type root systems as
\[R(S,L,E)=(R(BC_{l})_{s}+S)\cup(R(BC_{l})_{m}+L)\cup(R(BC_{l})_{l}+E)\]
for some discrete subsets \(S,L,E\subset\mathrm{rad}_{\mathbb{Z}}(I)\) and gave certain conditions on the triple \((S,L,E)\) for \(l>1\) and the pair \((S,E)\) for \(l=1\), following the idea developed in [1]. They have thus classified the triples and pairs via some combinatorial studies on them, whereas our argument is based on the structure of automorphism groups of ERSs of \(BCD\)-type.
### Classification of non-reduced ERSs
Theorem 3.2 provides the isomorphism classes of mERS with non-reduced affine quotient. But, as root systems, some mERSs are isomorphic. Indeed,
**Theorem 3.3**.: _Among the above \(35\) non-reduced marked elliptic root systems, we have the following isomorphisms as root systems:_
1. _Via the isomorphism_ \(a\leftrightarrow b\)_:_ \[R(BCC_{l}^{(2)}(1))\cong R(BB_{l}^{\vee(1)}), \qquad R(C^{\vee}BC_{l}^{(2)}(1))\cong R(BB_{l}^{\vee(4)}),\] \[R(BCC_{l}^{(2)}(2))\cong R(C^{\vee}C_{l}^{(1)}), \qquad R(C^{\vee}BC_{l}^{(2)}(2))\cong R(C^{\vee}C_{l}^{(4)}),\] \[R(BCC_{l}^{(2)*_{p}})\cong R(C^{\vee}C_{l}^{(1)*_{p}}) \qquad R(C^{\vee}BC_{l}^{(2)*_{p}})\cong R(C^{\vee}C_{l}^{(4)*_{p}}) \qquad(p\in\{0,1\}),\] \[R(BB_{l}^{\vee(2)}(2))\cong R(C^{\vee}C_{l}^{(2)}(1)).\]
2. _Via the isomorphism_ \(a\mapsto a+b,\;b\mapsto b\)_:_ \[R(BCC_{l}^{(1)*_{0^{\prime}}})\cong R(BCC_{l}^{(1)*_{0}}),\qquad R(C^{\vee} BC_{l}^{(4)*_{0^{\prime}}})\cong R(C^{\vee}BC_{l}^{(4)*_{0}}),\] \[R(C^{\vee}C_{l}^{(2)*_{1^{\prime}}})\cong R(C^{\vee}C_{l}^{(2)*_{1}}).\]
3. _Via an exotic isomorphism and_ \(a\leftrightarrow b\)_:_ \[R(BCC_{l}^{(4)})\cong R(C^{\vee}BC_{l}^{(1)})\cong R(C^{\vee}C_{l}^{(2)*_{0}}).\] _Indeed, the exotic one is given by_ \[R(C^{\vee}C_{l}^{(2)*_{0}})= \,(R(BC_{l})_{s}+\mathbb{Z}(a+2b)+\mathbb{Z}b)\cup(R(BC_{l})_{m} +\mathbb{Z}(a+2b)+2\mathbb{Z}b)\] \[\cup \,(R(BC_{l})_{l}+\mathbb{Z}(a+2b)+4\mathbb{Z}b)\cong R(C^{\vee}BC _{l}^{(1)}).\]
Thus, we only have \(21\) isomorphism classes of non-reduced ERSs.
2310.01026 | An OrthoBoXY-Method for Various Alternative Box Geometries | We have shown in a recent contribution [J. Phys. Chem.B 127, 7983-7987
(2023)] that for molecular dynamics (MD) simulations of isotropic fluids based
on orthorhombic periodic boundary conditions with "magic" box length ratios of
$L_z/L_x\!=\!L_z/L_y\!=\!2.7933596497$, the computed self-diffusion
coefficients $D_x$ and $D_y$ in $x$- and $y$-direction become system size
independent. They thus represent the true self-diffusion coefficient
$D_0\!=\!(D_x+D_y)/2$, while the shear viscosity can be determined from
diffusion coefficients in $x$-, $y$-, and $z$-direction, using the expression
$\eta\!=\!k_\mathrm{B}T\cdot 8.1711245653/[3\pi L_z(D_{x}+D_{y}-2D_z)]$. Here
we present a more generalized version of this "OrthoBoXY"-approach, which can
be applied to any orthorhombic MD box. We would like to test, whether it is
possible to improve the efficiency of the approach by using a shape more akin
to the cubic form, albeit with different box-length ratios $L_x/L_z\!\neq\!
L_y/L_z$ and $L_x\!<\!L_y\!<\!L_z$. We use simulations of systems of 1536
TIP4P/2005 water molecules as a benchmark and explore different box-geometries
to determine the influence of the box shape on the computed statistical
uncertainties for $D_0$ and $\eta$. Moreover, another "magical" set of
box-length ratios is discovered with $L_y/L_z\!=\!0.57804765578$ and
$L_x/L_z\!=\!0.33413909235$, where the self-diffusion coefficient in
$x$-direction becomes system size independent, such that $D_0\!=\!D_x$. | Johanna Busch, Dietmar Paschek | 2023-10-02T09:18:09Z | http://arxiv.org/abs/2310.01026v2 | # An OrthoBoxY-Method for Various Alternative Box Geometries
###### Abstract
We have shown in a recent contribution [_J. Phys. Chem. B_ **127**, 7983-7987 (2023)] that for molecular dynamics (MD) simulations of isotropic fluids based on orthorhombic periodic boundary conditions with "magic" box length ratios of \(L_{z}/L_{x}=L_{z}/L_{y}=2.7933596497\), the computed self-diffusion coefficients \(D_{x}\) and \(D_{y}\) in \(x\)- and \(y\)-direction become system size independent. They thus represent the true self-diffusion coefficient \(D_{0}=(D_{x}+D_{y})/2\), while the shear viscosity can be determined from diffusion coefficients in \(x\)-, \(y\)-, and \(z\)-direction, using the expression \(\eta=k_{\rm B}T\cdot 8.1711245653/[3\pi L_{z}(D_{x}+D_{y}-2D_{z})]\). Here we present a more generalized version of this "OrthoBoXY"-approach, which can be applied to any orthorhombic MD box. We would like to test whether it is possible to improve the efficiency of the approach by using a shape more akin to the cubic form, albeit with different box-length ratios \(L_{x}/L_{z}\neq L_{y}/L_{z}\) and \(L_{x}<L_{y}<L_{z}\). We use simulations of systems of 1536 TIP4P/2005 water molecules as a benchmark and explore different box-geometries to determine the influence of the box shape on the computed statistical uncertainties for \(D_{0}\) and \(\eta\). Moreover, another "magic" set of box-length ratios is discovered with \(L_{y}/L_{z}=0.57804765578\) and \(L_{x}/L_{z}=0.33413909235\), where the self-diffusion coefficient in \(x\)-direction becomes system size independent, such that \(D_{0}\!=\!D_{x}\).
## I Introduction
The viscosity of a fluid and the diffusion coefficients of its constituents provide us with an important reference for the understanding of a large variety of transport-related phenomena.[1; 2] To investigate molecular transport properties, molecular dynamics (MD) simulations have proven to be an important novel source for delivering insights and for providing reference data.[3] For example, MD simulations can produce accurate reference data for systems which are otherwise difficult to measure, such as the multicomponent diffusion inside of nanoporous materials,[4] or they enable us to study conditions which are experimentally difficult to replicate, such as the pressures and temperatures found in the earth's interior.[5]
However, self-diffusion coefficients within liquids obtained from MD simulations with periodic boundary conditions (PBCs) can exhibit a quite substantial system size dependence.[6; 7; 8; 9] A review article by Celebi et al. provides a good overview of this topic.[6] This effect is caused by the altered hydrodynamic interactions between particles in a periodic system, and leads to an \(L^{-1}\)-dependence of the self-diffusion coefficients, where \(L\) represents the length of the cubic simulation box.[8; 9; 10; 11; 12] This behavior has been quantitatively analyzed for simulations of polymers in solution [8], TIP3P model water molecules, and Lennard-Jones particles [9], as well as carbon dioxide, n-alkanes, and poly(ethylene glycol) dimethyl ethers for a wide variety of conditions.[12] In their seminal contribution, Yeh and Hummer have shown that determining _true_ system size independent self-diffusion coefficients thus requires either the knowledge of the shear viscosity, or a series of MD simulations with varying box-lengths.[9]
Recently, we have reported that direction-dependent self-diffusion data, obtained from a single MD simulation with PBCs based on a specific orthorhombic unit cell, can be used to determine both the system size independent true self-diffusion coefficient \(D_{0}\) and the shear viscosity \(\eta\).[13] By performing MD simulations of orthorhombic systems with "magic" box length ratios of \(L_{z}/L_{x}=L_{z}/L_{y}=2.7933596497\), due to a cancelling effect of the hydrodynamic interactions, the computed self-diffusion coefficients \(D_{x}\) and \(D_{y}\) in \(x\)- and \(y\)-direction represent the _true_ system size independent self-diffusion coefficient \(D_{0}=(D_{x}+D_{y})/2\). At the same time, the shear viscosity can be determined from diffusion coefficients in \(x\)-, \(y\)- and \(z\)-direction using \(\eta=k_{\rm B}T\cdot 8.1711245653/[3\pi L_{z}(D_{x}+D_{y}-2D_{z})]\), where \(k_{\rm B}\) denotes Boltzmann's constant and \(T\) represents the temperature.[13] This approach was coined "OrthoBoXY", and is based on a recently derived extension of the Yeh-Hummer approach, allowing for a quantitative description of the anisotropy of the diffusion tensor of an isotropic fluid caused by hydrodynamic interactions within an orthorhombic periodic system.[10]
The use of this "magic" box shape has been deemed particularly appealing, since no further conversions are required to obtain the true self-diffusion coefficient \(D_{0}\) from the MD simulation data. However, a slight drawback might be that the simulation box with box-length ratios \(L_{z}/L_{x}\!=\!L_{z}/L_{y}\!=\!2.7933596497\), is strongly elongated in the \(z\)-direction, which means that rather large system sizes are required if the box-lengths in \(x\)- and \(y\)-direction are meant to exceed a certain minimum threshold. In this contribution we therefore follow a strategy to generate alternative box shapes, offering the possibility to study boxes which resemble more closely the popular cubic box. In particular, we explore box geometries with \(L_{x}\neq L_{y}\neq L_{z}\), which, however, obey the condition \(L_{y}/L_{z}=L_{x}/L_{y}\). Thus the box shape can be controlled by systematically varying the ratio \(L_{y}/L_{z}\), while approaching the form of a cubic box with \(L_{y}/L_{z}=1\) as a limiting case. To study the efficiency of this approach, we determine the in
fluence of the box shape on the statistical uncertainty of the computed values for \(D_{0}\) and \(\eta\). In the following, we also derive expressions to compute \(D_{0}\) and \(\eta\) purely from direction-dependent self-diffusion data under those conditions.
## II Generalized OrthoBoXY Approach for MD Simulations Using Arbitrary Orthorhombic Simulation Boxes
For orthorhombic box geometries, the presence of unequal box-lengths leads to different system size dependencies for each of the components \(D_{ii}\) with \(i\in\{x,y,z\}\) of the diffusion tensor \(\mathbf{D}\) such that the self-diffusion tensor becomes anisotropic under PBCs even for an isotropic fluid.[14] For a quantitative description of this effect, Kikugawa et al. and others [10; 11] have followed the approach of Yeh and Hummer [9], who had realized that a particle in a periodic system experiences hydrodynamic interactions not only with the solvent in its immediate surrounding, but also with its periodic images, communicated via the solvent. They have, based on the linearized Navier-Stokes equation for an incompressible fluid, and the Kirkwood-Riseman theory of polymer diffusion [15], obtained an expression for the diffusion tensor modified for a periodic system with
\[\mathbf{D}_{\mathrm{PBC}}=D_{0}\mathbf{1}+k_{\mathrm{B}}T\lim_{r\to 0} \left[\mathbf{T}_{\mathrm{PBC}}(\mathbf{r})-\mathbf{T}_{0}(\mathbf{r})\right]. \tag{1}\]
Here \(\mathbf{1}\) is the unity matrix, and \(\mathbf{T}_{\mathrm{PBC}}(\mathbf{r})\) and \(\mathbf{T}_{0}(\mathbf{r})\) are the Oseen mobility tensors for a periodic system and an infinite nonperiodic system, respectively.[9]\(D_{0}\) denotes the _true_ scalar diffusion coefficient within an infinite system. In order to bring equation 1 in a form, which could be treated numerically, the technique of Ewald summation adapted to hydrodynamic interactions was employed.[16; 17] The result is an expression, describing the system size dependence of self-diffusion coefficients from MD simulations with PBCs, based on the effect of the hydrodynamic interactions between particles in a periodic system.
Let us now assume that we have an orthorhombic simulation box with \(L_{x}\neq L_{y}\neq L_{z}\) in combination with PBCs, and are performing an MD simulation of an isotropic fluid. For this situation, we can compute the direction-dependent self-diffusion coefficients \(D_{\mathrm{PBC},ii}\) based on Equation 1 from the knowledge of the true self-diffusion coefficient \(D_{0}\) by using
\[D_{\mathrm{PBC},ii} = D_{0}-\frac{k_{\mathrm{B}}T\zeta_{ii}}{6\pi\eta L_{i}}\, \tag{2}\]
where \(\eta\) is the viscosity, and \(L_{i}\) are the individual box-lengths of the orthorhombic unit cell.[13] The \(\zeta_{ii}\) represent the direction-dependent Madelung constant analogues of the orthorhombic lattice, which are calculated by Ewald summation using [13; 10]
\[\zeta_{ii} = -\frac{3}{2}\,L_{i}\cdot\Bigg{\{}\frac{1}{2}\Bigg{[}\!\sum_{n \neq 0}\frac{\mathrm{erfc}(\alpha\,n)}{n}\] \[+\frac{n_{i}^{2}}{n^{2}}\left(\frac{\mathrm{erfc}(\alpha\,n)}{n} +\frac{2\alpha}{\sqrt{\pi}}e^{-\alpha^{2}n^{2}}\right)\!\Bigg{]}\] \[+\frac{\pi}{V}\Bigg{[}\!\sum_{\mathbf{k}\neq 0}\frac{4\,e^{-k^{2} /(4\alpha^{2})}}{k^{2}}\] \[-\frac{k_{i}^{2}}{\alpha^{2}k^{2}}e^{-k^{2}/(4\alpha^{2})}\left( 1+\frac{4\alpha^{2}}{k^{2}}\right)\Bigg{]}\] \[-\frac{\pi}{\alpha^{2}V}-\frac{\alpha}{\sqrt{\pi}}\Bigg{\}}\.\]
with \(\mathbf{n}=(n_{x},n_{y},n_{z})\), and \(\mathbf{k}=(k_{x},k_{y},k_{z})\) being real and reciprocal lattice vectors with \(n_{i}\!=\!L_{i}m_{i}\) and \(k_{i}\!=\!2\pi\cdot m_{i}/L_{i}\), based on integer numbers for \(m_{i}\). We use \(n=|\mathbf{n}|\) and \(k^{2}=|\mathbf{k}|^{2}\), while \(\alpha\) represents the Ewald convergence parameter.
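As an illustration, Equation 3 can be transcribed directly into a short script (a sketch of our own, not the authors' published code; with the convergence parameter \(\alpha=4/L_{z}\) quoted below and a sufficiently large cutoff `m_max`, the output should approach the values listed in Table 1):

```python
import math

def zeta_ii(i, L, m_max=12):
    """Direct transcription of Eq. (3); i in {0, 1, 2} for x, y, z; L = (Lx, Ly, Lz)."""
    alpha = 4.0 / L[2]                      # Ewald convergence parameter, as in the text
    V = L[0] * L[1] * L[2]
    real_sum, recip_sum = 0.0, 0.0
    rng = range(-m_max, m_max + 1)
    for mx in rng:
        for my in rng:
            for mz in rng:
                if mx == my == mz == 0:
                    continue
                m = (mx, my, mz)
                nv = [L[d] * m[d] for d in range(3)]                # real-space vector n
                n = math.sqrt(sum(c * c for c in nv))
                real_sum += (math.erfc(alpha * n) / n
                             + nv[i] ** 2 / n ** 2
                             * (math.erfc(alpha * n) / n
                                + 2 * alpha / math.sqrt(math.pi)
                                * math.exp(-(alpha * n) ** 2)))
                kv = [2 * math.pi * m[d] / L[d] for d in range(3)]  # reciprocal vector k
                k2 = sum(c * c for c in kv)
                g = math.exp(-k2 / (4 * alpha ** 2))
                recip_sum += (4 * g / k2
                              - kv[i] ** 2 / (alpha ** 2 * k2) * g
                              * (1 + 4 * alpha ** 2 / k2))
    return -1.5 * L[i] * (0.5 * real_sum + math.pi / V * recip_sum
                          - math.pi / (alpha ** 2 * V) - alpha / math.sqrt(math.pi))
```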
From Equation 2 it follows that by using the difference between two system-size dependent self-diffusion coefficients for two different directions \(i\) and \(j\), we can obtain for any orthorhombic simulation box an expression for the viscosity \(\eta\) as follows:
\[\eta_{ij}=\frac{k_{\mathrm{B}}T\left(\zeta_{jj}/L_{j}-\zeta_{ii}/L_{i}\right)} {6\pi\left(D_{\mathrm{PBC},ii}-D_{\mathrm{PBC},jj}\right)}. \tag{4}\]
Here, the indices \(ij\) indicate that the estimate of the viscosity was obtained by employing directions \(i\) and \(j\).
\begin{table}
\begin{tabular}{c c c c c} \(L_{y}/L_{z}\) & \(L_{x}/L_{z}\) & \(\zeta_{xx}\) & \(\zeta_{yy}\) & \(\zeta_{zz}\) \\ \hline
0.95 & 0.9025 & 2.5828924663 & 2.828555577 & 3.096529075 \\
0.90 & 0.81 & 2.3170121640 & 2.800065379 & 3.378128871 \\
0.85 & 0.7225 & 2.0355569516 & 2.747235325 & 3.688644375 \\
0.80 & 0.64 & 1.7339175977 & 2.663352789 & 4.036025562 \\
0.75 & 0.5625 & 1.4069966828 & 2.538694622 & 4.429678724 \\
0.70 & 0.49 & 1.0490574329 & 2.359206961 & 4.880643368 \\
0.65 & 0.4225 & 0.6533320232 & 2.104440785 & 5.402004186 \\
0.60 & 0.36 & 0.2113689766 & 1.74178359 & 6.009538053 \\
0.57804765578 & 0.33413909235 & 0 & 1.541707906 & 6.308282188 \\ \end{tabular}
\end{table}
Table 1: Madelung constant analogues according to Equation 3, computed for orthorhombic MD simulation cells with \(L_{x}\neq L_{y}\neq L_{z}\), fulfilling the condition \(L_{y}/L_{z}\!=\!L_{x}/L_{y}\) for varying indicated sets of box-lengths ratios.
Of course, sufficient sampling will lead to \(\eta\!=\!\eta_{xy}\!=\!\eta_{xz}\!=\!\eta_{yz}\). Consequently, the viscosity can then be computed by averaging over contributions from all three different directions according to
\[\eta=\frac{1}{3}\left(\eta_{xy}+\eta_{xz}+\eta_{yz}\right) \tag{5}\]
and the average true system size independent self-diffusion coefficient \(D_{0}\) can be obtained from
\[D_{0} = \frac{1}{3}\Bigg{[}D_{\text{PBC},xx}+D_{\text{PBC},yy}+D_{\text{ PBC},zz}\] \[+\frac{k_{\text{B}}T}{6\pi\eta}\left(\frac{\zeta_{xx}}{L_{x}}+ \frac{\zeta_{yy}}{L_{y}}+\frac{\zeta_{zz}}{L_{z}}\right)\Bigg{]}\;.\]
Note that \(D_{0}\) is also directly available by combining Equations 2 and 4, without the need to involve the viscosity \(\eta\) or the temperature \(T\), yielding
\[D_{0}=D_{\text{PBC},ii}+\frac{\frac{\zeta_{ii}}{L_{i}}}{\frac{\zeta_{jj}}{L_{ j}}-\frac{\zeta_{kk}}{L_{k}}}\left(D_{\text{PBC},kk}-D_{\text{PBC},jj}\right) \tag{7}\]
with \(i,j,k\in\{x,y,z\}\). The best estimate for \(D_{0}\) following Equation 7 is determined by computing a weighted average over all three possible permutations of \(x\), \(y\), and \(z\). Keep in mind that the quantities \(\eta_{ij}\) and \(D_{\text{PBC},ii}\) in Equations 5 and 6 might have different uncertainties, such that their averages should preferably also be computed as _weighted_ averages. To be able to make use of Equations 2, 4, 6, and 7, we have computed the Madelung constant analogues in \(x\)-, \(y\)- and \(z\)-direction for various box-length ratios shown in Table 1. The computations of Equation 3 discussed above were performed using double-precision floating point arithmetic, and an Ewald convergence parameter of \(\alpha\!=\!4/L_{z}\) with \(m_{i}\) ranging between \(-m_{\text{max}}\leq m_{i}\leq m_{\text{max}}\) using \(m_{\text{max}}\!=\!100\) for both the real and reciprocal lattice summation, ensuring that all calculated Madelung constant analogues \(\zeta_{ii}\) shown in Table 1 are converged.
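As a numerical illustration of Equations 4 and 7, a short script of our own (using the \(L_{y}/L_{z}=0.60\) row of Tables 1 and 2, converted to SI units) reads:

```python
import math

KB = 1.380649e-23  # Boltzmann constant in J/K

def eta_ij(T, D, L, zeta, i, j):
    """Viscosity estimate from directions i and j, Eq. (4); D in m^2/s, L in m."""
    return KB * T * (zeta[j] / L[j] - zeta[i] / L[i]) / (6 * math.pi * (D[i] - D[j]))

def d0_from_permutation(D, L, zeta, i, j, k):
    """True self-diffusion coefficient from one permutation (i, j, k), Eq. (7)."""
    w = (zeta[i] / L[i]) / (zeta[j] / L[j] - zeta[k] / L[k])
    return D[i] + w * (D[k] - D[j])

# L_y/L_z = 0.60 row of Tables 1 and 2 (nm -> m, 10^-9 m^2/s -> m^2/s)
L = [2.15110e-9, 3.58517e-9, 5.97529e-9]
D = [2.2740e-9, 2.1763e-9, 2.0508e-9]
zeta = [0.2113689766, 1.74178359, 6.009538053]
print(eta_ij(298.0, D, L, zeta, 0, 2))           # ~8.9e-4 Pa s, i.e. ~0.89 mPa s
print(d0_from_permutation(D, L, zeta, 0, 1, 2))  # ~2.30e-9 m^2/s
```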
Here, we consider systems with an orthorhombic unit cell with \(L_{x}\neq L_{y}\neq L_{z}\). In particular, we investigate unit cells where the box-length ratios are connected with respect to one another via \(L_{y}/L_{z}\!=\!L_{x}/L_{y}\), leading to the relation \(L_{x}/L_{z}\!=\!(L_{y}/L_{z})^{2}\). This allows us to deviate from a cubic box geometry in a systematic fashion by varying the ratio \(L_{y}/L_{z}\) as shown in Table 1. Here \(L_{y}/L_{z}\!=\!1\) represents the limiting value for cubic boxes.
Note that under this condition, there exists at least one other set of "magic" box-length ratios with \(L_{y}/L_{z}\!=\!0.57804765578\) and \(L_{x}/L_{z}\!=\!0.33413909235\), leading to a Madelung constant analogue in \(x\)-direction of \(\zeta_{xx}\!=\!0\). This indicates that for such an MD box, the self-diffusion coefficient in \(x\)-direction represents the true system size independent self-diffusion coefficient
\[D_{0}=D_{\text{PBC},xx}\;, \tag{8}\]
and the shear viscosity can be determined using the self-diffusion data in \(y\)- and \(z\)-direction:
\[\eta=\frac{k_{\text{B}}T}{12\pi}\cdot\left[\frac{\zeta_{yy}/L_{y}}{D_{0}-D_{ \text{PBC},yy}}+\frac{\zeta_{zz}/L_{z}}{D_{0}-D_{\text{PBC},zz}}\right] \tag{9}\]
with \(\zeta_{yy}\!=\!1.541707906\) and \(\zeta_{zz}\!=\!6.308282188\) (see also Table 1). Since the computation of the \(\zeta_{ii}\) has been performed numerically, we have determined \(\zeta_{xx}\!<\!10^{-10}\) using the "magical" box geometry indicated above. Note, however, that also for \(\eta\) in Equation 9, a weighted average might in most cases be a more appropriate choice.
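For this "magical" geometry the analysis collapses to Equations 8 and 9; a short numerical check with the last row of Table 2 (a sketch of our own, using a plain rather than a weighted average in Equation 9):

```python
import math

KB, T = 1.380649e-23, 298.0
Dx, Dy, Dz = 2.2964e-9, 2.1841e-9, 2.0484e-9   # m^2/s, Table 2, last row
Ly, Lz = 3.58517e-9, 6.20221e-9                # box lengths in m
zyy, zzz = 1.541707906, 6.308282188            # Madelung constant analogues

D0 = Dx                                                        # Eq. (8)
eta = KB * T / (12 * math.pi) * (zyy / Ly / (D0 - Dy)
                                 + zzz / Lz / (D0 - Dz))       # Eq. (9)
print(D0, eta)   # ~2.30e-9 m^2/s and ~8.7e-4 Pa s (~0.87 mPa s)
```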
## III Molecular dynamics simulations
To test the generalized "OrthoBoXY" approach outlined in the previous section, MD simulations of TIP4P/2005 model water [18] were carried out. TIP4P/2005 has been demonstrated to accurately describe the properties of water compared to other simple rigid nonpolarizable water models.[19] Simulations were performed at a temperature of \(T\!=\!298\) K under NVT conditions at a density of \(\rho\!=\!0.9972\) g cm\({}^{-3}\). Systems containing 1536 water molecules are simulated using various orthorhombic box geometries fulfilling the condition \(L_{y}/L_{z}\!=\!L_{x}/L_{y}\). The studied box geometries are summarized in Table 1. MD simulations of 160 ns length each were performed using Gromacs 5.0.6.[20; 21] The integration time step for all simulations was 2 fs. The temperature of the simulated systems was controlled employing the Nosé-Hoover thermostat [22; 23] with a coupling time \(\tau_{T}\!=\!1.0\) ps. Both the Lennard-Jones and electrostatic interactions were treated by smooth particle mesh Ewald summation.[24; 25; 26] The Ewald convergence parameter was set to a relative accuracy of the Ewald sum of \(10^{-5}\) for the Coulomb-interaction and \(10^{-3}\) for the LJ-interaction. All bond lengths were kept fixed during the simulation run and distance constraints were solved by means of the SETTLE procedure.[27] The simulations were carried out in \(n_{\text{W}}=320\) subsequent segments of \(\tau_{\text{W}}\!=\!500\) ps length, resulting in a total simulation time of \(160\) ns each. For each simulation segment, 2500 frames were stored with a time interval of \(0.2\) ps between consecutive frames. All reported properties were then calculated for each of the segments separately to be able to estimate the uncertainty using standard statistical analysis procedures.[28; 29] Here the variance \(\sigma_{X}^{2}\) of a computed property \(X\) is estimated via
\[\sigma_{X}^{2}=\langle X^{2}\rangle-\langle X\rangle^{2}\;, \tag{10}\]
where \(\langle\ldots\rangle\) indicates averaging over \(n_{\text{W}}\) simulation run segments, and its uncertainty is determined via
\[\hat{\sigma}_{X}=\frac{\sigma_{X}}{\sqrt{n_{\text{W}}}}\;. \tag{11}\]
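In code, Equations 10 and 11 amount to ordinary block averaging over the \(n_{\text{W}}\) segments; a minimal sketch (the helper name and array layout are our own):

```python
import numpy as np

def block_statistics(x):
    """Mean, variance (Eq. 10) and standard error (Eq. 11) over n_W segment values."""
    x = np.asarray(x, dtype=float)
    mean = x.mean()
    variance = (x ** 2).mean() - mean ** 2
    return mean, variance, np.sqrt(variance / len(x))

# x would hold one value of, e.g., D_PBC,xx per 500 ps segment (n_W = 320)
```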
All simulation boxes listed in Table 2 were created starting from a single simulation box with \(L_{x}\!=\!L_{y}\!=\!2.48582\) nm and \(L_{z}\!=\!7.45747\) nm, containing 1536 TIP4P/2005 water molecules at a density of \(\rho\!=\!0.9972\) g cm\({}^{-3}\), which is available from our GitHub repository.[30] The boxes were then morphed into their final form in a volume-preserving fashion using short nonequilibrium MD simulation runs of \(200\) ps length, employing GROMACS' "deform" feature.[31] The prepared systems were then equilibrated under NVT conditions for another \(500\) ps.
## IV Results and Discussion
Self-diffusion coefficients were computed from the slope of the center-of-mass mean square displacement of the water molecules using the Einstein formula [28] according to
\[D_{\text{PBC}}=\frac{1}{6}\frac{\partial}{\partial t}\lim_{t\to\infty}\left<| \mathbf{r}(0)-\mathbf{r}(t)|^{2}\right>\, \tag{12}\]
and
\[D_{\text{PBC},ii}=\frac{1}{2}\frac{\partial}{\partial t}\lim_{t\to\infty} \left<|r_{i}(0)-r_{i}(t)|^{2}\right>\, \tag{13}\]
where \(\mathbf{r}(t)=[r_{x}(t),r_{y}(t),r_{z}(t)]\) represents the position of the center of mass of a water molecule at time \(t\) and the \(r_{i}(t)\) are its respective components in \(x\)-, \(y\)-, and \(z\)-direction. All computed self-diffusion coefficients shown in Table 2 were determined from the slope of the mean square displacement of the water molecules fitted to time intervals between \(15\,\)ps and \(200\,\)ps.
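The slope extraction itself is a simple linear fit; a minimal sketch (a helper of our own, assuming time in ps and MSD in nm\({}^{2}\), returning \(D\) in m\({}^{2}\) s\({}^{-1}\)):

```python
import numpy as np

def diffusion_from_msd(t_ps, msd_nm2, t_lo=15.0, t_hi=200.0, dims=3):
    """Fit the MSD slope in [t_lo, t_hi] and apply the Einstein relation."""
    t_ps, msd_nm2 = np.asarray(t_ps), np.asarray(msd_nm2)
    sel = (t_ps >= t_lo) & (t_ps <= t_hi)
    slope, _ = np.polyfit(t_ps[sel], msd_nm2[sel], 1)   # nm^2 per ps
    return slope / (2 * dims) * 1e-6                    # 1 nm^2/ps = 1e-6 m^2/s

# For a single Cartesian direction (Eq. 13), pass dims=1 with the 1D MSD.
```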
Table 2 contains direction-dependent self-diffusion coefficients \(D_{\text{PBC},ii}\) obtained from MD simulations using orthorhombic unit cells with \(L_{y}/L_{z}=L_{x}/L_{y}\) for various ratios \(L_{y}/L_{z}\). Here a decreasing ratio \(L_{y}/L_{z}\) indicates an increasing deviation from the cubic form. If the hydrodynamic interaction approach towards self-diffusion according to Equation 1 holds, we expect a linear relation between the difference between self-diffusion coefficients in different directions \(i\) and \(j\) with \(\Delta D_{\text{PBC},ij}=D_{\text{PBC},ii}-D_{\text{PBC},jj}\) and the difference between the Madelung constant analogues weighted with inverse box-lengths \(\zeta_{jj}/L_{j}-\zeta_{ii}/L_{i}\) according to Equation 4. Figure 1a demonstrates that this linear relationship is excellently fulfilled. A linear fit including all \(9\times 3\!=\!27\) direction-dependent data points passes almost perfectly through the origin with \(\Delta D_{\text{PBC},ij}(0)=(0.0013\pm 0.0020)\times 10^{-9}\,\text{m}^{2}\,\text{s}^{-1}\), while the slope of the fitted solid grey line in Figure 1a yields a viscosity of \(\eta=(0.8957\pm 0.0178)\,\)mPa s, which is consistent with the value \(\eta=(0.900\pm 0.051)\,\)mPa s obtained for TIP4P/2005 under the same conditions (temperature and density) in Ref. [13]. It is evident, however, that for near-cubic box geometries with \(L_{y}/L_{z}\geq 0.95\) the error and the magnitude of \(\Delta D_{\text{PBC},ij}\) have almost the same size, thus rendering those systems an extremely unreliable source for determining viscosity data. This behavior is a consequence of the small variation found for the size of the computed uncertainties of the direction-dependent diffusion coefficients, as can be seen from the data shown in Table 2. This leads to a strong increase of the relative error of \(\Delta D_{\text{PBC},ij}\) for \(L_{y}/L_{z}\to 1\). This behavior is perfectly demonstrated in Figure 1b in the form of a log-log plot of the relative error of the computed differences of the self-diffusion coefficients \(\hat{\sigma}(\Delta D_{\text{PBC},ij})/\Delta D_{\text{PBC},ij}\) vs. \(\zeta_{jj}/L_{j}-\zeta_{ii}/L_{i}\). Given the linear dependence of \(\Delta D_{\text{PBC},ij}\) vs. \(\zeta_{jj}/L_{j}-\zeta_{ii}/L_{i}\) as shown in Figure 1a, a constant size of \(\hat{\sigma}(\Delta D_{\text{PBC},ij})\) would lead to an inverse proportional behavior according to
\[\frac{\hat{\sigma}(\Delta D_{\text{PBC},ij})}{\Delta D_{\text{PBC},ij}}\propto \left(\frac{\zeta_{jj}}{L_{j}}-\frac{\zeta_{ii}}{L_{i}}\right)^{\beta} \tag{14}\]
with an exponent \(\beta\!=\!-1\). Due to the small, albeit significant, variation of \(\hat{\sigma}(\Delta D_{\text{PBC},ij})\) as a function of \(L_{y}/L_{z}\), however, the solid grey line in Figure 1b indicates a scaling with a slightly smaller exponent \(\beta\!=\!-0.9\).
According to Equation 4, we can compute an estimate of the viscosity \(\eta_{ij}\) for each difference in diffusion coefficients \(\Delta D_{\text{PBC},ij}\). These viscosity estimates are shown in Figure 2a as a function of \(\zeta_{jj}/L_{j}-\zeta_{ii}/L_{i}\). The predictive value of the most anisotropic systems is very good; the quality of the estimate, however, drops significantly for \(L_{y}/L_{z}\to 1\), since \(\hat{\sigma}(\eta_{ij})/\eta_{ij}\approx\hat{\sigma}(\Delta D_{\text{PBC},ij})/\Delta D_{\text{PBC},ij}\). In Figure 2b, the computed estimates for the true self-diffusion coefficients \(D_{0}\) according to Equation 2 are shown. To compute \(D_{0}\), we have used the average viscosity of \(\eta=(0.8957\pm 0.0178)\,\)mPa s. Note that Equation 2 holds up extremely well, leading to a very small statistical variation of the computed average for \(D_{0}\) with no recognizable trend as a function of \(L_{y}/L_{z}\), including the simulations with near-cubic box shapes. Averaging those values leads to \(D_{0}=(2.2923\pm 0.0008)\times 10^{-9}\,\text{m}^{2}\,\text{s}^{-1}\). The accuracy suggested by the small statistical error, however, might be misleading, since it does not properly account for the uncertainty of the average viscosity,
\begin{table}
\begin{tabular}{c c c c c c c} \(L_{y}/L_{z}\) & \(L_{x}/\text{nm}\) & \(L_{y}/\text{nm}\) & \(L_{z}/\text{nm}\) & \(D_{\text{PBC},xx}/10^{-9}\text{m}^{2}\text{s}^{-1}\) & \(D_{\text{PBC},yy}/10^{-9}\text{m}^{2}\text{s}^{-1}\) & \(D_{\text{PBC},zz}/10^{-9}\text{m}^{2}\text{s}^{-1}\) \\ \hline
0.95 & 3.40592 & 3.58517 & 3.77387 & \(2.1032\pm 0.0050\) & \(2.1004\pm 0.0046\) & \(2.0911\pm 0.0041\) \\
0.90 & 3.22666 & 3.58517 & 3.98353 & \(2.1145\pm 0.0046\) & \(2.1013\pm 0.0045\) & \(2.0777\pm 0.0044\) \\
0.85 & 3.04740 & 3.58517 & 4.21785 & \(2.1330\pm 0.0050\) & \(2.1022\pm 0.0045\) & \(2.0787\pm 0.0044\) \\
0.80 & 2.86814 & 3.58517 & 4.48147 & \(2.1519\pm 0.0050\) & \(2.1019\pm 0.0044\) & \(2.0793\pm 0.0040\) \\
0.75 & 2.68888 & 3.58517 & 4.78023 & \(2.1618\pm 0.0053\) & \(2.1177\pm 0.0048\) & \(2.0662\pm 0.0039\) \\
0.70 & 2.50962 & 3.58517 & 5.12168 & \(2.1957\pm 0.0061\) & \(2.1295\pm 0.0057\) & \(2.0545\pm 0.0040\) \\
0.65 & 2.33036 & 3.58517 & 5.51565 & \(2.2284\pm 0.0070\) & \(2.1478\pm 0.0061\) & \(2.0564\pm 0.0038\) \\
0.60 & 2.15110 & 3.58517 & 5.97529 & \(2.2740\pm 0.0071\) & \(2.1763\pm 0.0071\) & \(2.0508\pm 0.0039\) \\ \(0.578047\dots\) & 2.07240 & 3.58517 & 6.20221 & \(2.2964\pm 0.0077\) & \(2.1841\pm 0.0074\) & \(2.0484\pm 0.0041\) \\ \end{tabular}
\end{table}
Table 2: Parameters describing the MD simulations of 1536 TIP4P/2005 water molecules using an orthorhombic unit cell with box-length ratios \(L_{y}/L_{z}=L_{x}/L_{y}\) performed under NVT conditions at a temperature of \(T\!=\!298\,\)K and a density of \(\rho\!=\!0.9972\,\)g\(\,\)cm\({}^{-3}\). Here \(L_{x}\), \(L_{y}\), and \(L_{z}\) represent the box lengths of the orthorhombic unit cell. The direction-dependent self-diffusion coefficients \(D_{\text{PBC},ii}\) are determined from the slope of the center-of-mass mean square displacement of the water molecules over a time interval between \(15\,\)ps and \(200\,\)ps.
which would shift the whole ensemble of data points in Figure 2b simultaneously up or down.
Next, we would like to compute both \(D_{0}\) and \(\eta\) from the system with "magical" box-length ratios of \(L_{y}/L_{z}=0.57804765578\) and \(L_{x}/L_{z}\!=\!0.33413909235\). Here, according to Equation 8, \(D_{0}=D_{\mathrm{PBC},xx}=(2.2964\pm 0.0077)\times 10^{-9}\) m\({}^{2}\) s\({}^{-1}\), which is in excellent agreement with the estimate including all box shapes. The weighted average of the viscosity computed according to Equation 9 is \(\eta\!=\!(0.8869\pm 0.0415)\) mPa s, which is slightly smaller than the estimate from the slope of the data in Figure 1a, albeit within the range of the statistical uncertainty. Note that in Ref. [13], the estimate for \(D_{0}\) and the viscosity for a system of the same number of molecules at the same density, but for a shorter simulation of just \(10\) ns length (with \(n_{\rm W}\!=\!20\)), was obtained as \(D_{0}=(2.283\pm 0.027)\times 10^{-9}\) m\({}^{2}\) s\({}^{-1}\) and \(\eta\!=\!(0.853\pm 0.084)\) mPa s.
Figure 1: Analysis of the direction-dependent self-diffusion data according to Equation 4. a) Differences of the self-diffusion coefficients in \(i\)- and \(j\)-direction \(\Delta D_{\text{PBC},ij}\!=\!D_{\text{PBC},ii}-D_{\text{PBC},jj}\) with \(i,j\in\{x,y,z\}\) vs. \(\zeta_{jj}/L_{j}-\zeta_{ii}/L_{i}\) according to Equation 4 for TIP4P/2005 water at \(298\) K determined from MD simulations employing orthorhombic simulation boxes with varying box-length ratios \(L_{y}/L_{z}\!=\!L_{x}/L_{y}\). The grey solid line represents a linear fit of the data according to Equation 4, resulting in a viscosity of \(\eta=(0.8957\pm 0.0178)\) mPa s. b) Log-log plot of the relative error of the computed differences of the self-diffusion coefficients \(\hat{\sigma}(\Delta D_{\text{PBC},ij})/\Delta D_{\text{PBC},ij}\), given in percent, vs. \(\zeta_{jj}/L_{j}-\zeta_{ii}/L_{i}\). The solid grey line represents a fitted scaling of the errors according to \((\zeta_{jj}/L_{j}-\zeta_{ii}/L_{i})^{\beta}\) with an exponent \(\beta\!=\!-0.9\).
Figure 2: Shear viscosities \(\eta\) and system size independent true self-diffusion coefficients \(D_{0}\) obtained for TIP4P/2005 water at \(298\) K, determined from MD simulations employing orthorhombic simulation boxes with varying box-length ratios \(L_{y}/L_{z}\!=\!L_{x}/L_{y}\). a) Shear viscosities computed according to Equation 4 using self-diffusion coefficients listed in Tables 1 and 2. The grey solid line represents a viscosity of \(\eta\!=\!0.8957\) mPa s, corresponding to the linear fit of the data shown in Figure 1a according to Equation 4. b) True self-diffusion coefficients \(D_{0}\) obtained according to Equation 2 from data listed in Tables 1 and 2, using the average viscosity \(\eta\!=\!(0.8957\pm 0.0178)\) mPa s. The grey solid line represents the average value \(D_{0}\!=\!2.2923\times 10^{-9}\) m\({}^{2}\) s\({}^{-1}\).
Given the 16 times longer simulation runs, we would expect a four times smaller statistical uncertainty, which coincides rather well with the computed uncertainty for \(D_{0}\). The fact that the statistical uncertainty of \(\eta\) is just about one-half of the value reported in Ref. [13] suggests that the "magical" simulation-box procedure proposed here is less effective in estimating the viscosity than the original "OrthoBoXY"-method proposed in Ref. [13].
Note that the computed viscosities are all lying very close to the experimental value for ordinary water between \(0.892\,\)mPa s and \(0.893\,\)mPa s at \(298.15\,\)K, reported by Harris and Woolf [32, 33] (when using their corrected data tables listed in Ref.[33]), while the self-diffusion coefficients almost perfectly match the experimental value of Krynicki et al.[34] with \(2.30\times 10^{-9}\,\)m\({}^{2}\,\)s\({}^{-1}\) at \(298.2\,\)K, and of Mills [35] with \(2.299\times 10^{-9}\,\)m\({}^{2}\,\)s\({}^{-1}\) at \(25^{\circ}\)C.
Finally, it should not go unnoticed that the data shown in Figures 1a and 2b provide excellent evidence for the validity of the hydrodynamic-interaction-based approach for correcting self-diffusion coefficients for non-cubic geometries.
## V Conclusion
We have derived equations representing a generalized "OrthoBoXY" procedure, which can be applied to MD simulations with any orthorhombic box geometry for determining both the system size independent true self-diffusion coefficient \(D_{0}\) and the shear viscosity \(\eta\). We have tested this approach by using NVT MD simulations of systems containing 1536 TIP4P/2005 water molecules at a density of \(\rho=0.9972\,\)g\(\,\)cm\({}^{-3}\) and a temperature of \(T=298\,\)K using varying box geometries. These systems obey the condition \(L_{y}/L_{z}=L_{x}/L_{y}\), while the ratio \(L_{y}/L_{z}\) was systematically varied. In particular, we have explored the feasibility of employing box shapes more akin to the cubic form.
We have demonstrated that we are indeed able to determine the _true_ self-diffusion coefficient \(D_{0}\) for TIP4P/2005 water without prior knowledge of the shear viscosity from single MD simulation runs using this generalized approach, similar to what we have achieved in our previous paper using simulation boxes with \(L_{z}/L_{x}=L_{z}/L_{y}=2.7933596497\).[13] The computed values for \(D_{0}\) agree well with the values determined from MD simulations employing both orthorhombic and cubic unit cells discussed in Ref. [13]. Moreover, both the computed self-diffusion coefficient and shear viscosity agree nearly quantitatively with the experimentally observed data for water at \(298\,\)K.
However, the idea to use box shapes more akin to the cubic form turns out to be only partially practical, since the small differences in direction-dependent self-diffusion coefficients observed for near-cubic geometries lead to unsustainably large relative uncertainties for the computed viscosities. Large differences between the box-lengths are instead the preferred choice. This leads us to the conclusion that the original "OrthoBoXY"-approach outlined in Ref. [13] has to be considered already quite efficient.
Instead, another "magical" set of box-length ratios has been discovered with \(L_{y}/L_{z}=0.57804765578\) and \(L_{x}/L_{z}=0.33413909235\), where the self-diffusion coefficient in \(x\)-direction becomes system sized independent, such that \(D_{0}=D_{\mathrm{PBC},xx}\). An expression for determining the viscosity \(\eta\), employing \(D_{\mathrm{PBC},yy}\) and \(D_{\mathrm{PBC},zz}\), is given by Equation 9. However, from the standpoint of box anisotropy, this box shape is deemed less preferable to the original "OrthboOXY"-approach, since the smallest box-length is about \(20\) per cent smaller than the smallest box length of a system of the same size using the original "OrthboOXY"-approach under the same conditions. Moreover, the simulations indicate, that also uncertainty of the predicted viscosity is slightly worse than that of the original "OrthboOXY"-procedure.
## Acknowledgements
We thank the computer center at the University of Rostock (ITMZ) for providing and maintaining computational resources.
## Data availability statement
The codes of GROMACS and MOSCITO are freely available. Input parameter and topology files for the MD simulations and the code for computing the Madelung constant analogues for cubic and orthorhombic lattices can be downloaded from GitHub via [https://github.com/PaschekLab/OrthboOXY/](https://github.com/PaschekLab/OrthboOXY/)
|
2303.16462 | Effect of Coriolis force on the shear viscosity of quark matter: A
nonrelativistic description | Shear viscosity becomes anisotropic in a rotating medium. It is discovered
here that for rotating thermalized quantum systems such as those created in
relativistic heavy-ion collisions, the coefficient of shear viscosity breaks
up into five independent components. Similar phenomena were also discovered for
quark-gluon plasma in the presence of the magnetic field. Like the Lorentz
force at a finite magnetic field, the Coriolis force also creates anisotropic
viscosity at nonzero rotation. As a first approach, for simplicity, the
calculations are done in the nonrelativistic prescription, with a future
proposal to extend it toward a relativistic description. Introducing the
Coriolis force term in relaxation time approximated Boltzmann transport
equation, we have found different effective relaxation times along the
parallel, perpendicular, and Hall directions in terms of actual relaxation time
and rotating time period. Comparing the present formalism with the finite
magnetic field picture, we have shown the equivalence of roles between the
rotating and cyclotron time periods, where the rotating time period is inverse
of twice the angular velocity. | Cho Win Aung, Ashutosh Dwibedi, Jayanta Dey, Sabyasachi Ghosh | 2023-03-29T05:29:33Z | http://arxiv.org/abs/2303.16462v2 | # Effect of Coriolis Force on Shear Viscosity: A Non-Relativistic Description
###### Abstract
We show that during the transition from zero to finite rotation, the shear viscosity coefficients change from an isotropic to an anisotropic nature due to the Coriolis force, analogous to the effect of the Lorentz force at a finite magnetic field found in earlier studies of relativistic matter such as the quark-gluon plasma. For simplicity, we have done this for non-relativistic matter, with a future proposal to extend it towards a relativistic description. Introducing the Coriolis force term in the relaxation-time-approximated Boltzmann transport equation, we have found different effective relaxation times along the parallel, perpendicular, and Hall directions in terms of the actual relaxation time and the rotating time period. Comparing the present formalism with the finite magnetic field picture, we have shown the equivalence of roles between the rotating and cyclotron time periods, where the rotating time period is defined as the inverse of twice the angular velocity.
## I Introduction
In off-central heavy ion collisions (HIC), a very high orbital angular momentum (OAM) can be deposited. In a typical collision, the OAM created by the torque at the time of collision can be of the order of \(\sim\) (10\({}^{3}\) - 10\({}^{7}\)) \(\hbar\), depending on the impact parameter, collision energy, and system size [1; 2; 3]. A fraction of this initial OAM is transferred to the created quark-gluon plasma (QGP) medium in the form of local vorticity. The impact of such a huge initial OAM, or of the later-time vorticity, on various observables and on polarization has been calculated from various theoretical viewpoints. Refs. [3; 4; 5; 6; 7; 8; 9; 10] have studied the statistical properties, with particular interest in the polarization of particles in HIC as demanded by angular momentum conservation, whereas Refs. [2; 11; 12; 13; 14; 15] have taken the approach of spin-orbit coupling under strong interactions to explain the polarization observed in HIC. On the other hand, the authors of Refs. [16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32] have taken the approach of quantum kinetic theory to obtain the chiral anomalies and polarization effects observed in HIC. More recently, a new theoretical framework has been proposed where the complete evolution of spin is taken care of through the explicit incorporation of polarization in a hydrodynamic framework [33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43]. The evolution of vorticity and the polarization of particles, with a particular focus on the \(\Lambda\)-hyperon, have been calculated with various transport and hydrodynamical models [44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55]. See Refs. [36; 56] for recent review papers on topics related to the vorticity of the QGP and the polarization of hadrons. There is a gross equivalence between the OAM and the magnetic field, both of which can be produced in peripheral collisions of heavy ions; Refs. [57; 58] have shown an analogy between the effect of rotation (Coriolis force) and that of the magnetic field (Lorentz force). Now, the medium constituents (quarks and hadrons) have two basic quantities, momentum and spin, which will be affected by both the OAM and the magnetic field. The former quantity, momentum, gets a similar kind of deflection from the OAM and the magnetic field through the Coriolis and Lorentz forces, respectively. The latter quantity, spin, is affected by the OAM (as well as by magnetic fields) through a different mechanism, which is the focal interest of the spin-hydrodynamics community [36; 56] and is ultimately connected with the hot experimental quantity, the polarization of hadrons. The present article focuses only on the former quantity, momentum, which is affected by the vorticity of the medium via the Coriolis force. Our future aim will be to go for a more realistic picture by considering other ingredients like the effect of other (pseudo) forces due to rotation, the effect of vorticity on the spin, etc. The present article, however, concentrates only on the effect of the Coriolis force on the shear viscosity of rotating matter. Again for simplicity, we will start with non-relativistic matter, with the future aim of extending it towards a relativistic description.
In recent times, Refs. [59; 60; 61; 62; 63; 64; 65] have gone through a systematic, step-by-step study of the effect of the Lorentz force on the shear viscosity of magnetized matter. Exploiting the similarity between the Lorentz force at a finite magnetic field and the Coriolis force at finite vorticity/rotation, the present article aims to explore the effect of the Coriolis force on the shear viscosity of rotating matter. At a finite magnetic field, the (shear) viscous stress tensor breaks into five independent components, as one can build five independent velocity gradient tensors in terms of the fluid element velocity \(u_{i}\) and the magnetic field unit vector \(b_{i}\). In the absence of a magnetic field, only a single velocity gradient component in terms of \(u_{i}\) is possible; hence, one gets an isotropic shear viscosity coefficient of the medium. So, during the transition from the zero to the finite magnetic field picture, the shear viscosity coefficients transform from an isotropic to an anisotropic nature. Similarly, the viscous stress tensor can have five independent velocity gradient components for a fluid under finite rotation in terms of the fluid element velocity \(u_{i}\) and the angular velocity unit vector \(\omega_{i}\). The detailed
formalism is built in the next section (II); then, in Sec. (III), we describe the numerical outcomes on the temperature and angular velocity dependence of the shear viscosity with graphical visualization and interpretation. In the end, we summarize our findings in Sec. (IV).
## II Formalism
In classical mechanics, for a system rotating with an angular velocity \(\vec{\Omega}\), one can write the following operator equation, which holds for any arbitrary vector [66],
\[\left(\frac{d}{dt}\right)_{s}\equiv\left(\frac{d}{dt}\right)_{r}+\vec{\Omega} \times\quad, \tag{1}\]
where the subscripts s and r mean that the time derivative of a vector has to be performed with respect to the space-fixed and rotating frames, respectively. If one substitutes the position vector \(\vec{r}\) into the operator equation, one gets the relation \(\vec{v}_{s}=\vec{v}_{r}+\vec{\Omega}\times\vec{r}\), where one identifies \(\vec{v}_{s}\) and \(\vec{v}_{r}\) with the velocity in the space-fixed and rotating frame, respectively. Again substituting this into the general Eq. (1), we have,
\[\vec{a}_{s}=\vec{a}_{r}+2(\vec{\Omega}\times\vec{v}_{r})+\vec{\Omega}\times( \vec{\Omega}\times\vec{r})+\dot{\vec{\Omega}}\times\vec{r}\;. \tag{2}\]
We will ignore the subscripts s and r on the vectors for simplicity of notation; from now onward, we will denote \(\vec{v}_{r}\) and its components by \(\vec{v}\) and \(v_{i}\), respectively. The terms of Eq. (2) can be rearranged to write Newton's equation in a rotating frame. The second term in Eq. (2) is known as the Coriolis acceleration. In Fig. (1), we have schematically presented a fluid rotating with angular velocity \(\vec{\Omega}\). For simple visualization, the geometry of the fluid system has been chosen as cylindrical. If we take any fluid element and look at it closely, the particles inside it have a random part of the velocity \(\vec{v}\) on top of the rotational velocity \(\vec{\Omega}\times\vec{r}\). All the particles inside any fluid element feel the Coriolis force \(2m(\vec{v}\times\vec{\Omega})\). For the case of constant angular velocity (as is assumed here), the Euler force vanishes, but the other two forces, i.e., the Coriolis and centrifugal forces, remain non-zero. In the present calculation, we will consider only the effect of the Coriolis force on the particle motion.
Figure 1: Schematic picture of a rotating cylindrical fluid (left panel), one of whose (cubical) fluid elements is zoomed in on in the right panel, where the particles inside the fluid element box move randomly and experience the Coriolis force.
We can find a similarity or equivalence between the finite magnetic field and finite rotation pictures. For example, at a finite magnetic field (\(B\)), a particle with charge \(q\) and velocity \(v\) will feel the Lorentz force \(\vec{F}=q\vec{v}\times\vec{B}\), while at an angular velocity \(\Omega\) of the medium, a particle with mass \(m\) and velocity \(v\) will feel the Coriolis force \(\vec{F}=2m\vec{v}\times\vec{\Omega}\). The dissipative part of the energy-momentum tensor is modified at the microscopic level through the Lorentz force, and a similar kind of modification can be expected for the finite rotation case. The similarity between finite \(B\) and finite \(\Omega\) in the microscopic description inspires us to build a similar kind of macroscopic description. Refs. [59; 60; 61; 62; 63; 64; 65] have prescribed that the macroscopic expression of the dissipative energy-momentum tensor at finite \(B\) can be built from the basic tensors - the fluid velocity (\(u_{i}\)), the Kronecker delta (\(\delta_{ij}\)), and the components of the magnetic field unit vector, \(b_{i}\) (\(B_{i}\equiv Bb_{i}\)). The same macroscopic structure can be expected at finite rotation by replacing \(b_{i}\) with the angular velocity unit vector \(\omega_{i}\) (\(\Omega_{i}\equiv\Omega\omega_{i}\)). Following a structure similar to the finite magnetic field case, we can write the viscous stress tensor at finite angular velocity as:
\[\tau^{ij}=\eta^{ijkl}U_{kl}\, \tag{3}\]
where \(U_{kl}=\frac{1}{2}\left(\frac{\partial u_{k}}{\partial x_{l}}+\frac{\partial u_{l}}{\partial x_{k}}\right)\) is the velocity gradient and \(\eta^{ijkl}\) is the viscosity tensor. One can build seven independent tensor components with the property that they remain symmetric under the exchange of indices \(i\leftrightarrow j\) and \(k\leftrightarrow l\)[67]. These tensor components are given below:
\[\delta_{ik}\delta_{jl}+\delta_{jk}\delta_{il},\] \[\delta_{ij}\delta_{kl},\] \[\delta_{ik}\omega_{j}\omega_{l}+\delta_{jk}\omega_{i}\omega_{l}+ \delta_{il}\omega_{j}\omega_{k}+\delta_{jl}\omega_{i}\omega_{k},\] \[\delta_{ij}\omega_{k}\omega_{l}+\delta_{kl}\omega_{i}\omega_{j},\] \[\omega_{i}\omega_{j}\omega_{k}\omega_{l},\] \[\omega_{ik}\delta_{jl}+\omega_{jk}\delta_{il}+\omega_{il}\delta_ {jk}+\omega_{jl}\delta_{ik},\] \[\omega_{ik}\omega_{j}\omega_{l}+\omega_{jk}\omega_{i}\omega_{l}+ \omega_{il}\omega_{j}\omega_{k}+\omega_{jl}\omega_{i}\omega_{k}, \tag{4}\]
where \(\omega_{ij}\equiv\epsilon_{ijk}\omega_{k}\). We can make seven independent linear combinations of the above basis to obtain tensors which, when contracted with \(U_{kl}\), give five traceless tensors (\(C^{n}_{ij}\), \(n=0\) to \(4\)) and two tensors with non-zero trace (\(C^{n}_{ij}\), \(n=5\) to \(6\)). Similar to the structure of the 5 traceless and 2 non-zero-trace tensors for the finite magnetic field case [59; 60; 61; 62; 63; 64; 68], they can be expressed as:
\[C^{0}_{ijkl}=(3\omega_{i}\omega_{j}-\delta_{ij})(\omega_{k}\omega_{l}-\frac{1}{3}\delta_{kl})\] \[C^{1}_{ijkl}=\delta_{il}\delta_{jk}+\delta_{jl}\delta_{ik}-\delta_{ij}\delta_{kl}+\delta_{ij}\omega_{k}\omega_{l}-\delta_{jl}\omega_{i}\omega_{k}\] \[-\delta_{jk}\omega_{i}\omega_{l}+\delta_{kl}\omega_{i}\omega_{j}-\delta_{ik}\omega_{j}\omega_{l}-\delta_{il}\omega_{j}\omega_{k}+\omega_{i}\omega_{j}\omega_{k}\omega_{l}\] \[C^{2}_{ijkl}=\delta_{ik}\omega_{j}\omega_{l}+\delta_{il}\omega_{j}\omega_{k}+\delta_{jk}\omega_{i}\omega_{l}+\delta_{jl}\omega_{i}\omega_{k}-4\omega_{i}\omega_{j}\omega_{k}\omega_{l}\] \[C^{3}_{ijkl}=\delta_{il}\omega_{jk}+\delta_{jl}\omega_{ik}-\omega_{ik}\omega_{j}\omega_{l}-\omega_{jk}\omega_{i}\omega_{l}\] \[C^{4}_{ijkl}=\omega_{ik}\omega_{j}\omega_{l}+\omega_{il}\omega_{j}\omega_{k}+\omega_{jk}\omega_{i}\omega_{l}+\omega_{jl}\omega_{i}\omega_{k}\] \[C^{5}_{ijkl}=\delta_{ij}\delta_{kl}\] \[C^{6}_{ijkl}=\delta_{ij}\omega_{k}\omega_{l}+\delta_{kl}\omega_{i}\omega_{j}\, \tag{5}\]
with
\[C^{0}_{ij}=(3\omega_{i}\omega_{j}-\delta_{ij})(\omega_{k} \omega_{l}U_{kl}-\frac{1}{3}\vec{\nabla}\cdot\vec{u})\] \[C^{1}_{ij}=2U_{ij}+\delta_{ij}U_{kl}\omega_{k}\omega_{l}-2U_{ik} \omega_{j}\omega_{k}-2U_{jk}\omega_{k}\omega_{i}\] \[+(\omega_{i}\omega_{j}-\delta_{ij})\vec{\nabla}\cdot\vec{u}+ \omega_{i}\omega_{j}\omega_{k}\omega_{l}U_{kl}\] \[C^{2}_{ij}=2(U_{ik}\omega_{j}\omega_{k}+U_{jk}\omega_{i}\omega_{ k}-2U_{kl}\omega_{i}\omega_{j}\omega_{k}\omega_{l})\] \[C^{3}_{ij}=U_{ik}\omega_{jk}+U_{jk}\omega_{ik}-U_{kl}\omega_{ik} \omega_{j}\omega_{l}-U_{kl}\omega_{jk}\omega_{i}\omega_{l}\] \[C^{4}_{ij}=2(U_{kl}\omega_{ik}\omega_{j}\omega_{l}+U_{kl}\omega_{ jk}\omega_{i}\omega_{l})\] \[C^{5}_{ij}=\delta_{ij}(\vec{\nabla}\cdot\vec{u})\] \[C^{6}_{ij}=\delta_{ij}\omega_{k}\omega_{l}U_{kl}+\omega_{i} \omega_{j}(\vec{\nabla}\cdot\vec{u})\, \tag{6}\]
where, \(C_{ij}^{n}=C_{ijkl}^{n}U_{kl}\). The viscous tensor can be written as a combination of seven basis tensors,
\[\eta_{ijkl}=\eta_{0}{C_{ijkl}}^{0}+\eta_{1}{C_{ijkl}}^{1}+\eta_{2}{C_{ijkl}}^{2}+ \eta_{3}{C_{ijkl}}^{3}+\eta_{4}{C_{ijkl}}^{4}+\zeta_{0}{C_{ijkl}}^{5}+\zeta_{1} {C_{ijkl}}^{6} \tag{7}\]
where \(\eta_{0}\) to \(\eta_{4}\) are identified as shear viscosities and \(\zeta_{0}\) and \(\zeta_{1}\) are identified with bulk viscosities of the medium. From now on, we will concentrate on the shear viscosities of the medium; therefore, we will ignore the bulk part of the viscous stress tensor. So, the viscous stress tensor given in Eq. (3) becomes the shear stress tensor, which can be written as:
\[\pi_{ij} =\eta_{n}C_{ijkl}^{n}U^{kl}\] \[=\eta_{n}C_{ij}^{n}. \tag{8}\]
Eq. (8) is basically the macroscopic expression of the shear stress tensor \(\pi_{ij}\). For its microscopic expression, we have to use the kinetic theory framework, which defines the dissipative part of the stress tensor as:
\[\pi_{ij}=g\int\frac{d^{3}\vec{p}}{(2\pi)^{3}}mv_{i}v_{j}\delta f\, \tag{9}\]
where \(g\) is the degeneracy factor of the medium constituent particle with mass \(m\) and velocity \(v_{i}=p_{i}/m\).
To determine the form of \(\delta f\), we will use the Boltzmann Transport Equation (BTE):
\[\vec{v}\cdot\frac{\partial f}{\partial\vec{r}}+\vec{F}\cdot\frac{\partial f}{ \partial\vec{p}}+\frac{\partial f}{\partial t}=\left(\frac{\partial f}{ \partial t}\right)_{coll}\, \tag{10}\]
where \(f\) and \(\vec{F}\) are the non-equilibrium distribution function of the particles and the force acting on the particles, respectively. The BTE in relaxation time approximation (RTA) can be written as:
\[\vec{v}\cdot\frac{\partial f}{\partial\vec{r}}+\vec{F}\cdot\frac{\partial f}{ \partial\vec{p}}+\frac{\partial f}{\partial t}=-\frac{\delta f}{\tau_{c}} \tag{11}\]
where the system has been assumed to be slightly out of equilibrium. The total distribution function is composed of two parts: the part corresponding to local equilibrium, \(f_{0}\), and a perturbed part, \(\delta f\), i.e., \(f=f_{0}+\delta f\); \(\tau_{c}\) is the so-called relaxation time of the system. Substituting the expression of the Coriolis force in place of \(\vec{F}\) and keeping the terms which are first order in \(\delta f\) on the LHS of Eq. (11), we have,
\[\vec{v}\cdot\frac{\partial f_{0}}{\partial\vec{r}}+2(\vec{v}\times\vec{ \Omega})\cdot\frac{\partial\delta f}{\partial\vec{v}}+\frac{\partial f_{0}}{ \partial t}=-\frac{\delta f}{\tau_{c}} \tag{12}\]
where the local equilibrium distribution is \(f_{0}=1/\left[\exp\left(\frac{E-\mu(\vec{r},t)-\vec{u}(\vec{r},t)\cdot\vec{p}}{T(\vec{r},t)}\right)+1\right]\) and \(\vec{u}\) is the fluid velocity. By keeping only the terms that correspond to stress in the fluid, the LHS of Eq. (12) can be written as:
\[\frac{mv_{i}v_{j}}{T}\frac{\partial u_{j}}{\partial x_{i}}f_{0}(1-f_{0})+2( \vec{v}\times\vec{\Omega})\cdot\frac{\partial\delta f}{\partial\vec{v}}=- \frac{\delta f}{\tau_{c}} \tag{13}\]
where we have followed Einstein's summation convention. Using the identity \(U_{ij}\equiv\frac{1}{2}\left(\frac{\partial u_{i}}{\partial x_{j}}+\frac{\partial u_{j}}{\partial x_{i}}\right)\), we can express Eq. (13) as:
\[\frac{m}{T}v_{i}v_{j}U_{ij}f_{0}(1-f_{0})+2(\vec{v}\times\vec{\Omega})\cdot \frac{\partial\delta f}{\partial\vec{v}}=-\frac{\delta f}{\tau_{c}}. \tag{14}\]
To calculate \(\pi_{ij}\), we need \(\delta f\), which would be obtained by solving Eq. (14). We will guess the solution of Eq. (14) as:
\[\delta f=\sum_{n=0}^{4}C_{n}C_{kl}^{n}v_{k}v_{l}. \tag{15}\]
The Eq. (14) can be rewritten as:
\[\frac{m}{T}v_{i}v_{j}U_{ij}f_{0}(1-f_{0})+2\epsilon_{ijk}v_{j} \omega_{k}\Omega\frac{\partial\delta f}{\partial v_{i}}=-\frac{\delta f}{\tau_ {c}}\] \[\implies\frac{m}{T}v_{i}v_{j}U_{ij}f_{0}(1-f_{0})+\frac{1}{\tau_{ \Omega}}\omega_{ij}v_{j}\frac{\partial\delta f}{\partial v_{i}}=-\frac{\delta f }{\tau_{c}}\, \tag{16}\]
where \(\tau_{\Omega}=\frac{1}{2\Omega}\). We will see later that this \(\tau_{\Omega}\) plays the same role as the cyclotron time period \(\tau_{B}=m/qB\) plays in the transport coefficient expressions at finite magnetic field. Now,
\[\frac{\partial\delta f}{\partial v_{i}}=\frac{\partial}{\partial v_{i}}\sum_{n=0}^{4}C_{n}C_{kl}^{n}v_{k}v_{l}\;.\]
Using this result in Eq. (16),
\[\frac{m}{T}v_{i}v_{j}U_{ij}f_{0}(1-f_{0})+\frac{2}{\tau_{\Omega}}\omega_{ij}v_{j}\sum_{n=0}^{4}C_{n}C_{ik}^{n}v_{k}=-\frac{1}{\tau_{c}}\sum_{n=0}^{4}C_{n}C_{kl}^{n}v_{k}v_{l}\,\] \[\implies\frac{m}{T}v_{i}v_{j}U_{ij}f_{0}(1-f_{0})=\sum_{n=0}^{4}C_{n}\Big{(}-\frac{2}{\tau_{\Omega}}\omega_{ij}v_{j}v_{k}C_{ik}^{n}-\frac{1}{\tau_{c}}C_{kl}^{n}v_{k}v_{l}\Big{)}\, \tag{17}\]
where \(\omega_{ij}v_{i}v_{j}=0\) has been used. Eq. (17) can be further simplified by explicitly expressing \(C_{ik}^{n}v_{j}v_{k}\) and \(C_{kl}^{n}v_{k}v_{l}\) in terms of elementary tensor structures. All the \(C_{n}\)'s can then be calculated by matching the coefficients of the independent tensor blocks appearing in Eq. (17). By equating the coefficients of \(v_{i}v_{j}U_{ij}\), \(U_{ij}v_{j}v_{k}\omega_{ik}\), \(U_{ij}v_{k}\omega_{j}\omega_{ik}(\vec{v}\cdot\vec{\Omega})\) and \(U_{ij}v_{i}\omega_{j}(\vec{v}\cdot\vec{\Omega})\) occurring in Eq. (17), we have, respectively,
\[v_{i}v_{j}U_{ij} :-\frac{4C_{3}}{\tau_{\Omega}}-\frac{2C_{1}}{\tau_{c}}=\frac{m}{ T}f_{0}(1-f_{0})\] \[U_{ij}v_{j}v_{k}\omega_{ik} :-\frac{4C_{1}}{\tau_{\Omega}}+\frac{2C_{3}}{\tau_{c}}=0\] \[U_{ij}v_{k}\omega_{j}\omega_{ik}(\vec{v}\cdot\vec{\Omega}) :\frac{4C_{1}}{\tau_{\Omega}}-\frac{4C_{2}}{\tau_{\Omega}}-\frac{2C_{3}}{ \tau_{c}}+\frac{4C_{4}}{\tau_{c}}=0\] \[U_{ij}v_{i}\omega_{j}(\vec{v}\cdot\vec{\Omega}) :\frac{8C_{3}}{\tau_{\Omega}}-\frac{4C_{4}}{\tau_{\Omega}}+\frac{4C_{1} }{\tau_{c}}-\frac{4C_{2}}{\tau_{c}}=0. \tag{18}\]
Solving the above set of linear equations we have,
\[C_{1} =-\frac{m}{2T}f_{0}(1-f_{0})\frac{\tau_{c}}{1+4(\tau_{c}/\tau_{\Omega})^{2}}\] \[C_{2} =-\frac{m}{2T}f_{0}(1-f_{0})\frac{\tau_{c}}{1+(\tau_{c}/\tau_{\Omega})^{2}}\] \[C_{3} =-\frac{m}{T}f_{0}(1-f_{0})\frac{\tau_{c}(\tau_{c}/\tau_{\Omega})}{1+4(\tau_{c}/\tau_{\Omega})^{2}}\] \[C_{4} =-\frac{m}{2T}f_{0}(1-f_{0})\frac{\tau_{c}(\tau_{c}/\tau_{\Omega})}{1+(\tau_{c}/\tau_{\Omega})^{2}}. \tag{19}\]
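As a cross-check of the algebra, the following SymPy sketch (our illustration, not the authors' code; \(K\) abbreviates the common prefactor \(\frac{m}{T}f_{0}(1-f_{0})\)) solves the linear system of Eq. (18) and confirms the coefficients quoted in Eq. (19).

```python
# Illustrative SymPy check that C1..C4 of Eq. (19) solve the linear
# system of Eq. (18); K stands for the prefactor (m/T) f0 (1 - f0).
import sympy as sp

C1, C2, C3, C4 = sp.symbols('C1 C2 C3 C4')
tc, tO, K = sp.symbols('tau_c tau_Omega K', positive=True)

eqs = [sp.Eq(-4*C3/tO - 2*C1/tc, K),
       sp.Eq(-4*C1/tO + 2*C3/tc, 0),
       sp.Eq(4*C1/tO - 4*C2/tO - 2*C3/tc + 4*C4/tc, 0),
       sp.Eq(8*C3/tO - 4*C4/tO + 4*C1/tc - 4*C2/tc, 0)]
sol = sp.solve(eqs, [C1, C2, C3, C4], dict=True)[0]

r = tc / tO
expected = {C1: -K*tc/(2*(1 + 4*r**2)),
            C2: -K*tc/(2*(1 + r**2)),
            C3: -K*tc*r/(1 + 4*r**2),
            C4: -K*tc*r/(2*(1 + r**2))}
for c in expected:
    assert sp.simplify(sol[c] - expected[c]) == 0   # all coefficients agree
```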
Now substituting the value of \(\delta f\) in Eq. (9), and using the result \(\int v_{i}v_{j}v_{k}v_{l}\,d^{3}\vec{v}=\frac{v^{4}}{15}(\delta_{ij}\delta_{kl}+\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk})\,d^{3}v\), (\(d^{3}v\equiv 4\pi v^{2}dv\)) we have,
\[\pi_{ij} =g\int\frac{d^{3}\vec{p}}{(2\pi)^{3}}m\sum_{n=0}^{4}C_{n}C_{kl}^{ n}v_{i}v_{j}v_{k}v_{l}\] \[\pi_{ij} =g\int d^{3}v\frac{m^{4}}{(2\pi)^{3}}\sum_{n=0}^{4}C_{n}C_{kl}^{ n}(\delta_{ij}\delta_{kl}+\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk})\frac{v^{4}}{15}\] \[\pi_{ij} =\frac{2gm^{4}}{15}\sum_{n=0}^{4}C_{ij}^{n}\int\frac{d^{3}v}{(2 \pi)^{3}}v^{4}C_{n}C_{kl}^{n}\, \tag{20}\]
where \(\delta_{ij}\delta_{kl}+\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}=2C_{ij}^{n}\). Substituting the values of \(C\)'s from Eq. (19) in Eq. (20), we get the corresponding viscosities as,
\[\eta_{n}=-\frac{2gm^{4}}{15}\int\frac{d^{3}v}{(2\pi)^{3}}v^{4}C_{n}. \tag{21}\]
The \(\eta_{0}\) is the viscosity in the absence of rotation, which is the same as the expression in the absence of a magnetic field; therefore, it is given by [62; 64],
\[\eta_{0}=\frac{g\tau_{c}}{15T}\int\frac{d^{3}p}{(2\pi)^{3}}\frac{p^{4}}{m^{2}}f_ {0}(1-f_{0}).\]
From Eq. (21) we then get,
\[\eta_{1} = \frac{g}{15T}\frac{\tau_{c}}{1+4(\tau_{c}/\tau_{\Omega})^{2}}\int \frac{d^{3}p}{(2\pi)^{3}}\frac{p^{4}}{m^{2}}f_{0}(1-f_{0})\] \[\eta_{2} = \frac{g}{15T}\frac{\tau_{c}}{1+(\tau_{c}/\tau_{\Omega})^{2}}\int \frac{d^{3}p}{(2\pi)^{3}}\frac{p^{4}}{m^{2}}f_{0}(1-f_{0})\] \[\eta_{3} = \frac{2g}{15T}\frac{\tau_{c}(\tau_{c}/\tau_{\Omega})}{1+4(\tau_{ c}/\tau_{\Omega})^{2}}\int\frac{d^{3}p}{(2\pi)^{3}}\frac{p^{4}}{m^{2}}f_{0}(1-f_{0})\] \[\eta_{4} = \frac{g}{15T}\frac{\tau_{c}(\tau_{c}/\tau_{\Omega})}{1+(\tau_{c} /\tau_{\Omega})^{2}}\int\frac{d^{3}p}{(2\pi)^{3}}\frac{p^{4}}{m^{2}}f_{0}(1-f _{0}). \tag{22}\]
Comparing the final expressions of \(\eta_{n}\) at finite \(\Omega\) with those at finite \(B\), addressed in Refs. [62; 64], the reader can find the similarity in mathematical structure by equating \(\tau_{\Omega}\equiv\tau_{B}\), i.e., \(\frac{1}{2\Omega}\equiv\frac{m}{qB}\), which may be understood as an equivalence between the Coriolis and Lorentz forces
\[\vec{v}\times 2m\vec{\Omega} \equiv \vec{v}\times q\vec{B}\] \[\Rightarrow 2m\Omega \equiv qB. \tag{23}\]
The above expressions for the viscosities can be cast in terms of the Fermi function as follows,
\[\int_{0}^{\infty}dp\ p^{6}f_{0}(1-f_{0}) = \int_{0}^{\infty}dp\ p^{6}\left(T\frac{\partial f_{0}}{\partial \mu}\right) \tag{24}\] \[= T\frac{\partial}{\partial\mu}\int_{0}^{\infty}dpf_{0}p^{6}\] \[= 4\sqrt{2}Tm^{7/2}\frac{\partial}{\partial\mu}\int_{0}^{\infty} dEf_{0}E^{5/2}\] \[= 4\sqrt{2}Tm^{7/2}\frac{\partial}{\partial\mu}\int\frac{E^{(7/2) -1}dE}{e^{(E-\mu)/T}+1}\] \[= 4\sqrt{2}T^{7/2}m^{7/2}\Big{(}T\frac{\partial}{\partial\mu}\int \frac{x^{(7/2)-1}dx}{A^{-1}e^{x}+1}\Big{)},\]
where, \(x=E/T\) and \(A=e^{\mu/T}\). The Fermi function is defined as \(f_{j}(A)\equiv\frac{1}{\Gamma(j)}\int_{0}^{\infty}\frac{x^{j-1}dx}{A^{-1}e^{x} +1}\), with the property that \(\frac{\partial}{\partial(\mu/T)}f_{j}(A)=f_{j-1}(A)\). Using the above definition, we have,
\[\int_{0}^{\infty}dp\ p^{6}f_{0}(1-f_{0})=\frac{15}{2}\sqrt{2\pi}m^{7/2}f_{5/2 }(A)T^{7/2}. \tag{25}\]
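A quick numerical check of Eq. (25) is straightforward; the sketch below (our illustration, assuming SciPy, with arbitrary but consistent parameter values) evaluates both sides by quadrature.

```python
# Illustrative numerical check of Eq. (25):
# int_0^inf dp p^6 f0(1-f0) = (15/2) sqrt(2 pi) m^(7/2) f_{5/2}(A) T^(7/2),
# with E = p^2/(2m) and A = exp(mu/T); parameter values are arbitrary.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

m, T, mu = 0.3, 0.150, 0.1

def f0(p):
    z = (p**2 / (2 * m) - mu) / T
    return np.exp(-z) / (1.0 + np.exp(-z))    # overflow-safe Fermi-Dirac

lhs, _ = quad(lambda p: p**6 * f0(p) * (1 - f0(p)), 0, np.inf)

def fermi(j, A):  # f_j(A) = (1/Gamma(j)) int_0^inf x^(j-1) dx / (A^-1 e^x + 1)
    val, _ = quad(lambda x: x**(j - 1) * A * np.exp(-x) / (1 + A * np.exp(-x)),
                  0, np.inf)
    return val / gamma(j)

A = np.exp(mu / T)
rhs = 7.5 * np.sqrt(2 * np.pi) * m**3.5 * fermi(2.5, A) * T**3.5
print(lhs, rhs)   # the two numbers agree to quadrature accuracy
```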
Using the result of Eq. (25) in Eq. (22), we have,
\[\eta_{1} = g\left(\frac{m}{2\pi}\right)^{3/2}\frac{\tau_{c}}{1+4(\tau_{c}/ \tau_{\Omega})^{2}}T^{5/2}f_{5/2}(A)\] \[\eta_{2} = g\left(\frac{m}{2\pi}\right)^{3/2}\frac{\tau_{c}}{1+(\tau_{c}/ \tau_{\Omega})^{2}}T^{5/2}f_{5/2}(A)\] \[\eta_{3} = g\left(\frac{m}{2\pi}\right)^{3/2}\frac{\tau_{c}(2\tau_{c}/\tau _{\Omega})}{1+4(\tau_{c}/\tau_{\Omega})^{2}}T^{5/2}f_{5/2}(A)\] \[\eta_{4} = g\left(\frac{m}{2\pi}\right)^{3/2}\frac{\tau_{c}(\tau_{c}/\tau _{\Omega})}{1+(\tau_{c}/\tau_{\Omega})^{2}}T^{5/2}f_{5/2}(A). \tag{26}\]
Following the similarity in the definition of parallel, perpendicular, and Hall shear viscosity components \(\eta_{\parallel,\perp,\times}\) at finite magnetic field [62; 64], one can define \(\eta_{\parallel}=\eta_{1}\), \(\eta_{\perp}=\eta_{2}\), \(\eta_{\times}=\eta_{4}\).
## III Results
In the previous (formalism) section, we obtained general expressions for the different shear viscosity components of non-relativistic fermionic matter, which apply to any values of the temperature (\(T\)), chemical potential (\(\mu\)) and angular velocity (\(\Omega\)). One may readily apply these expressions to non-relativistic fluids belonging to the subjects of condensed matter physics and mechanical engineering, where the quantities \(T\), \(\mu\), and \(\Omega\) will be of the order of eV in natural units. However, our target system belongs to the subjects of high-energy nuclear physics and astrophysics, where MeV is the order of magnitude for \(T\), \(\mu\), and \(\Omega\). Imagining the quark-hadron phase transition \(T-\mu\) diagram, we can expect two extreme domains: (1) the early-universe scenario of a net quark/baryon-free domain (i.e., at \(\mu=0\)), which can be produced in LHC and RHIC experiments, and (2) the compact star scenario of degenerate electron, neutron, or quark matter (i.e., at \(T=0\)), expected in white dwarfs and neutron stars. Our microscopic expressions for the shear viscosity components at finite rotation are readily applicable to RHIC/LHC matter by putting \(\mu=0\), and to compact stars by putting \(T=0\), in the general forms of Eq. (26). The use of non-relativistic matter is, however, a limitation, which can lead to some overestimation with respect to the actual relativistic matter expected in RHIC/LHC experiments and compact stars. Our future goal is to reach that actual scenario by developing the framework step by step. By putting \(\mu=0\) and \(A=e^{\mu/T}=1\) in Eq. (26), we get
\[\eta_{\parallel}=\eta_{1}=0.64g\left(\frac{m}{2\pi}\right)^{3/2} \frac{\tau_{c}}{1+4(\tau_{c}/\tau_{\Omega})^{2}}T^{5/2}\zeta(5/2)\] \[\eta_{\perp}=\eta_{2}=0.64g\left(\frac{m}{2\pi}\right)^{3/2}\frac {\tau_{c}}{1+(\tau_{c}/\tau_{\Omega})^{2}}T^{5/2}\zeta(5/2)\] \[\eta_{\times}=\eta_{4}=0.64g\left(\frac{m}{2\pi}\right)^{3/2} \frac{\tau_{c}(\tau_{c}/\tau_{\Omega})}{1+(\tau_{c}/\tau_{\Omega})^{2}}T^{5/2} \zeta(5/2) \tag{27}\]
as the Fermi function becomes \(f_{5/2}(A=1)=(1-\frac{1}{2^{3/2}})\zeta(5/2)\). Using Eq. (27), we have plotted \(\eta_{||,\perp,\times}/\tau_{c}m^{3/2}T^{5/2}\) against \(T\) in Fig. (2), and we get horizontal lines as all components are proportional to \(T^{5/2}\). We consider quark matter with mass \(m=0.005\,\)GeV, relaxation time \(\tau_{c}=5\,\)fm and rotating time period \(\tau_{\Omega}=35~{}\)GeV\({}^{-1}=6.8\,\)fm for angular velocity \(\Omega=\frac{1}{2\tau_{\Omega}}=0.014\,\)GeV. We keep comparable values of the two time scales, for which we get a noticeable difference between the parallel and perpendicular components of the shear viscosity. We can understand the \(\eta_{\parallel,\perp,\times}\)
Figure 2: Normalized parallel (\(n=\parallel\)), perpendicular (\(n=\perp\)), Hall (\(n=\times\)) components of shear viscosity as well as shear viscosity without rotation are plotted against temperature axis.
in terms of the effective relaxation times
\[\tau_{\parallel}=\frac{\tau_{c}}{1+4(\tau_{c}/\tau_{\Omega})^{2}}\] \[\tau_{\perp}=\frac{\tau_{c}}{1+(\tau_{c}/\tau_{\Omega})^{2}}\] \[\tau_{\times}=\frac{\tau_{c}(\tau_{c}/\tau_{\Omega})}{1+(\tau_{c} /\tau_{\Omega})^{2}} \tag{28}\]
as \(\eta_{\parallel,\perp,\times}\propto\tau_{\parallel,\perp,\times}\), while \(\eta_{0}\propto\tau_{c}\) only. So we can easily understand that the non-zero ratio \(\tau_{c}/\tau_{\Omega}\) at finite rotation will create the inequality \(\tau_{\parallel,\perp,\times}<\tau_{c}\), and the ratio is also the deciding factor for the ranking among \(\eta_{\parallel}\), \(\eta_{\perp}\), \(\eta_{\times}\). In Fig. (2), for the present set of parameters \(\tau_{c}=5\,\)fm, \(\tau_{\Omega}=6.8\,\)fm and ratio \(\tau_{c}/\tau_{\Omega}=0.73\), Eq. (28) gives the ranking \(\eta_{\perp}>\eta_{\times}>\eta_{\parallel}\), but it can change for different values of the ratio \(\tau_{c}/\tau_{\Omega}\). This fact will become clearer in the next plot.
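For concreteness, a few lines of Python (our illustration, simply evaluating Eq. (28) for the parameters quoted above) reproduce these ratios directly, since \(\eta_{n}/\eta_{0}=\tau_{n}/\tau_{c}\):

```python
# Illustrative evaluation of the effective relaxation times of Eq. (28)
# for tau_c = 5 fm and tau_Omega = 6.8 fm, i.e. tau_c/tau_Omega ~ 0.73.
tau_c, tau_Omega = 5.0, 6.8
r = tau_c / tau_Omega

ratios = {
    "parallel": 1.0 / (1.0 + 4.0 * r**2),   # tau_parallel / tau_c
    "perp":     1.0 / (1.0 + r**2),         # tau_perp / tau_c
    "Hall":     r / (1.0 + r**2),           # tau_Hall / tau_c
}
for name, val in ratios.items():            # eta_n / eta_0 = tau_n / tau_c
    print(f"eta_{name}/eta_0 = {val:.3f}")  # ~0.32, ~0.65, ~0.48
```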
In Fig. (3), we have plotted the percentage of normalized viscosities (\(\eta_{n}/\eta_{0}\)) with respect to \(\Omega\) at \(\tau_{c}=5\,\)fm. It is clearly seen in the plot that the relative magnitude of \(\eta_{\perp,\parallel}\) decreases with \(\Omega\) over the whole range, whereas \(\eta_{\times}\) initially increases and then decreases with \(\Omega\). In the lower range of \(\Omega\), \(\eta_{\perp,\parallel}\) are more dominant than \(\eta_{\times}\); on the contrary, in the higher range of \(\Omega\), \(\eta_{\times}\) is more dominant than \(\eta_{\perp,\parallel}\). One can identify that both \(\eta_{\perp,\parallel}\) merge to \(\eta_{0}\) in the absence of vorticity, i.e., \(\eta_{\perp,\parallel}(\Omega\to 0)=\eta_{0}\). From this fact, we can conclude that a finite (global) vorticity can create anisotropy in the shear viscosity components, as we have noticed in the finite magnetic field picture.
Let us try to visualize the different shear viscosity components via a schematic diagram, Fig. (4). The picture is precisely similar to the finite magnetic field picture described in Ref. [69]; only the direction of the magnetic field along the z-direction is replaced by the direction of the orbital angular momentum or angular velocity, or global vorticity. In Fig. (4), the arrows represent the velocity direction, and their lengths represent its magnitude, so the changing arrow lengths map out the velocity gradient picture. The right and left panels of Fig. (4) represent the gradient of velocity in the planes which are parallel (ZX and ZY planes) and perpendicular (XY plane) to the vorticity/angular velocity, respectively.
Apart from the rotating quark matter system at \(\mu=0\), we can apply the microscopic expressions in Eq. (27) to rotating hadronic matter at \(\mu=0\), although the magnitude of the angular momentum will be reduced to a smaller value during the hadronic phase expansion. Considering the \(T\to 0\) limit of Eq. (26), we get
\[\eta_{\parallel}=\eta_{1}=\frac{8g}{15\sqrt{\pi}}\left(\frac{m}{ 2\pi}\right)^{3/2}\frac{\tau_{c}}{1+4(\tau_{c}/\tau_{\Omega})^{2}}\mu^{5/2}\] \[\eta_{\perp}=\eta_{2}=\frac{8g}{15\sqrt{\pi}}\left(\frac{m}{2\pi} \right)^{3/2}\frac{\tau_{c}}{1+(\tau_{c}/\tau_{\Omega})^{2}}\mu^{5/2}\] \[\eta_{\times}=\eta_{4}=\frac{8g}{15\sqrt{\pi}}\left(\frac{m}{2\pi} \right)^{3/2}\frac{\tau_{c}(\tau_{c}/\tau_{\Omega})}{1+(\tau_{c}/\tau_{\Omega })^{2}}\mu^{5/2}\, \tag{29}\]
which may be applicable to rotating compact star systems like white dwarfs, neutron stars, and quark matter (expected in the core of a neutron star). However, an overestimation of the shear viscosity components of those rotating
Figure 3: Relative percentage of parallel (\(n=\parallel\)), perpendicular (\(n=\perp\)), Hall (\(n=\times\)) components of shear viscosity vs angular velocity.
media can be expected due to the non-relativistic description of relativistic matter. This fact can be understood from Fig. (5), where the relativistic and non-relativistic velocities (\(v\)) of the u quark, pion, and nucleon are plotted against momentum (\(p\)). From this simple picture, one can see that the noticeable difference between the relativistic (R) and non-relativistic (NR) curves appears beyond the threshold momenta of \(1\,\)MeV, \(30\,\)MeV and \(300\,\)MeV for the u quark, \(\pi\) meson and nucleon, respectively. The overestimation of the NR description with respect to the R description arises from the momentum integration beyond those threshold values. Our future aim is to go for that relativistic description with an appropriate relativistic extension of the present framework.
Regarding the fluidity of the medium, quantified by the shear viscosity to entropy density ratio, we find a possibility of violation of the KSS bound [70] due to the rotation of the medium via the Coriolis force, just like in the finite magnetic field picture via the Lorentz force. The entropy density of non-relativistic matter in the two extreme limits follows the relations \(s\propto T^{3/2}\) at \(\mu\to 0\) and \(s\propto\mu^{3/2}\) at \(T\to 0\). The ratio of shear viscosity to entropy density will be \(\eta/s=\frac{\tau_{c}T}{5}\) at \(\mu\to 0\) and \(\eta/s=\frac{\tau_{c}\mu}{5}\) at \(T\to 0\), which can reach the KSS bound \(\frac{1}{4\pi}\)[70] for relaxation times \(\tau_{c}(T)=\frac{5}{4\pi T}\) and \(\tau_{c}(\mu)=\frac{5}{4\pi\mu}\), respectively. At finite rotation, we can expect lower-limit expressions for the parallel, perpendicular, and Hall components of the shear viscosity to entropy density ratio as,
\[\frac{\eta_{\parallel}}{s} = \frac{1}{4\pi}\frac{1}{1+4\Big{(}\frac{5}{4\pi T\tau_{\Omega}}\Big{)}^{2}}\] \[\frac{\eta_{\perp}}{s} = \frac{1}{4\pi}\frac{1}{1+\Big{(}\frac{5}{4\pi T\tau_{\Omega}}\Big{)}^{2}}\] \[\frac{\eta_{\times}}{s} = \frac{1}{4\pi}\frac{\Big{(}\frac{5}{4\pi T\tau_{\Omega}}\Big{)}}{1+\Big{(}\frac{5}{4\pi T\tau_{\Omega}}\Big{)}^{2}}. \tag{30}\]
Figure 4: Velocity gradients in the XY, ZX and ZY planes
Figure 5: Velocity (\(v\)) vs momentum (\(p\)) relation for the u quark, \(\pi\) meson and nucleon.
The above expressions are for \(\mu=0\); by replacing \(T\) with \(\mu\) in Eq. (30), one can get the corresponding expressions for \(T=0\). So, one can notice that by increasing the angular velocity, or equivalently decreasing \(\tau_{\Omega}\) of the medium, \(\eta_{\parallel,\perp}/s\) can go below \(\frac{1}{4\pi}\). A value \(\eta_{\parallel}/s<1/(4\pi)\) is also expected and was pointed out in Ref. [71] for a finite magnetic field. As a matter of fact, a quantum-version extension of the present formalism may be required to comment on the lower bounds of \(\eta_{\parallel,\perp}/s\).
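A short numerical illustration of this statement (ours, simply evaluating Eq. (30) at a representative temperature) is given below; \(\eta_{\parallel,\perp}/s\) start at the KSS value for \(\tau_{\Omega}\to\infty\) and drop below it as the rotation grows.

```python
# Illustrative evaluation of Eq. (30) with tau_c = 5/(4 pi T): the
# parallel and perpendicular ratios fall below 1/(4 pi) at finite rotation.
import numpy as np

T = 0.150                                    # GeV, representative value
kss = 1.0 / (4.0 * np.pi)
for tau_Omega in [np.inf, 50.0, 10.0, 5.0]:  # GeV^-1; tau_Omega = 1/(2 Omega)
    x = 5.0 / (4.0 * np.pi * T * tau_Omega)  # = tau_c / tau_Omega
    print(f"tau_Omega={tau_Omega:6}:",
          kss / (1 + 4 * x**2),              # eta_parallel / s
          kss / (1 + x**2),                  # eta_perp / s
          kss * x / (1 + x**2))              # eta_Hall / s
```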
## IV Summary
In summary, we have explored the equivalent roles of the magnetic field and rotation (or vorticity) on shear viscosity via the Lorentz force and Coriolis force, respectively. In the absence of a magnetic field or rotation, we get an isotropic shear viscosity coefficient, which is proportional to the relaxation time only, whereas at finite magnetic field or rotation we get anisotropic shear viscosity coefficients, which are proportional to effective relaxation times along the parallel, perpendicular, and Hall directions. These effective relaxation times can be expressed in terms of the actual relaxation time and a cyclotron-type time period due to the magnetic field or rotation. The physics and the mathematical steps of the microscopic calculation of shear viscosity at finite magnetic field or rotation are quite similar. The microscopic quantity, the deviation from the equilibrium distribution, is due to the macroscopic velocity gradient, so a proportional relation between them is assumed with unknown proportionality constants, which are calculated with the help of the relaxation time approximation of the Boltzmann transport equation. Then the macroscopic relation between the shear stress tensor and the velocity gradient, with the shear viscosities as proportionality constants, is compared with the microscopic expression of the shear stress tensor in terms of the deviation obtained from the Boltzmann transport equation. By this comparison, we get isotropic and anisotropic shear viscosities in terms of the relaxation time and the effective relaxation times in the absence and presence of magnetic fields or rotation, respectively. For the finite magnetic field case, the deviation is obtained from the Boltzmann transport equation using the Lorentz force; the same is done here by including the Coriolis force for the finite rotation case. The present article has explored the detailed calculation of the finite rotation case only, while mentioning the equivalence with the finite magnetic field case along the way. For simplicity, we have attempted it for non-relativistic matter, but our immediate future plan is to extend it towards a relativistic description. To the best of our knowledge, this is the first time that the anisotropic structure of the shear viscosity of rotating matter due to the Coriolis force has been addressed. We have noticed an equivalence of roles between the rotating time period for the finite rotation case and the cyclotron time period for the finite magnetic field case, where the rotating time period is defined as the inverse of twice the angular velocity; the factor of 2 propagates from the basic definition of the Coriolis force.
## Acknowledgements
CWA acknowledges the Doctoral fellowship in India (DIA) programme of the Ministry of Education, Government of India, which partially supported this work. AD gratefully acknowledges the MoE, Govt. of India. JD gratefully acknowledges the DAE-DST, Govt. of India funding under the mega-science project - "Indian participation in the ALICE experiment at CERN" bearing Project No. SR/MF/PS-02/2021-ITTI (E-37123). SG thanks Deeptak Biswas and Arghya Mukherjee for useful discussions during the beginning stage of the work.
|
2310.08764 | Calibrating Likelihoods towards Consistency in Summarization Models | Despite the recent advances in abstractive text summarization, current
summarization models still suffer from generating factually inconsistent
summaries, reducing their utility for real-world application. We argue that the
main reason for such behavior is that the summarization models trained with
maximum likelihood objective assign high probability to plausible sequences
given the context, but they often do not accurately rank sequences by their
consistency. In this work, we solve this problem by calibrating the likelihood
of model generated sequences to better align with a consistency metric measured
by natural language inference (NLI) models. The human evaluation study and
automatic metrics show that the calibrated models generate more consistent and
higher-quality summaries. We also show that the models trained using our method
return probabilities that are better aligned with the NLI scores, which
significantly increase reliability of summarization models. | Polina Zablotskaia, Misha Khalman, Rishabh Joshi, Livio Baldini Soares, Shoshana Jakobovits, Joshua Maynez, Shashi Narayan | 2023-10-12T23:17:56Z | http://arxiv.org/abs/2310.08764v1 | # Calibrating Likelihoods towards Consistency in Summarization Models
###### Abstract
Despite the recent advances in abstractive text summarization, current summarization models still suffer from generating factually inconsistent summaries, reducing their utility for real-world applications. We argue that the main reason for such behavior is that summarization models trained with a maximum likelihood objective do not accurately rank sequences by their consistency. In this work, we solve this problem by calibrating the likelihood of model-generated sequences to better align with a consistency metric measured by natural language inference (NLI) models. The human evaluation study and automatic metrics show that the calibrated models generate more consistent and higher-quality summaries. We also show that the models trained using our method return probabilities that are better aligned with the NLI scores, which significantly increases the reliability of summarization models.
## 1 Introduction
Recent years have witnessed a huge leap forward in abstractive summarization (Zhang et al., 2019; Liu et al., 2022), yet the wider adoption of summarization models is limited by their tendency to generate _hallucinations_ - outputs with information that contradicts or is unsupported by their input article (Falke et al., 2019; Maynez et al., 2020). Hallucinations in summarization models can be mostly attributed to two main reasons. First, most summarization systems are trained to maximize the log-likelihood of the reference summary, which does not necessarily reward models for being faithful. Second, models are usually agnostic to the noise or artifacts of the training data, such as reference divergence, making them vulnerable to hallucinations (Kryscinski et al., 2019; Dhingra et al., 2019). Thus, models can generate texts that are not consistent with the input, yet would likely have a reasonable model log-likelihood. We refer to this phenomenon as models' sequence likelihood not being calibrated to their consistency.
The textual entailment score - the entailment probability of a summary (hypothesis) given its input (premise) - has been widely used to quantify the extent to which generated summaries are faithful or consistent with their input (Falke et al., 2019; Maynez et al., 2020; Narayan et al., 2020; Honovich et al., 2022). Unsurprisingly, several efforts aiming to calibrate summarization models towards consistency focus on textual entailment signals. Pasunuru et al. (2017) use _multi-task learning_ to jointly train their decoder as a summary generator as well as an entailment classifier. Pasunuru and Bansal (2018) proposed to use _reinforcement learning_ with a sequence-level entailment reward, optimizing models to assign higher probability to logically entailed summaries. The _reranking-based approach_ (Falke et al., 2019) uses a two-stage reranking system that first generates candidate summaries and then uses textual entailment predictions to detect consistency errors and rerank alternative pre
Figure 1: XSum inputs and system generated summaries before and after calibration. The text spans in amber are hallucinated.
dicted summaries. Another trend proposes to leverage consistency signals via _controlled generation_ (Keskar et al., 2019; Rashkin et al., 2021) to calibrate summarization models. Specifically, training examples are supplemented by prepending special tokens to inputs to indicate/control whether the output should be entailed or not; this way the model is better calibrated in differentiating inconsistent examples from consistent ones. Some have also relied on _data filtering_, where models are trained only on examples whose summaries are predicted to be entailed by the input (Narayan et al., 2021; Aharoni et al., 2022).
Recently, Liu et al. (2022) introduced calibration methods to align candidates' sequence likelihoods with their quality as measured by their similarities to the target sequence. First they decode candidates from a fine-tuned model on its own training dataset, and then continue training the model with a multi-task learning objective over sequence candidates with contrastive reranking and token-level generation. Liu et al. (2022) used metrics like Rouge (Lin, 2004) and BERTScore (Zhang et al., 2019) to rank the decoded candidates by their similarities to the target sequence. Zhao et al. (2023) generalizes Liu et al. (2022) and uses similarities to the target sequence in the model's latent space, instead of relying on external metrics like Rouge and BERTScore. Both Zhao et al. (2023) and Liu et al. (2022) demonstrate that their methods significantly improve the quality of generated summaries when evaluated against target sequences using Rouge or BERTScore. However, improvements in the similarity to the target sequence do not necessarily lead to consistent summaries. Figure 1 presents a few hallucinated summaries (spans marked in amber) generated using these methods.
In this paper we propose _Sequence Likelihood Calibration with NLI_ (or **SLiC-NLI**) to calibrate summaries' sequence likelihoods to their consistency. Our approach builds on Zhao et al. (2023) and Liu et al. (2022) but uses textual entailment scores to rank candidate summaries, instead of Rouge or BERTScore. In particular, we decode candidates from a fine-tuned model on its own training dataset, estimate entailment probabilities of candidate summaries given their respective inputs, and then continue training the model with a multi-task learning objective over sequence candidates with contrastive reranking and token-level generation.
Unlike reinforcement learning, it is a one-time offline process that avoids costly online decoding processes. Also, when compared to two-stage reranking systems, it doesn't require a separate reranking model that incurs additional complexity and compute.
We experimented with five different abstractive summarization tasks: CNN/DailyMail (Hermann et al., 2015), ForumSum (Khalman et al., 2021), RedditTIFU-long (Kim et al., 2019), SAMSum (Gliwa et al., 2019) and XSum (Narayan et al., 2018), due to their diversity in domain, style, abstractiveness, and summary lengths. We show that using our approach, models generate more consistent summaries without sacrificing their overall quality, when evaluated automatically and by humans.
## 2 Related Work
### Measuring Consistency
A large number of approaches have been proposed for the automatic detection of factual inconsistencies. Most notably, Natural Language Inference (Bowman et al., 2015) approaches have been shown to have a large correlation with human consistency ratings on generation tasks, including summarization (Maynez et al., 2020; Laban et al., 2022; Goyal et al., 2021; Goyal and Durrett, 2021). Other approaches based on question generation and answering have also been shown to perform well in detecting factual consistency (Scialom et al., 2021; Honovich et al., 2021; Deutsch et al., 2021), but usually require a pipeline of model inferences that makes them impractical for some applications. Many studies have investigated the automatic detection of factual inconsistencies in a wider variety of tasks (Honovich et al., 2022; Tang et al., 2022), and show that large-scale NLI models have among the highest agreement with human ratings.
### Calibrating Consistency
Model calibration is commonly used in classification tasks, whereas in sequence generation it has generally not been well defined. In our context, model calibration refers to aligning the sequence likelihood with the target entailment probability.
**Reranking-based Approach** Many works have proposed approaches that first decode a number of outputs and re-rank them in a second stage. Liu and Liu (2021) decode outputs with diverse beam search and use a RoBERTa-based model to rank them. Similarly, in neural machine
translation (NMT), Fernandes et al. (2022) and Lee et al. (2021) train rerankers that mimic automatic metrics (BLEU, COMET and BLEURT) and re-rank top-k decodes accordingly. SummaReRanker (Ravaut et al., 2022) found that performance is improved by training the generation and reranking models on exclusive halves of the training data instead of on the same data. BRIO (Liu et al., 2022) uses sequence-to-sequence generation models for both the generation and reranking stages, ranking the different candidates by their similarities to the target sequence using automatic metrics. Zhao et al. (2023) generalizes this idea by computing the similarities to the target sequence in the model's latent space.
**RL-based Approach** Reinforcement learning has been proposed as an approach to optimize signals directly. Paulus et al. (2018) optimize the evaluation metric ROUGE via RL fine-tuning. The authors found that optimizing for a single discrete evaluation metric such as ROUGE can be detrimental to model quality and fluency. Ziegler et al. (2019) and Stiennon et al. (2020) trained reward models to learn human preferences based on collected human judgments of competent fine-tuned models. Using PPO, the supervised policy is fine-tuned against the learned reward model. The authors found that this approach leads to better-quality summaries than optimizing with respect to ROUGE.
**Controllable Generation** Controllable generation has been proposed as an approach to increase consistency. He and Yi (2022) proposed the use of control codes to influence generated outputs to match desired characteristics such as style and length as they were observed in the training data. Rashkin et al. (2021) and Aharoni et al. (2022) extended this approach to increase consistency in grounded dialog and multilingual summarization, respectively, by adding a control feature based on inferred NLI scores given the summary and input document (Honovich et al., 2022).
**Summary Generation with Planning** Narayan et al. (2021) proposed that intermediary plans, based on entities, are useful for increasing grounding and consistency in summarization by avoiding common pitfalls seen in autoregressive generation. Moreover, sequence-to-sequence models can learn to produce those plans and the output summaries sequentially in an end-to-end manner. These plans are also controllable, and models trained this way are able to produce summaries grounded to the modified plans. Further, Narayan et al. (2022) showed that plans based on questions and answers provide anchoring for more complex tasks, for instance multi-document summarization, aiding further the consistency of longer summaries.
**Data Filtering Approach** Narayan et al. (2021) and Aharoni et al. (2022) additionally proposed a simple approach to filter the training data based on inferred NLI scores given the summary and input document. Using only a subset of the training data, inferred to be consistent with the input, model consistency, as measured by automatic metrics and human evaluations, improves.
## 3 Method
Following Zhao et al. (2023) and Liu et al. (2022), we introduce a third _calibration stage_ to the popular paradigm of pretraining and fine-tuning, as explained in Figure 2. Let \(D_{train}:\{\mathbf{x},\bar{\mathbf{y}}\}_{n}\) be the dataset used for fine-tuning. We first generate \(m\) candidates \(\{\hat{\mathbf{y}}\}_{m}\) for each training instance in \(D_{train}\) from a fine-tuned model; we refer to this augmented dataset as \(\hat{D}_{train}\) consisting of \(\{\mathbf{x},\{\hat{\mathbf{y}}\}_{m},\bar{\mathbf{y}}\}_{n}\). We then calibrate the fine-tuned model by continuing training with the following loss:
\[\mathcal{L}(\theta)=\sum_{n} L^{\mathrm{cal}}(\theta;\mathbf{x},\{\hat{\mathbf{y}}\}_{m}, \bar{\mathbf{y}})\] \[+\lambda L^{\mathrm{reg}}(\theta,\theta_{ft};\mathbf{x},\bar{ \mathbf{y}}), \tag{1}\]
Figure 2: Our method consists of two parts: the **top** (blue) represents the usual fine-tuning and inference, and the **bottom** (orange) represents the SLiC-NLI method, consisting of inference using the NLI model and the SLiC calibration.
where \(\theta\) and \(\theta_{ft}\) are the current and finetuned model weights, \(L^{\mathrm{cal}}\) and \(L^{\mathrm{reg}}\) are the calibration and regularization losses, respectively. The calibration loss \(L^{\mathrm{cal}}\) aims to align models' decoded candidates' sequence likelihood \(P_{\theta}(\hat{\mathbf{y}}|\mathbf{x})\) according to their entailment scores, whereas the regularization loss \(L^{\mathrm{reg}}\) prevents models from deviating significantly from their fine-tuned model parameters.
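The data-construction step of this stage can be sketched as follows (our schematic; `generate` and `nli_entailment` are stubs standing in for the fine-tuned summarizer and the NLI scorer described next, not a real API):

```python
# Schematic of building the augmented dataset: decode m candidates per
# input and annotate each with an entailment probability. All names are stubs.
import random

def generate(x, m):                        # stand-in for the fine-tuned model
    return [f"candidate {i} for {x}" for i in range(m)]

def nli_entailment(premise, hypothesis):   # stand-in for the NLI scorer
    return random.random()

def build_calibration_set(train_set, m=8):
    """Return (x, [(y_hat, e_hat), ...], y_ref) tuples."""
    return [(x, [(y, nli_entailment(x, y)) for y in generate(x, m)], y_ref)
            for x, y_ref in train_set]

augmented = build_calibration_set([("doc 1", "ref 1"), ("doc 2", "ref 2")])
```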
### Calibrating towards Consistency
In order to calibrate the model towards consistency, we annotate \(\{\hat{\mathbf{y}}\}_{m}\) with textual entailment scores (Natural Language Inference or NLI) (Bowman et al., 2015), i.e., we estimate entailment probabilities of candidate summaries given their respective inputs. To estimate entailment we follow Honovich et al. (2022) and train an NLI model by fine-tuning T5-11B (Raffel et al., 2020) on the Adversarial NLI (ANLI; Nie et al., 2020) dataset. In Figure 2 the dataset \(\tilde{D}_{train}\) annotated with entailment probabilities is represented as \(\tilde{D}_{train}:\{\mathbf{x},\{\hat{\mathbf{y}},\hat{\mathbf{e}}\}_{m},\bar{\mathbf{y}}\}_{n}\), where \(\hat{\mathbf{e}}\) is the entailment score of the candidate \(\hat{\mathbf{y}}\). Figure 3 shows how the NLI scores are distributed across the different datasets; overall, we found that every dataset except CNN/DailyMail has a good representation of consistent and inconsistent examples for effective calibration. The calibration loss
\[L^{\mathrm{cal}}=\max(0, \beta-\log P_{\theta}(\hat{\mathbf{y}}_{+}|\mathbf{x})\] \[+\log P_{\theta}(\hat{\mathbf{y}}_{-}|\mathbf{x})) \tag{2}\]
then trains the model to learn the ranking among candidates pairs \((\hat{\mathbf{y}}_{+},\hat{\mathbf{y}}_{-})\), uniformly sampled from \(\{\hat{\mathbf{y}}\}_{m}\), according to their entailment scores. In this case, \(\hat{\mathbf{y}}_{+}\) ranks highers than \(\hat{\mathbf{y}}_{-}\) as \(\hat{\mathbf{e}}_{+}>\hat{\mathbf{e}}_{-}\).
Our approach differs from Zhao et al. (2023) and Liu et al. (2022) where they proposed to use the similarity between the candidate \(\hat{\mathbf{y}}\) and the target \(\bar{\mathbf{y}}\) conditioned on the context \(\mathbf{x}\) to get ranking among candidate pairs, instead of textual entailment scores.
### Length regularization
As a result of our extensive experimentation with various \(\beta\)'s, we have made a curious observation about the NLI scores: there is a slight positive correlation between the length of the generated summaries and NLI. This phenomenon can be seen as a way for the model to "cheat" and over-optimize in the direction of higher NLI. Oftentimes a dramatic increase in length comes from repeating the same sentences over and over, and naturally we would like to avoid this behavior. In pursuit of containing the length of the generated summaries, we experiment with an additional length regularization term. We have experimentally found that it is best to compare the length of the generated sequence \(\hat{\mathbf{y}}\) with the length of the target sequence \(\bar{\mathbf{y}}\) via a simple ratio:
\[f_{len}(\hat{\mathbf{y}})=\left(1-\left|1-\frac{l(\hat{\mathbf{y}})}{l(\bar{\mathbf{y}})}\right|\right), \tag{3}\]
where \(l(y)\) is the length of the sequence \(y\). We subsequently update our calibration loss \(L^{\mathrm{cal}}\) from (2) using \(f_{len}\) to scale the log-likelihoods, up-weighted with \(\alpha\):
\[L^{\mathrm{cal}}= \max(0,\beta-\alpha\cdot f_{len}(\hat{\mathbf{y}}_{+})\cdot\log P _{\theta}(\hat{\mathbf{y}}_{+}|\mathbf{x})\] \[+\alpha\cdot f_{len}(\hat{\mathbf{y}}_{-})\cdot\log P_{\theta}( \hat{\mathbf{y}}_{-}|\mathbf{x})) \tag{4}\]
Finally, for the regularization loss \(L^{\mathrm{reg}}\) we follow Zhao et al. (2023) and use the KL divergence loss minimizing the probability distribution distance between the calibrated model and the fine-tuned model at each token on the target sequence. Liu et al. (2022) proposed to use the cross-entropy loss as the regularization loss. Nevertheless, both losses have been shown to perform similarly for summarization (Zhao et al., 2023).
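Putting Eqs. (3) and (4) together, the length-regularized variant could be sketched as follows (ours, assuming PyTorch; lengths are token counts passed as floats):

```python
# Sketch of the length-regularized calibration loss of Eqs. (3)-(4).
import torch

def f_len(len_hat, len_ref):
    # close to 1 when the candidate length matches the target length
    return 1.0 - torch.abs(1.0 - len_hat / len_ref)

def calibration_loss_len(log_p_pos, log_p_neg,
                         len_pos, len_neg, len_ref, beta, alpha):
    pos = alpha * f_len(len_pos, len_ref) * log_p_pos
    neg = alpha * f_len(len_neg, len_ref) * log_p_neg
    return torch.clamp(beta - pos + neg, min=0.0).mean()
```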
## 4 Experimental Setup
### Summarization Datasets
We have experimented with a diverse set of summarization datasets with respect to domain, style, abstractiveness, and summary length.
**CNN/DailyMail** (Hermann et al., 2015; See et al., 2017) summarization dataset contains 313k articles from the CNN and Daily Mail newspapers with bullet point summaries. The summaries are on average 3-4 sentences and relatively extractive.
**ForumSum** (Khalman et al., 2021) summarization dataset contains 4058 conversations from a wide variety of internet forums and their high-quality human written summaries.
**RedditTIFU-long** (Kim et al., 2019) summarization dataset contains 42k posts of informal stories from sub-reddit TIFU from 2013-Jan to 2018-Mar with author written summaries. The style and length of the summaries are very diverse.
**SAMSum** (Gliwa et al., 2019) summarization dataset contains 16k high-quality chat-dialogues and their summaries written by linguists.
**XSum** (Narayan et al., 2018) summarization dataset consists of 227k BBC articles from 2010 to 2017 with a single sentence highly abstractive summary. Sometimes the summary contains information not present in the article.
### Automatic Evaluation
We report on ROUGE (Lin, 2004), which is commonly used to measure the informativeness and fluency of model-generated summaries against gold-standard references.
We report on the reference-free NLI score as a proxy for faithfulness (Maynez et al., 2020; Honovich et al., 2022). For NLI, we compute for each summary whether it is entailed by the input, and report the average over all examples. We use the same NLI model that we use for calibration, as described in §3.1.
### Human Evaluation
We conducted a human evaluation of the generated summaries for all 5 datasets. We picked our 3 models: fine-tuned, best calibrated (Eq 2 for \(L^{\mathrm{cal}}\)) and best calibrated with length regularization (Eq 4 for \(L^{\mathrm{cal}}\)), along with other baselines. For each dataset we sampled 100 examples from its corresponding test set. For each example we generate summaries using the different models and send them to crowd workers for side-by-side quality annotation. We present our raters a document and the model-generated summaries, and ask them to assess each summary individually for overall _quality_ (on a scale of 1: Poor Summary to 5: Great Summary) and _factuality_ (a binary decision assessing whether everything in the summary can be verified in the document). Each assessment is replicated by three different crowd workers. For quality we average the annotated scores across all replicas of each task; for the factuality metric we aggregate using a majority vote. The models are anonymized and randomly shuffled to avoid biases in the annotation. For more details about the human evaluation template see Appendix D.
### Implementation Details
We experimented with T5 (large, 500M parameters) with a maximum input sequence length of 1,024 tokens and a maximum output length of 256 tokens. We trained all our models with a learning rate of 0.001 and a batch size of 128, for 50K steps. We select the best checkpoints using average ROUGE performance on the validation sets, unless specified otherwise. During inference, we use beam search with size 5 and alpha 0.8.
## 5 Results
**Ablation on Calibration Weights and their Effect on Lengths.** We first ablate the effect of different calibration weights (\(\beta\)) in Eq. 2 without applying the length regularization. Table 1 presents our results.
We achieve up to \(\approx 30\%\) improvement in terms of NLI scores on the XSum dataset, \(10.47\%\) on ForumSum, \(9.41\%\) on SAMSum, \(8.23\%\) on RedditTIFU-long, and \(2.12\%\) on CNN/DailyMail. Using different values of \(\beta\) in \(L^{\mathrm{cal}}\) allows us to control the level of calibration; the bold values in Table 1 always correspond to the highest weight. We observe that stronger calibration can often affect the other metrics; for example, ROUGE scores slightly decrease with the intensity of calibration, which can be undesirable. A similar phenomenon can be seen with the increase in length and repetition, which can be a symptom of the model trying to "cheat" the NLI metric. In Figure 4 we show the Pareto frontier that allows us to explore the optimal trade-off between the NLI scores and the various metrics.
**Ablation on Length Regularizer.** In order to prevent the model from overfitting to the NLI metric, we conduct an extensive set of experiments to analyze the effect of the length regularization on the various metrics. As per Eq. 4, we choose various \(\alpha\) to increase the effect of the regularization. Results are presented in Table 2 and Figure 5. When \(\alpha\) is very small, the model performs similarly to its
Figure 3: Distribution of the NLI scores over the inference outputs with beam size \(=15\). All dataset except for CNN/DailyMail have a diverse variety of generated summaries per document.
uncalibrated counterpart. But as we increase \(\alpha\), we start seeing the effect of joint consistency and length calibration. In order to pick the best configuration that equally optimizes for NLI and does not deviate much in length, we propose an average score \(Avg_{\alpha}=\frac{\mathrm{NLI}^{\prime}_{\alpha}+\left(1-\max\left(\mathbf{L}^{\prime}_{\alpha},\,\mathbf{L}^{\prime}_{\mathrm{w/o}\,\alpha}\right)\right)}{2}\), where \(X^{\prime}_{\alpha}=\frac{X_{\alpha}-\min(\mathbf{X})}{\max(\mathbf{X})-\min(\mathbf{X})}\), i.e. simple min-max
| **Dataset** | \(\beta\) | **NLI %** | **NLI gain %** | **R1/R2/RL** | **Coverage %** | **Length** | **Repetition %** |
|---|---|---|---|---|---|---|---|
| ForumSum | \(10^{-3}\) | **82.13** | **10.47** | 38.51 / 18.08 / 31.16 | 88.5 | 43.25 | 19.10 |
| | \(10^{-4}\) | 78.15 | 6.50 | **40.74 / 20.09 / 33.59** | 86.9 | 32.28 | 13.10 |
| | \(10^{-5}\) | 75.05 | 3.39 | 40.50 / 19.39 / 32.97 | 84.5 | 27.17 | 7.20 |
| | w/o | 71.66 | 0.00 | 39.82 / 18.74 / 32.37 | 83.6 | 25.03 | 6.40 |
| RedditTIFU-long | \(10^{-3}\) | **89.43** | **8.23** | 27.28 / 8.65 / 21.60 | 93.9 | 23.76 | 7.30 |
| | \(10^{-4}\) | 84.45 | 3.24 | 29.96 / 10.82 / 24.82 | 90.9 | 16.34 | 2.40 |
| | \(10^{-5}\) | 82.28 | 1.07 | 30.02 / **10.85 / 25.05** | 89.2 | 15.27 | 1.30 |
| | w/o | 81.21 | 0.00 | **30.22** / 10.70 / 24.63 | 88.9 | 16.28 | 1.80 |
| SAMSum | \(10^{-2}\) | **96.14** | **9.41** | 48.93 / 24.68 / 39.76 | 81.3 | 29.08 | 4.10 |
| | \(3\cdot 10^{-4}\) | 91.51 | 4.78 | **54.47 / 30.15 / 45.72** | 80.4 | 19.74 | 1.60 |
| | \(10^{-4}\) | 87.93 | 1.19 | 54.33 / 29.98 / **45.85** | 79.6 | 17.92 | 1.50 |
| | w/o | 86.73 | 0.00 | 54.52 / 30.09 / 45.75 | 79.2 | 18.93 | 1.70 |
| XSum | \(10^{-2}\) | **81.21** | **28.19** | 39.46 / 16.92 / 31.88 | 83.0 | 18.07 | 0.40 |
| | \(3\cdot 10^{-4}\) | 77.46 | 24.44 | 41.32 / 18.77 / 33.71 | 80.9 | 17.33 | 0.40 |
| | \(10^{-3}\) | 57.21 | 4.19 | **44.80 / 21.93 / 36.99** | 74.1 | 17.22 | 0.40 |
| | w/o | 53.02 | 0.00 | 44.73 / 21.88 / 36.94 | 73.4 | 16.94 | 0.40 |
| CNN/DailyMail | \(10^{-2}\) | **89.47** | **2.12** | 42.41 / 20.25 / 29.78 | 99.4 | 68.68 | 18.50 |
| | \(10^{-3}\) | 89.08 | 1.72 | 42.96 / 20.79 / 30.28 | 99.4 | 70.42 | 17.60 |
| | \(3\cdot 10^{-4}\) | 88.57 | 1.22 | 43.52 / 21.23 / 30.78 | 99.3 | 69.26 | 14.10 |
| | w/o | 87.36 | 0.00 | **44.29 / 21.82 / 31.62** | 99.2 | 57.13 | 3.90 |

Table 1: The effect of different calibration weights on the model performance in terms of NLI. We also report other automatic measures: Rouge-1, Rouge-2 and Rouge-L scores (R1/R2/RL), Coverage (percentage of tokens in the generated summary that appeared in the input), Repetition (percentage of repeated tokens in the output summary) and the summary lengths. All results are reported on the respective validation sets.
Figure 4: Pareto frontier that demonstrates the trade-offs between NLI and other metrics such as Rouge-2, Coverage, Length and Repetition on all datasets.
normalization. \(\mathbf{X}\) is the set of values obtained with the different values of \(\alpha\). Bold scores in Table 2 indicate the best results according to this metric. We follow the same recipe to select the best models for all datasets.
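As a sanity check, the selection criterion can be written down in a few lines of numpy. This is a minimal sketch under one assumption we make explicit: the baseline length \(\mathbf{L}_{\mathrm{w/o}}\) is pooled into the same min-max normalization as the calibrated lengths, so that the \(\max(\cdot,\cdot)\) term is well defined.

```python
import numpy as np

def minmax(x):
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def avg_score(nli, length, length_baseline):
    """Selection score Avg_alpha: mean of the normalized NLI score and
    (1 - normalized length) over a sweep of alpha values."""
    pooled = minmax(np.append(np.asarray(length, float), length_baseline))
    len_n, base_n = pooled[:-1], pooled[-1]
    return (minmax(nli) + (1.0 - np.maximum(len_n, base_n))) / 2.0

# Values loosely following the alpha-sweep rows of Table 2:
nli = [66.30, 68.24, 78.34, 78.87, 74.28]
length = [20.02, 19.62, 18.54, 16.82, 15.69]
best = np.argmax(avg_score(nli, length, length_baseline=16.94))
print(best)  # index of the best trade-off (here: the alpha = 0.5 row)
```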
**Final Results and Human Evaluations.** Table 3 presents our final results on the corresponding test sets. We also conducted a human evaluation of the generated summaries. Table 4 shows that SLiC-NLI improves the consistency of the summaries from 67% to 85% and the average quality score from 2.96 to 3.43. The results of the experiments on all other datasets are summarized in Table 7 (Appendix D). We also present summary lengths for comparison. The results show that calibration consistently improves the quality and factuality of all generated summaries, and humans consistently prefer the calibrated model over the non-calibrated one. See Figure 6, where we show one of the examples given to the raters; the two SLiC-NLI variants are the only models that produced non-hallucinated summaries.
**Correlation with probabilities.** We study how the log-probability of the calibrated model correlates with NLI (Table 5). For beam search, we either take the top-1 summary or the full beam outputs and compute the correlation across the whole dataset. The sentence log-probability is, as before, computed as the sum of the individual token log-probabilities.
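Computing these correlations is a one-liner with scipy; a minimal sketch with placeholder values (the real inputs would be the per-summary summed log-probabilities and the NLI entailment scores):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# One (log-probability, NLI score) pair per generated summary;
# the values here are illustrative placeholders, not measured data.
logp = np.array([-14.2, -9.8, -11.5, -7.3, -16.0])
nli = np.array([0.62, 0.91, 0.74, 0.95, 0.41])

p, _ = pearsonr(logp, nli)
s, _ = spearmanr(logp, nli)
print(f"Pearson: {p:.3f}, Spearman: {s:.3f}")
```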
## 6 Conclusions
In this work we present _SLiC-NLI_ -- a new method for improving factuality of abstractive summarization models. The method calibrates the likelihood of the generative model with a consistency metric measured by NLI models.
SLiC-NLI achieves state-of-the-art results on both human evaluation and automatic metrics while being simple, effective and straightforward to implement. We show that SLiC-NLI achieves an 18% (from 67% to 85%) increase in the consistency of the summaries according to humans and 31% (from
| \(\alpha\) | **NLI %** | **NLI gain %** | **R1/R2/RL** | **Coverage %** | **Length** | **Avg** |
|---|---|---|---|---|---|---|
| 100 | 66.30 | 13.28 | 42.76 / 19.73 / 34.44 | 77.40 | 20.02 | 0.24 |
| 10 | 68.24 | 15.22 | 42.73 / 19.68 / 34.44 | 77.80 | 19.62 | 0.32 |
| 1 | 78.34 | 25.30 | 40.72 / 17 / 30.82 | 82.60 | 18.54 | 0.62 |
| **0.5** | **78.87** | **25.85** | **40.17 / 17.64 / 32.80** | **82.20** | **16.82** | **0.81** |
| 0.1 | 74.28 | 21.27 | 41.36 / 19.22 / 34.11 | 79.80 | 15.69 | 0.73 |
| 0.01 | 56.35 | 3.33 | 44.82 / 21.96 / 37.02 | 74.00 | 17.02 | 0.41 |
| \(10^{-3}\) | 53.62 | 0.60 | 44.89 / 21.99 / 37.06 | 73.30 | 17.16 | 0.34 |
| \(10^{-4}\) | 53.39 | 0.37 | 44.86 / 21.93 / 37.00 | 73.30 | 17.21 | 0.33 |
| w/o \(f_{len}\) | 81.21 | 28.19 | 39.46 / 16.92 / 31.88 | 83.00 | 18.07 | 0.73 |
| w/o \(L^{\mathrm{cal}}\) | 53.02 | 0.00 | 44.73 / 21.88 / 36.94 | 73.40 | 16.94 | 0.36 |

Table 2: The effect of various length regularizer weights on XSum performance. We choose \(\alpha=0.5\), with the highest NLI score of 78.87% among the length-regularized models on the XSum validation set.
Figure 5: Plot of NLI, Rouge (R1) and Summary lengths with various length regularizer weights. See Table 2 for exact numbers.
Figure 6: An XSum input and various system-generated summaries. The text spans in amber are hallucinated.
49% to 80%) according to automatic metrics on the XSum dataset.
We believe that our method has the potential to improve the quality and factuality of generated text in a variety of applications. In future work, we plan to investigate the use of our method with other types of models, such as instruction-tuned models like PaLM-2 (Chowdhery et al., 2022) and GPT-4 (OpenAI, 2023). We hope that our work will contribute to the development of more reliable and accurate natural language generation systems.
| Model | quality | factual | length |
|---|---|---|---|
| Frost (ECPP, Drop) | 3.18 | .76 | 17.57 |
| Cliff | 3.10 | .69 | 18.18 |
| Finetuned (w/o \(L^{\mathrm{cal}}\)) | 2.96 | .67 | 17.77 |
| SLiC-NLI (w/o \(f_{len}\)) | **3.43** | **.85** | 18.82 |
| SLiC-NLI (with \(f_{len}\)) | 3.21 | **.82** | 15.54 |

Table 4: Human evaluation results on the XSum dataset.
| **Models** | **NLI %** | **R1/R2/RL** | **Length** | **Repetition %** |
|---|---|---|---|---|
| **XSum** | | | | |
| Pegasus | 54.00 | 46.32 / 24.21 / 38.64 | 18.91 | 0.45 |
| Brio | 49.76 | 47.22 / 42.68 / 39.28 | 19.42 | 0.81 |
| SLiC | 51.93 | 43.96 / 20.80 / 35.99 | 16.85 | 0.50 |
| Cliff | 56.11 | 43.10 / 20.90 / 35.61 | 18.27 | 0.31 |
| FactPegasus | 52.37 | 37.13 / 15.08 / 30.33 | 16.57 | 0.32 |
| Frost (Drop) | 58.75 | 43.58 / 20.94 / 36.39 | 17.51 | 0.30 |
| Finetuned (w/o \(L^{\mathrm{cal}}\)) | 48.52 | 44.02 / 22.07 / 36.64 | 17.75 | 0.35 |
| SLiC-NLI (w/o \(f_{len}\)) | **80.01** | 38.16 / 16.43 / 30.97 | 18.59 | 0.59 |
| SLiC-NLI (with \(f_{len}\)) | 74.16 | 40.10 / 18.86 / 33.34 | 15.74 | 0.32 |
| **CNN/DailyMail** | | | | |
| Pegasus | 93.31 | 42.22 / 21.06 / 39.45 | 61.50 | 3.48 |
| Brio | 88.75 | 46.30 / 23.25 / 31.93 | 63.09 | 3.27 |
| SLiC | 93.38 | 43.86 / 21.18 / 30.88 | 52.59 | 3.80 |
| Cliff | 91.08 | 33.91 / 14.29 / 24.27 | 51.43 | 1.45 |
| Frost (Drop) | 93.49 | 43.50 / 21.56 / 40.83 | 57.54 | 3.46 |
| Finetuned (w/o \(L^{\mathrm{cal}}\)) | 92.61 | 42.39 / 20.90 / 35.29 | 51.13 | 2.70 |
| SLiC-NLI (w/o \(f_{len}\)) | **94.58** | 40.84 / 19.54 / 38.30 | 66.48 | 15.68 |
| SLiC-NLI (with \(f_{len}\)) | 94.22 | 41.62 / 19.83 / 38.55 | 63.63 | 6.12 |
| **ForumSum** | | | | |
| SLiC | 75.78 | 41.44 / 20.08 / 34.22 | 35.85 | 14.03 |
| Finetuned (w/o \(L^{\mathrm{cal}}\)) | 72.97 | 40.34 / 19.29 / 32.66 | 31.66 | 7.88 |
| SLiC-NLI (w/o \(f_{len}\)) | **78.26** | 38.74 / 19.15 / 32.25 | 38.12 | 18.47 |
| SLiC-NLI (with \(f_{len}\)) | 77.82 | 40.82 / 20.22 / 34.07 | 30.61 | 8.59 |
| **SAMSum** | | | | |
| SLiC | 73.25 | 52.82 / 27.96 / 43.81 | 17.84 | 1.89 |
| Finetuned (w/o \(L^{\mathrm{cal}}\)) | 73.00 | 51.01 / 26.36 / 42.54 | 18.62 | 2.04 |
| SLiC-NLI (w/o \(f_{len}\)) | **86.57** | 46.44 / 22.60 / 37.66 | 30.00 | 5.28 |
| SLiC-NLI (with \(f_{len}\)) | 84.63 | 49.79 / 25.39 / 41.51 | 20.59 | 1.65 |
| **RedditTIFU-long** | | | | |
| SLiC | 75.61 | 27.51 / 79.81 / 21.71 | 16.22 | 1.20 |
| Finetuned (w/o \(L^{\mathrm{cal}}\)) | 69.10 | 27.52 / 9.16 / 22.53 | 14.54 | 0.70 |
| SLiC-NLI (w/o \(f_{len}\)) | **85.75** | 27.40 / 9.01 / 22.33 | 18.92 | 3.81 |
| SLiC-NLI (with \(f_{len}\)) | 80.87 | 27.43 / 9.35 / 22.78 | 15.40 | 2.15 |

Table 3: Final results on the various test sets. We include results from several state-of-the-art summarization models such as Pegasus (Zhang et al., 2019), Brio (Liu et al., 2022), SLiC (Zhao et al., 2023), Cliff (Cao and Wang, 2021), FactPegasus (Wan and Bansal, 2022) and Frost (Narayan et al., 2021). Cliff, FactPegasus and Frost are specifically trained or designed to generate factual summaries. For Frost, we report Frost (Drop), which avoids hallucinated entities in summaries by dropping them from their entity plans. On each dataset we consistently show outstanding results on NLI. Shorter sequences can be motivated by generation latency or the risk of repetition in the summaries; in that case, the SLiC-NLI variant with length regularisation can be used, and it surpasses the other baselines as well.
| \(w\) | **P (all)** | **S (all)** | **P (top-1)** | **S (top-1)** |
|---|---|---|---|---|
| w/o | 0.12 | 1.58 | 1.38 | 1.45 |
| 0.01 | **3.05** | **3.12** | **2.83** | **2.94** |
| 0.003 | 0.48 | 2.35 | 2.28 | 2.20 |
| 0.001 | 0.28 | 2.18 | 2.14 | 1.99 |
| 0.0003 | 0.14 | 1.85 | 1.70 | 1.71 |
| 0.0001 | 0.14 | 1.74 | 1.60 | 1.62 |

Table 5: Correlation between the log-probabilities of our model and NLI (all values \(\times 10^{-1}\)). We run inference with various decodings and compute the Pearson (**P**) and Spearman (**S**) correlations. For **beam** decoding we either use all the outputs or only the top-1.
### Limitations
While SLiC-NLI is a powerful and simple method for improving the consistency of summarization models, it is important to acknowledge its limitations. For example, we have not explored the capabilities of the method beyond summarization tasks, and since the field of LLMs is moving fast in the direction of single unified models, it is important to make sure that our method works well with instruction-tuning techniques. Additionally, improved consistency does not always lead to high performance in terms of other metrics. There are no guarantees that the creativity and helpfulness of a model's outputs will not be affected by improved consistency; finding a natural balance and control of these aspects is one of the topics we would like to explore in future work. Finally, even though our method is exceptionally good at increasing the consistency between summaries and documents, it does not guarantee that other types of hallucinations, not covered by the NLI metric, will not be generated.
|
2301.11574 | Damage Preserving Transformation for Materials with Microstructure | The failure of heterogeneous materials with microstructures is a complex
process of damage nucleation, growth and localisation. This process spans
multiple length scales and is challenging to simulate numerically due to its
high computational cost. One option is to use domain decomposed multi-scale
methods with dynamical refinement. If needed, these methods refine coarse
regions into a fine-scale representation to explicitly model the damage in the
microstructure. However, damage evolution is commonly restricted to fine-scale
regions only. Thus, they are unable to capture the full complexity and breadth
of the degradation process in the material. In this contribution, a generic
procedure that allows to account for damage in all representations is proposed.
The approach combines a specially designed orthotropic damage law, with a
scheme to generate pre-damaged fine-scale microstructures. Results indicate
that the damage approximation for the coarse representation works well.
Furthermore, the generated fine-scale damage patterns are overall consistent
with explicitly simulated damage patterns. Minor discrepancies occur in the
generation but subsequently vanish when explicit damage evolution continues;
for instance under increased load. The presented approach provides a
methodological basis for adaptive multi-scale simulation schemes with
consistent damage evolution. | Philip P. Müller, Falk K. Wittel, David S. Kammer | 2023-01-27T07:45:19Z | http://arxiv.org/abs/2301.11574v2 | # Damage Preserving Transformation for Materials with Microstructure
###### Abstract
The failure of heterogeneous materials with microstructures is a complex process of damage nucleation, growth and localisation. This process spans multiple length scales and is challenging to simulate numerically due to its high computational cost. One option is to use domain decomposed multi-scale methods with dynamical refinement. If needed, these methods refine coarse regions into a fine-scale representation to explicitly model the damage in the microstructure. However, damage evolution is commonly restricted to fine-scale regions only. Thus, they are unable to capture the full complexity and breadth of the degradation process in the material. In this contribution, a generic procedure that allows to account for damage in all representations is proposed. The approach combines a specially designed orthotropic damage law, with a scheme to generate pre-damaged fine-scale microstructures. Results indicate that the damage approximation for the coarse representation works well. Furthermore, the generated fine-scale damage patterns are overall consistent with explicitly simulated damage patterns. Minor discrepancies occur in the generation but subsequently vanish when explicit damage evolution continues; for instance under increased load. The presented approach provides a methodological basis for adaptive multi-scale simulation schemes with consistent damage evolution.
keywords: Lattice, Continuum damage mechanics, Microstructured disordered material, Anisotropic damage, Multi-scale simulation, Harmonic decomposition, Damage modelling
###### Contents
* 1 Introduction
* 2 Materials and Methods
* 2.1 Generic Damage Transforming Method
* 2.2 Continuum Representation of 2D Isotropic Continua with Damage
* 2.3 Exemplary Material Motive
* 2.4 Determining the Damage Law for the Continuum
* 2.5 Process for the Construction of a Damaged Lattice
* 2.6 Numerical Simulations
* 2.6.1 Numerical Simulation Procedure
* 2.6.2 The UniformSim Simulation Setup
* 2.6.3 The MultiLoadSim Simulation Setup
* 2.6.4 The ReconstrSim Simulation Setup
* 3 Results
* 3.1 Details of a Numerical Simulation
* 3.2 Estimation of the Damage Law \(\widehat{\mathbf{D}}(\overline{\kappa})\)
* 3.3 Test of the Damage Evolution Law \(\widehat{\mathbf{D}}(\overline{\kappa})\)
* 3.4 Estimation of the Transfer Function \(\widehat{\overline{r}}(\overline{\kappa})\)
* 3.5 Tests of the Reconstruction Process
* 4 Summary and Conclusion
* 5 CRediT
* 6 Declaration of Competing Interest
* 7 Data Availability
* A Parameters of \(\widehat{r}_{x}(\kappa_{x})\) and \(\widehat{r}_{y}(\kappa_{y})\)
* B Influence of the Characteristic Length \(\ell\)
* C Influence of the Number of Loading Steps
## 1 Introduction
At a certain scale, even heterogeneous materials will appear homogeneous, and some can even be considered isotropic. Among others, this is true for concrete, one of the most widely used commodities on earth: a mixture of sand, aggregates, cement, water and chemical admixtures. The growth of damage inside concrete is highly affected by the particular microstructure, where, depending on the scale, aggregates or even sand grains act either as focal points for stresses or as obstacles for damage.
Damage initiates at very small scales, long before the macroscopic structure itself fails or cracks. Instead, the damage leads to a reduction of the material's stiffness. Nevertheless, at some point the accumulated damage becomes so widespread that even its smallest increase will trigger the previously isolated nuclei to merge. This leads to a cascade of increasingly larger defects, culminating in the emergence of a macroscopic crack.
Continuum-based methods are the methods of choice when large structures have to be simulated, due to their computational efficiency. Constitutive laws are used to take intrinsic degenerative processes into account. One of the earliest, but still widely used, laws for modelling damage in concrete was proposed by Mazars (Lemaitre, 2001; Mazars et al., 1985). It employs a scalar damage variable to degrade the material's stiffness. However, even if the material was initially isotropic, damage will induce anisotropy into the material's behaviour, which any scalar damage variable is inherently unable to capture. Over the years, a variety of anisotropic damage models were proposed to address this issue (Brancherie et al., 2009; Braun et al., 2021; H. Chen et al., 2016; Delaplace et al., 2008; Desmorat et al., 2007; Gaede et al., 2013). All of them consider the accumulated effects of the damage's growth, represented by internal state variables at the material points, and thereby disregard the actual microstructure, whose degeneration is the actual cause of the emerging damage.
To overcome this deficiency, the entire microstructure could be explicitly represented and simulated. Unfortunately, even with today's fast computers, this is only possible for small system sizes. A way to overcome this barrier are multi-scale methods. They allow computational power to be invested exactly where it is needed, by combining different representations. Although many different methods have been proposed over the past years, they can be classified as either hierarchical or concurrent in nature (Budarapu et al., 2017; Liu, 2018; Lloberas-Valls et al., 2012a; Matou et al., 2017; Unger and Eckardt, 2011).
Hierarchical methods, such as (Elia et al., 2022; Rezakhani et al., 2017; X. Sun et al., 2019; Xu et al., 2023), are characterised by a full separation of scales, which allows every level to be treated independently of the others. The information is passed between the different levels, as one serves as input for the hierarchically higher level.
In contrast, concurrent methods, such as (Lloberas-Valls et al., 2012b; Miller et al., 2009; Xiao et al., 2004), lack the full separation of scales and typically decompose the computational domain into different regions. Imagine a typical setting where high accuracy is only needed inside a small part of the computational domain, for example around a crack tip. Ideally, one limits methods with high accuracy but large computational burden to these small regions, while the remaining part of the computational domain is described by much more efficient methods. The link between the different regions is established by a coupling scheme (Bitencourt et al., 2015; Farhat et al., 1991; Lloberas-Valls et al., 2012b; Unger and Eckardt, 2011), of which the Arlequin method is a particularly general one (Anciaux et al., 2008; Bauman et al., 2008; Guidault et al., 2007; Unger and Eckardt, 2011; Wellmann et al., 2012).
Whenever the domain decomposition is not available in advance, one must resort to adaptive methods, which refine regions on demand (_e.g._, P. Y. Chen et al., 2021; Evangelista, Alves, et al., 2020; Evangelista and Moreira, 2020; Rodrigues et al., 2018; Unger and Eckardt, 2011; Zhang et al., 2012). However, important questions are (i) how are the regions that need to be refined identified, and (ii) how is the loading history of the coarse representation connected to the initial state of the newly created fine-scale representation? Especially (ii) does not seem to be well addressed in the literature. Some authors assume that the coarse representation does not accumulate any damage prior to refinement, which is triggered by a stress- or strain-based criterion (L. Chen et al., 2021; P. Y. Chen et al., 2021; Rodrigues et al., 2018; Unger, Eckardt, and Konke, 2011). Other authors link the refinement criterion to a damage law (Lloberas-Valls et al., 2012b; Rezakhani et al., 2017; B. Sun et al., 2015; X. Sun et al., 2019; Xu et al., 2023). Since most damage laws employ a threshold below which they are inactive, refinement is triggered the first time the loading surpasses it. Both approaches restrict damage evolution to refined regions only, but have the advantage that the refined region is always undamaged at the beginning. Consequently, the entire load history of the coarse regions is disregarded, and refinement might occur unnecessarily early, since even the smallest damage triggers a refinement.
In this paper we propose, to the best of our knowledge for the first time, a generic approach for the refinement step in adaptive concurrent multi-scale simulations that accounts for the damage evolution inside the coarse representation. To this end, we equip the coarse-scale representation with its own anisotropic damage measure, which is based on a damage variable recently presented by Oliver-Leblond et al. (2021). Further, we develop a scheme to create fine-scale representations with a given initial damage pattern on the fly. This allows us to create fine-scale regions that already contain initial damage. Together, these two components allow damage evolution to be included in unrefined regions and the coarse loading history to be incorporated upon refinement. In addition, the coarse damage measure allows much better control of the refinement.
While our approach is generic and rather simple, its practical details highly depend on the selected representations. Thus, we demonstrate it by applying it to one particular test case. The remainder of this paper is organised as follows: in Sec. 2, we explain our method in more detail and present the proposed techniques. In Sec. 3, we determine the parameters of our method and assess its applicability, before we draw final conclusions in Sec. 4.
## 2 Materials and Methods
The particular choice of the material's microstructure, also called the motive, is in general arbitrary, but should follow the principles of representative volume elements (RVE) (Lemaitre and Desmorat, 2005). The state of a discrete representation with its inherent characteristic structure is fully given by \(\mathbf{r}\), which describes every single discrete element (right side of Fig. 1). In this representation, damage \(\mathbb{D}(\mathbf{r})\) is given by the irreversible degeneration of the constituting elements. On the left side of Fig. 1, the smeared continuum representation is shown, which lacks such an explicit microstructure and only considers the cumulative effects of the damage through internal state variables added to the constitutive law. Here, damage is given by \(\mathbf{D}\), which depends on the state \(\overrightarrow{\kappa}\) at a particular location and is embedded in the constitutive law.
Since the continuum representation loses its validity once cracks localise, one must refine the continuum to its discrete twin in such a way that all important aspects of the fracture are captured accurately on the fine scale. The key to a meaningful adaptive modelling of the damage evolution lies in a transformation from the continuum to the discrete representation that conserves the degraded mechanical behaviour found in the continuum. One focus of this work is an approach to construct a discrete representation that respects the preceding damage present in the continuum representation.
Even though the procedure is generic and in principle not restricted to specific numerical material representations, this paper focuses on one particular choice. We will, however, outline the generic way of working with the method (see Sec. 2.1) before we present our specific choice. As an example, we chose a two-dimensional, plane-stress, isotropic material (see Sec. 2.2) with an underlying material heterogeneity represented by a triangular network of beam-truss elements with linear-elastic, brittle behaviour and quenched disorder of breaking thresholds (see Sec. 2.3). We then discuss the particular choice of the damage law as well as the reconstruction step (see Secs. 2.4, 2.5). To determine and test them, we use data obtained from numerical simulations (see Sec. 2.6).
### Generic Damage Transforming Method
Initially, the domain is described as a continuum without any internal structure, whose state is fully described by the continuum state variable \(\overrightarrow{\kappa}\). In the continuum, the damage evolution is fully governed by the damage function \(\widehat{\mathbf{D}}(\overrightarrow{\kappa})\). Therefore, \(\widehat{\mathbf{D}}(\overrightarrow{\kappa})\) can be interpreted as the macroscopic damage that is expected for a hypothetical discrete representation under identical loading. Thus, we can determine the function describing the macroscopic damage by homogenising the discrete damage \(\mathbb{D}(\mathbf{r})\). This leads to a perspective on the damage law that differs from the conventional one, where the damage law is calibrated against a physical material. Instead, here the law is calibrated against a particular numerical representation of the material.
Once the continuum model reaches a certain damage limit, it is no longer suitable and has to be refined to a discrete representation. However, this discrete state has to be consistent with the previous continuum representation. This includes stiffness and damage, which have to be preserved as much as possible by the transformation. Determining this reconstruction process is challenging, since it is by its very nature not unique.
### Continuum Representation of 2D Isotropic Continua with Damage
To represent a two-dimensional isotropic material under plane stress, the Finite Element Method (FEM) is used, with continuum damage mechanics (CDM) as the damage measure (Lemaitre, 2001; Lemaitre, 1996; Lemaitre and Desmorat, 2005). We use the well-known material law:
\[\boldsymbol{\sigma}=(\mathcal{I}-\mathcal{D})\;\mathcal{C}\boldsymbol{ \varepsilon}, \tag{1}\]
where \(\boldsymbol{\sigma}\) and \(\boldsymbol{\varepsilon}\) denote the continuum stress and strain tensors, respectively, \(\mathcal{C}\) is the continuum stiffness tensor of the undamaged material, and \(\mathcal{D}\) is the damage tensor. While in Eq. (1) damage is represented by a fourth order tensor, in this work we use the second order tensor \(\mathbf{D}\) to describe damage (Lemaitre and Desmorat, 2005, Sec. 1.1.4). This choice allows to account for anisotropic effects, while avoiding the complexity of fourth order tensors. Due to the choice of CDM, as continuum damage measure, the damage variable \(\mathbf{D}\) can directly be identified with the damage function \(\widehat{\mathbf{D}}(\overrightarrow{\kappa})\). Further, we identify \(\overrightarrow{\kappa}\) as the continuum state variable given as
\[\overrightarrow{\kappa}:=\begin{pmatrix}\kappa_{x}\\ \kappa_{y}\end{pmatrix}. \tag{2}\]
Figure 1: The continuum damage depends on the continuum state \(\overrightarrow{\kappa}\) and has value \(\mathcal{D}\). Formally, \(\mathcal{D}\) is a fourth order tensor, but in this work we are using the second order tensor \(\mathbf{D}\) to represent damage. The discrete damage \(\mathbb{D}(\mathbf{r})\) depends on \(\mathbf{r}\) and hence on the state of all discrete elements of the lattice. The two representations, as well as their damages, are interconnected by the homogenisation and refinement processes, respectively.
Scale of the lattice is exaggerated.
A zoning approach is used to divide the principal strain space along an angle \(\chi\), known as the zone boundary, into an \(x\)-zone (red shaded parts) and a \(y\)-zone (yellow shaded parts in Fig. 2). The two components \(\kappa_{x}\) and \(\kappa_{y}\) represent the maximal reached principal tensile strain in the \(x\) and \(y\) direction, respectively, _i.e._
\[\kappa_{x}:=\max\left(\kappa_{x},\,\left\langle\varepsilon_{1}\right\rangle_{+ }\right),\qquad\kappa_{y}:=\max\left(\kappa_{y},\,\left\langle\varepsilon_{2} \right\rangle_{+}\right).\]
While \(\varepsilon_{1}\) and \(\varepsilon_{2}\) are the eigenvalues of the strain tensor \(\mathbf{\varepsilon}\), its eigenvectors form a Givens rotation matrix of angle \(\Gamma\), which is sometimes called eigenangle. The angle \(\Gamma\), together with the boundary \(\chi\), determines which zone the eigenvalues are associated with, see Fig. 2.
In the initial phase, where the CDM is used, the damage is still small. This allows us to make some simplifying assumptions on the damage variable \(\mathbf{D}\), which will also affect \(\widehat{\mathbf{D}}(\overrightarrow{\kappa})\): (i) We assume that the damage is orthotropic, which reduces \(\mathbf{D}\) to a diagonal matrix. For that reason, we will only consider the eigenvalues of \(\mathbf{D}\), cf. Eq. (11). (ii) We assume that there is no coupling between the directions. Thus, a change of \(\kappa_{x}\) (\(\kappa_{y}\)) will only affect the damage along the \(x\)-direction (\(y\)-direction). Later we will slightly relax this assumption, cf. Eq. (12).
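To make the zoning rule concrete, the following is a minimal numpy sketch of how the continuum state \(\overrightarrow{\kappa}\) could be updated from a strain tensor. The folding of the eigenangle \(\Gamma\) into \([0^{\circ},90^{\circ}]\) and the comparison against \(\chi\) reflect our reading of Fig. 2 and are assumptions, not the paper's reference implementation.

```python
import numpy as np

def update_kappa(kappa, eps, chi_deg=40.0):
    """Update (kappa_x, kappa_y) from a 2D strain tensor eps using the
    zoning approach: the eigenangle Gamma decides which principal
    strain feeds which zone, relative to the zone boundary chi."""
    vals, vecs = np.linalg.eigh(eps)           # eigenvalues, ascending
    e2, e1 = vals                              # e1 >= e2
    v1 = vecs[:, 1]                            # eigenvector of e1
    gamma = np.degrees(np.arctan2(v1[1], v1[0])) % 180.0
    gamma = min(gamma, 180.0 - gamma)          # fold into [0, 90] (assumed)

    kx, ky = kappa
    if gamma <= chi_deg:                       # e1 acts in the x-zone
        kx = max(kx, max(e1, 0.0))
        ky = max(ky, max(e2, 0.0))
    else:                                      # e1 acts in the y-zone
        ky = max(ky, max(e1, 0.0))
        kx = max(kx, max(e2, 0.0))
    return kx, ky

print(update_kappa((0.0, 0.0), np.array([[2e-3, 0.0], [0.0, 5e-4]])))
```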
### Exemplary Material Motive
The example material motive chosen here is based on models proposed in Refs. (Herrmann et al., 1989; Mier, 2017), namely a regular triangular lattice formed by \(3^{\mathrm{rd}}\) order Reddy truss-beam elements with characteristic lattice size \(\ell\) (Reddy, 1997; Reddy et al., 1997). Using beams allows bending properties to be included, and the resulting lattice is able to represent a Cosserat continuum (Ostoja-Starzewski, 2008; Vardoulakis, 2019). The microscopic beams consist of an isotropic material with Young's modulus \(E_{b}\) and Poisson's ratio \(\nu_{b}\). A list of all used material parameters is given in Tab. 1. In a multi-scale simulation, \(E_{b}\) has to be chosen such that the resulting behaviour of the discrete structure matches that of the continuum, _i.e._ its stiffness tensor \(\mathcal{C}\). However, since this paper studies the refinement step in isolation, without an actual continuum phase, the particular choice of \(E_{b}\) is irrelevant.
**Lattice Geometry.** The motive is defined by the number of nodes (\(N_{x}\), \(N_{y}\)) in the \(x\)- and \(y\)-direction and the spatial extension in the \(x\)-direction \(L_{x}\), with resulting characteristic lattice size \(\ell:=L_{x}/(N_{x}-1)\) and spatial \(y\)-extension \(L_{y}:=N_{y}\ell\,\sqrt{3}/2\). The influence of \(\ell\) on the damage evolution is small (see Appendix B). An out-of-plane height of \(H\) is assumed. To remove the symmetries of the lattice, topological disorder is introduced (Moukarzel et al., 1992; Wittel, 2006) by adding the random displacement
\[\overline{x}_{i}^{\Delta}:=a\frac{\ell}{2}\;\overline{x}_{i}^{*} \tag{3}\]
to every internal node of the grid, where \(\,\overline{x}_{i}^{*}\) is a random vector sampled uniformly from the unit circle (see Fig. 3a). The distortion is controlled by the parameter \(a\in[0,\,1[\), known as the distortion level.
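A minimal numpy sketch of Eq. (3) is given below. We read "sampled uniformly from the unit circle" as uniform sampling within the unit disk, consistent with Fig. 3a; this interpretation is an assumption on our part.

```python
import numpy as np

def distort_nodes(nodes, a, ell, seed=0):
    """Add topological disorder, Eq. (3): shift each internal node by
    a * (ell/2) * x*, with x* drawn uniformly from the unit disk."""
    rng = np.random.default_rng(seed)
    n = len(nodes)
    r = np.sqrt(rng.random(n))          # sqrt-radius trick: uniform in area
    phi = 2.0 * np.pi * rng.random(n)
    x_star = np.column_stack((r * np.cos(phi), r * np.sin(phi)))
    return np.asarray(nodes, float) + a * (ell / 2.0) * x_star

# Example: distort three nodes of a lattice with ell = 0.01, a = 0.5.
print(distort_nodes([[0.0, 0.0], [0.01, 0.0], [0.02, 0.0]], a=0.5, ell=0.01))
```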
**Geometrical Properties of Beam-Truss Elements.** The thickness of beam \(i\), denoted as \(t_{i}\), depends on the lattice's geometry. It is given as \(t_{i}:=A_{i}^{(O)}/\ell_{i}\), where \(A_{i}^{(O)}\) is the area represented by the beam and \(\ell_{i}\) its length, see
| **Property** | **Value** | **Unit** |
|---|---|---|
| \(N_{x}\), \(N_{y}\) | 300, 346 | \([-]\) |
| \(L_{x}\), \(L_{y}\) | 2, 1.998 | m |
| \(H\) | 1 | m |
| \(E_{b}\) | \(1\times 10^{6}\) | Pa |
| \(\nu_{b}\) | 0.3 | \([-]\) |
| \(k_{\varepsilon}\) | 3 | \([-]\) |
| \(\lambda_{\varepsilon}\) | 0.02 | \([-]\) |
| \(k_{\Phi}\) | 3 | \({}^{\circ}\) |
| \(\lambda_{\Phi}\) | 0.02 | \([-]\) |

Table 1: Parameters of the discrete material motive.
Figure 3: (a) Distortion of the central node, ignoring the distortion of the surrounding nodes. The location of the distorted node (yellow circle), is randomly selected within the blue circle of radius \(\ell/2\). Afterwards, the length of the beams are adjusted to match the new node location (black lines). (b) The thickness of beam \(i\) is given as \(t_{i}:=A_{i}^{(O)}/\ell_{i}\), where \(\ell_{i}\) is its length and \(A^{(O)}\) is the area the beam is representing. Points \(z_{L}\) and \(z_{K}\) are centres of the adjacent triangles’ incircles.
Figure 2: Interpretation of the zone boundary parameter \(\chi\). While \(\varepsilon_{1}\) and \(\varepsilon_{2}\) are the eigenvalues of \(\mathbf{\varepsilon}\), its eigenvectors are described by the value \(\Gamma\). The eigenangle \(\Gamma\) and the zone boundary \(\chi\) determine which eigenvalue acts in which direction.
Fig. 3b. The area \(A_{i}^{(O)}\) is formally defined as the set of points that are closer to beam \(i\) than to any other beam, but are inside the lattice. It can be determined by finding the intersection of the angle bisectors, _i.e._ the centres of the incircles, of the two adjacent triangles, denoted as \(z_{K}\) and \(z_{L}\) in Fig. 3b. In case the beam is part of the boundary, \(A_{i}^{(O)}\) is artificially doubled. This ensures that in a regular lattice all beams have the same axial rigidity.
**Damage Criterion Applied to the Beam-Truss Lattice.** In the discrete representation, damage is the irreversible failure of elements, namely the reduction of their contributing stiffness to an insignificant level. To determine whether a beam has surpassed its loading capacity, the elliptical criterion
\[\left(\frac{\varepsilon_{i}}{\varepsilon_{i;\,th}}\right)^{2}+\frac{\max \left(\left|\Phi_{i}^{(r)}\right|,\,\left|\Phi_{i}^{(l)}\right|\right)}{ \Phi_{i;\,th}}=:\Psi_{i}\geq 1 \tag{4}\]
is used, where \(\varepsilon_{i;th}\) and \(\Phi_{i;th}\) are the beam's elongation and bending thresholds, respectively (Herrmann et al., 1989). Both thresholds are sampled independently from the Weibull distributions \(\varepsilon_{i;th}\stackrel{{\text{iid}}}{{\sim}}\text{Weib}\left( k_{e},\,\lambda_{e}\right)\) and \(\Phi_{i;th}\stackrel{{\text{iid}}}{{\sim}}\text{Weib}\left(k_{ \Phi},\,\lambda_{\Phi}\right)\).
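A minimal sketch of the failure criterion and the quenched disorder is given below; the threshold parameters follow Tab. 1, and numpy's Weibull sampler is rescaled by \(\lambda\), since it draws with unit scale.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_thresholds(k, lam, n):
    # numpy's Weibull sampler draws with unit scale; rescale by lambda.
    return lam * rng.weibull(k, n)

def psi(eps, eps_th, phi_left, phi_right, phi_th):
    """Elliptical failure measure Psi_i of Eq. (4); the beam fails
    once psi(...) >= 1."""
    return (eps / eps_th) ** 2 + max(abs(phi_left), abs(phi_right)) / phi_th

# Quenched disorder: independent thresholds per beam (values from Tab. 1).
eps_th = sample_thresholds(k=3, lam=0.02, n=1000)
phi_th = sample_thresholds(k=3, lam=0.02, n=1000)
print(psi(0.015, eps_th[0], 0.001, -0.002, phi_th[0]))
```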
**The Discrete State Variable \(\vv{r}\).** The discrete state is uniquely described by \(\mathbf{r}\). However, for the context of this paper, the surrogate discrete state variable
\[\vv{r}:=\begin{pmatrix}r_{x}\\ r_{y}\end{pmatrix} \tag{5}\]
is introduced and termed the "discrete state variable". Since \(\vv{r}\) has only two components, it does not uniquely describe the damaged state. This ambiguity will be resolved by the reconstruction process (see Sec. 2.5). \(\vv{r}\) is a purely mathematical quantity designed to have certain properties. First, its 1-norm \(\widetilde{r}:=\left\|\vv{r}\right\|_{1}:=|r_{x}|+|r_{y}|\) equals \(\nicefrac{{N_{f}}}{{N_{T}}}\), where \(N_{f}\) is the number of failed beams and \(N_{T}\) the total number of beams in the lattice. \(\widetilde{r}\) is also called the ratio of failed beams (rfb). Second, its components are defined by associating them with the \(x\)- and \(y\)-zone, respectively, similar to \(\vv{\kappa}\) (see Sec. 2.2). But while \(\kappa_{x}\) is connected to strains in the \(x\)-zone, \(r_{x}\) is related to the number of beams that have failed due to \(\kappa_{x}\).
### Determining the Damage Law for the Continuum
The damage function \(\widehat{\mathbf{D}}(\vv{\kappa})\) will take the role of the damage variable \(\mathbf{D}\) inside the constitutive equation (1). Thus, \(\widehat{\mathbf{D}}(\vv{\kappa})\) has to be designed such that its evolution mimics the expected behaviour of \(\mathbf{D}\) (see Sec. 3.2). For the extraction, which involves two steps, the UniformSim simulation data of fully discrete lattices is used (see Sec. 2.6.2).
**Step 1: Effective Material Stiffness Tensor \(\mathcal{C}\).** First, the effective stiffness tensor \(\mathcal{C}\) is calculated by homogenisation. After the convergence of each loading step, the following seven strain states
\[\left\{\begin{pmatrix}1\\ 2\\ 3\end{pmatrix}\!\!,\begin{pmatrix}4\\ 5\\ 0\end{pmatrix}\!\!,\begin{pmatrix}6\\ 0\\ 0\end{pmatrix}\!\!,\begin{pmatrix}0\\ 7\\ 0\end{pmatrix}\!\!,\begin{pmatrix}0\\ 0\\ 8\end{pmatrix}\!\!,\begin{pmatrix}9\\ 0\\ 10\end{pmatrix}\!\!,\begin{pmatrix}0\\ 9\\ 8\end{pmatrix}\!\!\right\}, \tag{6}\]
denoted as \(\left(\varepsilon_{xx},\,\varepsilon_{yy},\,2\varepsilon_{xy}\right)^{\rm T}\times 10^{-3}\), were applied to the lattice, while blocking further damage, to measure the resulting stresses. This results in an overdetermined system of 21 equations for the 6 unknown coefficients of \(\mathcal{C}\), which is solved by a least-squares approach.
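A minimal sketch of this least-squares homogenisation step, assuming the probe strains and the measured stress responses are available in Voigt notation:

```python
import numpy as np

def fit_stiffness(strains, stresses):
    """Least-squares fit of the 6 independent coefficients of the
    symmetric Voigt stiffness matrix from the 7 probe strain states
    of Eq. (6), i.e. 21 equations for 6 unknowns.

    strains, stresses: arrays of shape (7, 3) in Voigt notation
    (eps_xx, eps_yy, 2 eps_xy); stresses are the measured responses.
    """
    rows = []
    for e in strains:
        e1, e2, e3 = e
        # Unknown coefficient order: C11, C12, C13, C22, C23, C33.
        rows.append([e1, e2, e3, 0, 0, 0])
        rows.append([0, e1, 0, e2, e3, 0])
        rows.append([0, 0, e1, 0, e2, e3])
    A = np.array(rows, dtype=float)
    b = np.asarray(stresses, dtype=float).ravel()
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    C11, C12, C13, C22, C23, C33 = c
    return np.array([[C11, C12, C13],
                     [C12, C22, C23],
                     [C13, C23, C33]])
```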
**Step 2: Determining the Damage Variable \(\mathbf{D}\).** Second, the damage variable \(\mathbf{D}\) is extracted from the effective stiffness tensors of the lattice. For this, a technique originally presented by Oliver-Leblond et al. (2021) is used. For completeness, the relevant equations are replicated as
\[\mathbf{d}(\mathcal{T}) :=\text{tr}_{1,2}[\mathcal{T}]=\mathcal{T}_{kkij}, \tag{7a}\] \[K :=\frac{1}{4}\,\,\text{tr}\!\left[\mathbf{d}(\mathcal{C})\, \right],\] (7b) \[\mathbf{D} :=\mathbf{D}\!\left[\mathcal{C},\,\widehat{\mathcal{C}}\right]:= \frac{1}{2K}\left(\mathbf{d}(\mathcal{C})-\mathbf{d}\!\left(\!\widehat{ \mathcal{C}}\right)\,\right). \tag{7c}\]
The tensor defined by Eq. (7a) is also known as dilatation second order tensor, while scalar \(K\) of Eq. (7b) is the bulk modulus. Eq. (7c) combines the effective \(\mathcal{C}\) and undamaged stiffness tensor \(\widehat{\mathcal{C}}\) to the damage variable \(\mathbf{D}\). \(\mathbf{D}\) is by construction a real symmetric \(2\times 2\) matrix, thus fully characterised by its two eigenvalues \(d^{(x)}\) and \(d^{(y)}\) as well as a single scalar \(\Gamma\), describing the rotation of its eigenbasis (see Fig. 2).
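The extraction of Eq. (7) is compact in numpy when the stiffnesses are stored as fourth-order \(2\times2\times2\times2\) arrays. Taking \(K\) from the undamaged stiffness and ordering the difference so that \(\mathbf{D}\) grows with degradation are our reading of Eqs. (7b)-(7c).

```python
import numpy as np

def dilatation(T):
    # d(T) := tr_{1,2}[T] = T_kkij, Eq. (7a); T is a 2x2x2x2 array.
    return np.einsum('kkij->ij', T)

def damage_tensor(C_eff, C_undamaged):
    """Second-order damage tensor of Eq. (7c)."""
    K = 0.25 * np.trace(dilatation(C_undamaged))   # bulk modulus, Eq. (7b)
    D = (dilatation(C_undamaged) - dilatation(C_eff)) / (2.0 * K)
    # D is real and symmetric; its eigendecomposition yields the
    # eigenvalues d^(x), d^(y) and the eigenangle Gamma used later.
    return D
```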
### Process for the Construction of a Damaged Lattice
The reconstruction process, _i.e._ the creation of a discrete lattice with a particular damage, involves two components: (i) The transfer function \(\widehat{\vv{r}}(\vv{\kappa})\), which transforms the continuum state \(\vv{\kappa}\) into the discrete surrogate state \(\vv{r}\). (ii) A scheme which transforms the surrogate state \(\vv{r}\) into the full discrete state \(\mathbf{r}\). Hence, the scheme must be able to resolve the ambiguity inherently present in \(\vv{r}\). As a direct consequence of the definition of the discrete state \(\vv{r}\) (see Eq. (5)), the transfer function is given as
\[\vv{\widetilde{r}}(\vv{\kappa}):=\begin{pmatrix}\widehat{r}_{x}(\kappa_{x}) \\ \widehat{r}_{y}(\kappa_{y})\end{pmatrix}. \tag{8}\]
As for the damage function \(\widehat{\mathbf{D}}(\vv{\kappa})\), we are using data obtained from the UniformSim simulations (see Sec. 2.6.2) to empirically determine the function \(\widehat{\vv{r}}(\vv{\kappa})\) that approximates \(\vv{r}\). Due to the nature of \(\vv{r}\), it is impossible to measure its components and thus to fit them directly. However, it is easy to measure and fit the quantity \(\widetilde{r}:=\left\|\vv{r}\right\|_{1}\).
Because of the specific design of the simulations and assumptions, it is possible to associate the value \(\widetilde{r}\) with the components of \(\,\overline{r}\,\), see Sec. 2.6.2.
For reconstructing the full discrete state, a probabilistic scheme was devised. It starts by constructing an undamaged lattice, from which certain beams are removed such that the resulting damage matches, in a statistical sense, the one given by \(\,\overline{\kappa}\,\). Due to the assumed decoupling between the \(x\)- and \(y\)-zones, it is possible to handle the two directions independently. For each direction \(\alpha\), _i.e._ \(x\) and \(y\), the following steps must be carried out:
1. From the continuum state \(\kappa_{\alpha}\), the corresponding discrete state variable \(r_{\alpha}=\widehat{r}_{\alpha}(\kappa_{\alpha})\) is computed. Through the relationship \(N_{\alpha}:=r_{\alpha}\cdot N_{T}\), it is possible to determine how many failed beams are associated with this direction.
2. Each beam is assigned a probability defined as \[p_{i}\propto\frac{1}{\varepsilon_{i;\,th}}\,\left|\left\langle\,\overline{b} _{i},\,\,\overline{t}_{\alpha}\right\rangle\right|^{k},\] (9) where \(\varepsilon_{i;\,th}\) is the elongation threshold and \(\,\overline{b}_{i}\,\)the direction of the beam. The vector \(\,\overline{t}_{\alpha}\,\), called "damage basis", represents the main damage direction. In our motive, it is either \(\,\overline{t}_{x}:=\left(1,\,0\right)^{\mathrm{T}}\) or \(\,\overline{t}_{y}:=\left(0,\,1\right)^{\mathrm{T}}\). Finally, parameter \(k\), called "directional weight", is a tuning parameter that balances the relative importance of the two terms and needs to be determined (see Sec. 3.4).
3. The \(N_{\alpha}\) beams to fail are drawn, without replacement, from the probability distribution defined by Eq. (9) (see the sketch after this list).
4. The selected beams are marked as failed.
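A minimal numpy sketch of steps 2-3, i.e. weighted sampling without replacement according to Eq. (9); normalizing the proportionality in Eq. (9) and using numpy's `Generator.choice` are implementation choices on our part.

```python
import numpy as np

def select_failed_beams(eps_th, beam_dirs, t_alpha, k, n_fail, seed=0):
    """Steps 2-3 of the reconstruction: draw the N_alpha beams to fail,
    without replacement, with the probabilities of Eq. (9)."""
    rng = np.random.default_rng(seed)
    beam_dirs = np.asarray(beam_dirs, dtype=float)
    p = (1.0 / np.asarray(eps_th)) * np.abs(beam_dirs @ np.asarray(t_alpha)) ** k
    p = p / p.sum()                     # resolve the proportionality
    return rng.choice(len(p), size=n_fail, replace=False, p=p)

# Example: three beams, damage basis t_x = (1, 0), directional weight k = 2.
dirs = [[1.0, 0.0], [0.5, np.sqrt(3) / 2], [0.5, -np.sqrt(3) / 2]]
print(select_failed_beams([0.01, 0.02, 0.03], dirs, [1.0, 0.0], k=2, n_fail=1))
```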
### Numerical Simulations
For estimating and testing the damage function \(\widehat{\mathbf{D}}(\,\overline{\kappa}\,)\) and the transfer function \(\widehat{\vv{r}}(\vv{\kappa})\), a series of different numerical simulations is carried out on fully discrete lattices. Due to the randomness of the lattice, 30 realisations were made for each case.
#### 2.6.1 Numerical Simulation Procedure
The numerical simulations were carried out using a customised version of the Akantu FEM library (Richart et al., 2015). At the beginning of each loading step the corresponding boundary conditions are applied to the lattice. Then the following iterative procedure is carried out:
First, the equilibrium positions of the nodes are determined, without considering beam failure. Then, for each beam that has not yet failed, the failure condition Eq. (4) is evaluated. Among the beams that satisfy the failure condition, the one with the largest \(\Psi\) value is removed. These steps are repeated until no beam satisfies the failure condition, at which point the load increment ends.
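In pseudo-Python, one load increment of this quasi-static procedure looks as follows; `solve_equilibrium`, `compute_psi` and `lattice.remove_beam` are hypothetical placeholders for the FEM solver (Akantu in the paper) and the lattice bookkeeping.

```python
def load_step(lattice, solve_equilibrium, compute_psi):
    """One load increment: repeatedly solve for equilibrium and remove
    the single most over-loaded beam until no intact beam satisfies
    the failure condition of Eq. (4).

    compute_psi is assumed to return a dict mapping the id of every
    intact beam to its Psi_i value.
    """
    while True:
        solve_equilibrium(lattice)
        psi = compute_psi(lattice)
        worst = max(psi, key=psi.get)       # beam id with largest Psi
        if psi[worst] < 1.0:
            break                            # increment has converged
        lattice.remove_beam(worst)
```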
#### 2.6.2 The UniformSim Simulation Setup
The first type of simulation, called UniformSim, is used for estimating the transfer function \(\widehat{\vv{r}}(\vv{\kappa})\) and the damage law \(\widehat{\mathbf{D}}(\,\overline{\kappa}\,)\). These simulations realise a uni-axial strain state of the lattice that is rotated by an arbitrary but constant angle \(\varphi\), called the pull direction (see Fig. 4). Thus
\[\boldsymbol{\varepsilon}_{\varphi}:=\mathbf{R}_{\varphi}^{\mathrm{T}}\begin{pmatrix} \widehat{\varepsilon}_{1}&0\\ 0&0\end{pmatrix}\mathbf{R}_{\varphi}, \tag{10}\]
where \(\mathbf{R}_{\varphi}\) is the Givens rotation matrix for angle \(\varphi\), is applied to the lattice's boundary. In each loading step, \(\widehat{\varepsilon}_{1}\) is increased by 0.0001 until 0.005 is reached. The limit is chosen to ensure that no localisation will occur and that the damage maintains its diffuse character. The influence of the strain increment on the damage evolution is small (Appendix C).
The particular setup of the UniformSim simulations, together with the previous assumptions on \(\mathbf{D}\) and \(\widehat{\vv{r}}(\vv{\kappa})\), allows the following conclusions and simplifications:
1. A pull direction is either associated to the \(x\)- or \(y\)-zone (see Sec. 2.2). This allows to probe the behaviour of a single zone. Which zone is probed depends on \(\varphi\) and the yet unknown zone boundary value \(\chi\).
2. A simulation, _i.e._ a particular value of \(\varphi\), will only affect the state of either the \(x\)- or the \(y\)-zone. Thus, an increase of \(\widehat{\varepsilon}_{1}\) will only affect one eigenvalue of \(\mathbf{D}\) and a single component of \(\,\overline{\kappa}\,\) as well as \(\,\overline{r}\,\). Which component is affected depends on \(\varphi\) and \(\chi\).
3. For the continuum state variable, the relation \(\widetilde{\kappa}:=\left\|\,\overline{\kappa}\,\right\|_{1}=\left|\kappa_{\alpha}\right|=\widehat{\varepsilon}_{1}\) holds. Thus, one component equals the applied uni-axial strain \(\widehat{\varepsilon}_{1}\), while the other is zero.
4. For the discrete state variable, the relation \(\widetilde{r}=\left\|\,\overline{r}\,\right\|_{1}=\left|r_{\alpha}\right|\) holds. Thus, only one component is non-zero and equals \(\widetilde{r}\). This can be used to determine the components of \(\,\overline{r}\,\) from \(\widetilde{r}\), once the \(x\)- and \(y\)-zones are known (see Sec. 3.4).
Figure 4: Boundary conditions used by the UniformSim series, shown for the case of \(\varphi=0^{\circ}\). In general, the boundary conditions given by Eq. (10) are applied to the whole boundary. Scale of the lattice is exaggerated.
#### 2.6.3 The MultiLoadSim Simulation Setup
For testing the damage function as well as the reconstruction procedure, a second type of simulation is used, called MultiLoadSim. It realises a bi-axial strain state, imposed along the \(x\)- and \(y\)-axes (see Fig. 5). Both strains are increased until \(\varepsilon_{xx}=\varepsilon_{yy}=\varepsilon_{fin}\) is reached, where \(\varepsilon_{fin}\) is the control parameter. For each simulation, the loading is imposed in three different ways, but each time the same initial lattice is used:
XThenYSim: \(\varepsilon_{xx}\) is increased in steps of 0.0001 until it reaches \(\varepsilon_{fin}\) and then maintained. Then \(\varepsilon_{yy}\) is increased by the same increment until \(\varepsilon_{fin}\) is reached.
YThenXSim: The same as XThenYSim, however, the order of loading the axes is switched. BothXYSim: Both strains \(\varepsilon_{xx}\) and \(\varepsilon_{yy}\) are increased simultaneously, in steps of 0.0001 until \(\varepsilon_{fin}\) is reached.
All three paths reach the same final state, \(\varepsilon_{xx}=\varepsilon_{yy}=\kappa_{x}=\kappa_{y}=\varepsilon_{fin}\), but via different paths. As a consequence, the special relation \(\widetilde{\kappa}:=\left\|\,\widetilde{\kappa}\,\right\|_{1}=\left|\varepsilon _{xx}\right|+\left|\varepsilon_{yy}\right|\) holds in these simulations. Both, the XThenYSim and the YThenXSim loading path impose in the first half of the loading an uni-axial strain state and then switch to a bi-axial strain state for the second half, while the BothXYSim loading path imposes a bi-axial strain state from the beginning.
#### 2.6.4 The ReconstrSim Simulation Setup
For testing the reconstruction process (see Sec. 2.5), a third type of simulation is used, called ReconstrSim. The basic setup is equivalent to UniformSim, but with \(\varphi=0^{\circ}\).
roughly at the lattice's centre. While the exact damage pattern depends on the realisation of the lattice and the locations where the snapshots were taken, statistically they all look the same. It is this statistical damage pattern that we want to capture with the transfer function \(\widehat{\vv{r}}(\vv{\kappa})\) and recreate by the reconstruction process, whereas the damage law \(\widehat{\mathbf{D}}(\overline{\kappa})\) captures the accumulated effects on the lattice's macroscopic stiffness.
### Estimation of the Damage Law \(\widehat{\mathbf{D}}(\overline{\kappa})\)
We now study the behaviour of the damage variable \(\mathbf{D}\) that we have extracted from the data of the UniformSim simulations. From these observations, we will determine the damage function \(\widehat{\mathbf{D}}(\overline{\kappa})\) as well as the zone boundary value \(\chi\) (see Fig. 2).
**Functional Form of \(\widehat{d}_{x}(\kappa_{x})\) and \(\widehat{d}_{y}(\kappa_{y})\).** Since we have assumed an orthotropic damage variable (see Sec. 2.2), we have to assume the same for the damage function. Thus, the tentative form of the damage function is given by
\[\widetilde{\mathbf{D}}(\overline{\kappa}):=\begin{pmatrix}d_{xx}(\overline{ \kappa})&0\\ 0&d_{yy}(\overline{\kappa})\end{pmatrix}.\]
To account for deviations from this assumption, we will connect the two diagonal elements of the damage function with the eigenvalues of the measured damage variable. Thus, we have only two functions that we need to determine.
Fig. 7 shows the evolution of the eigenvalues \(d^{(x)}\) and \(d^{(y)}\) of the extracted damage variable for the pull directions \(\varphi\in\left\{\,0^{{}^{\circ}},\,60^{{}^{\circ}}\,\right\}\) at various distortion levels. For \(\varphi=0^{{}^{\circ}}\), the eigenvalue \(d^{(x)}\) is much larger than \(d^{(y)}\), while for \(\varphi=60^{{}^{\circ}}\) the opposite is observed. Later, we will use this to determine the zone boundary value \(\chi\). Most importantly, the figures show that both eigenvalues follow a power law, irrespective of the pull direction and distortion. Thus, we approximate the diagonal elements/eigenvalues of \(\widehat{\mathbf{D}}(\overline{\kappa})\) as:
\[d_{xx} \approx\widehat{d}_{x}(\kappa_{x};\,a,\,\varphi):=\alpha_{a, \varphi}^{(x)}\cdot\kappa_{x}^{\,\beta_{a,\varphi}^{(x)}}, \tag{11a}\] \[d_{yy} \approx\widehat{d}_{y}(\kappa_{y};\,a,\,\varphi):=\alpha_{a, \varphi}^{(y)}\cdot\kappa_{y}^{\,\beta_{a,\varphi}^{(y)}}. \tag{11b}\]
The parameters of these approximations depend on the distortion level \(a\) and the pull direction \(\varphi\). Later, we will eliminate their dependency on \(\varphi\) and obtain the final parameters, which only depend on \(a\), which is constant.
Figure 6: Representative simulation example. (a) Schematics of the model configuration, scale of the lattice is exaggerated. (b-d) Evolution of continuum and microstructure properties of the lattice. Dotted lines denote strains beyond the limit of \(0.005\) used in UniformSim. (b) Normalised measured stresses. Dashed lines represent the behaviour in case of suppressed damage. (c) Eigenvalues of the extracted damage variable \(\mathbf{D}\), see Eq. (1). (d) Ratio of failed beams \(\widetilde{r}\) in the specimen. (e-h) Snapshots of a small section of the microstructure. Colours indicate the remaining load carrying capacity of the beams \(\widehat{\Psi}_{i}:=1-\Psi_{i}\), where \(\Psi_{i}\) is defined by Eq. (4). Associated states are indicated in (d) by markers. Bending of beams is not shown.
Further, this choice guarantees that the damage is strictly increasing.
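The power-law parameters in Eq. (11) can be estimated by a linear fit in log-log space; a minimal sketch (the restriction to strains above 0.002, mentioned below, is applied here as well):

```python
import numpy as np

def fit_power_law(kappa, d, kappa_min=2e-3):
    """Fit d = alpha * kappa**beta by linear regression in log-log
    space, using only data points with kappa above kappa_min."""
    kappa, d = np.asarray(kappa, float), np.asarray(d, float)
    mask = kappa > kappa_min
    slope, intercept = np.polyfit(np.log(kappa[mask]), np.log(d[mask]), 1)
    return np.exp(intercept), slope        # alpha, beta

# Synthetic check: recover alpha = 2e6, beta = 3 from noise-free data.
kappa = np.linspace(1e-3, 5e-3, 50)
alpha, beta = fit_power_law(kappa, 2e6 * kappa**3)
print(round(alpha), round(beta, 2))        # ~2000000, 3.0
```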
Because of our previous assumption about the independence of the directions, the approximations of the eigenvalues only depend on a single component of the continuum state \(\overline{\kappa}\) (Sec. 2.2). While this could be justified by the large differences that we see in Fig. 7, we also clearly see that even for \(\varphi=0^{\lx@math@degree}\) there is a certain coupling between \(d^{(x)}\) and \(d^{(y)}\). To handle this, we use a simple coupling scheme, which leads to the final damage function
\[\widehat{\mathbf{D}}(\overline{\kappa}):=\begin{pmatrix}\max\left\{\widehat{d}_{x}(\kappa_{x}),\,\widehat{d}_{y}(\kappa_{y})/\eta\right\}&0\\ 0&\max\left\{\widehat{d}_{y}(\kappa_{y}),\,\widehat{d}_{x}(\kappa_{x})/\eta\right\}\end{pmatrix}, \tag{12}\]
where \(\widehat{d}_{x}(\kappa_{x})\) and \(\widehat{d}_{y}(\kappa_{y})\) are the approximations of the eigenvalues defined by Eq. (11), but without the dependence on \(\varphi\). The coupling ensures that the eigenvalues of the damage function \(\widehat{\mathbf{D}}(\overline{\kappa})\) will differ by at most a factor of \(\eta\), which is exactly what we see in the case of uni-axial loading (see Fig. 7). Here, we will assume that the empirical parameter \(\eta\) equals \(10\) in all cases. We will later give a justification of the form and value of the proposed coupling. It is important to notice that this coupling is designed for the uni-axial case; a more elaborate coupling might be needed, depending on the details of other material motives.
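Eq. (12) can be written directly in numpy; a minimal sketch in which the power-law parameters of Eq. (11) are illustrative placeholders rather than the fitted values from Sec. 3.2:

```python
import numpy as np

def damage_function(kappa, alpha_x, beta_x, alpha_y, beta_y, eta=10.0):
    """Coupled orthotropic damage function of Eq. (12), built from the
    fitted power laws of Eq. (11) for a given distortion level a."""
    kx, ky = kappa
    dx = alpha_x * kx ** beta_x              # \hat d_x(kappa_x)
    dy = alpha_y * ky ** beta_y              # \hat d_y(kappa_y)
    # Coupling: eigenvalues differ by at most a factor of eta.
    return np.diag([max(dx, dy / eta), max(dy, dx / eta)])

# Illustrative parameters (not the fitted values from the paper):
print(damage_function((3e-3, 1e-3), 2e6, 3.0, 2e6, 3.0))
```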
**Parameters of \(\widehat{d}_{x}(\kappa_{x})\) and \(\widehat{d}_{y}(\kappa_{y})\).** Since the data, especially for the non-dominant eigenvalue, shows strong variation for small strains, only data points corresponding to strains above \(0.002\) were used for the parameter estimation.
In Figs. 8a,b, we see that for small values of \(\varphi\), the \(\alpha_{a,\varphi}^{(x)}\)-parameters are very close to each other, while for larger values of \(\varphi\) one observes a much larger scattering.
Figure 7: Eigenvalues of the extracted damage variable \(\mathbf{D}\) for pull directions \(\varphi=0^{\circ}\) (a) and \(60^{\circ}\) (b). Solid lines correspond to \(d^{(x)}\), while dashed lines correspond to \(d^{(y)}\). Colours indicate different distortion levels \(a\) of the underlying lattice.
Interestingly, the \(\alpha^{(y)}_{a,\varphi}\)-parameters behave inversely. Furthermore, in Fig. 8b we can clearly observe the dependence of \(\alpha^{(y)}_{a,\varphi}\) on \(\varphi\): \(\alpha^{(y)}_{a,\varphi}\) is small if \(\varphi\) is small too, but above a certain value of \(\varphi\), the parameters become much larger and their scattering increases. The opposite holds for the \(\alpha^{(x)}_{a,\varphi}\)-parameters, although in a less pronounced fashion.
The estimates for the \(\beta\)-parameters (see Figs. 8c,d) show a similar behaviour with respect to \(\varphi\). However, while the \(\alpha\)-parameters' values changed significantly, from a particular value of \(\varphi\) on we only observe an increase in the variability of \(\beta\).
In summary, from Fig. 8 we can conclude that the \(\beta\)- and especially the \(\alpha^{(y)}\)-parameters have different regimes depending on \(\varphi\). Further, inside such a regime, their particular value does not depend much on \(\varphi\).
We also saw that the values of the \(\beta^{(x)}\)-parameters for small values of \(\varphi\) and of the \(\beta^{(y)}\)-parameters for large values of \(\varphi\) are both close to three. This means that the growth behaviour of \(\widehat{d}_{x}(\kappa_{x})\) and \(\widehat{d}_{y}(\kappa_{y})\) is very similar, which justifies the form of the coupling used in the damage function in Eq. (12).
Zone Boundary \(\chi\).In Figs. 7 and 8, we have observed that depending on the pull direction either \(d^{(x)}\) or \(d^{(y)}\) is dominant. We now exploit this fact to define \(\chi\). To this end, we define the dominance function \(\zeta\) as:
\[\zeta(a,\,\varphi):=\lg\left(\frac{d^{(x)}_{a,\varphi;\,\overline{\kappa}=0.005}}{d^{(y)}_{a,\varphi;\,\overline{\kappa}=0.005}}\right), \tag{13}\]
with \(d^{(\alpha)}_{a,\varphi;\,\overline{\kappa}=0.005}\) as the damage eigenvalue associated to direction \(\alpha\), once the uni-axial strain has reached \(0.005\). The most important aspects of this function are its sign and root, and to a lesser extent its value. \(\zeta>0\) means that \(d^{(x)}\) is dominant, while \(\zeta<0\) indicates that \(d^{(y)}\) is dominant. Thus, \(\chi\), which might depend on the distortion \(a\), is defined as \(\zeta(a,\,\chi)\overset{!}{=}0\).
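A minimal sketch of how \(\chi\) could be located numerically, assuming the damage eigenvalues at \(\overline{\kappa}=0.005\) have already been tabulated per sampled pull direction (the linear interpolation between samples is an assumption of this sketch):

```python
import numpy as np

def zeta(d_x_at: float, d_y_at: float) -> float:
    """Dominance function of Eq. (13); > 0 means d^(x) dominates."""
    return np.log10(d_x_at / d_y_at)

def zone_boundary(phis, d_x_vals, d_y_vals) -> float:
    """Locate chi from the sign change of zeta over sampled phi."""
    z = [zeta(dx, dy) for dx, dy in zip(d_x_vals, d_y_vals)]
    for i in range(len(z) - 1):
        if z[i] > 0 >= z[i + 1]:  # sign change brackets the root
            # linear interpolation between the two sampled directions
            return phis[i] + (phis[i + 1] - phis[i]) * z[i] / (z[i] - z[i + 1])
    raise ValueError("no sign change found")
```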
Examining Fig. 9, we see that, irrespective of the distortion, \(\chi\) must lie between \(30^{\circ}\) and \(45^{\circ}\). After some experimentation, we decided to use \(40^{\circ}\) as zone boundary, irrespective of the distortion level. A closer analysis might yield different estimates.
\(\zeta\) can be seen as a measure of the coupling between \(d^{(x)}\) and \(d^{(y)}\). Thus, we can use it to determine the value of the empirical coupling parameter \(\eta\), see Eq. (12). Our value \(\eta=10\) was selected because it is roughly the mean value for \(\varphi=0^{\circ}\).
Final Parameters of \(\widehat{d}_{x}(\kappa_{x})\) and \(\widehat{d}_{y}(\kappa_{y})\). Eliminating the dependency of the \(\alpha\)- and \(\beta\)-parameters on the pull direction \(\varphi\) results in parameters that are valid inside the entire \(x\)- or \(y\)-zone. For this, we combine the different estimates as:
\[\lg\alpha^{(x)}_{a}:=\frac{1}{|\mathcal{X}|}\sum_{\varphi\in\mathcal{X}}\lg\alpha^{(x)}_{a,\varphi},\qquad\lg\alpha^{(y)}_{a}:=\frac{1}{|\mathcal{Y}|}\sum_{\varphi\in\mathcal{Y}}\lg\alpha^{(y)}_{a,\varphi}, \tag{14a}\]
\[\beta^{(x)}_{a}:=\frac{1}{|\mathcal{X}|}\sum_{\varphi\in\mathcal{X}}\beta^{(x)}_{a,\varphi},\qquad\beta^{(y)}_{a}:=\frac{1}{|\mathcal{Y}|}\sum_{\varphi\in\mathcal{Y}}\beta^{(y)}_{a,\varphi}. \tag{14b}\]
where \(\mathcal{X}\) contains all the pull directions associated to the \(x\)-zone and \(\mathcal{Y}\) the ones associated to the \(y\)-zone. Parameters associated to the transversal directions are simply ignored, _e.g._\(\lg\alpha^{(y)}_{a,\varphi=0^{\circ}}\). Further, the functional form of \(\widehat{d}_{x}(\kappa_{x})\) and \(\widehat{d}_{y}(\kappa_{y})\) is still given by Eq. (11), just without the dependency on \(\varphi\). Note that Eq. (14) weights the different pull directions equally.
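A minimal sketch of the averaging in Eq. (14), assuming a hypothetical `fits` dictionary that maps each sampled pull direction to its fitted \((\alpha,\beta)\) pair for one eigenvalue; the zone assignment uses the boundary \(\chi=40^{\circ}\) determined above:

```python
import numpy as np

X_ZONE = [0, 10, 20, 30]       # directions assigned to the x-zone
Y_ZONE = [50, 60, 70, 80, 90]  # directions assigned to the y-zone

def zone_parameters(fits: dict, zone: list) -> tuple:
    """Collapse per-direction fits onto zone-wide parameters, Eq. (14)."""
    alphas = np.array([fits[phi][0] for phi in zone])
    betas = np.array([fits[phi][1] for phi in zone])
    # alpha is averaged in log-space, beta arithmetically
    alpha = 10.0 ** np.mean(np.log10(alphas))
    beta = float(np.mean(betas))
    return alpha, beta
```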
### Tests of the CDM

In Fig. 10, we compare the damage evolution predicted by the CDM (dash-dotted lines) with the fully discrete reference (solid lines) for the three loading paths of Sec. 2.6.3. For YThenXSim, the roles of the two directions are exchanged relative to XThenYSim, so the eigenvalues are flipped, which is the expected behaviour. During the first half of the loading (_i.e._\(\tau<0.5\)), the prediction of the dominant eigenvalue matches well with the reference value for both loading paths. At the same time, the non-dominant eigenvalue, _i.e._ the one belonging to the transverse direction, is captured with less but still acceptable accuracy. The mismatch is entirely due to the rather crude choice of the \(\eta\) coupling parameter (see Eq. (12)). However, it indicates that the proposed coupling is indeed working.
Nevertheless, for the second half of the loading (_i.e._\(\tau>0.5\)) the CDM is unable to capture the evolution to a satisfactory degree. In case of XThenYSim (orange lines), we see that the CDM approximation of the \(x\)-eigenvalue \(\widehat{d}_{x}\) remains constant, since \(\kappa_{x}\) is not affected by a loading along the \(y\)-axis. However, we see that in the reference system \(d^{(x)}\) continuously increases (see Fig. 11 for more). The \(y\)-eigenvalue \(\widehat{d}_{y}\) predicted by the CDM remains initially constant due to the coupling; only once \(\widehat{d}_{y}(\kappa_{y})\) has become larger than \(\widehat{d}_{x}(\kappa_{x})/\eta\) does \(\widehat{d}_{y}\) start to increase. However, as can be seen from Fig. 10b, the reference \(d^{(y)}\) starts to increase almost immediately.
A different case is the BothXYSim loading path. From Fig. 10, it seems that for \(\tau<0.5\) its damage grows slower than the dominant damage observed for the other two paths. This is because BothXYSim only has half the number of loading steps the other two have. If this is corrected for, it would actually grow faster. This indicates that there is some form of coupling between the two directions that is not considered correctly.
In Fig. 11, we can see how the final damage, _i.e._ the values of \(d^{(x)}\) and \(d^{(y)}\) at \(\varepsilon_{xx}=\varepsilon_{yy}=\varepsilon_{fin}\), depends on the control parameter \(\varepsilon_{fin}\), using either the reference (solid lines), the CDM (dash-dotted lines) or the reconstruction (dashed lines). The colours distinguish the three different load paths that were tested (see Sec. 2.6.3). The collapse of the lines indicates that the damage is indeed path-independent,
Figure 11: Final value of the \(d^{(x)}\) and \(d^{(y)}\) damage eigenvalues, computed using the reference (solid), CDM (dash-dotted) and reconstruction (dashed) method, plotted against \(\varepsilon_{fin}\). The colours indicate the three different loading cases from Sec. 2.6.3. All lattices have a distortion of \(a=0.3\).
Figure 10: Damage eigenvalues for the three different loading paths, described in Sec. 2.6.3, with final strain \(\varepsilon_{fin}=0.002\), plotted against \(\tau:=\nicefrac{\varepsilon}{2\varepsilon_{fin}}\). Using a fully discrete simulation (solid) as reference and the CDM damage law \(\widehat{\mathbf{D}}(\overline{\kappa})\) (dash-dotted). The colours indicate the three different loading paths. The distortion of the lattices was \(a=0.3\).
regardless of the final strain \(\varepsilon_{fin}\). However, the final value depends on the particular method that was used. In Fig. 10, we observed a gap between the final damage attained by the reference and the one predicted by the CDM. We can now see that this gap is systematic and actually increases with larger \(\varepsilon_{fin}\). This again indicates that there is some form of coupling between the directions that is not taken into account yet.
### Estimation of the Transfer Function \(\widetilde{r}(\overline{\kappa})\)
Our procedure to reconstruct a discrete lattice representation based on a damaged continuum state (presented in Sec. 2.5) requires two unknown quantities: First, the transfer function \(\widetilde{r}(\overline{\kappa})\), which maps the continuum state \(\overline{\kappa}\) to the corresponding discrete surrogate state \(\widetilde{r}\). Second, the directional weight parameter \(k\), which balances the orientation and the strength of a beam during the reconstruction process (see Eq. (9)). Analogously to the determination of the damage function, the data from the UniformSim is used.
Functional Form of \(\widehat{r}_{x}(\kappa_{x})\) and \(\widehat{r}_{y}(\kappa_{y})\). As mentioned before, it is impossible to measure the components of \(\widetilde{r}\) directly. However, as outlined in Sec. 2.6.2, \(\widetilde{r}\) is connected to the ratio of failed beams as \(\|\,\widetilde{r}\,\|_{1}=N_{f}/N_{T}\overset{!}{=}\widetilde{r}\). Thus, we can estimate \(\widetilde{r}\) indirectly.
Fig. 12 shows \(\widetilde{r}\) for the pull direction \(\varphi=0^{\circ}\) at various distortion levels. As we can see, the distortion level has only a minor influence. Different pull directions do not lead to a qualitative change (data not shown). For that reason, we approximate the mean rfb as
\[\widetilde{r}\approx\|\,\widehat{r}(\kappa)\,\|_{1}:=\widehat{r}(\kappa;\,a,\,\varphi):=\alpha_{a,\varphi}^{(r)}\cdot\kappa^{\,\beta_{a,\varphi}^{(r)}}, \tag{15}\]
with the two fit parameters \(\alpha_{a,\varphi}^{(r)}\) and \(\beta_{a,\varphi}^{(r)}\). Both depend on the distortion \(a\) and the pull direction \(\varphi\). Due to our previous assumptions, we can identify its argument \(\kappa\) directly with \(\widetilde{\kappa}\). To eliminate the dependency on \(\varphi\), we use the same method as for the damage function (see Sec. 3.2). However, parameters associated to pull directions in \(\mathcal{X}\) are used to determine \(\widehat{r}_{x}(\kappa_{x})\), while the ones belonging to \(\mathcal{Y}\) determine \(\widehat{r}_{y}(\kappa_{y})\). This transforms the approximation of the scalar quantity \(\|\,\widetilde{r}\,\|_{1}\) into one for \(\widetilde{r}(\overline{\kappa})\).
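Since Eq. (15) is a power law, the fit is linear in log-log space. The following sketch illustrates this, assuming arrays of measured strains and rfb values; the threshold of \(0.002\) matches the one used for the damage fits:

```python
import numpy as np

def fit_power_law(kappa: np.ndarray, rfb: np.ndarray):
    """Fit r = alpha * kappa**beta via least squares in log-log space.

    lg r = lg alpha + beta * lg kappa, so a degree-1 polynomial fit
    yields both parameters of Eq. (15).
    """
    mask = kappa > 0.002          # discard the noisy small-strain data
    x, y = np.log10(kappa[mask]), np.log10(rfb[mask])
    beta, lg_alpha = np.polyfit(x, y, deg=1)
    return 10.0**lg_alpha, beta   # (alpha^(r), beta^(r))
```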
For the discussion of the estimated \(\alpha\)- and \(\beta\)-parameters, see Appendix A.
Directional Weight Parameter \(k\). The empirical tuning parameter \(k\) influences the selection of beams during the reconstruction process. It balances a beam's strength, _i.e._ its elongation threshold \(\varepsilon_{th}\), against how well it aligns with the damage basis \(t_{\alpha}\) (see Sec. 2.5). We determine \(k\) such that the reconstructed damage variable \(\boldsymbol{\mathcal{D}}\) resembles the reference damage \(\mathbf{D}\) most closely. To this end, we define
\[\Upsilon_{k}:=\|D_{11}-\mathcal{D}_{11}\|_{\ell_{2}}+\|D_{22}-\mathcal{D}_{22}\|_{\ell_{2}}+\|D_{12}-\mathcal{D}_{12}\|_{\ell_{2}} \tag{16}\]
as a measure of the separation between the two damages. To minimise \(\Upsilon_{k}\), we select a heuristic approach, in which the reconstruction process (see Sec. 2.5) is run for different values of \(k\). The \(k\) that minimises \(\Upsilon_{k}\) will then be used for the remaining part of this paper. Note that for this particular reconstruction process, the \(\alpha^{(r)}\)- and \(\beta^{(r)}\)-parameters used still depended on \(\varphi\). Further, zoning was ignored and the pull direction \(\varphi\) was used as damage basis.
The underlying lattice was not distorted. From Fig. 13 we see that \(k=6\) minimises \(\Upsilon_{k}\) independent of the pull direction. We also see that the pull directions \(\{\,30^{\circ},\,90^{\circ}\,\}\) seem to be almost unaffected by \(k\); however, their values match \(\Upsilon_{k=6}\). This is an artefact caused by the regular structure of the underlying lattice and the scalar product used in the definition of the selection probability (see Eq. (9)). Nevertheless, this artefact indicates that \(k=6\) is indeed a good choice.
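A minimal sketch of this heuristic search, where `reconstruct` and `extract_damage` are caller-supplied stand-ins for the reconstruction process of Sec. 2.5 and the damage measurement; they are assumptions of this sketch, not actual library calls:

```python
import numpy as np

def upsilon(D_ref: np.ndarray, D_rec: np.ndarray) -> float:
    """l2-separation of the independent damage components, Eq. (16)."""
    pairs = [(0, 0), (1, 1), (0, 1)]
    return sum(np.linalg.norm(D_ref[..., i, j] - D_rec[..., i, j])
               for i, j in pairs)

def best_k(D_ref, kappa, reconstruct, extract_damage,
           candidates=(0, 2, 4, 6, 8, 10)):
    """Grid search for the k that minimises Upsilon_k."""
    scores = {k: upsilon(D_ref, extract_damage(reconstruct(kappa, k)))
              for k in candidates}
    return min(scores, key=scores.get)
```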
Figure 13: \(\Upsilon_{k}\), Eq. (16), for some values of \(k\) and various pull directions. The reconstruction process was done on regular grids, without zoning. Furthermore, the pull direction and its orthogonal were used as damage basis.
### Tests of the Reconstruction Process
Now we evaluate the performance of the proposed reconstruction scheme to create a mechanically equivalent lattice, based solely on the continuum state \(\overline{\kappa}\) (see Sec. 2.5). For verification, we use the MultiLoadSim simulations (see Sec. 2.6.3). In addition, we use the ReconstrSim simulations to simulate a refinement step (see Sec. 2.6.4).
The MultiLoadSim Results. In Fig. 14, we see the results from the MultiLoadSim setup with \(\varepsilon_{fin}=0.002\) (see Sec. 2.6.3). It imposes the bi-axial strain state \(\varepsilon_{xx}=\varepsilon_{yy}=\varepsilon_{fin}\), but with loading applied via three different paths. We used this setup before to assess the CDM (see Sec. 3.3). An important note concerning the reconstructed states is that in each loading step the lattice, and hence the damage, is constructed anew. Thus, although it looks like a damage evolution, the damage at any loading step has no connection to the previous one. However, each time the same undamaged but distorted lattice was used.
If we now compare the damage from the references (solid lines) with the one from the reconstructed lattices (dashed lines) in Fig. 14, we see that the overall damage values are very similar to the ones obtained by the CDM. As before, we observe that for \(\tau<0.5\) the dominant eigenvalues, _i.e._\(d^{(x)}\) for XThenYSim and \(d^{(y)}\) for YThenXSim, are captured well. Then, for \(\tau>0.5\), these reconstructed eigenvalues stop growing and thus deviate from the references (solid lines), an effect we observed for the CDM, too. But if we look at the other eigenvalues, _i.e._\(d^{(y)}\) for XThenYSim (orange) and \(d^{(x)}\) for YThenXSim (green), we see that they start to increase almost immediately, like the reference. This was not the case for the CDM (dash-dotted lines shown in Fig. 10). The reconstruction process is affected by the ignored coupling between the directions as well. However, the damage eigenvalues generated by it follow the reference much better than the ones computed by the CDM.
The ReconstrSim Results. Now we use the ReconstrSim simulation setup, described in Sec. 2.6.4. The lattices used here were reconstructed for the continuum state \(\overline{\kappa}=\left(\widehat{\varepsilon},\,0\right)^{\mathrm{T}}\). The system is loaded under uni-axial strain along the \(x\)-axis, starting at \(\widehat{\varepsilon}\). This setup simulates how a discrete region that was loaded up to \(\widehat{\varepsilon}\) as continuum and then refined behaves upon further loading.
In Fig. 15, we see how the damage eigenvalues \(d^{(x)}\) and \(d^{(y)}\) (dashed and dotted lines, respectively) and the rfb \(\widetilde{r}\) (solid lines) evolve for different reconstruction strains \(\widehat{\varepsilon}\), indicated by different colours. The grey lines correspond to the reference without any reconstruction.
Figure 14: Damage eigenvalues for the three different loading paths, described in Sec. 2.6.3, with final strain \(\varepsilon_{fin}=0.002\), plotted against \(\tau:=\nicefrac{\varepsilon}{2\varepsilon_{fin}}\). Using fully discrete simulations (solid) as reference and the reconstructed damage (dashed). The colours indicate the three different loading paths that were taken. The distortion of the lattices was \(a=0.3\). See Fig. 10 for the damage evolution predicted by the CDM.
Figure 15: Behaviour of the \(d^{(x)}\) and \(d^{(y)}\) damage eigenvalues and \(\widetilde{r}\). Colours indicate different reconstruction parameters \(\widehat{\varepsilon}\). Grey is the reference, _i.e._ no reconstruction. Circles indicate damage/rfb values of the lattices directly after reconstruction. Squares indicate damage/rfb values of the lattices for an applied strain of \(\widehat{\varepsilon}\).
The circles in Fig. 15 indicate the values that \(d^{(x)}\), \(d^{(y)}\) and \(\widetilde{r}\) take in the reconstructed lattices before any loading was applied to them. The circles associated to \(\|\,\widetilde{r}\,\|_{1}\) show that the reconstructed lattices have a matching rfb value \(\widetilde{r}\), which is a consequence of their construction. It is, however, much more important that the reconstructed \(d^{(x)}\) eigenvalue (circles) matches the one predicted by the reference. Thus the process is able to reconstruct the dominant eigenvalue.
We also observe that \(d^{(y)}\) is not as well reconstructed. This is a consequence of the assumption that the components of \(\widetilde{r}\) are independent. Since ReconstrSim only imposes strains along the \(x\)-axis, we have \(\kappa_{y}\equiv 0\Rightarrow r_{y}\equiv 0\). Thus, the reconstructed \(y\)-eigenvalues we are seeing are caused by a directional sampling effect created during the reconstruction of the damage. However, since \(d^{(y)}\) is the non-dominant eigenvalue, we expect and accept that it is less well reconstructed.
The squares in Fig. 15 indicate the state of the lattices after a uni-axial strain of \(\widehat{\varepsilon}\) along the \(x\)-axis was applied to them. The difference between a square and its associated circle proves that this strain causes the failure of additional beams. If the reconstruction process worked perfectly, any strain below or equal to \(\widehat{\varepsilon}\) should not lead to the failure of any beam. Therefore, while we might have removed the right number of beams, and these were more or less correctly oriented, the selection of some of them was not fully optimal.
Furthermore, we see that for subsequent loading steps, the damage and rfb continue to increase (Fig. 15). While the observed values for the reconstructed systems remain above the reference, the reconstructed lattices slowly converge towards it. This is because the damage created by the subsequent loading steps starts to dominate the artificial damage that we created through the reconstruction process.
## 4 Summary and Conclusion
In this study, we presented a generic approach for the creation of a discrete twin of a continuum representation containing an initial damage. The discrete twin's damage is created in such a way that it is mechanically consistent with the original continuum damage. This is a step towards adaptive multi-scale simulations, which take the state of the coarse description of a region into account upon its refinement.
While the method is general and has no restrictions concerning the numerical representations used, we presented it in the form of a concrete example. As continuum representation, we used FEM with CDM as damage measure. For the discrete representation, we used a lattice based on a triangular grid consisting of brittle beam-truss elements.
One part of our method is the damage measure used inside the continuum representation. This measure is used during the initial continuum phase to track the evolving continuum damage. Unlike a classical CDM, which is calibrated to match the degradation of a particular material, we calibrated the CDM against the degeneration of the discrete numerical representation. Thus, it measures the degeneration that would occur on a hypothetical fine-scale representation.
We saw that the determined CDM is indeed able to capture the damage caused by uni-axial strains to a satisfying degree. However, for bi-axial loading, the CDM is unable to achieve the same. This is explained by the assumption that the directions are independent. A directional coupling must be used to further improve the CDM's accuracy.
The second part of our method is the ability to construct a discrete damage that is mechanically consistent with a given continuum state \(\overline{\kappa}\). Since this problem obviously has no unique solution, we devised a stochastic scheme to generate representations containing such a particular discrete damage.
We have seen that the reconstruction process is indeed able to create discrete lattices whose initial degeneration is consistent with the given continuum state \(\overline{\kappa}\). A drawback is that imposing a strain that corresponds to \(\overline{\kappa}\) itself leads to the failure of some additional beams. This indicates that the selection process needs to be further refined. Furthermore, as for the CDM, we observed problems for bi-axial strains, which are again caused by the independence assumed between the directions.
Nevertheless, our data indicate that our approach works well for the case of uni-axial loading and is in principle able to work for bi-axial loading. The next step is to integrate our method into an adaptive multi-scale simulation scheme.
## 5 CRediT
Philip Muller: Conceptualisation, Methodology, Software, Validation, Writing - Original Draft, Visualisation; Falk Wittel: Conceptualisation, Writing - Review & Editing, Supervision; David Kammer: Conceptualisation, Writing - Review & Editing, Supervision.
## 6 Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
## 7 Data Availability
The simulation data generated in this study have been deposited in the ETH Research collection database available at TBA.
## Appendix A Parameters of \(\widehat{r}_{x}(\kappa_{x})\) and \(\widehat{r}_{y}(\kappa_{y})\)
The fitting parameters (see Eq. (15)) of the ratio of failed beams (rfb) \(\widetilde{r}\), denoted as \(\alpha_{a,\varphi}^{(r)}\) and \(\beta_{a,\varphi}^{(r)}\) were estimated in the same way as the ones for the two damage functions \(\widehat{d}_{x}(\kappa_{x})\) and \(\widehat{d}_{y}(\kappa_{y})\).
Fig. 16a shows the values of the \(\alpha\)-parameters and Fig. 16b those of the \(\beta\)-parameters. Compared with the parameters we obtained for the damage law (see Fig. 8), we see much less variability here. This is because this quantity is far easier to measure than the damage.
## Appendix B Influence of the Characteristic Length \(\mathbf{\ell}\)
The characteristic size is defined as \(\ell:=L_{x}/(N_{x}-1)\), where \(L_{x}\) is the specimen's \(x\)-extension and \(N_{x}\) the number of nodes along the \(x\)-direction. We will now study its influence on the observed damage. For this, a simulation similar to UniformSim with \(\varphi=0^{\circ}\) was performed for different values of \(N_{x}\). To account for the randomness, the results are averaged over 30 runs. In Fig. 17 we observe that, with the exception of the very coarse \(N_{x}=50\) grid, all lines collapse. This shows that the damage evolution is independent of the discretisation.
Figure 16: Dependence of the two fitting parameters of Eq. (15), \(\lg\alpha_{a,\varphi}^{(r)}\) (a) and \(\beta_{a,\varphi}^{(r)}\) (b), on the pull direction \(\varphi\). Colours indicate different distortion levels \(a\). The error bars are given by the 95% confidence interval.
Figure 17: Evolution of the damage eigenvalues \(d^{(x)}\) (solid line) and \(d^{(y)}\) (dashed line) for distortion level \(a=0.3\). The specimen was strained along the \(x\)-direction up to \(\varepsilon_{end}=0.005\). Different colours indicate different \(N_{x}\), the number of nodes along the \(x\)-direction. \(N_{x}=300\) is the default case, see Table 1. For all cases the standard deviation (omitted) is roughly the same, see Fig. 7.
## Appendix C Influence of the Number of Loading Steps
In the simulation described in Sec. 2.6, the loading is increased in steps of \(10^{-4}\) until the final strain of \(\varepsilon_{end}=0.005\) is reached, which results in 50 increments. The effect of the loading step is studied with a series of simulations similar to UniformSim with \(\varphi=0^{\circ}\). The specimen is loaded until a strain of \(\varepsilon_{end}=0.005\) is reached, but with a different number of load increments. To account for the variation, each simulation was performed 30 times.
In Fig. 18 we observe that the damage evolution is independent of the loading rate.
Figure 18: Evolution of the damage eigenvalues \(d^{(x)}\) (solid line) and \(d^{(y)}\) (dashed line) for distortion level \(a=0.3\). The specimen was strained along the \(x\)-direction up to \(\varepsilon_{end}=0.005\). Different colours indicate number of load steps that were performed to reach it. For all cases the standard deviation (omitted) is roughly the same, see Fig. 7. |
2301.07302 | PIRLNav: Pretraining with Imitation and RL Finetuning for ObjectNav | Ram Ramrakhya, Dhruv Batra, Erik Wijmans, Abhishek Das | 2023-01-18T04:40:50Z | [http://arxiv.org/abs/2301.07302v2](http://arxiv.org/abs/2301.07302v2)
our ObjectNav policies, and present guidelines for further improving them. | Ram Ramrakhya, Dhruv Batra, Erik Wijmans, Abhishek Das | 2023-01-18T04:40:50Z | http://arxiv.org/abs/2301.07302v2 | # PIRLNav: Pretraining with Imitation and RL Finetuning for ObjectNav
###### Abstract
We study ObjectGoal Navigation - where a virtual robot situated in a new environment is asked to navigate to an object. Prior work [1] has shown that imitation learning (IL) using behavior cloning (BC) on a dataset of human demonstrations achieves promising results. However, this has limitations - 1) BC policies generalize poorly to new states, since the training mimics actions not their consequences, and 2) collecting demonstrations is expensive. On the other hand, reinforcement learning (RL) is trivially scalable, but requires careful reward engineering to achieve desirable behavior. We present PIRLNav, a two-stage learning scheme for BC pretraining on human demonstrations followed by RL-finetuning. This leads to a policy that achieves a success rate of \(65.0\%\) on ObjectNav (\(+5.0\%\) absolute over previous state-of-the-art).
Using this BC\(\rightarrow\)RL training recipe, we present a rigorous empirical analysis of design choices. First, we investigate whether human demonstrations can be replaced with 'free' (automatically generated) sources of demonstrations, _e.g_. shortest paths (SP) or task-agnostic frontier exploration (FE) trajectories. We find that BC\(\rightarrow\)RL on human demonstrations outperforms BC\(\rightarrow\)RL on SP and FE trajectories, even when controlled for the same BC-pretraining success on train, and even on a subset of _val_ episodes where BC-pretraining success favors the SP or FE policies. Next, we study how RL-finetuning performance scales with the size of the BC pretraining dataset. We find that as we increase the size of the BC-pretraining dataset and get to high BC accuracies, the improvements from RL-finetuning are smaller, and that \(90\%\) of the performance of our best BC\(\rightarrow\)RL policy can be achieved with less than half the number of BC demonstrations. Finally, we analyze failure modes of our ObjectNav policies, and present guidelines for further improving them.
_Project page:_ ram81.github.io/projects/pirlnav.
## 1 Introduction
Since the seminal work of Winograd [2], designing embodied agents that have a rich understanding of the environment they are situated in, can interact with humans (and other agents) via language, and the environment via actions has been a long-term goal in AI [3, 4, 5, 6, 7, 8, 9, 10, 11, 12]. We focus on ObjectGoal Navigation [13, 14], wherein an agent situated in a new environment is asked to navigate to any instance of an object category ('find a plant', 'find a bed', _etc._); see Fig. 2. ObjectNav is simple to explain but difficult for today's techniques to accomplish. First, the agent needs to be able to ground the tokens in the language instruction to physical objects in the environment (_e.g_. what does a 'plant' look like?). Second, the agent needs to have rich semantic priors to guide its navigation to avoid wasteful exploration (_e.g_. the microwave is likely to be found in the kitchen, not the washroom). Finally, it has to keep track of where it has been in its internal memory to avoid redundant search.
Humans are adept at ObjectNav. Prior work [1] collected a large-scale dataset of \(80k\) human demonstrations for ObjectNav, where human subjects on Mechanical Turk teleoperated virtual robots and searched for objects in novel houses. This first provided a human baseline on ObjectNav of \(88.9\%\) success rate on the Matterport3D (MP3D)
Figure 1: ObjectNav success rates of agents trained using behavior cloning (BC) _vs._ BC-pretraining followed by reinforcement learning (RL) (in blue). RL from scratch (_i.e_. BC=0) fails to get off-the-ground. With more BC demonstrations, BC success increases, and it transfers to even higher RL-finetuning success. But the difference between RL-finetuning _vs._ BC-pretraining success (in orange) plateaus and starts to decrease beyond a certain point, indicating diminishing returns with each additional BC demonstration.
dataset [15]1 compared to the \(35.4\%\) success rate of the best-performing method [1]. This dataset was then used to train agents via imitation learning (specifically, behavior cloning). While this approach achieved state-of-the-art results (\(35.4\%\) success rate on the MP3D val dataset), it has two clear limitations. First, behavior cloning (BC) is known to suffer from poor generalization to out-of-distribution states not seen during training, since the training emphasizes imitating actions, not accomplishing their goals. Second and more importantly, it is expensive and thus not scalable. Specifically, Ramrakhya _et al._[1] collected \(80k\) demonstrations on \(56\) scenes in the Matterport3D dataset, which took \(\sim\)\(2894\) hours of human teleoperation and cost \(\$50k\). A few months after [1] was released, a new higher-quality dataset called HM3D-Semantics v0.1 [16] became available with \(120\) annotated 3D scenes, and a few months after that HM3D-Semantics v0.2 added \(96\) additional scenes. Scaling Ramrakhya _et al._'s approach to continuously incorporate new scenes involves replicating that entire effort again and again.
Footnote 1: On val split, for 21 object categories, and a maximum of 500 steps.
On the other hand, training with reinforcement learning (RL) is trivially scalable once annotated 3D scans are available. However, as demonstrated in Maksymets _et al._[17], RL requires careful reward engineering: the reward function typically used for ObjectNav actually _penalizes_ exploration (even though the task requires it), and the existing RL policies overfit to the small number of available environments.
Our primary technical contribution is PIRLNav, an approach for pretraining with BC and finetuning with RL for ObjectNav. BC pretrained policies provide a reasonable starting point for 'bootstrapping' RL and make the optimization easier than learning from scratch. In fact, we show that BC pretraining even unlocks RL with sparse rewards. Sparse rewards are simple (do not involve any reward engineering) and do not suffer from the unintended consequences described above. However, learning from scratch with sparse rewards is typically out of reach since most random action trajectories result in no positive rewards.
While combining IL and RL has been studied in prior work [18, 19, 20, 21, 22], the main technical challenge in the context of modern neural networks is that imitation pretraining results in weights for the policy (or actor), but not a value function (or critic). Thus, naively initializing a new RL policy with these BC-pretrained policy weights often leads to catastrophic failures due to destructive policy updates early on during RL training, especially for actor-critic RL methods [23]. To overcome this challenge, we present a two-stage learning scheme involving a critic-only learning phase first that gradually transitions over to training both the actor and critic. We also identify a set of practical recommendations for this recipe to be applied to ObjectNav. This leads to a PIRLNav policy that advances the state-of-the-art on ObjectNav from \(60.0\%\) success rate (in [24]) to \(65.0\%\) (\(+5.0\%\), \(8.3\%\) relative improvement).
Next, using this BC\(\rightarrow\)RL training recipe, we conduct an empirical analysis of design choices. Specifically, an ingredient we investigate is whether human demonstrations can be replaced with 'free' (automatically generated) sources of demonstrations for ObjectNav, _e.g_. (1) shortest paths (SP) between the agent's start location and the closest object instance, or (2) task-agnostic frontier exploration [25] (FE) of the environment followed by shortest path to goal-object upon observing it. We ask and answer the following:
1. _'Do human demonstrations capture any unique ObjectNav-specific behaviors that shortest paths and frontier exploration trajectories do not?'_ Yes. We find that BC / BC\(\rightarrow\)RL on human demonstrations outperforms BC / BC\(\rightarrow\)RL on shortest paths and frontier exploration trajectories respectively. When we control the number of demonstrations from each source such that BC success on train is the same, RL-finetuning when initialized from
Figure 2: ObjectNav trajectories for policies trained with BC\(\rightarrow\)RL on 1) Human Demonstrations, 2) Shortest Paths, and 3) Frontier Exploration Demonstrations.
BC on human demonstrations still outperforms the other two.
2. _'How does performance after RL scale with BC dataset size?'_ We observe diminishing returns from RL-finetuning as we scale the BC dataset size. This suggests that, by effectively leveraging the trade-off between pretraining dataset size and performance after RL-finetuning, we can achieve close to state-of-the-art results without investing in a large dataset of BC demonstrations.
3. _'Does BC on frontier exploration demonstrations present similar scaling behavior as BC on human demonstrations?'_ No. We find that as we scale frontier exploration demonstrations past \(70k\) trajectories, the performance plateaus.
Finally, we present an analysis of the failure modes of our ObjectNav policies and present a set of guidelines for further improving them. Our policy's primary failure modes are: a) dataset issues, comprising missing goal annotations and navigation meshes blocking the path; b) navigation errors, primarily failures to navigate between floors; and c) recognition failures, where the agent does not identify the goal object during an episode or confuses the specified goal with a semantically-similar object.
## 2 Related Work
**ObjectGoal Navigation**. Prior works on ObjectNav have used end-to-end RL [17, 26, 27], modular learning [24, 28, 29], and imitation learning [1, 30]. Works that use end-to-end RL have proposed improved visual representations [26, 31], auxiliary tasks [27], and data augmentation techniques [17] to improve generalization to unseen environments. Improved visual representations include object relation graphs [31] and semantic segmentations [26]. Ye _et al_. [27] use auxiliary tasks like predicting environment dynamics, action distributions, and map coverage in addition to ObjectNav and achieve promising results. Maksymets _et al_. [17] improve generalization of RL agents by training with artificially inserted objects and proposing a reward to incentivize exploration.
Modular learning methods for ObjectNav have also emerged as a strong competitor [24, 28, 32]. These methods rely on separate modules for semantic mapping that build explicit structured map representations, a high-level semantic exploration module that is learned through RL to solve the 'where to look?' subproblem, and a low-level navigation policy that solves 'how to navigate to \((x,y)\)?'.
The current state-of-the-art methods on ObjectNav [1, 30] make use of BC on a large dataset of \(80k\) human demonstrations with a simple CNN+RNN policy architecture. In this work, we improve on them by developing an effective approach to finetune these imitation-pretrained policies with RL.
**Imitation Learning and RL Finetuning**. Prior works have considered a special case of learning from demonstration data. These approaches initialize policies trained using behavior cloning, and then fine-tune using on-policy reinforcement learning [18, 20, 21, 22, 33, 34]. On classical tasks like cart-pole swing-up [18], balance and hitting a baseball [33], and underactuated swing-up [34], demonstrations have been used to speed up learning by initializing RL with policies pretrained on demonstrations. Similar to these methods, we also use an on-policy RL algorithm for finetuning the policy trained with behavior cloning. Rajeswaran _et al._[20] (DAPG) pretrain a policy using behavior cloning and use an augmented RL finetuning objective to stay close to the demonstrations, which helps reduce sample complexity. Unfortunately, DAPG is not feasible in our setting, as efficiently incorporating demonstration replay alongside online experience collection at our scale is itself a systems research problem. [20] show results of the approach on a dexterous hand manipulation task with a small number of demonstrations that can be loaded in system memory, and therefore did not need to solve this systems challenge. This is not possible in our setting: just the 256\(\times\)256 RGB observations for the \(77k\) demos we collect would occupy over 2 TB of memory, which is out of reach for all but the most exotic of today's systems. There are many methods for incorporating demonstrations/imitation learning with off-policy RL [35, 36, 37, 38, 39]. Unfortunately, these methods were not designed to work with recurrent policies, and adapting off-policy methods to work with recurrent policies is challenging [40]. See Appendix A for more details. The RL-finetuning approach that demonstrates results with an actor-critic and high-dimensional visual observations, and is thus most closely related to our setup, is proposed in VPT [21]. Their approach uses Phasic Policy Gradients (PPG) [41] with a KL-divergence loss between the current policy and the frozen pretrained policy, and decays the KL loss weight \(\rho\) over time to enable exploration during RL-finetuning. Our approach uses Proximal Policy Optimization (PPO) [42] instead of PPG, and therefore does not require a KL constraint, which is compute-expensive, and performs better on ObjectNav.
## 3 ObjectNav and Imitation Learning
### ObjectNav
In ObjectNav, an agent is tasked with searching for an instance of a specified object category (_e.g._, 'bed') in an unseen environment. The agent must perform this task using only egocentric perception. Specifically, it has an RGB camera, a Depth sensor2, and a GPS+Compass sensor that provides location and orientation relative to the start position of the episode. The action space is discrete and consists of move_forward (\(0.25m\)), turn_left (\(30^{\circ}\)), turn_right (\(30^{\circ}\)), look_up (\(30^{\circ}\)), look_down (\(30^{\circ}\)), and stop actions. An episode is considered successful if the
agent stops within \(1m\) Euclidean distance of the goal object within \(500\) steps and is able to view the object by taking turn actions [14].
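For illustration, a minimal sketch of this success criterion; `goal_visible_after_turns` is a hypothetical stand-in for the simulator's oracle visibility check:

```python
import numpy as np

SUCCESS_DISTANCE = 1.0  # metres
MAX_STEPS = 500

def is_successful(agent_pos, goal_pos, steps, goal_visible_after_turns):
    """Check the ObjectNav success criterion described above."""
    close_enough = (np.linalg.norm(np.asarray(agent_pos) -
                                   np.asarray(goal_pos))
                    <= SUCCESS_DISTANCE)
    return close_enough and steps <= MAX_STEPS and goal_visible_after_turns
```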
We use scenes from the HM3D-Semantics v0.1 dataset [16]. The dataset consists of \(120\) scenes and \(6\) unique goal object categories. We evaluate our agent using the train/val/test splits from the 2022 Habitat Challenge3.
Footnote 3: [https://aihabitat.org/challenge/2022/](https://aihabitat.org/challenge/2022/)
### ObjectNav Demonstrations
Ramrakhya _et al_. [1] collected ObjectNav demonstrations for the Matterport3D dataset [15]. We begin our study by replicating this effort and collect demonstrations for the HM3D-Semantics v0.1 dataset [16]. We use Ramrakhya _et al_.'s Habitat-WebGL infrastructure to collect \(77k\) demonstrations, amounting to \(\sim\)\(2378\) human annotation hours.
### Imitation Learning from Demonstrations
We use behavior cloning to pretrain our ObjectNav policy on the human demonstrations we collect. Let \(\pi_{\theta}^{BC}(a_{t}\mid o_{t})\) denote a policy parametrized by \(\theta\) that maps observations \(o_{t}\) to a distribution over actions \(a_{t}\). Let \(\tau\) denote a trajectory consisting of state, observation, action tuples: \(\tau=\big{(}s_{0},o_{0},a_{0},\dots,s_{T},o_{T},a_{T}\big{)}\) and \(\mathcal{T}=\big{\{}\tau^{(i)}\big{\}}_{i=1}^{N}\) denote a dataset of human demonstrations. The optimal parameters are
\[\theta^{*}=\text{arg}\,\text{min}_{\theta}\sum_{i=1}^{N}\sum_{(o_{t},a_{t}) \in\tau^{(i)}}-\log\Big{(}\pi_{\theta}^{BC}(a_{t}\mid o_{t})\Big{)} \tag{1}\]
We use inflection weighting [43] to adjust the loss function to upweight timesteps where actions change (_i.e_. \(a_{t-1}\neq a_{t}\)).
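A minimal PyTorch sketch of this objective; the exact weighting scheme follows [43], and `inflection_coef` is a hypothetical name for the upweighting factor:

```python
import torch
import torch.nn.functional as F

def bc_loss(logits: torch.Tensor, actions: torch.Tensor,
            inflection_coef: float = 2.0) -> torch.Tensor:
    """Behavior-cloning loss of Eq. (1) with inflection weighting.

    logits: (T, num_actions) action logits along a demonstration.
    actions: (T,) ground-truth demonstrated actions.
    """
    nll = F.cross_entropy(logits, actions, reduction="none")
    weights = torch.ones_like(nll)
    # upweight timesteps where the demonstrated action changes
    weights[1:][actions[1:] != actions[:-1]] = inflection_coef
    return (weights * nll).sum() / weights.sum()
```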
Our **ObjectNav policy** architecture is a simple CNN+RNN model from [30]. To encode RGB input \((i_{t}=\text{CNN}(I_{t}))\), we use a ResNet50 [44]. Following [30], the CNN is first pre-trained on the Omnidata starter dataset [45] using the self-supervised pretraining method DINO [46] and then finetuned during ObjectNav training. The GPS+Compass inputs, \(P_{t}=(\Delta x,\Delta y,\Delta z)\), and \(R_{t}=(\Delta\theta)\), are passed through fully-connected layers \(p_{t}=\text{FC}(P_{t}),r_{t}=\text{FC}(R_{t})\) to embed them to 32-d vectors. Finally, we convert the object goal category to one-hot and pass it through a fully-connected layer \(g_{t}=\text{FC}(G_{t})\), resulting in a 32-d vector. All of these input features are concatenated to form an observation embedding, and fed into a 2-layer, 2048-d GRU at every timestep to predict a distribution over actions \(a_{t}\) - formally, given current observations \(o_{t}=[i_{t},p_{t},r_{t},g_{t}]\), \((h_{t},a_{t})=\text{GRU}(o_{t},h_{t-1})\). To reduce overfitting, we apply color-jitter and random shifts [47] to the RGB inputs.
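A minimal PyTorch sketch of this architecture, assuming an externally supplied ResNet50 trunk; the layer sizes follow the text, and everything else is an assumption of this sketch:

```python
import torch
import torch.nn as nn

class ObjectNavPolicy(nn.Module):
    def __init__(self, rgb_encoder: nn.Module, rgb_dim: int,
                 num_goals: int = 6, num_actions: int = 6):
        super().__init__()
        self.rgb_encoder = rgb_encoder           # e.g. a ResNet50 trunk
        self.gps_fc = nn.Linear(3, 32)           # (dx, dy, dz)
        self.compass_fc = nn.Linear(1, 32)       # (dtheta)
        self.goal_fc = nn.Linear(num_goals, 32)  # one-hot goal category
        self.rnn = nn.GRU(rgb_dim + 96, 2048, num_layers=2)
        self.action_head = nn.Linear(2048, num_actions)

    def forward(self, rgb, gps, compass, goal_onehot, hidden):
        # concatenate all input features into one observation embedding
        obs = torch.cat([self.rgb_encoder(rgb), self.gps_fc(gps),
                         self.compass_fc(compass),
                         self.goal_fc(goal_onehot)], dim=-1)
        out, hidden = self.rnn(obs.unsqueeze(0), hidden)  # one timestep
        return self.action_head(out.squeeze(0)), hidden   # action logits
```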
## 4 RL Finetuning
Our motivation for RL-finetuning is two-fold. First, finetuning may allow for higher performance as behavior cloning is known to suffer from a train/test mismatch - when training, the policy sees the result of taking ground-truth actions, while at test-time, it must contend with the consequences of its own actions. Second, collecting more human demonstrations on new scenes or simply to improve performance is time-consuming and expensive. On the other hand, RL-finetuning is trivially scalable (once annotated 3D scans are available) and has the potential to reduce the amount of human demonstrations needed.
### Setup
The RL objective is to find a policy \(\pi_{\theta}(a|s)\) that maximizes the expected sum of discounted future rewards. Let \(\tau\) be a sequence of observation, action, reward tuples (\(o_{t}\), \(a_{t}\), \(r_{t}\)), where \(a_{t}\sim\pi_{\theta}(\cdot\mid o_{t})\) is the action sampled from the agent's policy and \(r_{t}\) is the reward. For a discount factor \(\gamma\), the optimal policy is
\[\pi^{*}=\operatorname*{argmax}_{\pi}\mathbb{E}_{\tau\sim\pi}[R_{T}],\text{ where }R_{T}=\sum_{t=1}^{T}\gamma^{t-1}r_{t}. \tag{2}\]
To solve this maximization problem, actor-critic RL methods learn a state-value function \(V(s)\) (also called a critic) in addition to the policy (also called an actor). The critic \(V(s_{t})\) represents the expected value of the returns \(R_{t}\) when starting from state \(s_{t}\) and acting under the policy \(\pi\), where the returns are defined as \(R_{t}=\sum_{i=t}^{T}\gamma^{i-t}r_{i}\). We use DD-PPO [48], a distributed implementation of PPO [42], an on-policy RL algorithm. Given a \(\theta\)-parameterized policy \(\pi_{\theta}\) and a set of rollouts, PPO updates the policy as follows. Let \(\hat{A}_{t}=R_{t}-V(s_{t})\) be the advantage estimate and \(p_{t}(\theta)=\frac{\pi_{\theta}(a_{t}|o_{t})}{\pi_{\theta_{\text{old}}}(a_{t}|o_{t})}\) be the ratio of the probability of action \(a_{t}\) under the current policy and under the policy used to collect rollouts. The parameters are updated by maximizing:
\[J^{PPO}(\theta)=\mathbb{E}_{t}\bigg{[}\text{min}\big{(}p_{t}(\theta)\hat{A}_{t },\text{clip}(p_{t}(\theta),1-\epsilon,1+\epsilon)\hat{A}_{t}\big{)}\bigg{]} \tag{3}\]
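A minimal PyTorch sketch of this clipped surrogate (negated, since optimizers minimize); the actual DD-PPO update adds value and entropy terms not shown here:

```python
import torch

def ppo_loss(log_probs: torch.Tensor, old_log_probs: torch.Tensor,
             advantages: torch.Tensor, eps: float = 0.2) -> torch.Tensor:
    """Clipped PPO surrogate of Eq. (3), as a loss to minimize.

    log_probs / old_log_probs: per-timestep action log-probabilities
    under the current and rollout policies; advantages: A_t estimates.
    """
    ratio = torch.exp(log_probs - old_log_probs)        # p_t(theta)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()        # maximize J^PPO
```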
We use a sparse success reward. Sparse success is simple (does not require hyperparameter optimization) and has fewer unintended consequences (_e.g_. Maksymets _et al_. [17] showed that typical dense rewards used in ObjectNav actually _penalize_ exploration, even though exploration is necessary for ObjectNav in new environments). Sparse rewards are desirable but typically difficult to use with RL (when initializing training from scratch) because they result in nearly all trajectories achieving \(0\) reward, making it difficult to learn. However, since we pretrain with BC, we do not observe any such pathologies.
### Finetuning Methodology
We use the behavior cloned policy \(\pi_{\theta}^{BC}\) weights to initialize the actor parameters. However, notice that during behavior
cloning we do not learn a critic nor is it easy to do so - a critic learned on human demonstrations (during behavior cloning) would be overly optimistic since all it sees are successes. Thus, we must learn the critic from scratch during RL. Naively finetuning the actor with a randomly-initialized critic leads to a rapid drop in performance4 (see Fig. 8) since the critic provides poor value estimates which influence the actor's gradient updates (see Eq.(3)). We address this issue by using a two-phase training regime:
Footnote 4: After the initial drop, the performance increases but the improvements on success are small.
**Phase 1: Critic Learning**. In the first phase, we roll out trajectories using the frozen policy pre-trained with BC, and use them to learn a critic. To ensure consistency of the rollouts collected for critic learning with RL training, we sample actions (as opposed to using argmax actions) from the pre-trained BC policy: \(a_{t}{\sim}\pi_{\theta}(s_{t})\). We train the critic until its loss plateaus; in our experiments, we found \(8M\) steps to be sufficient. In addition, we initialize the weights of the critic's final linear layer close to zero to stabilize training.
**Phase 2: Interactive Learning**. In the second phase, we unfreeze the actor RNN5 and finetune both actor and critic weights. We find that naively switching from phase 1 to phase 2 leads to only small improvements in policy performance at convergence. We gradually decay the critic learning rate from \(2.5\times 10^{-4}\) to \(1.5\times 10^{-5}\) while warming up the policy learning rate from 0 to \(1.5\times 10^{-5}\) between \(8M\) and \(12M\) steps, and then keep both at \(1.5\times 10^{-5}\) through the course of training. See Fig. 3, and the sketch after the summary below. We find that using this learning rate schedule helps improve policy performance. For parameters that are shared between the actor and critic (_i.e._ the RNN), we use the lower of the two learning rates (_i.e._ always the actor's in our schedule). To summarize our finetuning methodology:
Footnote 5: The CNN and non-visual observation embedding layers remain frozen. We find this to be more stable.
* First, we initialize the weights of the policy network with the IL-pretrained policy and initialize critic weights close to zero. We freeze the actor and shared weights. The only learnable parameters are in the critic.
* Next, we learn the critic weights on rollouts collected from the pretrained, frozen policy.
* After training the critic, we warmup the policy learning rate and decay the critic learning rate.
* Once both critic and policy learning rate reach a fixed learning rate, we train the policy to convergence.
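A minimal sketch of the learning-rate schedule in Fig. 3; the phase boundaries and rates follow the text, while the linear interpolation during the transition is an assumption:

```python
CRITIC_LR, FINAL_LR = 2.5e-4, 1.5e-5
PHASE1_END, WARMUP_END = 8_000_000, 12_000_000

def lr_schedule(step: int):
    """Return (actor_lr, critic_lr) for a given environment step."""
    if step < PHASE1_END:                     # phase 1: critic-only
        return 0.0, CRITIC_LR
    if step < WARMUP_END:                     # transition phase
        t = (step - PHASE1_END) / (WARMUP_END - PHASE1_END)
        actor_lr = t * FINAL_LR               # warm up actor from 0
        critic_lr = CRITIC_LR + t * (FINAL_LR - CRITIC_LR)  # decay critic
        return actor_lr, critic_lr
    return FINAL_LR, FINAL_LR                 # phase 2 steady state
```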
### Results
**Comparing with the RL-finetuning approach in VPT [21]**. We start by comparing our proposed RL-finetuning approach with the approach used in VPT [21]. Specifically, [21] proposed initializing the critic weights to zero, replacing the entropy term with a KL-divergence loss between the frozen IL policy and the RL policy, and decaying the KL-divergence loss coefficient \(\rho\) by a fixed factor after every iteration. Notice that this prevents the actor from drifting too far too quickly from the IL policy, but does not solve the uninitialized-critic problem. To ensure a fair comparison, we implement this method within our DD-PPO framework so that any performance difference is due to the fine-tuning algorithm and not tangential implementation differences. Complete training details are in Appendix C.3. We keep hyperparameters constant for our approach across all experiments. Table 1 reports results on HM3D val for the two approaches using \(20k\) human demonstrations. We find that PIRLNav achieves \(+2.2\%\) success compared to VPT and comparable SPL.
**Ablations**. Next, we conduct ablation experiments to quantify the importance of each phase in our RL-finetuning approach. Table 2 reports results on the HM3D val split for a policy BC-pretrained on \(20k\) human demonstrations and RL-finetuned for \(300M\) steps; complete training details are in Appendix C.4. First, without a gradual learning transition (row \(2\)), _i.e._ without a critic learning and LR decay phase, the policy improves by \(1.6\%\) on success and \(8.0\%\) on SPL. Next, with only a critic learning phase (row \(3\)), the policy improves by \(4.7\%\) on success and \(7.1\%\) on SPL. Using an LR decay schedule only for the critic after the critic learning
\begin{table}
\begin{tabular}{l r r} \hline \hline Method & Success (\(\uparrow\)) & SPL (\(\uparrow\)) \\ \hline
1) BC & \(52.0\) & \(20.6\) \\
2) BC\(\rightarrow\)RL-FT w/ VPT & \(59.7\)\(\pm\)\(0.70\) & \(\mathbf{28.6}\)\(\pm\)\(0.89\) \\ \hline
3) PIRLNav (Ours) & \(\mathbf{61.9}\)\(\pm\)\(0.47\) & \(27.9\)\(\pm\)\(0.56\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison with VPT on HM3D val [16, 32]
\begin{table}
\begin{tabular}{l r r} \hline \hline Method & Success (\(\uparrow\)) & SPL (\(\uparrow\)) \\ \hline
1) BC & \(52.0\) & \(20.6\) \\
2) BC\(\rightarrow\)RL-FT & \(53.6\)\(\pm\)\(1.10\) & \(\mathbf{28.6}\)\(\pm\)\(0.50\) \\
3) BC\(\rightarrow\)RL-FT (+ Critic Learning) & \(56.7\)\(\pm\)\(0.93\) & \(27.7\)\(\pm\)\(0.82\) \\
4) BC\(\rightarrow\)RL-FT (+ Critic Learning, Critic Decay) & \(59.4\)\(\pm\)\(0.42\) & \(26.9\)\(\pm\)\(0.38\) \\
5) BC\(\rightarrow\)RL-FT (+ Critic Learning, Actor Warmup) & \(58.2\)\(\pm\)\(0.55\) & \(26.7\)\(\pm\)\(0.69\) \\ \hline
6) PIRLNAv & \(\mathbf{61.9}\)\(\pm\)\(0.47\) & \(27.9\)\(\pm\)\(0.56\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: RL-finetuning ablations on HM3D val [16, 32]
Figure 3: Learning rate schedule for RL Finetuning.
phase improves success by \(7.4\%\) and SPL by \(6.3\%\), and using an LR warmup schedule for the actor (but no critic LR decay) after the critic learning phase improves success by \(6.2\%\) and SPL by \(6.1\%\). Finally, combining everything (critic-only learning, critic LR decay, actor LR warmup), our policy improves by \(9.9\%\) on success and \(7.3\%\) on SPL.
Footnote 6: The approach is called “BadSeed” on the HM3D leaderboard: eval.ai/web/challenges/challenge-page/1615/leaderboard/3899
**ObjectNav Challenge 2022 Results**. Using our overall two-stage training approach of BC-pretraining followed by RL-finetuning, we achieve state-of-the-art results on ObjectNav: \(65.0\%\) success and \(33.0\%\) SPL on both the test-standard and test-challenge splits, and \(70.4\%\) success and \(34.1\%\) SPL on val. Table 3 compares our results with the top-4 entries to the Habitat ObjectNav Challenge 2022 [50]. Our approach outperforms Stretch [24] on success rate on both test-standard and test-challenge and is comparable on SPL (\(1\%\) worse on test-standard, \(4\%\) better on test-challenge). ProcTHOR [49], which uses \(10k\) procedurally-generated environments for training, achieves \(54\%\) success and \(32\%\) SPL on the test-standard split, which is \(11\%\) worse on success and \(1\%\) worse on SPL than ours. For the sake of completeness, we also report results of two unpublished entries uploaded to the leaderboard, Populus A. and ByteBOT. Unfortunately, there is no associated report yet for these entries, so we are unable to comment on the details of these approaches, or even whether the comparison is meaningful.
## 5 Role of demonstrations in BC\(\rightarrow\)RL transfer
Our decision to use human demonstrations for BC-pretraining before RL-finetuning was motivated by results in prior work [1]. Next, we examine if other cheaper sources of demonstrations lead to equally good BC\(\rightarrow\)RL generalization. Specifically, we consider \(3\) sources of demonstrations:
**Shortest paths (SP)**. These demonstrations are generated by greedily sampling actions to fit the geodesic shortest path to the nearest navigable goal object, computed using the ground-truth map of the environment. These demonstrations do not capture any exploration, they only capture success at the ObjectNav task via the most efficient path.
**Task-Agnostic Frontier Exploration (FE) [24]**. These are generated by using a 2-stage approach: 1) Exploration: where a task-agnostic strategy is used to maximize exploration coverage and build a top-down semantic map of the environment, and 2) Goal navigation: once the goal object is detected by the semantic predictor, the developed map is used to reach it by following the shortest path. These demonstrations capture ObjectNav-agnostic exploration.
**Human Demonstrations (HD) [1]**. These are collected by asking humans on Mechanical Turk to control an agent and navigate to the goal object. Humans are provided access to the first-person RGB view of the agent and tasked to reach within \(1\)m of the goal object category. These demonstrations capture human-like ObjectNav-specific exploration.
### Results with Behavior Cloning
Using the BC setup described in Sec. 3.3, we train on SP, FE, and HD demonstrations. Since these demonstrations vary in trajectory length (_e.g._ SP are significantly shorter than FE), we collect \(\sim\)\(12M\) steps of experience with each method. That amounts to \(240k\) SP, \(70k\) FE, and \(77k\) HD demonstrations respectively. As shown in Table 4, BC on \(240k\) SP demonstrations leads to \(6.4\%\) success and \(5.0\%\) SPL. We believe this poor performance is due to an imitation gap [51], _i.e._ the shortest path demonstrations are generated with access to privileged information (ground-truth map of the environment) which is not available to the policy during training. Without a map, following the shortest path in a new environment to find a goal object is not possible. BC on \(70k\) FE demonstrations achieves \(44.9\%\) success and \(21.5\%\) SPL, which is significantly better than BC on shortest paths (\(+38.5\%\) success, \(+16.5\%\) SPL). Finally, BC on \(77k\) HD obtains the best results - \(64.1\%\) success, \(27.1\%\) SPL. These trends suggest that task-specific exploration (captured in human demonstrations) leads to much better generalization than task-agnostic exploration (FE) or shortest paths (SP).
### Results with RL Finetuning
Using the BC-pretrained policies on SP, FE, and HD demonstrations as initialization, we RL-finetune each using our approach described in Sec. 4. These results are summarized in Fig. 4. Perhaps intuitively, the trends after RL-finetuning follow the same ordering as BC-pretraining, _i.e._ RL-finetuning from BC on HD \(>\) FE \(>\) SP. But there are two factors that could be leading to this ordering after RL-finetuning - 1) inconsistency in performance at initialization (_i.e._ BC on HD
\begin{table}
\begin{tabular}{l c c} \hline Training demonstrations & Success (\(\uparrow\)) & SPL (\(\uparrow\)) \\ \hline Shortest paths (\(240k\)) & \(6.4\%\) & \(5.0\%\) \\ Frontier exploration (\(70k\)) & \(44.9\%\) & \(21.5\%\) \\ Human demonstrations (\(77k\)) & \(\mathbf{64.1}\%\) & \(\mathbf{27.1}\%\) \\ \hline \end{tabular}
\end{table}
Table 4: Performance on HM3D val with imitation learning on SP, FE, and HD demonstrations. The size of each demonstration dataset is picked such that total steps of experience is \(\sim\)\(12M\).
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{test-std} & \multicolumn{2}{c}{test-challenge} \\ \cline{2-5} & Success (\(\uparrow\)) & SPL (\(\uparrow\)) & Success (\(\uparrow\)) & SPL (\(\uparrow\)) \\ \hline
1) Stretch [24] & \(60.0\%\) & \(34.0\%\) & \(56.0\%\) & \(29.0\%\) \\
2) ProcTHOR-Large [49] & \(54.0\%\) & \(32.0\%\) & - & - \\
3) Habitat-Web [1] & \(55.0\%\) & \(22.0\%\) & - & - \\
4) DD-PPO [50] & \(26.0\%\) & \(12.0\%\) & - & - \\
5) Populus A. & \(60.6\%\) & \(32.0\%\) & \(60.0\%\) & \(30.0\%\) \\
6) ByteBOT & \(68.0\%\) & \(37.0\%\) & \(64.0\%\) & \(35.0\%\) \\ \hline
7) PIRLNav & \(\mathbf{65.0}\%\) & \(\mathbf{33.0}\%\) & \(\mathbf{65.0}\%\) & \(\mathbf{33.0}\%\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results on HM3D test-standard and test-challenge [16, 50]. Unpublished works submitted only to the ObjectNav leaderboard have been grayed out.
is already better than BC on FE), and 2) amenability of each of these initializations to RL-finetuning (_i.e_. is RL-finetuning from HD init better than FE init?).
We are interested in answering (2), and so we control for (1) by selecting BC-pretrained policy weights across SP, FE, and HD that have equal performance on a subset of train (\(\sim\)\(48.0\%\) success). This essentially amounts to selecting BC-pretraining checkpoints for FE and HD from earlier in training, as \(\sim\)\(48.0\%\) success is the maximum for SP.
Fig. 5 shows the results after BC and RL-finetuning on a subset of the HM3D train and on HM3D val. First, note that after BC-pretraining, train success rates are equal (\(\sim\)\(48.0\%\)), while on val FE is slightly better than HD, followed by SP. We find that after RL-finetuning, the policy trained on HD still leads to higher val success (\(66.1\%\)) compared to FE (\(51.3\%\)) and SP (\(43.6\%\)). Notice that RL-finetuning from SP leads to high train success but low val success, indicating significant overfitting. FE has a smaller train-val gap after RL-finetuning, but both are worse than HD, indicating underfitting. These results show that learning to imitate human demonstrations equips the agent with navigation strategies that enable better RL-finetuning generalization compared to imitating other kinds of demonstrations, even when controlled for the same BC-pretraining accuracy.
**Results on SP-favoring and FE-favoring episodes**. To further emphasize that imitating human demonstrations is key to good generalization, we created two subsplits from the HM3D val split that are adversarial to HD performance - SP-favoring and FE-favoring. The SP-favoring val split consists of episodes where BC on SP achieved a higher performance compared to BC on HD, _i.e_. we select episodes where BC on SP succeeded but BC on HD did not or both BC on SP and BC on HD failed. Similarly, we also create an FE-favoring val split using the same sampling strategy biased towards BC on FE. Next, we report the performance of RL-finetuned from BC on SP, FE, and HD on these two evaluation splits in Table 5. On both SP-favoring and FE-favoring, BC on HD is at \(0\%\) success (by design), but after RL-finetuning, is able to significantly outperform RL-finetuning from the respective BC on SP and FE policies.
### Scaling laws of BC and RL
In this section, we investigate how BC-pretraining \(\rightarrow\) RL-finetuning success scales with no. of BC demonstrations.
**Human demonstrations**. We create HD subsplits ranging in size from \(2k\) to \(77k\) episodes, and BC-pretrain policies with the same set of hyperparameters on each split. Then, for each, we RL-finetune from the best-performing checkpoint. The resulting BC and RL success on HM3D val _vs._ no. of HD episodes is plotted in Fig. 1. Similar to [1], we see promising scaling behavior with more BC demonstrations.
Interestingly, as we increase the size of the BC pretraining dataset and get to high BC accuracies, the improvements from RL-finetuning decrease. _E.g._ at \(20k\) BC demonstrations, the BC\(\rightarrow\)RL improvement is \(10.1\%\) success, while at \(77k\) BC demonstrations, the improvement is \(6.3\%\). Furthermore, with \(35k\) BC-pretraining demonstrations, the RL-finetuned success is only \(4\%\) worse than RL-finetuning from \(77k\) BC demonstrations (\(66.4\%\) _vs._ \(70.4\%\)). Both suggest that by effectively leveraging the trade-off between the size of the BC-pretraining dataset _vs._ performance gains after RL-finetuning, it may be possible to achieve close to state-of-the-art results without large investments in demonstrations.
**How well does FE Scale?** In Section 5.1, we showed that BC on human demonstrations outperforms BC on both shortest paths and frontier exploration demonstrations, when controlled for the same amount of training experience. In contrast to human demonstrations however, collecting shortest paths and frontier exploration demonstrations is cheaper, which makes scaling these demonstration datasets easier. Since BC performance on shortest paths is significantly worse even with \(3\)x more demonstrations compared to FE and HD (\(240k\) SP _vs_. \(70k\) FE and \(77k\) HD demos, Sec. 5.1), we focus on scaling FE demonstrations. Fig. 6 plots performance on HM3D val against FE dataset size and a curve fitted using \(75k\) demonstrations to predict performance on FE dataset-sizes \(\geq 75k\). We created splits ranging in size from
Figure 4: ObjectNav performance on HM3D val with BC-pretraining on shortest path (SP), frontier exploration (FE), and human demonstrations (HD), followed by RL-finetuning from each.
Figure 5: BC and RL performance for shortest paths (SP), frontier exploration (FE), and human demonstrations (HD) with equal BC training success on HM3D train (left) and val (right).
\begin{table}
\begin{tabular}{l r r} \hline \hline Training demonstrations & BC Success (\(\uparrow\)) & RL-FT Success (\(\uparrow\)) \\ \hline
1) SP & \(\mathbf{5.2\%}\) & \(34.8\%\) \\
2) HD & \(0.0\%\) & \(\mathbf{57.2\%}\) \\ \hline
3) FE & \(\mathbf{26.3\%}\) & \(43.0\%\) \\
4) HD & \(0.0\%\) & \(\mathbf{57.2\%}\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results on the SP-favoring and FE-favoring splits.
\(10k\) to \(150k\). Increasing the dataset size doesn't consistently improve performance and saturates after \(70k\) demonstrations, suggesting that generating more FE demonstrations is unlikely to help. We hypothesize that the saturation is because these demonstrations don't capture task-specific exploration.
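The functional form of the fitted curve in Fig. 6 is not specified in this excerpt; one plausible choice is a saturating power law fit to the points below \(75k\) and extrapolated beyond, as in the following sketch (the data values here are made up for illustration).

```python
import numpy as np
from scipy.optimize import curve_fit

def saturating_power_law(n, a, b, c):
    # success approaches the ceiling `a` as the demonstration count n grows
    return a - b * np.power(n, -c)

# Illustrative (not the paper's) data: success vs. number of FE demonstrations.
n_demos = np.array([10e3, 25e3, 50e3, 70e3])
success = np.array([0.33, 0.40, 0.43, 0.45])

params, _ = curve_fit(saturating_power_law, n_demos, success,
                      p0=[0.5, 10.0, 0.5], maxfev=10000)
print(saturating_power_law(150e3, *params))  # predicted success at 150k demos
```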
## 6 Failure Modes
To better understand the failure modes of our BC\(\rightarrow\)RL ObjectNav policies, we manually annotate \(592\) failed HM3D val episodes from our best ObjectNav agent. See Fig. 7. The most common failure modes are:
**Missing Annotations** (\(27\%\)): Episodes where the agent navigates to the correct goal object category but the episode is counted as a failure due to missing annotations in the data.
**Inter-Floor Navigation** (\(21\%\)): The object is on a different floor and the agent fails to climb up/down the stairs.
**Recognition Failure** (\(20\%\)): The agent sees the object in its field of view but fails to navigate to it.
**Last Mile Navigation [52]** (\(12\%\)). Repeated collisions against objects or mesh geometry close to the goal object preventing the agent from reaching close to it.
**Navmesh Failure** (\(9\%\)). Hard-to-navigate meshes blocking the path of the agent. _E.g._ in one instance, the agent fails to climb stairs because of a narrow nav mesh on the stairs.
**Looping** (\(4\%\)). Repeatedly visiting the same location and not exploring the rest of the environment.
**Semantic Confusion** (\(5\%\)). Confusing the goal object with a semantically-similar object. _E.g._ 'armchair' for 'sofa'.
**Exploration Failure** (\(2\%\)). Catch-all for failures in a complex navigation environment, early termination, semantic failures (_e.g._ looking for a chair in a bathroom), _etc_.
As can be seen in Fig. 7, most failures (\(\sim\)\(36\%\)) are due to issues in the ObjectNav dataset - \(27\%\) due to missing object annotations \(+9\%\) due to holes / issues in the navmesh. \(21\%\) failures are due to the agent being unable to climb up/down stairs. We believe this happens because climbing up / down stairs to explore another floor is a difficult behavior to learn and there are few episodes that require this. Oversampling inter-floor navigation episodes during training can help with this. Another failure mode is failing to recognize the goal object - \(20\%\) where the object is in the agent's field of view but it does not navigate to it, and \(5\%\) where the agent navigates to another semantically-similar object. Advances in the visual backbone and object recognition can help address these. Prior works [1, 24] have used explicit semantic segmentation modules to recognize objects at each step of navigation. Incorporating this within the BC\(\rightarrow\)RL training pipeline could help. \(12\%\) failures are due to last mile navigation, suggesting that equipping the agent with better goal-distance estimators could help. Finally, only \(\sim\)\(6\%\) failures are due to looping and lack of exploration, which is promising!
## 7 Conclusion
To conclude, we propose PIRLNav, an approach to combine imitation using behavior cloning (BC) and reinforcement learning (RL) for ObjectNav, wherein we pretrain a policy with BC on \(77k\) human demonstrations and then finetune it with RL, leading to state-of-the-art results on ObjectNav (\(65\%\) success, \(5\%\) improvement over previous best). Next, using this BC\(\rightarrow\)RL training recipe, we present a thorough empirical study of the impact of different demonstration datasets used for BC-pretraining on downstream RL-finetuning performance. We show that BC / BC\(\rightarrow\)RL on human demonstrations outperforms BC / BC\(\rightarrow\)RL on shortest paths and frontier exploration trajectories, even when we control for same BC success on train. We also show that as we scale the pretraining dataset size for BC and get to higher BC success rates, the improvements from RL-finetuning start to diminish. Finally, we characterize our agent's failure modes, and find that the largest sources of error are 1) dataset annotation noise, and inability of the agent to 2) navigate across floors, and 3) recognize the correct goal object.
**Acknowledgements**. We thank Karmesh Yadav for OVRL model weights [30], and Theophile Gervet for answering questions related to the frontier exploration code [24] used to generate demonstrations. The Georgia Tech effort was supported in part by NSF, ONR YIP, and ARO PECASE. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government, or any sponsor.
Figure 6: Success on ObjectNav HM3D val split vs. no. of frontier exploration demonstrations for training.
Figure 7: Failure modes of our best BC\(\rightarrow\)RL ObjectNav policy |
2306.15979 | Automatic continuity of Polynomial maps and cocycles | Classical theorems from the early 20th century state that any Haar measurable
homomorphism between locally compact groups is continuous. In particular, any
Lebesgue-measurable homomorphism $\phi:\mathbb{R} \to \mathbb{R}$ is of the
form $\phi(x)=ax$ for some $a \in \mathbb{R}$. In this short note, we prove
that any Lebesgue measurable function $\phi:\mathbb{R} \to \mathbb{R}$ that
vanishes under any $d+1$ ``difference operators'' is a polynomial of degree at
most $d$. More generally, we prove the continuity of any Haar measurable
polynomial map between locally compact groups, in the sense of Leibman. We
deduce the above result as a direct consequence of a theorem about the
automatic continuity of cocycles. | Tom Meyerovitch, Omri Nisan Solan | 2023-06-28T07:32:39Z | http://arxiv.org/abs/2306.15979v4 | # Automatic continuity of polynomial maps and cocycles
###### Abstract.
By classical theorems of Steinhaus and Weil, any Haar-measurable homomorphism between locally compact groups is continuous. In particular, any Lebesgue-measurable homomorphism \(\phi:\mathbb{R}\to\mathbb{R}\) is of the form \(\phi(x)=ax\) for some \(a\in\mathbb{R}\). In this short note, we prove that any Lebesgue measurable function \(\phi:\mathbb{R}\to\mathbb{R}\) that vanishes under any \(d+1\) difference operators is a polynomial of degree at most \(d\). More generally, we prove the continuity of any Haar-measurable polynomial map between locally compact groups, in the sense of Leibman. We deduce the above result as a direct consequence of a theorem about the automatic continuity of cocycles.
## 1. Statement of results
Around the beginning of the 20th century, in answer to an old question of Cauchy, it was shown by Steinhaus that any Lebesgue measurable function \(\phi:\mathbb{R}\to\mathbb{R}\) satisfying \(\phi(x+y)=\phi(x)+\phi(y)\) is of the form \(\phi(x)=ax\) for some \(a\in\mathbb{R}\). Given \(t\in\mathbb{R}\) and \(f:\mathbb{R}\to\mathbb{R}\), denote by \(\Delta_{t}f:\mathbb{R}\to\mathbb{R}\) the function given by
\[\Delta_{t}f(x)=f(x+t)-f(x).\]
Steinhaus's theorem can be viewed as the case \(d=1\) of the following theorem:
**Theorem 1.1**.: _Let \(f:\mathbb{R}\to\mathbb{R}\) be a measurable function. Suppose that there exists \(d\geq 0\) that satisfies_
\[\Delta_{t_{d}}\ldots\Delta_{t_{0}}f(x)=0\text{ for all }x,t_{0},\ldots,t_{d}\in \mathbb{R}.\]
_Then \(f\) is a polynomial of degree at most \(d\). Namely, there exist \(a_{0},\ldots,a_{d}\in\mathbb{R}\) such that_

\[f(x)=a_{0}+a_{1}x+\ldots+a_{d}x^{d}.\]
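As a quick symbolic sanity check of the statement (script ours): applying three difference operators annihilates a degree-\(2\) polynomial, while two generically do not.

```python
import sympy as sp

x, t0, t1, t2 = sp.symbols("x t0 t1 t2")

def delta(f, t):
    # The difference operator: (Delta_t f)(x) = f(x + t) - f(x).
    return sp.expand(f.subs(x, x + t) - f)

f = 7 - 3*x + 5*x**2                                    # degree d = 2
print(sp.simplify(delta(delta(delta(f, t0), t1), t2)))  # 0
print(sp.simplify(delta(delta(f, t0), t1)))             # 10*t0*t1, not 0
```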
By elementary considerations, Steinhaus's theorem is equivalent to the statement that any Lebesgue measurable homomorphism of \(\mathbb{R}\) is continuous. Weil extended this to show that any Haar measurable homomorphism between locally compact Polish groups is continuous. This phenomenon of continuity following from a priori weaker assumptions is often referred to as "automatic continuity". See Rosendal's survey [2] for more on the subject of automatic continuity of homomorphisms.
Theorem 1.1 and Weil's result can be unified using Leibman's notion of _polynomial maps_ between groups [1]: Briefly, a function \(P:G\to H\) is called a polynomial map of degree at most \(d\) if it vanishes after applying any \(d+1\) difference operators. Polynomial maps of degree \(0\) are precisely constant functions. Up to a constant, any polynomial map of degree at most \(1\) is a homomorphism.
**Theorem 1.2**.: _Let \(P:G\to H\) be a Haar measurable polynomial map between locally compact groups \(G,H\). Then \(P\) is continuous._
Theorem 1.2 itself is again an immediate corollary of the following slightly more general result:
**Theorem 1.3**.: _Let \(f:G\to H\) be a measurable function between locally compact groups \(G,H\). For every \(g\in G\) consider the function \(\Delta_{g}f:G\to H\) given by_
\[\Delta_{g}f(x)=f(gx)f(x)^{-1}\text{ for all }x\in G.\]
_Suppose \(\Delta_{g}f:G\to H\) is continuous for almost every \(g\in G\), with respect to Haar measure on \(G\). Then \(f:G\to H\) is also continuous._
Again, Weil's theorem is a particular case because for any homomorphism \(\phi:G\to H\), \(\Delta_{g}\phi:G\to H\) is a constant function for every \(g\in G\), hence continuous. Theorem 1.2 follows directly from Theorem 1.3 by induction on \(d\).
We will introduce one more level of generalization, using the formulation of cocycles for a group action: Let \(G\),\(H\) be groups, and let \(G\curvearrowright X\) be an action of \(G\) on a set \(X\). A _cocycle_ with respect to this action is a function \(c:G\times X\to H\) that satisfies
\[c(g_{1}g_{2},x)=c(g_{1},g_{2}x)c(g_{2},x). \tag{1}\]
Any function \(f:X\to H\) defines a cocycle \(\Delta f:G\times X\to H\) given by
\[\Delta f(g,x)=f(g\cdot x)f^{-1}(x)\text{ for }g\in G,x\in X. \tag{2}\]
The map \(f\mapsto\Delta f\) is called the _coboundary map_. It is easy to check that \(c=\Delta f\) satisfies (1). A cocycle of the form \(\Delta f\) is called a _coboundary_. For a function \(f:G\to H\), we denote by \(\Delta f:G\times G\to H\) the coboundary associated with the action of \(G\) on itself by multiplication from the left.
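As a concrete sanity check of the cocycle identity (1) for a coboundary, take \(G=X=(\mathbb{Z},+)\) acting by translation and \(H=(\mathbb{R}_{>0},\cdot)\); the function \(f\) below is an arbitrary choice of ours.

```python
import random

def f(x):
    # an arbitrary function X -> H = (positive reals under multiplication)
    return 2.0 ** ((x * x) % 7)

def c(g, x):
    # the coboundary Delta f:  c(g, x) = f(g . x) * f(x)^{-1}
    return f(g + x) / f(x)

for _ in range(1000):
    g1, g2, x = (random.randint(-50, 50) for _ in range(3))
    assert abs(c(g1 + g2, x) - c(g1, g2 + x) * c(g2, x)) < 1e-9
print("cocycle identity (1) holds on all sampled triples")
```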
From Equation (1) it follows that for any cocycle \(c:G\times X\to H\) we have \(c(1_{G},x)=1_{H}\) for all \(x\in X\), and also that for any \((g,x)\in G\times X\) the following holds:
\[c(g^{-1},x)=\left(c(g,g^{-1}x)\right)^{-1}.\]
**Theorem 1.4**.: _Let \(G,H\) be locally compact Polish groups, and let \(X\) be a locally compact Polish space. Suppose that \(G\) acts on \(X\) by homeomorphisms, and let \(c:G\times X\to H\) be a cocycle for this action. Suppose that:_
1. _For almost every_ \(g\in G\) _with respect to Haar measure the function_ \(c_{g}:X\to H\) _given by_ \(c_{g}(x)=c(g,x)\) _is continuous._
2. _For every_ \(x\in X\) _the function_ \(c^{x}:G\to H\) _given by_ \(c^{x}(g)=c(g,x)\) _is Haar-measurable._
_Then \(c:G\times X\to H\) is continuous._
In the case where \(X=\{x_{0}\}\) is a singleton, any cocycle \(c:G\times X\to H\) is of the form \(c(g,x_{0})=\phi(g)\), where \(\phi\) is a homomorphism. Thus, in the case where \(X\) is a singleton, Theorem 1.4 coincides with Weil's theorem on automatic continuity of homomorphisms.
Theorem 1.3 follows directly by applying Theorem 1.4 to the coboundary \(\Delta f:G\times G\to H\).
_Acknowledgements:_ I thank Yair Glasner for introducing me to Rosendal's article [2] and Uri Bader for helpful comments.
## 2. Continuity of cocyles
In this section, we prove Theorem 1.4. The proof is based on the following theorem of Andre Weil, which is a generalization of Steinhaus's theorem from the case \(G=\mathbb{R}\):
**Theorem 2.1**.: _Let \(G\) be a locally compact Polish group and \(E\subseteq G\) a Haar-measurable set of positive Haar measure. Then \(EE^{-1}\subseteq G\) contains an open neighborhood of \(1_{G}\)._
For the rest of this section, let \(c:G\times X\to H\) be a cocycle that satisfies the assumptions of Theorem 1.4.
**Lemma 2.2**.: _Let \(V\subseteq H\) be an open set and \(x_{0}\in X\). Then there exists \(h_{0}\in H\), an open set \(W\subseteq X\) with \(x_{0}\in W\) and a Haar-measurable set \(E\subseteq G\) of positive Haar measure such that \(c(g,x)\in Vh_{0}\) for every \((g,x)\in E\times W\)._
Proof.: Let \(V\subseteq H\) be an open set and \(x_{0}\in X\). Since \(H\) is a locally compact Polish group, it is \(\sigma\)-compact, so that the open cover \(\{Vh:\ h\in H\}\) admits a countable sub-cover \(\{Vh_{n}\}_{n=0}^{\infty}\), where \(\{h_{0},\ldots,h_{n},\ldots\}\subseteq H\) is some countable set. Then there exists \(n\in\mathbb{N}\) such that \(\{g\in G:\ c(g,x_{0})\in Vh_{n}\}\) has positive Haar measure. Assume without loss of generality that \(E_{0}:=\{g\in G:\ c(g,x_{0})\in Vh_{0}\}\) has positive Haar measure. For every \(g\in E_{0}\), by continuity of \(c_{g}:X\to H\), there exists \(n\in\mathbb{N}\) such that \(c(g,x)\in Vh_{0}\) whenever the distance between \(x\) and \(x_{0}\) is at most \(\frac{1}{n}\). It follows that there exists \(n\in\mathbb{N}\) such that the set
\[E=\{g\in E_{0}:\ \forall x\in X\ d(x,x_{0})<\frac{1}{n}\ \Rightarrow\ c(g,x)\in Vh_{0}\}\]
has positive Haar measure. With \(n\) as above, let \(W\subseteq X\) denote the open ball of radius \(\frac{1}{n}\) around \(x_{0}\).
**Lemma 2.3**.: _The cocycle \(c:G\times X\to H\) is continuous at \(\{1_{G}\}\times X\)._
Proof.: We need to prove the following: for every \(x_{0}\in X\) and every open neighborhood \(V\subseteq H\) of \(1_{H}\) there exist open sets \(U\subseteq G\) and \(W\subseteq X\) with \((1_{G},x_{0})\in U\times W\) so that for every \((g,x)\in U\times W\) we have \(c(g,x)\in V\). Fix \(x_{0}\in X\) and an open set \(V\subseteq H\) with \(1_{H}\in V\). Find an open set \(V_{1}\subseteq H\) with \(1_{H}\in V_{1}\) such that \(V_{1}V_{1}^{-1}\subseteq V\). By Lemma 2.2 there exist a Haar-measurable set \(E\subseteq G\) of positive Haar measure, an open set \(W\subseteq X\) with \(x_{0}\in W\) and \(h_{0}\in H\) such that \(c(g,x)\in V_{1}h_{0}\) for every \((g,x)\in E\times W\). Choose an open set \(W_{1}\subseteq W\) such that the closure \(\overline{W_{1}}\) is compact and is contained in \(W\). Let \(U_{1}=\{g\in G:\ \forall x\in\overline{W_{1}}\ g^{-1}(x)\in W\}\). Then \(U_{1}\subseteq G\) is a non-empty open set, so there exists \(\tilde{g}\in G\) such that \(E_{1}:=(E\tilde{g})\cap U_{1}\) has positive Haar measure. Let \(U=E_{1}E_{1}^{-1}\). By Weil's Theorem 2.1, \(U\) contains an open neighborhood of \(1_{G}\). Now suppose \(g\in U\) and \(x\in W_{1}\). Then there exist \(g_{1},g_{2}\in E_{1}\) such that \(g=g_{1}g_{2}^{-1}\), and so for every \(x\in W_{1}\) we have
\[c(g,x)=c(g_{1}g_{2}^{-1},x)=c(g_{1},g_{2}^{-1}x)\left(c(g_{2},g_{2}^{-1}x) \right)^{-1}.\]
Since \(x\in W_{1}\) and \(g_{1},g_{2}\in U_{1}\), it follows that \(g_{2}^{-1}x\in W\). Thus \(c(g_{1},g_{2}^{-1}x),c(g_{2},g_{2}^{-1}x)\in V_{1}h_{0}\). It follows that \(c(g,x)\in(V_{1}h_{0})(V_{1}h_{0})^{-1}=V_{1}V_{1}^{-1}\subseteq V\).
Proof of Theorem 1.4.: Choose \(g_{0}\in G\) and \(x_{0}\in X\). We need to show that \(c\) is continuous at \((g_{0},x_{0})\), namely that for every open neighborhood \(V\subseteq H\) of \(1_{H}\) the set \(c^{-1}(Vc(g_{0},x_{0}))\subseteq G\times X\) contains an open neighborhood of \((g_{0},x_{0})\). Fix an open neighborhood \(V\subseteq H\) of \(1_{H}\). Find an open neighborhood \(V_{1}\subseteq H\) of \(1_{H}\) such that \(V_{1}V_{1}\subseteq V\).
By Lemma 2.3 there exist open sets \(U_{1}\subseteq G\) and \(\tilde{W}_{1}\subset X\) with \(1_{G}\in U_{1}\) and \(g_{0}x_{0}\in\tilde{W}_{1}\) such that \(c(g,x)\in V_{1}\) for every \((g,x)\in U_{1}\times\tilde{W}_{1}\). By continuity of \(x\mapsto g_{0}x\), there exists an open neighborhood \(W_{1}\subseteq X\) of \(x_{0}\) such that \(g_{0}x\in\tilde{W}_{1}\) for every \(x\in W_{1}\).
By continuity of \(c_{g_{0}}:X\to H\), there exists an open neighborhood \(W_{2}\subseteq X\) of \(x_{0}\) such that \(c(g_{0},x)\in V_{1}c(g_{0},x_{0})\) for every \(x\in W_{2}\). Let \(W=W_{1}\cap W_{2}\). Fix \(g\in U_{1}\) and \(x\in W\). Then \(g_{0}x\in\tilde{W}_{1}\), so \(c(g,g_{0}x)\in V_{1}\) and \(c(g_{0},x)\in V_{1}c(g_{0},x_{0})\). It follows that
\[c(gg_{0},x)=c(g,g_{0}x)c(g_{0},x)\in V_{1}V_{1}c(g_{0},x_{0})\subseteq Vc(g_{0},x_{0}).\] Since \(gg_{0}\) ranges over the open neighborhood \(U_{1}g_{0}\) of \(g_{0}\) as \(g\) ranges over \(U_{1}\), this completes the proof. |
2303.16476 | Elliptic curves with a rational 2-torsion point ordered by conductor and
the boundedness of average rank | In this paper we refine recent work due to A. Shankar, A. N. Shankar, and X.
Wang on counting elliptic curves by conductor to the case of elliptic curves
with a rational 2-torsion point. This family is a small family, as opposed to
the large families considered by the aforementioned authors. We prove the
analogous counting theorem for elliptic curves with so-called square-free index
as well as for curves with suitably bounded Szpiro ratios. We note that our
assumptions on the size of the Szpiro ratios is less stringent than would be
expected by the naive generalization of their approach. | Stanley Yao Xiao | 2023-03-29T06:15:29Z | http://arxiv.org/abs/2303.16476v2 | Elliptic curves with a rational 2-torsion point ordered by conductor and the boundedness of average rank
###### Abstract.
In this paper we refine recent work due to A. Shankar, A. N. Shankar, and X. Wang on counting elliptic curves by conductor to the case of elliptic curves with a rational 2-torsion point. This family is a _small_ family, as opposed to the large families considered by the aforementioned authors. We prove the analogous counting theorem for elliptic curves with so-called square-free index as well as for curves with suitably bounded Szpiro ratios. We note that our assumptions on the size of the Szpiro ratios is less stringent than would be expected by the naive generalization of their approach.
## 1. Introduction
In this paper we consider the problem of counting elliptic curves and estimating their average rank in certain thin families ordered by their conductor. The families we consider will be sub-families of the family \(\mathcal{E}_{2}\) of elliptic curves with a rational 2-torsion point, which we may assume is _marked_, and so it suffices to consider the family given by
\[E_{a,b}:y^{2}=x(x^{2}+ax+b):a,b\in\mathbb{Z}. \tag{1.1}\]
The discriminant of the curves in this family is given by
\[\Delta(E_{a,b})=16b^{2}(a^{2}-4b). \tag{1.2}\]
The _conductor_ of the curve is the quantity \(C(E_{a,b})\) given in terms of \(p\)-adic valuation by
\[v_{p}(C(E_{a,b}))=\begin{cases}0&\text{if }p\nmid\Delta(E_{a,b})\\ 1&\text{if }E_{a,b}\text{ has multiplicative bad reduction at }p\\ 2&\text{if }E_{a,b}\text{ has additive bad reduction at }p.\end{cases} \tag{1.3}\]
Our goal in this paper is to count curves in certain subfamilies of \(\mathcal{E}_{2}\) ordered by their conductor, as well as to estimate the average rank with respect to such an ordering. This is analogous to the recent work of A. N. Shankar, A. Shankar, and X. Wang [11] on counting elliptic curves in large families having bounded conductor, while showing that the average size of the 2-Selmer group in the families they consider is at most 3.
Before we state our theorems, let us make a comparison with the approach and results in [11]. In their treatment they deal with families which are conjectured to have positive density. In order to make progress they make a key assumption which we do not need: they assumed that the \(j\)-invariant \(j(E)\) of their elliptic curves \(E\) is bounded by \(O(\log|\Delta(E)|)\). In particular, this allows them to remove the well-known archimedean difficulties of counting elliptic curves by discriminant. Another key idea critical to their argument is to embed their families into the space of binary quartic forms. As a corollary they are able to count the corresponding 2-Selmer elements with a bit of extra work. Since we do not require this embedding, and indeed the strength of our results depend on our direct treatment of the elliptic curves under consideration, we do not obtain an analogous count of 2-Selmer elements. In fact we expect such a count to be useless for our purposes: most of the 2-Selmer elements we encounter for the family \(\mathcal{E}_{2}\) are expected to in fact correspond to elements of the Shafarevich-Tate group III; see work of Klagsbrun and Lemke-Oliver [8] as well as recent work of Bhargava and Ho [3].
Therefore in order to obtain an estimate for the average rank we instead need to look at the _3-Selmer group_. For this we need to employ the parametrization of 3-Selmer elements of curves in our family due to Bhargava and Ho [3]. Because their approach fundamentally uses Bhargava's geometry of numbers paradigm, we are unable to apply the more refined estimates obtained by Browning and Heath-Brown [5] which we use to count elliptic curves. This necessarily leads to a weaker theorem; in fact we only obtain boundedness of
the 3-Selmer group on average rather than an exact average.
There is a significant difference between the results of [11] and our results. In particular, Shankar, Shankar, and Wang obtain theorems where counting by conductor produces the same order of magnitude as counting by discriminant: in other words, on average the conductor is only marginally smaller than the discriminant in the cases they consider. For us, however, we obtain substantially larger counts when ordering our curves by conductor rather than discriminant: this phenomenon is again caused by the special shapes of our discriminants. One can view this as an instance of _global restrictions affect Cohen-Lenstra heuristics in unexpected ways_. Our previous paper [13] with C. Tsang provides another example of this phenomenon.
To motivate our results, let us discuss the analogous results of [11] in more detail. First we recall, as discussed in [11], that their strategy involves first counting elliptic curves by their _discriminant_. This is a priori an impossible task already, so in order to make progress they required the following severe restriction: they assumed that the \(j\)-invariants of the curves \(E\) under consideration are bounded by \(\log|\Delta(E)|\). This means that for this subset of curves the problem of counting by discriminant and counting by naive height are essentially equivalent. Then they require one of two assumptions. The cleaner of the two assumptions is that the quotient \(\Delta(E)/C(E)\), which they call the index, is square-free. The second assumption, which is not disjoint from the first, is a bound on the so-called Szpiro ratio defined by
\[\beta_{E}=\frac{\log|\Delta(E)|}{\log C(E)}. \tag{1.4}\]
Their second assumption is then the requirement that \(\beta_{E}\leq\kappa\) for some \(\kappa<7/4\).
In our situation we do not require any restrictions on the size of the \(j\)-invariant. This is because unlike in the large family case it is possible to count the number of elliptic curves with rational 2-torsion by discriminant precisely; see Theorem 1.9 in [13]. On the other hand some restrictions on the \(p\)-adic valuation of the discriminant akin to the assumptions in [11] mentioned above are necessary.
Crucial to our arguments is the existence of a canonical degree-2 isogeny for each curve \(E_{a,b}\) in \(\mathcal{E}_{2}\), defined by:
\[\phi:\mathcal{E}_{2}\to\mathcal{E}_{2},E_{a,b}\mapsto E_{-2a,a^{2}-4b}. \tag{1.5}\]
The conductor of an elliptic curve is invariant under isogeny, so
\[C(E_{a,b})=C(E_{-2a,a^{2}-4b}).\]
However, the discriminant is not in general invariant under isogeny. It is thus more natural to consider the Szpiro ratios of \(E_{a,b}\) and \(E_{-2a,a^{2}-4b}\) simultaneously.
We will require the notion of the _conductor polynomial_ for an elliptic curve \(E_{a,b}\in\mathcal{E}_{2}\):
\[\mathcal{C}(E_{a,b})=b(a^{2}-4b). \tag{1.6}\]
We also define
\[\operatorname{ind}(E)=\frac{\mathcal{C}(E_{a,b})}{C(E_{a,b})}. \tag{1.7}\]
Note that \(\operatorname{ind}(E)\in\mathbb{Z}\).
The analogues of the non-archimedean conditions as in [11] in our situation will be:
1. Either the conductor polynomial \(\mathcal{C}(E_{a,b})\) is cube-free, equivalently that \(\operatorname{ind}(E)\) (as defined by (1.7)) is square-free; or
2. The average of the Szpiro ratios \(\beta(E_{a,b})\) and \(\beta(E_{-2a,a^{2}-4b})\) is less than \(155/68\).
The value of the number \(155/68>9/4\) is significant because \(9/4\) is the natural analogue of the constant \(7/4\) as an upper bound for the Szpiro ratio obtained in [11], obtained using geometry of numbers. Therefore the positivity of \(\Psi=155/68-9/4\) represents progressing beyond the simple application of geometry of
numbers present in [11].
We define, for \(\kappa<\frac{155}{68}\), the family
\[\mathcal{E}_{2,\kappa}=\{E\in\mathcal{E}_{2}:(\beta_{E}+\beta_{\phi(E)})/2\leq \kappa\} \tag{1.8}\]
and
\[\mathcal{E}_{2}^{*}=\{E\in\mathcal{E}_{2}:\mathcal{C}(E_{a,b})\text{ is cube-free}\}. \tag{1.9}\]
The first theorem in this paper, which gives asymptotic formulae for the number of curves in the families \(\mathcal{E}_{2,\kappa}\) and \(\mathcal{E}_{2}^{*}\), is the following:
**Theorem 1.1**.: _Let \(1<\kappa<155/68\) be a positive number. Then we have_
\[\#\{E\in\mathcal{E}_{2}^{*}:C(E)<X\}\sim\frac{(2+3\sqrt{2})\Gamma(1/4)^{2}}{6 \sqrt{\pi}}\prod_{p}\left(1-\frac{2p-1}{p^{3}}+\frac{2(p-1)^{2}}{p^{13/4}} \right)X^{\frac{3}{4}},\]
\[\#\{E\in\mathcal{E}_{2,\kappa}:C(E)<X\}\sim\frac{(2+3\sqrt{2})\Gamma(1/4)^{2}} {6\sqrt{\pi}}\prod_{p}\left(1-\frac{1}{p^{2}}+p^{\frac{3}{2}}\frac{p-1}{p^{4}} +\frac{2(p-1)^{2}}{p^{3}(p^{1/4}-1)}\right)X^{\frac{3}{4}}\]
\[\#\{E_{a,b}\in\mathcal{E}_{2}:|\mathcal{C}(E_{a,b})|<X\}\sim\frac{(2+3\sqrt{2} )\Gamma(1/4)^{2}}{6\sqrt{\pi}}\prod_{p}\left(1-\frac{1}{p^{6}}\right)X^{\frac{ 3}{4}}.\]
We expect \(\mathcal{E}_{2,\kappa}\) to satisfy the asymptotic formula in Theorem 1.1 for all \(\kappa>1\). The \(abc\)-conjecture implies that there are only finitely many elliptic curves with \(\beta_{E}>6\). This then shows that if we replace \(\mathcal{E}_{2,\kappa}\) with \(\mathcal{E}_{2}\) that the second asymptotic formula in (1.1) will hold as well. The \(p\)-adic densities present in the asymptotic formulae arise from the densities of elliptic curves in \(\mathcal{E}_{2}\) over \(\mathbb{Q}_{p}\) with fixed Kodaira symbol. These densities are computed in Section 2.
Our next theorem follows from adapting the methods in [3] on counting 3-Selmer elements of curves in \(\mathcal{E}_{2}\):
**Theorem 1.2**.: _When elliptic curves in \(\mathcal{E}_{2,\kappa}\) for \(1<\kappa<155/68\) or \(\mathcal{E}_{2}^{*}\) are ordered by their conductors, the average size of their 3-Selmer groups is bounded._
We remark that Theorem 1.2 is weaker than Theorem 1.2 in [11] in that we do not have an exact count of the average. It is stronger in the sense that we obtain a result applicable to both \(\mathcal{E}_{2}^{*}\) and \(\mathcal{E}_{2,\kappa}\).
### Uniformity estimates
In order to prove Theorem 1.1 we will need to prove certain tail estimates. Indeed, we will require the following theorems:
**Theorem 1.3** (Uniformity estimate for curves with cube-free conductor polynomial).: _For all \(\delta>0\) there exists a positive number \(\kappa\) such that_
\[\#\left\{E_{a,b}\in\mathcal{E}_{2}^{*}:C(E_{a,b})\leq X,\operatorname{ind}(E_ {a,b})>X^{2\delta}\right\}=O_{\delta,\kappa}\left(X^{\frac{3}{4}-\kappa} \right).\]
For curves for which \((\beta_{E}+\beta_{\phi(E)})/2\) is bounded, we have the following:
**Theorem 1.4** (Uniformity estimate for curves with bounded average Szpiro ratio).: _Suppose that \(1<\kappa<155/68\). Then for all \(\delta>0\) there exists \(\kappa^{\prime}\), depending only on \(\kappa\) and \(\delta\), such that_

\[\#\left\{E_{a,b}\in\mathcal{E}_{2,\kappa}:C(E_{a,b})\leq X,\operatorname{ind}(E_{a,b})>X^{2\delta}\right\}=O_{\kappa^{\prime}}\left(X^{\frac{3}{4}-\kappa^{\prime}}\right).\]
A summary of these uniformity estimates in the case of large families is given in [11]; we refer the reader to the aforementioned paper for historical progress. As mentioned in [11], the main difficulty in proving Theorems 1.3 and 1.4 is that the size of the conductor polynomial can be very large for curves with bounded conductor. This necessitates some new ideas.
Departing from the ideas given in [11], our new input is that the shape of our conductor polynomial allows us to turn the counting problem into one about counting integer points on a family of quadrics over \(\mathbb{P}^{2}\), or counting over sublattices of \(\mathbb{Z}^{2}\) defined by congruence conditions. Depending on the size of the parameters involved, one interpretation yields stronger bounds than the other. To do this we rely on an essentially sharp uniform estimate, due to Browning and Heath-Brown [5], for counting integer points having bounded coordinates. This is an application of the so-called global determinant method; see [10] and [14] for a summary. In particular Browning and Heath-Brown's theorem is the crucial ingredient in allowing us to push beyond the \(9/4\)-barrier which comes naturally if one applies the geometry of numbers approach naively. Another key input is the use of linear programming to estimate the optimal value of the Szpiro ratio possible from curves satisfying various bounds.
### Outline of the paper
In Section 2 we will characterize the possible Kodaira symbols of curves in \(\mathcal{E}_{2}\) and compute their relative densities. In Section 3 we compute the real density of curves in \(\mathcal{E}_{2}\) and prove the third part of Theorem 1.1. In Section 4 we prove the first two parts of Theorem 1.1 assuming the uniformity estimates given by Theorems 1.3 and 1.4. In Sections 5 and 6 we prove the aforementioned uniformity estimates. Finally, in Section 7 we prove Theorem 1.2.
## 2. Kodaira symbols for curves in \(\mathcal{E}_{2}\)
The possible Kodaira symbols of curves in \(\mathcal{E}_{2}\), based on Table 1 of [11], can be significantly narrowed down. This is because our Weierstrass model is already minimal with respect to every prime, since its constant coefficient is zero. In particular, we see at once that all of the symbols requiring a power of \(p\) to exactly divide the constant coefficient and for \(p\) to divide the \(x^{2}\) and \(x\)-coefficients are not possible for curves in \(\mathcal{E}_{2}\). This leaves only \(\mathrm{I}_{n},n\geq 1,\mathrm{I}_{0}^{*},\mathrm{III},\) and \(\mathrm{III}^{*}\) as possible Kodaira symbols in our family.
### Contributions to the conductor for type-\(\mathrm{I}_{0}^{*}\) primes
Put \(a=p^{k}a_{0}\) and \(b=p^{\ell}b_{0}\). Then our conductor polynomial takes the form
\[\mathcal{C}(E_{a,b})=p^{\ell}b_{0}(p^{2k}a_{0}^{2}-4p^{\ell}b_{0}).\]
The constraint that \(p^{7}\nmid\Delta(E_{a,b})\) implies that
\[2\ell+\min\{2k,\ell\}\leq 6.\]
We also have \(k\geq 1,\ell\geq 2\). If \(\ell>2\) then \(2\ell\geq 6\) and \(\min\{2k,\ell\}\geq 2\), and therefore \(2\ell+\min\{2k,\ell\}>6\). Hence we must have \(\ell=2\), and \(k\geq 1\) is arbitrary. This implies that \(\mathcal{C}(E_{a,b})\) is exactly divisible by \(p^{4}\). In this case we note that the conductor \(C(E_{a,b})\) is only divisible by \(p^{2}\).
Therefore, the congruence information for a prime of Kodaira symbol \(\mathrm{I}_{0}^{*}\) is contained in the \(\mathbb{Z}\)-module \((\mathbb{Z}/p^{3}\mathbb{Z})^{2}\). For \((a,b)\in(\mathbb{Z}/p^{3}\mathbb{Z})^{2}\), we have that \(E_{a,b}(\mathbb{Z}/p^{3}\mathbb{Z})\) has Kodaira symbol \(\mathrm{I}_{0}^{*}\) if and only if \(a\equiv 0\pmod{p}\) and \(b\equiv 0\pmod{p^{2}},b\not\equiv 0\pmod{p^{3}}\). Thus there are \(p^{2}(p-1)\) choices for \((a,b)\in(\mathbb{Z}/p^{3}\mathbb{Z})^{2}\), and the relative density is \((p-1)/p^{4}\).
### Contributions to the conductor for type-\(\mathrm{III}\) primes
If a given curve \(E_{a,b}\in\mathcal{E}_{2}\) has Kodaira symbol \(\mathrm{III}\) at a prime \(p\), then we must have \(p|b\) exactly. We may then put \(b=pb_{0}\) where \(p\nmid b_{0}\). Our curve then has the equation
\[E_{a,b}:y^{2}=x^{3}+ax^{2}+pb_{0}x.\]
If we put \(a=p^{k}a_{0}\) with \(p\nmid a_{0}\), then our conductor polynomial is equal to
\[\mathcal{C}(E_{a,b})=pb_{0}(p^{2k}a_{0}^{2}-4pb_{0}).\]
We then see that \(p^{2}\) exactly divides the conductor polynomial, and hence exactly divides the conductor \(C(E_{a,b})\). Therefore all of the congruence information is contained in the ring \((\mathbb{Z}/p^{2}\mathbb{Z})^{2}\), and we have that \(E_{a,b}(\mathbb{Z}/p^{2}\mathbb{Z})\) has Kodaira symbol \(\mathrm{III}\) if and only if \(a\equiv 0\pmod{p}\) and \(b\equiv 0\pmod{p},b\not\equiv 0\pmod{p^{2}}\). There are \(p(p-1)\) such possibilities, and the relative density is \((p-1)/p^{3}\).
### Contributions to the conductor for type-\(\mathrm{III}^{*}\) primes
If a given curve \(E_{a,b}\in\mathcal{E}_{2}\) has Kodaira symbol \(\mathrm{III}^{*}\) at a prime \(p\), then we must have \(p^{3}|b\) exactly. We may then put \(b=p^{3}b_{0}\) where \(p\nmid b_{0}\). Our curve then has the equation
\[E_{a,b}:y^{2}=x^{3}+ax^{2}+p^{3}b_{0}x.\]
If we put \(a=p^{k}a_{0}\) with \(p\nmid a_{0}\), then our conductor polynomial is equal to
\[\mathcal{C}(E_{a,b})=p^{3}b_{0}(p^{2k}a_{0}^{2}-4p^{3}b_{0}).\]
Since \(k\geq 2\) by Table 1 of [11], it follows that \(p^{6}\) exactly divides the conductor polynomial. Note that in this case the conductor \(C(E_{a,b})\) is divisible by \(p^{2}\). Thus the congruence condition is contained in the \(\mathbb{Z}\)-module
\((\mathbb{Z}/p^{4}\mathbb{Z})^{2}\), and \(E_{a,b}(\mathbb{Z}/p^{4}\mathbb{Z})\) has Kodaira symbol \(\mathrm{III}^{*}\) if and only if \(a\equiv 0\pmod{p^{2}}\) and \(b\equiv 0\pmod{p^{3}},b\not\equiv 0\pmod{p^{4}}\). Thus there are \(p^{2}(p-1)\) choices for \((a,b)\), and the relative density is \((p-1)/p^{6}\).
### Contribution to the conductor for semi-stable primes
Recall that for an elliptic curve \(E/\mathbb{Q}\), \(E\) has multiplicative bad reduction at \(p\) if and only if \(E\) has semi-stable bad reduction at \(p\). Now the exponent \(k\geq 1\) can be arbitrarily large. Over the \(\mathbb{Z}\)-module \((\mathbb{Z}/p^{k+1}\mathbb{Z})^{2}\) a pair \((a,b)\in(\mathbb{Z}/p^{k+1}\mathbb{Z})^{2}\) corresponds to an elliptic curve \(E_{a,b}\) having semi-stable bad reduction at \(p\) if and only if \(p\) divides exactly one of \(b\) and \(c=a^{2}-4b\). If \(b\equiv 0\pmod{p^{k}},b\not\equiv 0\pmod{p^{k+1}}\) then we must have \(a\not\equiv 0\pmod{p}\). Thus there are \(p^{k}(p-1)\) choices for \(a\) and \(p-1\) choices for \(b\). If \(c\equiv 0\pmod{p^{k}},c\not\equiv 0\pmod{p^{k+1}}\) then we cannot have \(a\equiv 0\pmod{p}\), since otherwise \(b\equiv 0\pmod{p}\) as well, which is not allowed. Therefore for any \(a\) co-prime to \(p\) there are \(p-1\) admissible values of \(c\pmod{p^{k+1}}\), each determining a unique \(b\pmod{p^{k+1}}\) such that \(c\equiv 0\pmod{p^{k}}\) and \(c\not\equiv 0\pmod{p^{k+1}}\). There are again \(p^{k}(p-1)^{2}\) such choices. Moreover it is clear that the two sets of possibilities are disjoint. It follows that there are \(2p^{k}(p-1)^{2}\) possibilities, and the relative density is \(2(p-1)^{2}/p^{k+2}\).
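The case analysis of this section can be packaged into a small classifier. The following sketch is ours (it assumes an odd prime \(p\) and the minimality discussion at the start of this section), and is only meant to make the congruence conditions above concrete.

```python
def vp(n, p):
    # p-adic valuation of a nonzero integer n
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def kodaira_symbol(a, b, p):
    # Reduction type of y^2 = x(x^2 + a*x + b) at an odd prime p,
    # following the case analysis of Section 2.
    c = a * a - 4 * b
    if b % p and c % p:
        return "good reduction"
    if bool(b % p) != bool(c % p):      # p divides exactly one of b, c
        return "I_n (multiplicative)"
    # p divides both b and c, hence p divides a (additive reduction):
    l = vp(b, p)
    if l == 1:
        return "III"
    if l == 2:
        return "I_0^*"
    if l == 3 and a != 0 and vp(a, p) >= 2:
        return "III^*"
    return "other"

print(kodaira_symbol(0, 3, 3))   # b = 3, c = -12: p | both, l = 1 -> III
```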
## 3. The family \(\mathcal{E}_{2}\) ordered by conductor polynomial
Recall that our family \(\mathcal{E}_{2}\) is given by the equation (1.1). As in our construction in [13], we work with the _conductor polynomial_ of \(E_{a,b}\),

\[\mathcal{C}(E_{a,b})=b(a^{2}-4b),\qquad c=a^{2}-4b. \tag{3.1}\]
Let \(A_{\infty}(Z)\) be the Lebesgue measure of the set
\[\{(x,y)\in\mathbb{R}^{2}:|y(x^{2}-y)|\leq Z,|y|\geq 4\} \tag{3.2}\]
As in [13], we will compute the area \(A_{\infty}(X)\) by comparing it to an elliptic integral. Indeed we have:
\[A_{\infty}(Z) =\int_{-\sqrt{2}Z^{\frac{1}{4}}}^{\sqrt{2}Z^{\frac{1}{4}}}\int_{\frac{x^{2}-\sqrt{x^{4}+4Z}}{2}}^{\frac{x^{2}+\sqrt{x^{4}+4Z}}{2}}dy\,dx+2\int_{\sqrt{2}Z^{\frac{1}{4}}}^{\sqrt{Z}}\int_{\frac{x^{2}-\sqrt{x^{4}+4Z}}{2}}^{\frac{x^{2}-\sqrt{x^{4}-4Z}}{2}}dy\,dx+2\int_{\sqrt{2}Z^{\frac{1}{4}}}^{\sqrt{Z}}\int_{\frac{x^{2}+\sqrt{x^{4}-4Z}}{2}}^{\frac{x^{2}+\sqrt{x^{4}+4Z}}{2}}dy\,dx\] \[=\int_{-\sqrt{2}Z^{\frac{1}{4}}}^{\sqrt{2}Z^{\frac{1}{4}}}\sqrt{x^{4}+4Z}\,dx+2\int_{\sqrt{2}Z^{\frac{1}{4}}}^{\sqrt{Z}}\left(\sqrt{x^{4}+4Z}-\sqrt{x^{4}-4Z}\right)dx\]
Making the substitution \(x=\sqrt{2}Z^{\frac{1}{4}}z\) and \(dx=\sqrt{2}Z^{\frac{1}{4}}dz\) gives
\[A_{\infty}(Z)=2\sqrt{2}Z^{\frac{3}{4}}\int_{-1}^{1}\sqrt{z^{4}+1}dz+4\sqrt{2}Z^{\frac{3}{4}}\int_{1}^{Z^{1/4}/\sqrt{2}}\left(\sqrt{z^{4}+1}-\sqrt{z^{4}-1}\right)dz.\]
We have the elliptic integral equality
\[\int_{-1}^{1}\sqrt{z^{4}+1}dz=\frac{2}{3}\left(\sqrt{2}+\frac{\Gamma(1/4)^{2}} {4\sqrt{\pi}}\right); \tag{3.3}\]
see Lemma 3.10 in [13] or [6]. To evaluate the other integral we note that by integration by parts we have
\[2\int_{1}^{T}\left(\sqrt{z^{4}+1}-\sqrt{z^{4}-1}\right)dz =\frac{2}{3}\left[z\sqrt{z^{4}+1}-z\sqrt{z^{4}-1}\right]_{1}^{T} +\frac{2}{3}\int_{1}^{T}\frac{dz}{\sqrt{z^{4}+1}}+\frac{2}{3}\int_{1}^{T} \frac{dz}{\sqrt{z^{4}-1}}\] \[=\frac{4T}{3\left(\sqrt{T^{4}+1}+\sqrt{T^{4}-1}\right)}-\frac{2\sqrt{2}}{3}+ \frac{2}{3}\left(\frac{(1+\sqrt{2})\Gamma(1/4)^{2}}{8\sqrt{\pi}}\right)+O \left(T^{-1}\right)\] \[=\frac{2}{3}\left(-\sqrt{2}+\frac{(1+\sqrt{2})\Gamma(1/4)^{2}}{8 \sqrt{\pi}}\right)+O(T^{-1}).\]
This gives
\[A_{\infty}(Z)=\frac{(2+3\sqrt{2})\Gamma(1/4)^{2}}{6\sqrt{\pi}}Z^{\frac{3}{4}}+O \left(Z^{\frac{1}{2}}\right). \tag{3.4}\]
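The elliptic integral evaluation (3.3) is easy to verify numerically; a short check (script ours):

```python
from math import gamma, pi, sqrt
from scipy.integrate import quad

numeric, _ = quad(lambda z: sqrt(z**4 + 1), -1, 1)
closed = (2 / 3) * (sqrt(2) + gamma(0.25)**2 / (4 * sqrt(pi)))
print(numeric, closed)   # both approximately 2.178859
assert abs(numeric - closed) < 1e-9
```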
We may use a refined version of Davenport's lemma, due to Barroero and Widmer [1]:
**Proposition 3.1**.: _Let \(m\) and \(n\) be positive integers and let \(\Lambda\subset\mathbb{R}^{n}\) be a lattice. Denote the successive minima of \(\Lambda\) by \(\lambda_{i},i=1,\cdots,n\). Let \(\mathcal{Z}\subset\mathbb{R}^{n}\) be a definable family in an o-minimal structure, and suppose the fibres \(\mathcal{Z}_{T}\) are bounded. Then there exists a positive number \(c_{\mathcal{Z}}\), depending only on \(\mathcal{Z}\), such that_
\[\left|\#(\mathcal{Z}_{T}\cap\Lambda)-\frac{\operatorname{Vol}(\mathcal{Z}_{T}) }{\det(\Lambda)}\right|\leq c_{\mathcal{Z}}\sum_{j=0}^{n-1}\frac{V_{j}( \mathcal{Z}_{T})}{\lambda_{1}\cdots\lambda_{j}}\]
_where \(V_{j}(\mathcal{Z}_{T})\) is the sum of the \(j\)-dimensional volumes of the orthogonal projections of \(\mathcal{Z}_{T}\) onto every \(j\)-dimensional coordinate subspace of \(\mathbb{R}^{n}\)._
Suppose now that we are given a set \(\mathcal{S}\subset\mathbb{Z}^{2}\) defined by congruence conditions modulo some integer \(n>0\). Then we may break \(\mathcal{S}\) up into a union of \(n^{2}\nu(S)\) translates of the lattice \(n\mathbb{Z}\times n\mathbb{Z}\), where \(\nu(S)\) denotes the volume of the closure of \(S\) in \(\hat{\mathbb{Z}}^{2}\). Applying Proposition 3.1 to each of these translates and summing gives us the following result:
**Proposition 3.2**.: _Let \(S\subset\mathbb{Z}^{2}\) be a set of pairs \((a,b)\) defined by congruence conditions on \(a,b\) modulo some positive integer. Then we have_
\[\#\{(a,b)\in\Lambda:|\mathcal{C}(E_{a,b})|\leq X\}=\nu(S)A_{\infty}(X)+O_{ \varepsilon}\left(n\nu(S)X^{\frac{1}{2}+\varepsilon}\right).\]
We now prove the third part of Theorem 1.1. Let \(N_{m}(X)\) be the number of curves \(E_{a,b}\in\mathcal{E}_{2}\) with \(|\mathcal{C}(E_{a,b})|\leq X\) such that \(p^{2}|a\) and \(p^{4}|b\) for each \(p\) dividing \(m\). It follows that

\[\#\{E_{a,b}:|\mathcal{C}(E_{a,b})|\leq X\} =\sum_{m\geq 1}\mu(m)N_{m}(X)\] \[=\sum_{m=1}^{X^{\delta}}\mu(m)N_{m}(X)+O\left(\sum_{\begin{subarray}{c}m \geq X^{\delta}\\ m\text{ square-free}\end{subarray}}N_{m}(X)\right)\] \[=A_{\infty}(X)\prod_{p}\left(1-\frac{1}{p^{6}}\right)+O \left(X^{\frac{1}{2}+2\delta+\varepsilon}\right)+O\left(\sum_{X^{\delta}\leq m \leq X^{1/8}}\frac{X^{3/4+\varepsilon}}{m^{6}}\right).\]
Here the last estimate comes from the fact that by using the isogeny \(\phi\) we may assume that \(|b|\leq X^{1/2}\), and hence \(m^{4}\mid b\) implies that \(m\leq X^{1/8}\).
\[\frac{1}{2}+2\delta=\frac{3}{4}-6\delta\]
which gives
\[\delta=\frac{1}{32}.\]
This is sufficient for the proof of the third part of Theorem 1.1.
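The \(X^{3/4}\) growth rate itself is easy to observe empirically: multiplying \(X\) by \(16\) should multiply a brute-force count of pairs \((a,b)\) with \(0<|\mathcal{C}(E_{a,b})|\leq X\) by roughly \(16^{3/4}=8\). The crude search box in the sketch below (ours) follows from \(|b|\leq X\) and \(a^{2}\leq 4b+X/|b|\leq 5X\) whenever \(\mathcal{C}(E_{a,b})\neq 0\).

```python
from math import isqrt

def count(X):
    # Brute-force count of (a, b) with b != 0, a^2 != 4b, |b(a^2 - 4b)| <= X.
    total = 0
    A = isqrt(5 * X)
    for b in range(-X, X + 1):
        if b == 0:
            continue
        for a in range(-A, A + 1):
            C = b * (a * a - 4 * b)
            if C != 0 and abs(C) <= X:
                total += 1
    return total

X = 250
print(count(16 * X) / count(X))   # roughly 16**0.75 = 8
```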
## 4. Main counting theorems assuming uniformity estimates
We follow the same approach as in [11]. Note that

\[\#\{E\in\mathcal{E}_{2}^{*}:C(E)<X\} =\sum_{n\geq 1}\#\{E\in\mathcal{E}_{2}^{*}:\operatorname{ind}(E)=n,\mathcal{C}(E)<nX\}\] \[=\sum_{n,q\geq 1}\mu(q)\#\{E\in\mathcal{E}_{2}^{*}:nq|\operatorname{ind}(E),\mathcal{C}(E)<nX\}\] \[=\sum_{\begin{subarray}{c}n,q\geq 1\\ nq<X^{\delta}\end{subarray}}\mu(q)\#\{E\in\mathcal{E}_{2}^{*}:nq|\operatorname{ind}(E),\mathcal{C}(E)<nX\}+O\left(X^{\frac{3}{4}-\kappa}\right). \tag{4.1}\]
The last line follows from our uniformity estimates.
We then perform another inclusion-exclusion sieve to evaluate each summand on the right-hand side of the expression above. For each prime \(p\) let
\[\chi_{\Sigma_{p},nq}:\mathbb{Z}_{p}^{2}\to\mathbb{R}\]
be the characteristic function of the set of all \((a,b)\in\mathbb{Z}_{p}^{2}\) that satisfy the reduction type specified by \(\Sigma_{p}\) and satisfy \(nq|\operatorname{ind}(E_{a,b})\). Let us put \(\chi_{p}=1-\chi_{\Sigma_{p},nq}\) and define
\[\chi_{k}:=\prod_{p|k}\chi_{p}\]
for square-free integers \(k\). Then we have
\[\prod_{p}\chi_{\Sigma_{p},nq}(a,b)=\sum_{k}\mu(k)\chi_{k}(a,b) \tag{4.2}\]
for every \((a,b)\in\mathbb{Z}^{2}\). Put \(\nu_{*}(nq,\Sigma)\) to be the product over all primes \(p\) of the integral of \(\chi_{\Sigma_{p},nq}\) over \(\mathbb{Z}_{p}^{2}\). Then for \(nq<X^{\delta}\) we have
\[\#\{E\in\mathcal{E}_{2}^{*}:nq\mid\operatorname{ind}(E),\mathcal{ C}(E)<nX\} =\sum_{\begin{subarray}{c}(a,b)\in\mathbb{Z}^{2}\\ 0<|\mathcal{C}(E_{a,b})|<nX\end{subarray}}\sum_{k\geq 1}\mu(k)\chi_{k}(a,b)\] \[=\sum_{\begin{subarray}{c}(a,b)\in\mathbb{Z}^{2}\\ 0<|\mathcal{C}(E_{a,b})|<nX\end{subarray}}\sum_{k=1}^{X^{4\delta}}\mu(k)\chi_{ k}(a,b)+O\left(X^{\frac{3}{4}-\kappa}\right)\] \[=A_{\infty}(nX)\nu_{*}(nq,\Sigma)+O_{\varepsilon}\left(X^{\frac{ 1}{2}+\varepsilon}+X^{\frac{3}{4}-\kappa}\right)\]
where \(\nu_{*}(nq,\Sigma)\) is the product over all primes \(p\) of the \(p\)-adic integral of \(\chi_{\Sigma_{p},nq}\). For each \(n\), put \(\lambda_{*}(n,\Sigma)\) for the volume of the closure in \(\hat{\mathbb{Z}}^{2}\) of the set of all \((a,b)\in\mathbb{Z}^{2}\) such that \(E_{a,b}\) belongs to \(\mathcal{G}=\mathcal{E}_{2}(\Sigma)\) and \(E_{a,b}\) has index \(n\). Returning to (4.1), we obtain
\[\#\{E\in\mathcal{G}:C(E)<X\} =A_{\infty}(1)X^{\frac{3}{4}}\sum_{\begin{subarray}{c}n,q\geq 1\\ nq<X^{\delta}\end{subarray}}\mu(q)n^{\frac{3}{4}}\nu_{*}(nq,\Sigma)+o\left(X^{ \frac{3}{4}}\right)\] \[=A_{\infty}(1)X^{\frac{3}{4}}\sum_{n\geq 1}n^{\frac{3}{4}}\lambda_{ *}(n,\Sigma),\]
where the final equality follows by reversing the inclusion-exclusion sieve in (4.1).
For each prime \(p\) and integer \(k\geq 0\), put \(\overline{\nu}_{*}(p^{k},\Sigma)\) for the \(p\)-adic density of the set of all \((a,b)\in\mathbb{Z}^{2}\) such that \(E_{a,b}\in\mathcal{E}_{2}(\Sigma)\) and \(\operatorname{ind}_{p}(E_{a,b})=p^{k}\). The constant \(\lambda_{*}(n,\Sigma)\) is a product over all \(p\) of local densities:
\[\lambda_{*}(n,\Sigma) =\prod_{p\nmid n}\overline{\nu}_{*}(p^{0},\Sigma)\prod_{\begin{subarray}{c}p^{k}\|n\\ k\geq 1\end{subarray}}\overline{\nu}_{*}(p^{k},\Sigma)\] \[=\prod_{p}\overline{\nu}_{*}(p^{0},\Sigma)\prod_{\begin{subarray}{c}p^{k}\|n\\ k\geq 1\end{subarray}}\frac{\overline{\nu}_{*}(p^{k},\Sigma)}{\overline{\nu}_{*}(p^{0},\Sigma)}\]
It follows that \(\lambda_{*}(n,\Sigma)\) is a multiplicative function of \(n\), and hence
\[\sum_{n\geq 1}n^{\frac{3}{4}}\lambda_{*}(n,\Sigma) =\prod_{p}\overline{\nu}_{*}(p^{0},\Sigma)\prod_{p}\left(\sum_{k=0}^{\infty}p^{\frac{3k}{4}}\frac{\overline{\nu}_{*}(p^{k},\Sigma)}{\overline{\nu}_{*}(p^{0},\Sigma)}\right)\] \[=\prod_{p}\left(\sum_{k=0}^{\infty}p^{\frac{3k}{4}}\overline{\nu}_{*}(p^{k},\Sigma)\right).\]
The computation of \(\overline{\nu}_{*}(p^{k})\) then follows from the calculations in Section 2. In the case of \(\mathcal{E}_{2}^{*}\), we have that \(p\mid\operatorname{ind}(E_{a,b})\) if and only if \(E_{a,b}\) is semi-stable at \(p\) with \(p^{2}\) exactly dividing \(\mathcal{C}(E_{a,b})\). Further, our imposition implies that \(\operatorname{ind}(E_{a,b})\) is square-free in this case, so \(k\leq 1\). We thus obtain the local factor
\[\frac{(p-1)^{2}}{p^{2}}+\frac{p-1}{p^{3}}+\frac{2(p-1)^{2}}{p^{3}}+p^{\frac{3} {4}}\frac{2(p-1)^{2}}{p^{4}}=1-\frac{2p-1}{p^{3}}+\frac{2(p-1)^{2}}{p^{13/4}} \tag{4.3}\]
For the general case, we have that the bulk of the contribution comes from the semi-stable primes. The density is then seen to be
\[\sum_{k=1}^{\infty}p^{\frac{3(k-1)}{4}}\frac{2(p-1)^{2}}{p^{k+2}} =\frac{2(p-1)^{2}}{p^{3}}+\sum_{k=2}^{\infty}p^{\frac{3(k-1)}{4}} \frac{2(p-1)^{2}}{p^{k+2}}\] \[=\frac{2(p-1)^{2}}{p^{3}}+\frac{2(p-1)^{2}p^{\frac{3}{4}}}{p^{4}} \sum_{k=0}^{\infty}p^{\frac{3k}{4}}p^{-k}\] \[=\frac{2(p-1)^{2}}{p^{3}}+\frac{2(p-1)^{2}}{p^{13/4}}\frac{p^{1/4} }{p^{1/4}-1}\] \[=\frac{2(p-1)^{2}}{p^{3}}+\frac{2(p-1)^{2}}{p^{3}(p^{1/4}-1)}. \tag{4.4}\]
The remaining contributions occur for \(k=0,2,4\): good reduction and type \(\mathrm{III}\) at \(k=0\), type \(\mathrm{I}_{0}^{*}\) at \(k=2\), and type \(\mathrm{III}^{*}\) at \(k=4\). Combined with the semi-stable contribution (4.4), this gives the total local factor
\[\frac{(p-1)^{2}}{p^{2}}+\frac{p-1}{p^{3}}+p^{\frac{3}{2}}\frac{p-1 }{p^{4}}+p^{3}\frac{p-1}{p^{6}}+\frac{2(p-1)^{2}}{p^{3}}+\frac{2(p-1)^{2}}{p^{ 3}(p^{1/4}-1)}\] \[=1-\frac{2p-1}{p^{2}}+\frac{p-1}{p^{3}}+\frac{p-1}{p^{3}}+p^{ \frac{3}{2}}\frac{p-1}{p^{4}}+\frac{2p^{2}-4p+2}{p^{3}}+\frac{2(p-1)^{2}}{p^{3 }(p^{1/4}-1)}\] \[=1-\frac{2p^{2}-p-2p+2-2p^{2}+4p-2}{p^{3}}+p^{\frac{3}{2}}\frac{p- 1}{p^{4}}+\frac{2(p-1)^{2}}{p^{3}(p^{1/4}-1)}\] \[=1-\frac{1}{p^{2}}+p^{\frac{3}{2}}\frac{p-1}{p^{4}}+\frac{2(p-1)^ {2}}{p^{3}(p^{1/4}-1)}. \tag{4.5}\]
This suffices for the proof of Theorem 1.1.
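As a numerical sanity check on the local computations, the script below (ours) verifies the closed forms (4.3) and (4.5) at a few small primes, truncating the geometric series over the semi-stable exponents.

```python
from math import isclose

for p in (3, 5, 7, 11, 13):
    # (4.3): cube-free case, contributions at k = 0 and k = 1.
    lhs = ((p - 1)**2 / p**2 + (p - 1) / p**3
           + 2 * (p - 1)**2 / p**3 + p**0.75 * 2 * (p - 1)**2 / p**4)
    rhs = 1 - (2 * p - 1) / p**3 + 2 * (p - 1)**2 / p**3.25
    assert isclose(lhs, rhs, rel_tol=1e-12)

    # (4.4)-(4.5): all semi-stable k, plus good reduction and the
    # additive symbols III, I_0^*, III^* at k = 0, 2, 4.
    semistable = sum(2 * (p - 1)**2 * p**(3 * (k - 1) / 4 - (k + 2))
                     for k in range(1, 400))
    lhs = ((p - 1)**2 / p**2 + (p - 1) / p**3 + p**1.5 * (p - 1) / p**4
           + p**3 * (p - 1) / p**6 + semistable)
    rhs = (1 - 1 / p**2 + p**1.5 * (p - 1) / p**4
           + 2 * (p - 1)**2 / (p**3 * (p**0.25 - 1)))
    assert isclose(lhs, rhs, rel_tol=1e-9)

print("local factors (4.3) and (4.5) verified")
```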
## 5. Counting curves with cube-free conductor polynomial
In this section we prove the necessary uniformity estimates in the sieve given in Section 4 in the case when the conductor polynomial \(\mathcal{C}(E_{a,b})\) is cube-free.
In order to take advantage of these results, we note that \(\mathcal{C}(E_{a,b})\) being cube-free implies the curve \(E_{a,b}\) has no primes of Kodaira symbol \(\mathrm{I}_{0}^{*}\) and \(\mathrm{III}^{*}\). Let \(P=p_{1}\cdots p_{k}\) be a product of distinct primes, and we shall restrict our attention to those curves \(E\in\mathcal{E}_{2}^{*}\) which have Kodaira symbol \(\mathrm{III}\) at each of the primes dividing \(P\), and at no other primes. We then have that \(a=Pu,b=Pv\) for some \(u,v\in\mathbb{Z}\) and
\[\mathcal{C}(E_{a,b})=P^{2}v\left(Pu^{2}-4v\right)\]
Further, the integers \(v\) and \(w=Pu^{2}-4v\) are co-prime to \(P\). Our cube-free condition then implies we that we may express \(v,w\) in the form:
\[v=v_{0}v_{1}^{2},\gcd(v_{0},v_{1})=1,v_{0},v_{1}\text{ square-free} \tag{5.1}\]
and
\[w=Pu^{2}-4v=w_{0}w_{1}^{2},\gcd(w_{0},w_{1})=1,w_{0},w_{1}\text{ square-free}. \tag{5.2}\]
Thus we obtain a quadratic curve in \(\mathbb{P}^{2}\) defined by the equation
\[Pu^{2}=w_{0}w_{1}^{2}+4v_{0}v_{1}^{2}. \tag{5.3}\]
We then see that the conductor is equal to
\[C(E_{a,b})=P^{2}|v_{0}v_{1}w_{0}w_{1}|\]
and
\[\operatorname{ind}(E_{a,b})=\frac{P^{2}|v_{0}v_{1}^{2}w_{0}w_{1}^{2}|}{P^{2}|v _{0}v_{1}w_{0}w_{1}|}=|v_{1}w_{1}|. \tag{5.4}\]
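The decompositions (5.1) and (5.2) amount to splitting a cube-free integer into its square-free and square parts; a small sketch (ours, using SymPy):

```python
from sympy import factorint

def cubefree_split(n):
    # Write cube-free n > 0 as n0 * n1**2 with n0, n1 coprime and
    # square-free, as in (5.1)-(5.2).
    n0 = n1 = 1
    for p, e in factorint(n).items():
        if e == 1:
            n0 *= p
        elif e == 2:
            n1 *= p
        else:
            raise ValueError("n is not cube-free")
    return n0, n1

print(cubefree_split(300))   # 300 = 3 * (2 * 5)**2  ->  (3, 10)
```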
We now prove Theorem 1.3. The proof will follow from Theorem 5.1 below.
We give some further preliminaries before stating Theorem 5.1. Our bound on the conductor is equivalent to
\[|v_{0}v_{1}w_{0}w_{1}|\leq XP^{-2}=Z, \tag{5.5}\]
say. We then restrict \(v_{0},v_{1},v_{0},v_{1}\) into dyadic boxes
\[|v_{0}|\asymp T_{1},|w_{0}|\asymp T_{2},|v_{1}|\asymp T_{3},|w_{1}|\asymp T_{4}. \tag{5.6}\]
Put
\[t_{i}=\frac{\log T_{i}}{\log Z}, \tag{5.7}\]
and by further dividing into dyadic ranges and considering (5.5), we may suppose that
\[t_{1}+t_{2}+t_{3}+t_{4}=1. \tag{5.8}\]
We will be concerned with estimating
(5.9) \[R(T_{1},T_{2},T_{3},T_{4})=\{E_{a,b}:y^{2}=x(x^{2}+ax+b),(\ref{eq:eq:eq:eq:eq:eq: eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eqeq:eq:eqeq:eq:eq:eq:eqeq:eq
:eqeq:eq:eqeq:eqeq:eq:eqeq:eqeq:eqeq:eqeq:eq:eqeq:eq:eqeqeq:eqeq:eqeq:eqeq:eqeq:eqeqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeqeq:eqeqeq:eqeq:eqeq:eqeq:eq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeqeq:eqeqeq:eqeq:eqeqeq:eqeq:eqeq:eqeqeq:eqeqeqeq:eqeq:eqeqeq:eqeq:eqeqeq:eqeqeq:eqeqeq:eqeqeq:eqeqeq:eqeqeq:eqeq:eqeqeq:eqeqeq:eqeqeq:eqeqeq:eqeqeq:eqeqeq:eqeqeq:eqeqeq:eqeqeqeq:eqeqeq:eqeqeq:eqeqeq:eqeqeq:eqeqeq:eqeq:eqeqeqeq:eqeqeq:eqeqeqeq:eqeqeq:eqeqeqeq:eqeqeqeq:eqeq:eqeqeqeq:eqeqeqeq:eqeqeq:eqeqeqeq:eqeqeqeq:eqeqeq:eqeqeq:eqeqeqeq:eqeqeq:eqeqeq:eqeqeqeqeq:eqeqeqeq:eqeqeq:eqeqeqeq:eqeqeqeq:eqeqeq:eqeqeqeq:eqeqeqeq:eqeqeqeq:eqeqeqeq:eqeqeqeqeq:eqeqeqeqeq:eqeqeqeq:eqeqeqeq:eqeqeqeqeqeq:eqeqeqeqeq:eqeqeqeq:eqeqeqeqeq:eqeqeqeqeqeq:eqeqeqeqeq:eqeqeqeq:eqeqeqeq:eqeqeqeq:eqeqeqeqeqeq:eqeqeqeqeqeq:eqeqeqeqeqeq:eqeqeqeqeqeq:eqeqeqeqeqeq:eqeqeqeqeqeqeq:eqeqeqeqeqeq:eqeqeqeqeqeqeq:eqeqeqeqeqeqeq:eqeqeqeqeqeqeq:eqeqeqeqeqeqeqeqeqeq:eqeqeqeqeqeq:eqeqeqeqeqeq:eqeqeqeqeqeqeq:eqeqeqeqeqeqeqeqeq:eqeqeqeqeqeqeq:eqeqeqeqeqeqeqeq:eqeqeqeqeqeqeqeq:
**Proposition 5.4**.: _Suppose that there exists \(\delta_{1}>0\) such that either_
\[t_{1}+t_{2}+t_{3}\leq\frac{3}{4}-\delta_{1}\text{ or }t_{1}+t_{2}+t_{4}\leq\frac{3} {4}-\delta_{1}. \tag{5.11}\]
_Then there exists \(\kappa>0\) such that_
\[R(T_{1},T_{2},T_{3},T_{4})=O_{\kappa}\left(X^{\frac{3}{4}-\kappa}\right).\]
Proof.: Without loss of generality, we may suppose that
\[t_{1}+t_{2}+t_{3}\leq\frac{3}{4}-\delta_{1}.\]
Then we may choose \(v_{0},v_{1},w_{0}\) in \(O(T_{1}T_{2}T_{3})=O\left(X^{3/4-\delta_{1}}\right)\) ways. Having done so, (5.3) becomes
\[Pu^{2}-w_{0}w_{1}^{2}=4v_{0}v_{1}^{2}\]
where \(w_{0},v_{0},v_{1}\) are fixed and \(u,w_{1}\) are bounded by a power of \(X\). The left hand side is a quadratic form, and thus this equation has \(O(\tau(4v_{0}v_{1}^{2})\log X)=O_{\varepsilon}\left(X^{\varepsilon}\right)\) many solutions. Choosing \(\varepsilon=\delta_{1}/2\) we conclude that
\[R(T_{1},T_{2},T_{3},T_{4})=O_{\delta}\left(X^{\frac{3}{4}-\frac{\delta_{1}}{2} }\right),\]
and choosing \(\kappa=\delta_{1}/2\) gives us the desired result.
Suppose now that (5.11) fails, so that for some \(\delta_{2}>0\), which may be taken arbitrarily small, we have
\[t_{1}+t_{2}+t_{3}\geq\frac{3}{4}-\delta_{2}\text{ and }t_{1}+t_{2}+t_{4}\geq \frac{3}{4}-\delta_{2}. \tag{5.12}\]
We now show that fixing \(w_{0},v_{1}\) and treating (5.3) as counting integral points in \(O_{\varepsilon}(X^{\varepsilon})\) lattices of the shape
\[\{(x,y)\in\mathbb{Z}^{2}:x\equiv\omega y\pmod{4v_{1}^{2}},|x|\asymp T_{1}^{1 /2}T_{3},|y|\asymp T_{4}\}\]
will allow us to recover \(u=x,w_{1}=y\) which then gives \(v_{0}=(Pu^{2}-w_{0}w_{1}^{2})/4v_{1}^{2}\). Applying Lemma 5.3 gives the bound
\[O\left(\frac{T_{1}^{1/2}T_{3}T_{4}}{T_{3}^{2}}+1\right)=O\left(\frac{T_{1}^{1 /2}T_{4}}{T_{3}}+1\right).\]
Summing over \(|w_{0}|\asymp T_{2},|v_{1}|\asymp T_{3}\) gives
\[O\left(T_{1}^{1/2}T_{2}T_{4}+T_{2}T_{3}\right). \tag{5.13}\]
Symmetrically, we obtain the bound
\[O\left(T_{2}^{1/2}T_{1}T_{3}+T_{1}T_{4}\right).\]
This provides a satisfactory bound if for some \(\delta_{2}>0\) we have
\[\frac{t_{1}}{2}+t_{2}+t_{4}<\frac{3}{4}-\delta_{2}\text{ and }t_{2}+t_{3}< \frac{3}{4}-\delta_{2} \tag{5.14}\]
or
\[\frac{t_{2}}{2}+t_{1}+t_{3}<\frac{3}{4}-\delta_{2}\text{ and }t_{1}+t_{4}< \frac{3}{4}-\delta_{2}. \tag{5.15}\]
We summarize this as:
**Proposition 5.5**.: _Suppose that \(t_{1},t_{2},t_{3},t_{4}\) satisfy (5.14) or (5.15). Then_
\[R(T_{1},T_{2},T_{3},T_{4})=O_{\delta_{2},\varepsilon}\left(X^{3/4-\delta_{2}+ \varepsilon}\right).\]
Next let us analyze how both (5.14) and (5.15) can fail for every \(\delta_{2}>0\). First note that it is not possible for
\[t_{2}+t_{3}\geq\frac{3}{4}-\delta_{2}\text{ and }t_{1}+t_{4}\geq\frac{3}{4}- \delta_{2}\]
if \(\delta_{2}\) is sufficiently small, since \(t_{1}+t_{2}+t_{3}+t_{4}=1\) by (5.8). Therefore, upon assuming \(\delta_{2}\) is arbitrarily small, we may suppose that
\[\frac{t_{1}}{2}+t_{2}+t_{4}\geq\frac{3}{4}-\delta_{2}\text{ and }\frac{t_{2}}{2}+t_{1}+t_{3} \geq\frac{3}{4}-\delta_{2}\]
or
\[\frac{t_{1}}{2}+t_{2}+t_{4}\geq\frac{3}{4}-\delta_{2}\text{ and }t_{1}+t_{4}\geq \frac{3}{4}-\delta_{2},\]
and a symmetric case which is equivalent to the line above. In the first case we obtain, by summing the two inequalities,
\[\frac{3}{2}(t_{1}+t_{2})+t_{3}+t_{4}\geq\frac{3}{2}-2\delta_{2},\]
which implies that
\[t_{1}+t_{2}\geq 1-4\delta_{2}.\]
This then implies \(t_{3}+t_{4}\leq 4\delta_{2}\), so such tuples are excluded from the theorem as long as \(\delta_{2}\) is chosen sufficiently small. In the second case we have
\[\frac{t_{1}}{2}+t_{2}+t_{4}\geq\frac{3}{4}-\delta_{2}\text{ and }t_{1}+t_{4}\geq \frac{3}{4}-\delta_{2}. \tag{5.16}\]
By (5.8), the latter inequality implies
\[t_{2}+t_{3}\leq\frac{1}{4}+\delta_{2},\]
and therefore (5.12) gives
\[t_{1}\geq\frac{1}{2}-2\delta_{2}.\]
We note that the condition
\[\frac{t_{1}}{2}+t_{2}+t_{4}\geq\frac{3}{4}-\delta_{2} \tag{5.17}\]
and (5.8) imply that \(t_{1}\leq 1/2+2\delta_{2}\). To see this, by (5.8) we have
\[t_{2}+t_{4}\leq 1-t_{1}.\]
If \(t_{1}>1/2+2\delta_{2}\), then in turn we find that
\[\frac{t_{1}}{2}+t_{2}+t_{4} \leq\frac{t_{1}}{2}+1-t_{1}\] \[=1-\frac{t_{1}}{2}<1-\frac{1}{4}-\delta_{2}\] \[=\frac{3}{4}-\delta_{2}.\]
This in turn violates (5.17). We thus conclude from (5.17) that
\[\frac{1}{2}-2\delta_{2}\leq t_{1}\leq\frac{1}{2}+2\delta_{2}\text{ and }\frac{1}{2}-\delta_{2}\leq t_{2}+t_{4}\leq\frac{1}{2}+2\delta_{2}. \tag{5.18}\]
But then (5.16) implies that
\[t_{4}\geq\frac{3}{4}-t_{1}-\delta_{2}\geq\frac{1}{4}-3\delta_{2},\]
thus (5.18) implies that
\[t_{2}\leq\frac{1}{2}+2\delta_{2}-t_{4}\leq\frac{1}{4}+5\delta_{2}. \tag{5.19}\]
Feeding these estimates back into (5.8) gives
\[t_{3} =1-t_{1}-t_{2}-t_{4}\] \[\leq 1-\left(\frac{1}{2}-2\delta_{2}\right)-\left(\frac{1}{4}-5\delta_{2}\right)-\left(\frac{1}{4}-3\delta_{2}\right)\] \[=10\delta_{2}.\]
Next (5.12) implies that
\[t_{2} \geq\frac{3}{4}-t_{1}-t_{3}\] \[\geq\frac{3}{4}-\frac{1}{2}-2\delta_{2}-10\delta_{2}\] \[=\frac{1}{4}-12\delta_{2}\]
and feeding this back into (5.18) gives
\[t_{4} \leq\frac{1}{2}+2\delta_{2}-t_{2}\] \[\leq\frac{1}{2}-\frac{1}{4}+14\delta_{2}\] \[=\frac{1}{4}+14\delta_{2}.\]
Now observe that
\[\frac{t_{1}}{2}+t_{3}\leq\frac{1}{4}+11\delta_{2}\]
and
\[\frac{t_{2}}{2}+t_{4}\geq\frac{1}{8}-6\delta_{2}+\frac{1}{4}-3\delta_{2}= \frac{3}{8}-9\delta_{2}.\]
Hence we have
\[\frac{t_{1}}{2}+t_{3}\leq\frac{t_{2}}{2}+t_{4}, \tag{5.20}\]
provided again that we choose \(\delta_{2}\) sufficiently small. Further, we have
\[\frac{t_{2}}{2}+t_{4}\leq\frac{1}{8}+\frac{5\delta_{2}}{2}+\frac{1}{4}+14 \delta_{2}\leq\frac{t_{1}}{2}+t_{3}+\frac{1}{8}+\frac{33\delta_{2}}{2}. \tag{5.21}\]
We summarize this as:
**Proposition 5.6**.: _Suppose that (5.12) and (5.16) hold. Then_
\[\frac{t_{1}}{2}+t_{3}\leq\frac{t_{2}}{2}+t_{4}\leq\frac{1}{8}+\frac{33\delta_ {2}}{2}+\frac{t_{1}}{2}+t_{3}.\]
We now seek to apply Corollary 2 in [5] to (5.3) with the variables satisfying (5.6). We have a bound of the form
\[\left(\frac{T_{3}T_{4}\max\{T_{1}^{1/2}T_{3},T_{2}^{1/2}T_{4}\}}{T_{1}T_{2}} \right)^{1/3}\tau(v_{0}v_{1}) \tag{5.22}\]
Since (5.21) implies
\[T_{1}T_{3}^{2}<T_{2}T_{4}^{2}\leq Z^{\frac{1}{4}+33\delta_{2}}T_{1}T_{3}^{2},\]
equation (5.22) gives the bound
\[\left(\frac{(Z/T_{1}T_{2})T_{1}^{1/4}T_{3}^{1/2}T_{2}^{1/4}T_{4}^{1/2}Z^{\frac{1}{16}+\frac{33\delta_{2}}{4}}}{T_{1}T_{2}}\right)^{1/3}Z^{\varepsilon}\]
for any \(\varepsilon>0\). Using
\[T_{3}T_{4}\ll\frac{Z}{T_{1}T_{2}}\]
we obtain
\[\left(\frac{Z^{\frac{3}{2}+\frac{1}{16}+\frac{33\delta_{2}}{4}}}{(T_{1}T_{2})^{9/4}}\right)^{1/3}Z^{\varepsilon}=\frac{Z^{\frac{25}{48}+\frac{11\delta_{2}}{4}}}{(T_{1}T_{2})^{3/4}}Z^{\varepsilon}.\]
Multiplying by \(T_{1}T_{2}\) we obtain
\[Z^{\frac{25}{48}+\frac{11\delta_{2}}{4}}(T_{1}T_{2})^{1/4}Z^{\varepsilon}.\]
By (5.18) and (5.19) we have
\[(T_{1}T_{2})^{1/4}\ll Z^{\frac{1}{4}\left(\frac{1}{2}+2\delta_{2}+\frac{1}{4}+2\delta_{2}\right)}=Z^{\frac{3}{16}+\delta_{2}}.\]
Therefore the total contribution is at most
\[O_{\varepsilon}\left(Z^{\frac{17}{24}+\varepsilon}\right), \tag{5.23}\]
by choosing \(\delta_{2}\) sufficiently small with respect to \(\varepsilon\). This completes the proof of Theorem 5.1.
We remark that the estimate (5.23) gives us more room than we need, and this can be used to good effect to control the number of curves with large Szpiro ratio.
## 6. Curves with large Szpiro constant
Recall that for an elliptic curve \(E\), the _Szpiro ratio_ is defined to be
\[\beta_{E}=\frac{\log|\Delta(E)|}{\log C(E)}. \tag{6.1}\]
Our goal in this section is to count curves in \(\mathcal{E}_{2}\) having bounded conductor and Szpiro constant as large as possible. To do so, we follow the strategy in [11] and fix sets of primes \(\mathcal{P}_{\mathbb{I}_{0}^{*}},\mathcal{P}_{\text{III}},\mathcal{P}_{\text{ III}^{*}}\) and consider curves having Kodaira symbol \(\mathbb{I}_{0}^{*},\text{III},\text{III}^{*}\) at these primes respectively. Next we consider a set \(\Sigma\) consisting of a finite set of prime powers \(p_{i}^{k_{i}}\) for \(1\leq i\leq m\) and \(q_{j}^{\ell_{j}},1\leq j\leq n\). We then consider the set of curves \(\mathcal{E}(\Sigma)\) satisfying the property that \(N(\Sigma)=p_{1}^{k_{1}}\cdots p_{m}^{k_{m}}q_{1}^{\ell_{1}}\cdots q_{n}^{\ell _{n}}\) exactly divides \(\mathcal{C}(E)\), and \(\mathcal{C}(E)\) is square-free away from \(\mathcal{P}_{\mathbb{I}_{0}^{*}},\mathcal{P}_{\text{III}},\mathcal{P}_{\text{ III}^{*}},\Sigma\). It then follows that \(P_{\mathbb{I}_{0}^{*}}^{2}P_{\text{III}}^{2}P_{\text{III}^{*}}^{2}p_{1}\cdots p _{m}q_{1}\cdots q_{n}\) exactly divides \(C(E)\).
We put
\[Q_{1}=\prod_{j=1}^{m}p_{j}^{k_{j}-1}\text{ and }Q_{2}=\prod_{j=1}^{n}q_{j}^{ \ell_{j}-1}.\]
and
\[P_{1}=p_{1}\cdots p_{m}\text{ and }P_{2}=q_{1}\cdots q_{n}.\]
We then count the number of elements in
\[\mathcal{E}_{2}(\mathcal{P},\Sigma)(X)=\{E_{a,b}:C(E_{a,b})\leq X,\gcd(a,b)=1,P_{1}Q_{1}||b,P_{2}Q_{2}||a^{2}-4b,\]
\[\mathcal{C}(E_{a,b})/(P_{\mathbb{I}_{0}^{*}}^{2}P_{\text{III}}^{2}P_{\text{III }^{*}}^{2}P_{1}P_{2}Q_{1}Q_{2})\text{ is square-free}\}.\]
If we set \(c=a^{2}-4b\), then \(\mathcal{C}(E_{a,b})=|bc|\), just like in the previous subsection. Our condition of \(C(E_{a,b})\leq X\) then implies that
\[|\mathcal{C}(E_{a,b})|=|b(a^{2}-4b)|\leq P_{\mathbb{I}_{0}^{*}}^{2}P_{\text{ III}^{*}}^{4}XQ_{1}Q_{2}. \tag{6.2}\]
If we write
\[C^{\prime}(E_{a,b})=\frac{C(E_{a,b})}{P_{\mathbb{I}_{0}^{*}}^{2}P_{\text{III}} ^{2}P_{\text{III}^{*}}^{2}}\text{ and }\mathcal{C}^{\prime}(E_{a,b})=\frac{\mathcal{C}(E_{a,b})}{P_{ \mathbb{I}_{0}^{*}}^{4}P_{\text{III}}^{2}P_{\text{III}^{*}}^{6}}\]
then we can recast (6.2) as
\[|\mathcal{C}^{\prime}(E_{a,b})|\leq\frac{XQ_{1}Q_{2}}{P_{\mathbb{I}_{0}^{*}}^ {2}P_{\text{III}}^{2}P_{\text{III}^{*}}^{2}}. \tag{6.3}\]
Further, we can replace the variables \(b,c\) with \(P_{\mathbb{I}_{0}^{*}}^{2}P_{\text{III}}P_{\text{III}^{*}}^{3}b,P_{\mathbb{I}_ {0}^{*}}^{2}P_{\text{III}}P_{\text{III}^{*}}^{3}c\) which gives the condition
\[|b(P_{\mathbb{I}_{0}^{*}}P_{\text{III}^{*}}a^{2}-4b)|\leq\frac{XQ_{1}Q_{2}}{P_{\mathbb{I}_{0}^{*}}^{2}P_{\text{III}}^{2}P_{\text{III}^{*}}^{2}}.\]
Put
\[Y=\frac{X}{P_{\mathbb{I}_{0}^{*}}^{2}P_{\text{III}}^{2}P_{\text{III}^{*}}^{2}}.\]
For the purposes of estimating the error term, we may use the fact that elements in \(\mathcal{E}_{2}\) are naturally connected by a rational 2-isogeny which maps
\[E_{a,b}\mapsto E_{-2a,a^{2}-4b}, \tag{6.4}\]
which preserves the conductor and essentially swaps the roles of \(b,c\).
Now put
\[b=P_{1}Q_{1}u\text{ and }c=P_{2}Q_{2}v,\]
where \(u,v\) are square-free integers co-prime to \(P_{1},P_{2}\) respectively. It follows that our bound condition becomes
\[|uv|\leq\frac{Q_{1}Q_{2}Y}{P_{1}Q_{1}P_{2}Q_{2}}=\frac{Y}{P_{1}P_{2}}. \tag{6.5}\]
Using the symmetry between \(u,v\), we may assume that \(|u|\leq|v|\) and therefore \(|u|\leq\sqrt{Y/P_{1}P_{2}}\).
Then we are left to estimate the sum
\[N(\Sigma)(X)=\sum_{|u|\leq\sqrt{Y/P_{1}P_{2}}}\mathcal{N}_{u}\left(\frac{Q_{2}Y} {P_{1}|u|}\right), \tag{6.6}\]
where \(\mathcal{N}_{u}(Z)\) is the number of solutions to the inequality
\[|P_{\mathbb{I}_{0}^{*}}P_{\text{III}}\cdot a^{2}-4P_{1}Q_{1}u|\leq Z\text{ subject to }P_{\mathbb{I}_{0}^{*}}P_{\text{III}}\cdot a^{2}-4P_{1}Q_{1}u\equiv 0\pmod{P_{2}Q_{2}}.\]
We estimate \(\mathcal{N}_{u}(Q_{2}YP_{1}^{-1}|u|^{-1})\) in several different ways, depending on the relative sizes of the quantities involved.
First note that
\[P_{1}Q_{1}\ll P_{2}Q_{2}\]
is equivalent to
\[\frac{Y^{1/2}}{(P_{1}P_{2})^{1/2}}\ll\frac{Q_{2}^{1/2}Y^{1/2}}{P_{1}Q_{1}^{1/ 2}}.\]
Further we have
\[P_{1}Q_{1}|u|\ll\frac{Q_{2}Y}{P_{1}|u|}. \tag{6.7}\]
Thus \(a\) is constrained to a union of intervals of total length \(O(\sqrt{Q_{2}Y/P_{1}|u|})\). It follows that
\[\mathcal{N}_{u}\left(\frac{Q_{2}Y}{P_{1}|u|}\right)=O\left(\sqrt{\frac{Q_{2}Y} {P_{1}|u|}}\frac{1}{P_{2}Q_{2}}+1\right).\]
Next we consider the possibility that
\[\frac{\sqrt{Y}}{P_{2}\sqrt{P_{1}Q_{2}|u|}}\ll 1.\]
This is equivalent to
\[|u|\gg\frac{Y}{P_{2}^{2}P_{1}Q_{2}}.\]
This is only relevant if
\[\frac{Y}{P_{2}^{2}P_{1}Q_{2}}\ll\sqrt{\frac{Y}{P_{1}P_{2}}}.\]
This condition implies
\[Y\ll P_{2}^{3}P_{1}Q_{2}^{2}=(P_{1}P_{2})(P_{2}Q_{2})^{2}.\]
If this holds then we obtain the estimate
\[\sum_{|u|\leq\sqrt{Y/P_{1}P_{2}}}\mathcal{N}_{u}\left(\frac{Q_{2}Y }{P_{1}|u|}\right) \ll\sqrt{\frac{Y}{P_{1}P_{2}}}+\sum_{|u|\leq Y/(P_{1}P_{2})(P_{2} Q_{2})}\frac{Y^{1/2}}{P_{2}(P_{1}Q_{2}|u|)^{1/2}}\] \[=\frac{Y^{1/2}}{(P_{1}P_{2})^{1/2}}+\frac{Y}{(P_{1}P_{2})(P_{2}Q_ {2})}. \tag{6.8}\]
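For clarity, the last term follows from the elementary estimate \(\sum_{1\leq|u|\leq U}|u|^{-1/2}\ll U^{1/2}\) applied with \(U=Y/((P_{1}P_{2})(P_{2}Q_{2}))\), since

\[\frac{Y^{1/2}}{P_{2}(P_{1}Q_{2})^{1/2}}\cdot\left(\frac{Y}{(P_{1}P_{2})(P_{2}Q_{2})}\right)^{1/2}=\frac{Y}{(P_{1}P_{2})(P_{2}Q_{2})}.\]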
If on the other hand
\[(P_{1}P_{2})(P_{2}Q_{2})^{2}\ll Y,\]
then we obtain
\[\sum_{|u|\leq\sqrt{Y/P_{1}P_{2}}}\mathcal{N}_{u}\left(\frac{Q_{2} Y}{P_{1}|u|}\right) \ll\sqrt{\frac{Y}{P_{1}P_{2}}}+\sum_{|u|\leq\sqrt{Y/P_{1}P_{2}}} \frac{Y^{1/2}}{P_{2}(P_{1}Q_{2}|u|)^{1/2}}\] \[=\frac{Y^{1/2}}{(P_{1}P_{2})^{1/2}}+\frac{Y^{3/4}}{(P_{1}P_{2})^ {3/4}(P_{2}Q_{2})^{1/2}}.\]
If
\[P_{1}Q_{1}\gg P_{2}Q_{2}\]
then
\[\frac{Q_{2}^{1/2}Y^{1/2}}{P_{1}Q_{1}^{1/2}}\ll\frac{Y^{1/2}}{(P_{1}P_{2})^{1/2}}.\]
In the range
\[1\leq|u|\ll\frac{Q_{2}^{1/2}Y^{1/2}}{P_{1}Q_{1}^{1/2}}\]
one has to consider the possibility, as above, that
\[\frac{Y}{(P_{1}P_{2})(P_{2}Q_{2})}\ll\frac{Q_{2}^{1/2}Y^{1/2}}{P_{1}Q_{1}^{1/2}}.\]
This is equivalent to
\[Y\ll\frac{Q_{2}}{Q_{1}}(P_{2}Q_{2})^{2}.\]
This again gives the bound (6.8). Otherwise
\[\sum_{|u|\leq Q_{2}^{1/2}Y^{1/2}/P_{1}Q_{1}^{1/2}}\mathcal{N}_{u} \left(\frac{Q_{2}Y}{P_{1}|u|}\right) \ll\sqrt{\frac{Y}{P_{1}P_{2}}}+\sum_{|u|\leq Q_{2}^{1/2}Y^{1/2}/P_ {1}Q_{1}^{1/2}}\frac{Y^{1/2}}{P_{2}(P_{1}Q_{2}|u|)^{1/2}}\] \[=\frac{Y^{1/2}}{(P_{1}P_{2})^{1/2}}+\frac{Y^{3/4}}{(P_{1}P_{2})(P _{2}Q_{2})^{1/4}}.\]
Consider finally the range
\[\frac{Q_{2}^{1/2}Y^{1/2}}{P_{1}Q_{1}^{1/2}}\ll|u|\leq\frac{Y^{1/2}}{(P_{1}P_{2 })^{1/2}}. \tag{6.9}\]
Then \(a^{2}\) is constrained by
\[4P_{1}Q_{1}|u|-\frac{Q_{2}Y}{P_{1}|u|}\leq a^{2}\leq 4P_{1}Q_{1}|u|+\frac{Q_{2}Y} {P_{1}|u|},\]
so \(a\) is constrained in two intervals of length
\[O\left(\frac{Q_{2}Y/(P_{1}|u|)}{\sqrt{P_{1}Q_{1}|u|}}\right)=O\left(\frac{Q_{2 }Y}{(P_{1}|u|)^{3/2}Q_{1}^{1/2}}\right).\]
Dividing by \(P_{2}Q_{2}\) to account for the congruence, this means that for a fixed \(u\) satisfying (6.9), we have
\[\mathcal{N}_{u}\left(\frac{Q_{2}Y}{P_{1}|u|}\right)=O\left(\frac{Y}{Q_{1}^{1/2}P_{2}(P_{1}|u|)^{3/2}}+1\right).\]
Summing over the range (6.9) then gives the bound
\[O\left(\frac{Y^{3/4}}{(Q_{1}Q_{2})^{1/4}P_{1}P_{2}}+\frac{Y^{1/2}}{(P_{1}P_{2})^{1/2}}\right).\]
To summarize, we obtain the bound
\[\sum_{|u|\leq\sqrt{Y/P_{1}P_{2}}}\mathcal{N}_{u}\left(\frac{Q_{2}Y}{P_{1}|u|}\right)\ll\frac{Y^{1/2}}{(P_{1}P_{2})^{1/2}}+ \tag{6.10}\]
\[\begin{cases}\frac{Y}{(P_{1}P_{2})(P_{2}Q_{2})}&\text{if $P_{1}Q_{1}\ll P_{2}Q_{2}$ and $Y\ll(P_{1}P_{2})(P_{2}Q_{2})^{2}$}\\ \\ \frac{Y^{3/4}}{(P_{1}P_{2})^{3/4}(P_{2}Q_{2})^{1/2}}&\text{if $P_{1}Q_{1}\ll P_{2}Q_{2}$ and $Y \gg(P_{1}P_{2})(P_{2}Q_{2})^{2}$}\\ \\ \frac{Y}{(P_{1}P_{2})(P_{2}Q_{2})}+\frac{Y^{3/4}}{P_{1}P_{2}(Q_{1}Q_{2})^{1/4} }&\text{if $P_{2}Q_{2}\ll P_{1}Q_{1}$ and $Y\ll Q_{2}Q_{1}^{-1}(P_{2}Q_{2})^{2}$}\\ \\ \frac{Y^{3/4}}{(P_{1}P_{2})(P_{2}Q_{2})^{1/4}}+\frac{Y^{3/4}}{(P_{1}P_{2})(Q_{1} Q_{2})^{1/4}}&\text{if $P_{2}Q_{2}\ll P_{1}Q_{1}$ and $Y\gg Q_{2}Q_{1}^{-1}(P_{2}Q_{2})^{2}$}.\end{cases}\]
In the complementary range \(|v|\leq\sqrt{Y/P_{1}P_{2}}\), symmetry gives us the same bounds but with the roles of \(P_{1}Q_{1}\) and \(P_{2}Q_{2}\) reversed. That is
\[\sum_{|v|\leq\sqrt{Y/P_{1}P_{2}}}\mathcal{N}_{v}\left(\frac{Q_{1}Y}{P_{2}|v|}\right)\ll\frac{Y^{1/2}}{(P_{1}P_{2})^{1/2}}+ \tag{6.11}\]
\[\begin{cases}\frac{Y}{(P_{1}P_{2})(P_{1}Q_{1})}&\text{if $P_{2}Q_{2}\ll P_{1}Q_{1}$ and $Y\ll(P_{1}P_{2})(P_{1}Q_{1})^{2}$}\\ \\ \frac{Y^{3/4}}{(P_{1}P_{2})^{3/4}(P_{1}Q_{1})^{1/2}}&\text{if $P_{2}Q_{2}\ll P _{1}Q_{1}$ and $Y\gg(P_{1}P_{2})(P_{1}Q_{1})^{2}$}\\ \\ \frac{Y}{(P_{1}P_{2})(P_{1}Q_{1})}+\frac{Y^{3/4}}{P_{1}P_{2}(Q_{1}Q_{2})^{1/4} }&\text{if $P_{1}Q_{1}\ll P_{2}Q_{2}$ and $Y\ll Q_{1}Q_{2}^{-1}(P_{1}Q_{1})^{2}$}\\ \\ \frac{Y^{3/4}}{(P_{1}P_{2})(P_{1}Q_{1})^{1/4}}+\frac{Y^{3/4}}{(P_{1}P_{2})(Q_{1 }Q_{2})^{1/4}}&\text{if $P_{1}Q_{1}\ll P_{2}Q_{2}$ and $Y\gg Q_{1}Q_{2}^{-1}(P_{1}Q_{1})^{2}$}.\end{cases}\]
Note that we can determine \(Q_{1},Q_{2}\) given \(P_{1},P_{2}\) in \(O_{\varepsilon}\left(X^{\varepsilon}\right)\) ways. We may then restrict \(P_{1},P_{2}\) into intervals \([T_{1},2T_{1}),[T_{2},2T_{2})\) respectively. Let \(\mathcal{E}_{2}(\Sigma;T_{1},T_{2})\) denote the set of curves with \(P_{1},P_{2}\) in that range.
Now suppose that
\[Y^{\delta}\ll_{\delta}T_{1}T_{2}\ll_{\delta}Y^{1/2-\delta} \tag{6.12}\]
for some \(\delta>0\). The condition
\[(P_{2}Q_{2})^{2}(P_{1}P_{2})\gg Y\text{ implies that }P_{2}Q_{2}\gg\frac{Y^{1/2}}{(P _{1}P_{2})^{1/2}},\]
hence
\[\frac{Y}{(P_{1}P_{2})(P_{2}Q_{2})}\ll\frac{Y^{1/2}}{(P_{1}P_{2})^{1/2}}.\]
Further, \(P_{2}Q_{2}\geq P_{1}Q_{1}\) and \(Q_{i}\geq P_{i}\) for \(i=1,2\) imply that
\[P_{2}Q_{2}\geq\sqrt{P_{1}Q_{1}P_{2}Q_{2}}=\sqrt{Q_{1}Q_{2}}\cdot\sqrt{P_{1}P_{ 2}}\geq P_{1}P_{2}.\]
It follows that
\[\frac{Y^{3/4}}{(P_{1}P_{2})^{3/4}(P_{2}Q_{2})^{1/2}}\leq\frac{Y^{3/4}}{(P_{1}P_ {2})^{5/4}}.\]
Thus, the first two lines of (6.10) can be replaced with
\[\frac{Y^{3/4}}{(P_{1}P_{2})^{5/4}}+\frac{Y^{1/2}}{(P_{1}P_{2})^{1/2}}.\]
If
\[P_{1}Q_{1}\ll_{\delta}Y^{1/3-\delta}\]
then certainly \(Q_{1}\ll_{\delta}Y^{1/3-\delta}\), and therefore
\[Q_{1}Q_{2}^{-1}(P_{1}Q_{1})^{2}\ll_{\delta}Y^{1-3\delta}.\]
Thus \((P_{1}Q_{1})^{2}Q_{1}Q_{2}^{-1}\gg Y\) implies that
\[P_{1}Q_{1}\gg_{\delta}Y^{1/3-\delta}\]
for any \(\delta>0\). The third line of (6.11) can then be replaced with
\[\frac{Y^{2/3+\delta}}{P_{1}P_{2}}+\frac{Y^{3/4}}{(P_{1}P_{2})^{5/4}}.\]
Finally, we obtain the bound
\[|\mathcal{E}_{2}(\Sigma;T_{1},T_{2})|\ll_{\delta}\frac{Y^{3/4}}{(T_{1}T_{2})^ {1/4}}+(T_{1}T_{2}Y)^{1/2}+Y^{2/3-\delta}. \tag{6.13}\]
If (6.12) holds then we obtain the bound
\[|\mathcal{E}_{2}(\Sigma;T_{1},T_{2})|\ll_{\delta}X^{\frac{3-\delta}{4}}\]
since \(T_{1}T_{2}\ll Y^{1/2-\delta}\ll_{\delta}X^{1/2-\delta}\). Next we show that the number of curves with
\[T_{1}T_{2}\gg Y^{1/2-\delta}\text{ and }\beta_{E}\leq\frac{9}{4}-2\delta \tag{6.14}\]
is negligible. We first put
\[\alpha_{i}=\frac{\log P_{i}}{\log X}\text{ for }i=1,2, \tag{6.15}\]
\[\beta_{i}=\frac{\log Q_{i}}{\log X}\text{ for }i=1,2, \tag{6.16}\]
and
\[\upsilon=\frac{\log|u|}{\log X}\text{ and }\nu=\frac{\log|v|}{\log X}.\]
Further put
\[\gamma_{\mathbb{I}_{0}^{*}}=\frac{\log P_{\mathbb{I}_{0}^{*}}}{\log X},\ \gamma_{\text{III}}=\frac{\log P_{\text{III}}}{\log X},\text{ and }\gamma_{\text{III}^{*}}=\frac{\log P_{\text{III}^{*}}}{\log X}.\]
Then the average Szpiro ratio of the pair \(E_{a,b},E_{-2a,a^{2}-4b}\) is
\[\frac{\beta_{E}+\beta_{\phi(E)}}{2} =\frac{\log|\Delta(E_{a,b})|+\log|\Delta(E_{-2a,a^{2}-4b})|}{2\log C(E)}\] \[=\frac{6(2\gamma_{\mathbb{I}_{0}^{*}}+\gamma_{\text{III}}+3\gamma_{\text{III}^{*}})+3(\alpha_{1}+\beta_{1}+\upsilon+\alpha_{2}+\beta_{2}+\nu)}{2(2(\gamma_{\mathbb{I}_{0}^{*}}+\gamma_{\text{III}}+\gamma_{\text{III}^{*}})+\alpha_{1}+\upsilon+\alpha_{2}+\nu)}\] \[=1+\frac{8\gamma_{\mathbb{I}_{0}^{*}}+2\gamma_{\text{III}}+14\gamma_{\text{III}^{*}}+\alpha_{1}+3\beta_{1}+\upsilon+\alpha_{2}+3\beta_{2}+\nu}{2(2(\gamma_{\mathbb{I}_{0}^{*}}+\gamma_{\text{III}}+\gamma_{\text{III}^{*}})+\alpha_{1}+\upsilon+\alpha_{2}+\nu)}. \tag{6.17}\]
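For the last equality, note that subtracting the denominator from the numerator leaves

\[6(2\gamma_{\mathbb{I}_{0}^{*}}+\gamma_{\text{III}}+3\gamma_{\text{III}^{*}})-4(\gamma_{\mathbb{I}_{0}^{*}}+\gamma_{\text{III}}+\gamma_{\text{III}^{*}})+(3-2)(\alpha_{1}+\upsilon+\alpha_{2}+\nu)+3(\beta_{1}+\beta_{2})=8\gamma_{\mathbb{I}_{0}^{*}}+2\gamma_{\text{III}}+14\gamma_{\text{III}^{*}}+\alpha_{1}+3\beta_{1}+\upsilon+\alpha_{2}+3\beta_{2}+\nu.\]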
By breaking into dyadic ranges we may assume that
\[2(\gamma_{\mathfrak{l}_{0}^{*}}+\gamma_{\text{III}}+\gamma_{\text{III}^{*}})+ \alpha_{1}+\upsilon+\alpha_{2}+\nu=1. \tag{6.18}\]
Since we are looking to minimize the average of the Szpiro ratios of the curve \(E_{a,b}\) as well as its isogenous cousin \(E_{-2a,a^{2}-4b}\), we obtain the optimization problem
\[\begin{array}{cl}\min&8\gamma_{\mathbb{I}_{0}^{*}}+2\gamma_{\text{III}}+14\gamma_{\text{III}^{*}}+\alpha_{1}+\alpha_{2}+3\beta_{1}+3\beta_{2}+\upsilon+\nu\\ \text{subject to}&2(\gamma_{\mathbb{I}_{0}^{*}}+\gamma_{\text{III}}+\gamma_{\text{III}^{*}})+\alpha_{1}+\upsilon+\alpha_{2}+\nu=1\\ &\alpha_{1}+\alpha_{2}\geq\frac{1}{2}-\delta\\ &\alpha_{1}+\beta_{1}\geq\alpha_{2}+\beta_{2}\\ &\beta_{2}\geq\alpha_{2}\text{ and }\beta_{1}\geq\alpha_{1}.\end{array}\]
Using a linear program solver, we find that the above linear program has an optimal value of \(5/2-\kappa(\delta)\) for some number \(\kappa\) depending on \(\delta\), so in particular choosing \(\delta\) arbitrarily small the optimal value of the linear program above approaches \(5/2\). Therefore, for all but a negligible number of curves the condition \(P_{1}P_{2}\gg X^{1/2-\delta}\) implies that the Szpiro ratio is greater than \(9/4-\kappa(\delta)\).
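For instance, the value \(5/2\) can be reproduced with an off-the-shelf solver; a minimal SciPy sketch (the variable ordering \((\gamma_{\mathbb{I}_{0}^{*}},\gamma_{\text{III}},\gamma_{\text{III}^{*}},\alpha_{1},\alpha_{2},\beta_{1},\beta_{2},\upsilon,\nu)\) and the boundary case \(\delta=0\) are our choices) is:

```python
import numpy as np
from scipy.optimize import linprog

# Objective: 8*g0 + 2*gIII + 14*gIIIs + a1 + a2 + 3*b1 + 3*b2 + up + nu
c = np.array([8, 2, 14, 1, 1, 3, 3, 1, 1], dtype=float)

delta = 0.0  # boundary case; any small delta > 0 perturbs the optimum by O(delta)

# Equality: 2*(g0 + gIII + gIIIs) + a1 + up + a2 + nu = 1
A_eq = [[2, 2, 2, 1, 1, 0, 0, 1, 1]]
b_eq = [1.0]

# Inequalities rewritten in the form A_ub @ x <= b_ub:
#   a1 + a2 >= 1/2 - delta,  a1 + b1 >= a2 + b2,  b2 >= a2,  b1 >= a1
A_ub = [
    [0, 0, 0, -1, -1,  0,  0, 0, 0],
    [0, 0, 0, -1,  1, -1,  1, 0, 0],
    [0, 0, 0,  0,  1,  0, -1, 0, 0],
    [0, 0, 0,  1,  0, -1,  0, 0, 0],
]
b_ub = [-(0.5 - delta), 0.0, 0.0, 0.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 9)
print(res.fun)  # 2.5, i.e. 5/2, attained near alpha_i = beta_i = 1/4
```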
To push beyond the \(9/4\) barrier, we note that the \(9/4\) barrier comes from \(\alpha_{1},\alpha_{2},\beta_{1},\beta_{2}\) all nearly equal to \(1/4\). This means that the curves at the boundary all satisfy \(P_{i}\asymp Q_{i}\), so most of the exponents in \(Q_{i}\) are equal to one. To measure this, we impose the condition that
\[Q_{i}\leq P_{i}X^{\kappa}\text{ for }i=1,2. \tag{6.20}\]
That is, the primes of multiplicative bad reduction with higher multiplicity mostly divide the discriminant at most twice. We shall prove the following proposition:
**Proposition 6.1**.: _Let \(\mathcal{E}_{2}^{+}(X)\) be the subset of elliptic curves in \(\mathcal{E}_{2}\) consisting of those curves with conductor bounded by \(X\) and_
\[\beta_{i}<\alpha_{i}+\frac{1}{102},\]
_where \(\alpha_{i},\beta_{i}\) are defined as in (6.15) and (6.16) respectively. Then there exists \(\kappa>0\) satisfying_
\[|\mathcal{E}_{2}^{+}(X)|=O_{\kappa}\left(X^{\frac{3}{4}-\kappa}\right).\]
Proof.: Put \(R_{1}\) (resp. \(R_{2}\)) for the product of all primes \(p\) such that \(p^{2}\,||\,b\) (resp. \(p^{2}\,||\,c\)), and put \(S_{i}\) so that \(P_{i}Q_{i}=R_{i}^{2}S_{i}\) for \(i=1,2\). Then (6.20) implies that we have
\[b=R_{1}^{2}S_{1}u,c=R_{2}^{2}S_{2}v. \tag{6.21}\]
Note that
\[S_{i}=\frac{P_{i}Q_{i}}{R_{i}^{2}}\leq\left(\frac{P_{i}}{R_{i}}\right)^{2}X^{\kappa}\]
by (6.20). Now observe that \(P_{i}/R_{i}\) is bounded by \(Q_{i}/P_{i}\leq X^{\kappa}\), hence
\[S_{i}\leq X^{3\kappa}\text{ for }i=1,2. \tag{6.22}\]
Therefore those curves captured by (6.20) satisfy the equation
\[c=a^{2}-4b\Rightarrow S_{2}R_{2}^{2}v=a^{2}-4S_{1}R_{1}^{2}u\]
with \(S_{1},S_{2}\leq X^{3\kappa}\). The height condition on the conductor implies
\[|uv|\operatorname{rad}(S_{1}S_{2})R_{1}R_{2}\leq X.\]
Note that \(\operatorname{rad}(S_{1}S_{2})\geq 1\), and since \(S_{1}S_{2}\leq X^{6\kappa}\), it follows that
\[(uS_{1})R_{1}(vS_{2})R_{2}\leq X^{1+6\kappa}.\]
To finish the proof, it suffices to note that Theorem 5.1 and its proof are given in terms of an auxiliary parameter \(Z\). Replacing \(Z\) with \(X^{1+6\kappa}P^{-2}\) in (5.23) gives
\[\left(\frac{X^{1+6\kappa}}{P^{2}}\right)^{\frac{17}{24}+\varepsilon} \tag{6.23}\]
In particular, provided that
\[(1+6\kappa)<\frac{24}{17}\cdot\frac{3}{4}=\frac{18}{17}\]
or
\[\kappa<\frac{1}{102}\]
we have an acceptable error term.
By Proposition 6.1, we obtain a modified linear program
\[\begin{array}{cl}\min&8\gamma_{\mathbb{I}_{0}^{*}}+2\gamma_{\text{III}}+14\gamma_{\text{III}^{*}}+\alpha_{1}+\alpha_{2}+3\beta_{1}+3\beta_{2}+\upsilon+\nu\\ \text{subject to}&2(\gamma_{\mathbb{I}_{0}^{*}}+\gamma_{\text{III}}+\gamma_{\text{III}^{*}})+\alpha_{1}+\upsilon+\alpha_{2}+\nu=1\\ &\alpha_{1}+\alpha_{2}\geq\frac{1}{2}-\delta\\ &\alpha_{1}+\beta_{1}\geq\alpha_{2}+\beta_{2}\\ &\beta_{2}\geq\alpha_{2}+\frac{1}{102}\text{ and }\beta_{1}\geq\alpha_{1}+\frac{1}{102}.\end{array} \tag{6.24}\]
This linear program has an optimal value of \(261/102\), which, for all but a negligible number of the remaining curves, corresponds to the bound
\[\frac{\beta_{E}+\beta_{\phi(E)}}{2}\geq\frac{155}{68},\]
and we note that
\[\frac{155}{68}>\frac{9}{4}.\]
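Reusing `c`, `A_ub`, `A_eq` and `b_eq` from the SciPy sketch above, the modified program (6.24) only tightens the last two inequalities, and the solver indeed returns \(261/102\approx 2.5588\):

```python
# (6.24): beta_i >= alpha_i + 1/102 tightens the last two rows of A_ub.
b_ub_mod = [-0.5, 0.0, -1 / 102, -1 / 102]
res_mod = linprog(c, A_ub=A_ub, b_ub=b_ub_mod, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * 9)
print(res_mod.fun)  # ~2.5588 = 261/102
```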
## 7. 3-Selmer elements of elliptic curves with a marked 2-torsion point
We follow the parametrization obtained by Bhargava and Ho [3] for 3-Selmer elements of elliptic curves in the family \(\mathcal{E}_{2}\) with a marked rational 2-torsion point. In particular, they proved that the 18-dimensional space
\[\mathbb{Q}^{3}\otimes\operatorname{Sym}_{2}(\mathbb{Q}^{3})\]
of triples of \(3\times 3\) symmetric matrices parametrises elements of the 3-Selmer groups of elliptic curves with a marked 2-torsion point. Further they showed that 3-Selmer elements admit integral representatives, so we may take triples of integral \(3\times 3\) symmetric matrices instead. Moreover we must take _equivalence classes_ of a \(\operatorname{GL}_{3}(\mathbb{Z})\times\operatorname{GL}_{3}(\mathbb{Z})\)-action obtained by letting \(\operatorname{GL}_{3}\) act simultaneously on the three ternary quadratic forms defined by each of the matrices in the triple, and on the triple itself. In order to obtain a faithful action we must mod out by a certain element of order 3, which gives us an action by the group \(\operatorname{SL}_{3}^{2}(\mathbb{Z})/\mu_{3}\).
Using a product of two Siegel fundamental domains for the action of \(\operatorname{SL}_{3}(\mathbb{Z})\) on \(\operatorname{SL}_{3}(\mathbb{R})\) one then obtains a fundamental domain for the action of \(\operatorname{SL}_{3}^{2}(\mathbb{Z})\) on \(V_{3}\), the space of triples of \(3\times 3\) symmetric matrices. Further, they proved that the problematic regions with regards to geometry of numbers, namely the so-called cusps, contain only irrelevant points. Using the now-standard "thickening and cutting off the cusp" method of Bhargava (see [2], [3], [4], etc.), one then obtains an expression of the shape
\[N(S;X)=\frac{1}{C_{G_{0}}^{(i)}}\int_{g\in N^{\prime}(a)A^{\prime}}\#\{x\in S ^{\operatorname{irr}}\cap E(\nu,\alpha,X)\}dg.\]
Here
\[E(\nu,\alpha,X)=\nu\alpha G_{0}R\cap\{x\in V_{3}^{(i)}:H(x)<X\},\]
where \(G_{0}\) is a compact, semi-algebraic left \(K\)-invariant subset of the group \(G(\mathbb{R})=\operatorname{GL}_{3}^{2}(\mathbb{R})/\mu_{3}\) which is the closure of a non-empty, connected open set and such that every element has determinant at least one and \(R=R^{(i)}\) denotes a connected and bounded set representing real-orbits of \(G(\mathbb{R})\) acting on \(V_{3}(\mathbb{R})\). Further, we have
\[K=\text{the subgroup }\operatorname{SO}_{3}(\mathbb{R})\subset\operatorname{GL}_{3}^{+}(\mathbb{R})\text{ of orthogonal transformations};\] \[A^{\prime}=\{\alpha(t,u):t,u>c\},\text{ where}\] \[\alpha(t,u)=\begin{pmatrix}t^{-2}u^{-1}&&\\ &tu^{-1}&\\ &&tu^{2}\end{pmatrix};\] \[N^{\prime}=\{\nu(x,x^{\prime},x^{\prime\prime}):(x,x^{\prime},x^{\prime\prime})\in I^{\prime}(\alpha)\},\] \[\text{where }\nu(x,x^{\prime},x^{\prime\prime})=\begin{pmatrix}1&&\\ x&1&\\ x^{\prime}&x^{\prime\prime}&1\end{pmatrix}.\]
Here \(I^{\prime}(\alpha)\) is a measurable subset of \([-1/2,1/2]^{3}\) depending only on \(\alpha\in A^{\prime}\), and \(c>0\) is an absolute constant. The set \(\nu\alpha G_{0}R\) is then seen as the image of \(\nu\alpha G_{0}\) acting on \(R\), where \(\nu\alpha G_{0}\) is the left-translation of \(G_{0}\) by \(\nu\in N^{\prime}\) and \(\alpha\in A^{\prime}\).
A key observation, implicit in previous works on this subject but seemingly not written down explicitly, is that obtaining an asymptotic formula for \(N(S;X)\) boils down to an acceptable estimate for the number of integral points inside \(E(\nu,\alpha,X)\). Note that the set \(\nu\alpha G_{0}R\) is independent of the choice of height \(H\). The height function used by Bhargava and Ho in [3] has degree 36: it is defined to be the maximum of \(|a_{2}|^{6},|a_{4}|^{3}\), where \(a_{2},a_{4}\) are degree 6 and degree 12 homogeneous polynomials in the entries of \(B\in V_{3}(\mathbb{R})\), respectively. For us the height is given by the conductor polynomial, which we recall is given by (3.1). For \(a=a_{2},b=a_{4}\) the conductor polynomial evidently has degree 24 instead of 36.
The key observations are the following: we can use the structure of the action of \(G(\mathbb{R})\) on \(V\) to transform the problem of counting in a bounded region in \(V\) to a corresponding problem of counting in a region in \(\mathbb{R}^{2}\)
where our earlier arguments of counting elliptic curves in the related region apply. As long as the counting problem for Selmer elements is compatible with the error estimates obtained in the essentially purely geometric reduction argument, we are able to obtain a suitable count of the total number of Selmer elements and thus obtain our desired outcome.
We define the set \(\mathcal{R}_{X}\) analogously to the way it is defined in [3], namely
\[\mathcal{R}_{X}(h)=\mathcal{F}hR\cap\{B\in V_{3}(\mathbb{R}):|\mathcal{C}(B)| <X\},\mathcal{R}_{X}=\mathcal{R}_{X}(1). \tag{7.1}\]
The key lemma we require, which is proved in its essential form as Proposition 7.3 in [3], is the following:
**Proposition 7.1**.: _Let \(h\) take a random value in \(G_{0}\) uniformly with respect to the Haar measure \(dg\). Then the expected number of elements \(B\in\mathcal{F}hR\cap V(\mathbb{Z})\) such that \(|\mathcal{C}(B)|<X\) and \(b_{1111}\neq 0\) (respectively \(b_{111}\neq 0\)) is equal to \(\operatorname{Vol}(\mathcal{R}_{X})+O\left(X^{\frac{17}{24}}\right)\)._
Proof.: The proof essentially follows along the same lines as given in [3]. The key difference is that in the proofs of Proposition 7.3-7.6 in [3] one must replace the parameter \(k=36\) with \(k=24\); all other details are the same.
Finally we note that the _upper bound sieve_ is essentially trivial to execute compared to the lower bound sieve, and the desired upper bound follows from Proposition 7.1.
|
2302.01223 | Practical Bandits: An Industry Perspective | The bandit paradigm provides a unified modeling framework for problems that
require decision-making under uncertainty. Because many business metrics can be
viewed as rewards (a.k.a. utilities) that result from actions, bandit
algorithms have seen a large and growing interest from industrial applications,
such as search, recommendation and advertising. Indeed, with the bandit lens
comes the promise of direct optimisation for the metrics we care about.
Nevertheless, the road to successfully applying bandits in production is not
an easy one. Even when the action space and rewards are well-defined,
practitioners still need to make decisions regarding multi-arm or contextual
approaches, on- or off-policy setups, delayed or immediate feedback, myopic or
long-term optimisation, etc. To make matters worse, industrial platforms
typically give rise to large action spaces in which existing approaches tend to
break down. The research literature on these topics is broad and vast, but this
can overwhelm practitioners, whose primary aim is to solve practical problems,
and therefore need to decide on a specific instantiation or approach for each
project. This tutorial will take a step towards filling that gap between the
theory and practice of bandits. Our goal is to present a unified overview of
the field and its existing terminology, concepts and algorithms -- with a focus
on problems relevant to industry. We hope our industrial perspective will help
future practitioners who wish to leverage the bandit paradigm for their
application. | Bram van den Akker, Olivier Jeunen, Ying Li, Ben London, Zahra Nazari, Devesh Parekh | 2023-02-02T17:03:40Z | http://arxiv.org/abs/2302.01223v1 | # Practical Bandits: An Industry Perspective
###### Abstract.
The bandit paradigm provides a unified modeling framework for problems that require decision-making under uncertainty. Because many business metrics can be viewed as _rewards_ (a.k.a. _utilities_) that result from _actions_, bandit algorithms have seen a large and growing interest from industrial applications, such as search, recommendation and advertising. Indeed, with the bandit lens comes the promise of direct optimisation for the metrics we care about.
Nevertheless, the road to successfully applying bandits in production is not an easy one. Even when the action space and rewards are well-defined, practitioners still need to make decisions regarding multi-arm or contextual approaches, on- or off-policy setups, delayed or immediate feedback, myopic or long-term optimisation, etc. To make matters worse, industrial platforms typically give rise to large action spaces in which existing approaches tend to break down. The research literature on these topics is broad and vast, but this can overwhelm practitioners, whose primary aim is to solve practical problems, and therefore need to decide on a specific instantiation or approach for each project. This tutorial will take a step towards filling that gap between the theory and practice of bandits. Our goal is to present a unified overview of the field and its existing terminology, concepts and algorithms--with a focus on problems relevant to industry. We hope our industrial perspective will help future practitioners who wish to leverage the bandit paradigm for their application.
Footnote †: _WWW '23, April 30-May 4, 2023, Austin, TX, USA_
## 1. Introduction & Motivation
Modern-day platforms on the web consist of many moving parts, all making many decisions over time. These decisions can include ranking of content on a homepage, recommendations, notifications and pricing--all personalised to the user being targeted. Because of the complexity of such systems, manually optimising these decisions to achieve a common goal (such as retention, revenue, user satisfaction, and others) quickly becomes an unmanageable task. For this reason, many businesses have adopted data-driven approaches to decision-making, so as to scale up while optimising for the desired success metrics.
As a research problem, sequential algorithmic decision-making under uncertainty predates modern web technology, with some of its earliest formulations being credited to Robbins (Robbins, 2009). In essence, a decision-maker needs to iteratively take an _action_ from a certain action space for a number of _rounds_, with the goal of maximising the cumulative _rewards_ (a.k.a. _returns_ or _utilities_) that the decision-maker obtains. This seminal work has inspired a rich body of literature focusing on all aspects of what is now known as "the bandit problem," ultimately laying the groundwork for recent impressive advances in reinforcement learning (Nakamoto et al., 2017; Dosovitskiy et al., 2018). As such, there is promise in adopting the bandit perspective for problems faced by the online businesses.
There is an abundance of high quality literature introducing researchers and practitioners to the theory behind the bandit paradigm and its algorithms (Robbins, 2009; Robbins, 2010; Robbins, 2011). However, identifying work relevant to the everyday challenges faced by industry practitioners can be cumbersome. Indeed, algorithmic advances that lead to empirical progress typically stem from fundamental theoretical groundwork, which can be hard to navigate for those who wish to use these algorithms to their fullest potential, whilst remaining focused on practice.
It is our goal to take a step towards bridging this gap between the theory and practice of bandits. This tutorial is the product of an industry-wide collaboration, giving rise to a diverse industrial perspective of using bandits in practice, for a wide range of applications.
## 2. Proposal Details
### Topic & Relevance
All authors have first-hand experience using bandit algorithms for various applications in large-scale web platforms, such as e-commerce, and content streaming.
We have curated a family of topics based on the challenges practitioners face. Each topic has seen a large number of recent contributions from both academia and industry, which can feel intimidating to navigate for those with limited experience or exposure to the bandit paradigm and its many facets.
#### 2.1.1. The Bandit Setting
A first step in navigating the appropriate literature is to properly characterise the problem setting (Robbins, 2009).
In its most general form, a bandit setting arises when learning without full-information (i.e., partial supervision) (Dosovitskiy et al., 2018). The appropriate algorithms vary when shifting between a _full-bandit_ setting, where only one of the actions is taken at the time, or a form of _semibandits_ where this is extended to multiple actions (Robbins, 2009). Actions can be combinatorial, ranking, discrete or continuous; rewards can be
single- or multi-objective, delayed or immediate. We aim to provide a general overview, clarifying concepts and guiding practitioners to the relevant literature for their problem. As much as possible, we will ground each bandit formalism in a real application (e.g., by mapping ranking bandits to a recommender system example).
Note that bandits can be seen as a special case of Reinforcement Learning (RL) where the state space is restricted to a single state (Srivastava et al., 2017). While many bandit settings can naturally be extended into a more general RL formulation, we will not venture into this area of research too deeply. However, we will make it clear when the limitations of the bandit formalism can potentially be overcome with RL, and provide pointers to relevant literature when applicable.
#### 2.1.2. On-policy
The canonical bandit setting is the online, _on-policy_ setting, wherein an agent learns a policy by interacting directly with the environment in a sequence of decisions. This gives rise to the well-known _explore-exploit_ dilemma, in which the policy must weigh the informational value of taking an action with the reward it will yield. Seminal practical applications of such methods in web platforms make use of Upper-Confidence-Bounds (UCB) (Kal
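To make the explore-exploit dilemma concrete, the following is a minimal multi-armed UCB1 sketch; the Bernoulli reward model, the arm means and the horizon are illustrative assumptions on our part, not values prescribed by any of the systems discussed here.

```python
import numpy as np

rng = np.random.default_rng(0)

def ucb1(true_means, n_rounds=10_000):
    """Minimal UCB1: play every arm once, then always pull the arm that
    maximises (empirical mean + sqrt(2 ln t / pulls))."""
    k = len(true_means)
    counts = np.zeros(k)
    sums = np.zeros(k)
    for t in range(1, n_rounds + 1):
        if t <= k:                       # initialisation round for each arm
            arm = t - 1
        else:
            bonus = np.sqrt(2.0 * np.log(t) / counts)
            arm = int(np.argmax(sums / counts + bonus))
        reward = float(rng.random() < true_means[arm])  # Bernoulli reward
        counts[arm] += 1
        sums[arm] += reward
    return sums.sum() / n_rounds         # average reward over the run

print(ucb1([0.1, 0.5, 0.7]))             # approaches the best mean, 0.7
```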
We will build to advanced algorithms from the ground up, making this tutorial an adequate starting point for novice as well as experienced researchers and practitioners in this fast-growing field.
### Audience
This tutorial targets every researcher and practitioner who has considered the use of bandit algorithms to solve a practical problem. The material assumes background knowledge at the MSc level in computer science, machine learning, or related fields, and will build its way up to advanced algorithms from fundamentals. Because of our practical focus, fundamental theory will be linked to practical considerations where applicable. We expect attendees to feel empowered to devise practical bandit-based solutions to the machine learning problems they will face in the future.
### Previous Editions
This will be the first edition of this tutorial, but the presenters have experience teaching material covering similar topics. We anticipate presenting (an extended version of) this tutorial at similar conferences in the future.
### Tutorial Material
We will provide all the slides as well as a curated reading list covering relevant literature to participants. The notebooks used for the hands-on session will additionally be made available, all hosted in a public GitHub repository. If possible, we would like to record our tutorial and share the recordings with interested participants afterwards.
### Equipment
No additional equipment will be necessary--but it would be preferable if recording equipment is available in the room. Otherwise, we will use a recording setup with Zoom.
### Video Teaser
The video teaser can be found at [https://youtu.be/IHva_kgRqq4](https://youtu.be/IHva_kgRqq4).
### Organisation Details
We will host all materials on a publicly available GitHub repository. This includes the slides that will be covered during the tutorial, as well as the notebooks for the hands-on sessions. Because the material will be self-contained and the notebooks will run on Google Colaboratory, this lends itself to an asynchronous setup where interested participants can go through the materials at their own pace at a later date. Additionally, we wish to record the presentations so they can be distributed afterwards.
## 3. Organiser Biographies
The workshop organisers and their biographies are listed here in alphabetical order:
**Bram van den Akker**: ([email protected]) is a Senior Machine Learning Scientist at Booking.com with a background in Computer Science and Artificial Intelligence from the University of Amsterdam. At Booking.com, Bram's work focuses on bridging the gap between applied research and practical requirements for bandits across the company. Previously, Bram held positions at Shopify & Panasonic, and has made peer-reviewed contributions to conferences and workshops such as TheWebConf, RecSys, and KDD, of which one has been awarded a best-paper award (Shi et al., 2020).
**Olivier Jeunen**: ([email protected]) is a Lead Decision Scientist at ShareChat with a PhD from the University of Antwerp (Shi et al., 2020), having previously held positions at Amazon, Spotify, Facebook and Criteo. Olivier's research focuses on applying ideas from causal and counterfactual inference to recommendation and advertising problems, which has led to 20+ peer-reviewed contributions to top-tier journals, conferences, and workshops at NeurIPS, KDD, RecSys and WSDM, of which two have been recognised with best paper awards (Shi et al., 2020; Shi et al., 2020). He is an active PC member for KDD, RecSys, The WebConf, WSDM and SIGIR, whilst reviewing for several journals and workshops, which has led to two outstanding reviewer awards. He currently serves as co-Web Chair for RecSys, co-chaired the Dutch-Belgian Information Retrieval Workshop '20 and the CONSEQUENCES Workshop at RecSys '22 (Shi et al., 2020), and co-lectured tutorials at the RecSys Summer School '19, UMAP '20 (Shi et al., 2020) and The WebConf '21 (Shi et al., 2020).
**Ying Li**: ([email protected]) is a Senior Research Scientist at Netflix. She obtained her Ph.D. from the University of California, Los Angeles, and her B.S. from Peking University. At Netflix, her research interests focus on large-scale search and recommendation systems, bandits, and long-term user satisfaction optimization. Prior to Netflix, she was an Applied Scientist at Amazon, focusing on cold-start classification and large-scale extreme classification using NLP. She has co-chaired the REVEAL (Reinforcement learning-based recommender systems at scale) workshop at RecSys 2022 (Shi et al., 2020).
**Ben London**: ([email protected]) is a Sr. Scientist at Amazon Music. He earned his Ph.D. in 2015 at the University of Maryland, where he was advised by Lise Getoor and worked closely with Ben Taskar and Bert Huang. His research investigates machine learning theory and algorithms, with a focus on generalization guarantees, structured prediction, recommendation, contextual bandits, and evaluation/learning with logged bandit feedback. His work has been published in JMLR, ICML, NeurIPS, AISTATS, UAI and RecSys. He was a co-organizer of the NeurIPS 2019 Workshop on ML with Guarantees, an area chair for ICML (2020, 2022) and NeurIPS (2020, 2021), a senior PC member for IJCAI, and has reviewed for numerous conferences and journals.
**Zahra Nazari**: ([email protected]) is a Sr. Scientist at Spotify working on the design and evaluation of recommender systems with a focus on sequential and long-term optimization using reinforcement learning. Prior to Spotify, she held positions at Google, Twitter and Framehawk, and earned her Ph.D. at the University of Southern California with a focus on preference elicitation and human behaviour modelling in complex situations such as negotiations. She has published 20+ papers in top conferences such as IJCAI, AAMAS, SIGIR, WSDM and The Web Conference. Zahra serves as a program committee member for conferences such as SIGIR, The Web Conference and CIKM, and has co-organized the workshop on Multi-Objective Recommender Systems at RecSys for two consecutive years (2021 and 2022).
**Devesh Parekh**: ([email protected]) is a Staff Research Engineer at Netflix working on simplifying the deployment of contextual bandit solutions to new problems in member-facing applications. Devesh has previously worked on causal inference and contextual bandit problems in the areas of programmatic marketing, signup optimization, and member engagement messaging at Netflix.
|
2310.06985 | PlatoSim: An end-to-end PLATO camera simulator for modelling
high-precision space-based photometry | PLAnetary Transits and Oscillations of stars (PLATO) is the ESA M3 space
mission dedicated to detect and characterise transiting exoplanets including
information from the asteroseismic properties of their stellar hosts. The
uninterrupted and high-precision photometry provided by space-borne instruments
such as PLATO require long preparatory phases. An exhaustive list of tests are
paramount to design a mission that meets the performance requirements, and as
such, simulations are an indispensable tool in the mission preparation. To
accommodate PLATO's need of versatile simulations prior to mission launch -
that at the same time describe accurately the innovative but complex
multi-telescope design - we here present the end-to-end PLATO simulator
specifically developed for the purpose, namely PlatoSim. We show step-by-step
the algorithms embedded into the software architecture of PlatoSim that allow
the user to simulate photometric time series of CCD images and light curves in
accordance to the expected observations of PLATO. In the context of the PLATO
payload, a general formalism of modelling, end-to-end, incoming photons from
the sky to the final measurement in digital units is discussed. We show the
strong predictive power of PlatoSim through its diverse applicability and
contribution to numerous working groups within the PLATO Mission Consortium.
This involves the on-going mechanical integration and alignment, performance
studies of the payload, the pipeline development and assessments of the
scientific goals. PlatoSim is a state-of-the-art simulator that is able to
produce the expected photometric observations of PLATO to a high level of
accuracy. We demonstrate that PlatoSim is a key software tool for the PLATO
mission in the preparatory phases until mission launch and prospectively
beyond. | N. Jannsen, J. De Ridder, D. Seynaeve, S. Regibo, R. Huygen, P. Royer, C. Paproth, D. Grießbach, R. Samadi, D. R. Reese, M. Pertenais, E. Grolleau, R. Heller, S. M. Niemi, J. Cabrera, A. Börner, S. Aigrain, J. McCormac, P. Verhoeve, P. Astier, N. Kutrowski, B. Vandenbussche, A. Tkachenko, C. Aerts | 2023-10-10T20:12:16Z | http://arxiv.org/abs/2310.06985v4 | # PlatoSim: An end-to-end PLATO camera simulator for modelling high-precision space-based photometry
###### Abstract
Context: PLAnetary Transits and Oscillations of stars (PLATO) is the ESA M3 space mission dedicated to detecting and characterising transiting exoplanets, including information from the asteroseismic properties of their stellar hosts. The uninterrupted and high-precision photometry provided by space-borne instruments such as PLATO requires long preparatory phases. An exhaustive list of tests is paramount to design a mission that meets the performance requirements and, as such, simulations are an indispensable tool in the mission preparation.
Aims: To accommodate PLATO's need for versatile simulations prior to mission launch that at the same time accurately describe the innovative yet complex multi-telescope design, in this work we present the end-to-end PLATO simulator specifically developed for that purpose, namely PlatoSim. We show, step by step, the algorithms embedded in the software architecture of PlatoSim that allow the user to simulate photometric time series of charge-coupled device (CCD) images and light curves in accordance with the expected observations of PLATO.
Methods: In the context of the PLATO payload, a general formalism of modelling, end-to-end, incoming photons from the sky to the final measurement in digital units is discussed. Following the light path through the instrument, we present an overview of the stellar field and sky background, the short- and long-term barycentric pixel displacement of the stellar sources, the cameras and their optics, the modelling of the CCDs and their electronics, and all main random and systematic noise sources.
Results: We show the strong predictive power of PlatoSim through its diverse applicability and contribution to numerous working groups within the PLATO mission consortium. This involves the ongoing mechanical integration and alignment, performance studies of the payload, the pipeline development, and assessments of the scientific goals.
Conclusions: PlatoSim is a state-of-the-art simulator that is able to produce the expected photometric observations of PLATO to a high level of accuracy. We demonstrate that PlatoSim is a key software tool for the PLATO mission in the preparatory phases until mission launch and prospectively beyond.
Key words. Methods: numerical - Space vehicles: instruments - Instrumentation: photometers - Planets and satellites: detection
## 1 Introduction
Thanks to the parts-per-million (ppm) precision photometry delivered by space telescopes in the past two decades, the astrophysical frontier has undergone a revolution. As a consequence of this technological progress, answers to profound questions are now within reach, such as the habitability of other planets beyond our Solar System. With the Convection, Rotation, and planetary Transits (CoRoT; Auvergne et al. 2009) mission marking the start of this endeavour, the quest for habitable planets was followed by NASA's _Kepler_ space mission (Borucki et al. 2010) aimed to discover the first Earth-like planet orbiting a Sun-like star. Together with _Kepler_'s extended operation, the so-called _K2_ mission (Howell et al. 2014), and successor, the Transiting Exoplanet Survey Satellite (TESS; Ricker et al. 2015) mission, a wealth of scientific discoveries opened gateways to research areas and synergies thought impossible at the beginning of the millennium. Beyond the continuous planet discoveries by TESS, the ongoing CHaracterising Exoplanet Satellite (CHEOPS; Benz
et al., 2021) mission likewise plays a key role in the precise characterisation of known exoplanet systems.
Also complementary science cases have flourished from the long and uninterrupted observations by CoRoT and _Kepler_, and the shorter-baseline but all-sky coverage observations by TESS. Being a principal driver for many of these missions, the ability to finally detect acoustic oscillations within the interior of solar-like stars (e.g. Kjeldsen and Bedding, 1995) was not only a success story for asteroseismology (e.g. Michel et al., 2008; Gilliland et al., 2010; Chaplin et al., 2010, 2011; Hekker, 2013; Huber et al., 2013), but vital for studies of exoplanets (e.g. Christensen-Dalsgaard et al., 2010; Silva Aguirre et al., 2015; Campante et al., 2016), stellar activity (e.g. Garcia et al., 2010; Chaplin and Basu, 2014; Brun et al., 2015; Kiefer, 2022), and galactic archaeology (e.g. Stokholm et al., 2019; Silva Aguirre et al., 2020; Hon et al., 2021; Stello et al., 2022). The last two decades of space data (initiated with the MOST mission; Walker et al., 2003) have also been a treasure trove for pulsation studies across the entire Hertzsprung-Russell diagram - from the pre-main sequence to the last stages of stellar evolution, and from the lowest to the highest stellar masses (e.g. see the reviews of Brown and Gilliland, 1994; Cunha et al., 2007; Aerts et al., 2010; Garcia and Ballot, 2019; Bowman, 2020; Aerts, 2021).
Despite the remarkable achievements from planet-hunting space missions, no Earth-Sun analogue has been discovered to date (e.g. Hill et al., 2023) and the parameter landscape of low-mass and long-orbital-period planets is vastly unexplored (e.g. Bryson et al., 2021). Due to the limited sky coverage of CoRoT and _Kepler_, both targeting stars too faint to efficiently follow up with ground-based radial velocity (RV) surveys, the hunt for small terrestrial exoplanets by similar long-baseline missions is soon to be continued with the PLAnetary Transits and Oscillations of stars (PLATO; Rauer et al., 2014, Rauer et al. in prep.) mission. PLATO is the third medium (M3) mission in ESA's Cosmic Vision 2015-2025 programme with a current launch date set for the end of 2026. Compared to its predecessors, PLATO aims to obtain high-precision, continuous photometric time series of more than 245 000 bright stars (\(V\)-band magnitude \(<\) 15) over its nominal mission duration of 4 years. Using the tools of asteroseismology and ground-based spectroscopy as an integrated part of the mission strategy, the goal of PLATO is not only to detect but also to characterise exoplanets around stars of magnitude \(V<\) 10 with a precision of 5% in radius and 10% in mass. Moreover, PLATO is the first mission dedicated to deriving the age of planets to a 10% precision from asteroseismic modelling of the host stars. To meet its scientific objectives, PLATO has been designed to provide a photometric precision of \(\leq\) 50 ppm h\({}^{-1/2}\) for more than 15 000 solar-like stars with \(V\leq\) 11 (Rauer et al., 2014, Rauer et al. in prep.).
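As a side note on units: a requirement quoted in ppm h\({}^{-1/2}\) assumes white noise, so the random noise of a single exposure scales with the inverse square root of the integration time. A small illustrative conversion (the 25 s exposure used below is an assumed example cadence, not a number taken from this text):

```python
import math

def noise_per_exposure(nsr_ppm_per_sqrt_hr: float, exposure_s: float) -> float:
    """Convert an NSR quoted in ppm h^(-1/2) into the random noise of a
    single exposure of the given length, assuming purely white noise."""
    hours = exposure_s / 3600.0
    return nsr_ppm_per_sqrt_hr / math.sqrt(hours)

# 50 ppm h^(-1/2) corresponds to 600 ppm for an assumed 25 s exposure.
print(noise_per_exposure(50.0, 25.0))
```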
PLATO's requirements are challenging due to numerous noise sources. These involve complex interactions between the components of the instrument which can best be modelled at pixel level. Thus, prior to the in-flight operations, end-to-end simulations of the instrument have proven to be a very efficient way to scrutinise performance bottlenecks for the success of the mission. Consequently, instrument simulators have been developed for missions covering a wide range of applications, in the X-ray (e.g. SIXTE; Dauser et al., 2019), optical (CHEOPSim; Futyan et al., 2020, SimCADO; Leschinski et al., 2016), (near)infrared (MIRISim; Klaassen et al., 2021, Specsim; Lorente et al., 2006), and all the way to the radio (pyuvsim; Lanman et al., 2019). Multi-purpose software packages also exist, such as MAISIE (O'Brien et al., 2016) and SOPHISM (Blanco Rodriguez et al., 2018), as well as simulation frameworks, such as Pyxel (Arko et al., 2022), which are specifically designed for detectors.
To confirm that each performance requirement is within scope, several simulation tools have been developed for PLATO. The PLATO Light Curve Simulator (PSLS1; Samadi et al., 2019) and the PLATO Instrument Noise Estimator (PINE; Borner et al., 2022) are pragmatic yet simplified tools. None of these tools alone provides the ability to simulate all of the expected observations of the PLATO space mission, including image time series, metadata, housekeeping data, and light curves. In this work, we present PlatoSim2, a dedicated end-to-end PLATO camera simulator with all of these features.
Footnote 1: [https://sites.lesia.obspm.fr/psls/](https://sites.lesia.obspm.fr/psls/)
PlatoSim builds on the heritage of the Eddington CCD Data Simulator (Arentoft et al., 2004, De Ridder et al., 2006) that was developed for the decommissioned Eddington (ESA) and MONS (Danish Space Agency) space missions. The original code was later expanded to meet the demands of generalising simulations for space-borne observatories (such as the _ASTROID Simulator_; Marcos-Arenal et al., 2013). Aiming at realistic applications to the PLATO mission, Zima et al. (2010) adapted the software into a so-called first-generation end-to-end _PLATO Simulator_. That included a change of software language from IDL to C++ to overcome pre-existing performance bottlenecks. Shortly after PLATO's selection as ESA's M3 candidate, Marcos-Arenal et al. (2014) revisited the software to give it a more modern modular software architecture and expanded its use cases for both the PLATO mission and other (future) photometric missions operating in the optical.
With already existing multi-instrument software packages (such as MAISIE and SOPHISM), and the increasing demand for dedicated yet diverse use cases for the PLATO mission, the development of PlatoSim over recent years has somewhat replaced the mission-adaptability aspect with an in-depth applicability to the PLATO instrument. This has in turn resulted in major advancements of the implemented algorithms and allowed the software to stay up to date with changes ranging from the observational strategy at mission level to the description of the smallest hardware components of the payload. Furthermore, a complete Python wrapper around the generic C++ code makes it easy to configure, set up, and run simulations, which has proven especially valuable for the PLATO mission consortium. PlatoSim has so far been used by multiple teams to estimate the impact of technical or programmatic trade-offs on the final mission performance, including end-of-life (EOL) ageing effects, preparation of the data-processing pipelines, preparation of the engineering and scientific calibrations, and development and real-time testing of the fine guidance sensor algorithms, among others.
In this paper we describe the implementation and algorithmic design of PlatoSim, but before doing so, a brief overview of the payload is given in Sect. 2. Next we present the image acquisition model in Sect. 3 and the image generation model in Sect. 4, and each implemented effect is detailed in Sects. 5 to 8. PlatoSim's photometry module is presented in Sect. 9, the software architecture in Sect. 10, applications in Sect. 11, and concluding remarks in Sect. 12.
## 2 The PLATO instrument
As illustrated in the left panel of Fig. 1, the PLATO payload utilises an innovative multi-telescope concept consisting of 26 small but wide-field refractive telescopes (\(\sim\)1037 deg\({}^{2}\) each; Pertenais et al. 2021) mounted on a single optical bench. For historical reasons the PLATO mission consortium describes the combined unit of baffle, optical elements, and detectors as a _camera_, and hence for consistency we also adopt this nomenclature.
Each camera consists of a telescope optical unit (TOU) with a 12 cm entrance pupil diameter and a focal plane array (FPA) containing four CCDs connected to electronic controllers called front-end electronics (FEEs), in total comprising 104 CCDs and 26 FEEs. As visualised in Fig. 1 (left), the cameras are organised in four groups, each with six 'normal' cameras (or N-CAMs), and one group of two 'fast' cameras (or F-CAMs). A preliminary normalised spectral response curve for the N-CAM is shown in Fig. 2, which illustrates a strong similarity in response to the photometers of CHEOPS and _Kepler_, as the mission strategy of PLATO targets only slightly cooler (solar) spectral type stars (see e.g. the mission handbooks of CHEOPS, TESS, and _Kepler_).
Footnote 3: [https://sci.esa.int/web/cosmic-vision/](https://sci.esa.int/web/cosmic-vision/).
Footnote 4: [https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html](https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html)
Footnote 5: [https://archive.stsci.edu/missions-and-data/kepler/documents](https://archive.stsci.edu/missions-and-data/kepler/documents)
All cameras of a given group share the same line of sight (LOS) and field of view (FOV). An opening angle of 9.2\({}^{\circ}\) of each N-CAM group relative to the F-CAMs, designed to increase the global sky coverage, entails a rather complex overlapping FOV arrangement, as shown in the right-hand plot of Fig. 1. This plot shows the provisional long-duration observation phase (LOP) south field, which is a subset of the all-sky PLATO Input Catalogue (asPIC; Montalto et al. 2021). Only the FOV arrangement of the N-CAMs is shown here, but the pointing of the F-CAMs is aligned with the pointing of the platform (magenta star). We note that the right-hand plot of Fig. 1 shows the FOV of the spacecraft's orientation at the first _mission quarter_. As PLATO will orbit the Sun from the second Lagrange point (L2) in exactly one year, the spacecraft is required to realign its solar panels towards the Sun every \(\sim 91\) days in order to maintain its power supply. As we subsequently show in Sect. 11, this is an important constraint for generating realistic simulations.
Depending on the exact location in the FOV, a star may be observed with a number of overlapping N-CAMs
Figure 1: Overview of the PLATO multi-camera design. **Left:** Schematics of the PLATO spacecraft consisting of the payload module (with colour indication of the telescope groups) and the service module (bus). Credit: ESA/ATG medialab. **Right:** On-sky FOV of PLATO shown for a pointing towards the Long-duration Observation Phase (LOP) south in equatorial coordinates. The increasingly darker shades of blue illustrate the increasing N-CAM overlap of \(n_{\rm CAM}\in[6,12,18,24]\) (also indicated in the white boxes). The coloured dots show the pointing of each N-CAM group (cf. the left-hand plot). The magenta star indicates the pointing of the two F-CAMs, which is parallel to the pointing of the platform, while the magenta circle shows the (camera-only) F-CAM FOV. _Data courtesy: Montalto et al. (2021) and Pertenais et al. (2021)_.
Figure 2: Preliminary normalised N-CAM spectral response curve at beginning of life (BOL) (with the red dots representing the mission requirements) compared to those of similar planet-hunting missions such as CHEOPS (orange dotted-dashed line), TESS (green dashed line), and _Kepler_ (blue dotted line). Each response curve is computed with cubic spline interpolation for illustrative purposes. The grey shaded areas mark the cut-off wavelengths, which are dominated by the optical transmission in the blue (left) and by the CCD anti-reflection coating in the red (right). The blue and red shaded regions illustrate the blue and red transmission regions of the two F-CAMs, respectively. _Data courtesy: ESA and NASA_.
\(n_{\text{CAM}}\in[6,12,18,24]\), as indicated by the increasing colour gradient from light blue to dark blue. Considering only the effective FOV (i.e. the corrected optical FOV, taking into account the optical and mechanical vignetting and the gaps between the CCDs in the focal plane), the total estimated effective N-CAM FOV of 2132 deg\({}^{2}\) covers almost 19 times that of _Kepler_.
The F-CAM and N-CAM share identical optical designs (as shown later in Fig. 5); the main differences between them are the operational mode, the readout cadence, and the wavelength transmission. With a readout cadence of 2.5 s, secured by a CCD frame-transfer mode, the F-CAMs are foremost fine guidance sensors for the attitude and orbit control system (AOCS). Featuring frame-transfer CCDs implies that their FOV is about half the FOV of a single N-CAM (Pertenais et al., 2021). Being equipped with respectively a blue and a red colour filter makes the F-CAMs ideal science instruments for asteroseismology of bright stars. The N-CAMs operate without an optical filter in a full-frame readout mode at a cadence of 25 s and are the primary photometers used to meet the core science goals.
PlatoSim is designed to model the Teledyne-e2v CCD270 detectors that assemble the FPA of PLATO, as shown in Fig. 3a. A main characteristic of this detector is the large photosensitive area (see Table 1) together with a division into two CCD halves (an F and an E side; see dashed lines), each with an independent readout register to speed up the readout process. On top of this design, the frame-transfer CCDs of the F-CAM are divided into two CCD halves parallel to the readout register: a photosensitive area and a charge storage area. The charge storage area is covered by a metallic shield, illustrated by the purple shaded regions in Fig. 3a. With the exception of an initial CCD frame-transfer for the F-CAM compared to the N-CAM, each CCD half is read out following a standard _rolling shutter_ technique, first in the parallel direction towards the readout register
\begin{table}
\begin{tabular}{l l}
\hline
Parameter & Description/Value \\
\hline
TOU & 26 units \\
\hline
Optics & 6 refractive lenses \\
Design & Axisymmetric dioptric \\
Aperture diameter & 12 cm \\
Full FOV & 2132 deg\({}^{2}\) \\
Spectral range & 500 – 1000 nm \\
\hline
CCDs (FEEs) & 104 units (26 units) \\
\hline
Model & Teledyne-e2v CCD270 \\
Design & Back illuminated \\
Pixel size & 18 \(\mu\)m \\
Plate scale & 15 arcsec (on-axis) \\
\hline
N-CAMs & 24 units \\
\hline
Camera overlap & 6, 12, 18, 24 (partial) \\
Effective FOV & 1037 deg\({}^{2}\) \\
Detector FPA & 4 full-frame CCDs \\
Detector size & 4510 \(\times\) 4510 pixel \\
Exposure time & 21 s \\
Readout time & 4 s \\
Cadence & 25 s \\
\hline
F-CAMs & 2 units (red and blue filter) \\
\hline
Camera overlap & 2 (full) \\
Effective FOV & 619 deg\({}^{2}\) \\
Detector FPA & 4 frame-transfer CCDs \\
Detector size & 4510 \(\times\) 2255 pixel \\
Exposure time & 2.3 s \\
Readout time & 0.2 s \\
Cadence & 2.5 s \\
\hline
\end{tabular}
\end{table}
Table 1: Characteristics of the PLATO payload.
\begin{table}
\begin{tabular}{l l}
\hline
Data product & Sampling [s] \\
\hline
Imagette for N-CAM (F-CAM) & 25 (2.5) \\
Light curve: stellar aperture & 50, 600 \\
Light curve: inverse aperture & 50, 600 \\
Flux centroid: stellar aperture & 50, 600 \\
Flux centroid: inverse aperture & 50, 600 \\
Flux standard deviation & 600 \\
Calibration product & 600 \\
Calibration windows & 600 \\
\hline
\end{tabular}
\end{table}
Table 2: List of PLATO data products downlinked to ground for pre-selected stars at different sampling rates. Some simple processing is applied to the pixel data before the extraction of, for example, light curves and flux centroids. We note that only imagettes are downlinked for the F-CAMs.
\begin{table}
\begin{tabular}{l l}
\hline
Acronym & Description \\
\hline
ADU & Analogue-to-Digital Unit \\
AIV & Assembly, Integration, and Verification \\
AOCS & Attitude and Orbital Control System \\
asPIC & All-sky PLATO Input Catalogue \\
 & (See Montalto et al., 2021) \\
BFE & Brighter-Fatter Effect \\
BOL & Beginning Of Life \\
CCD & Charge-Coupled Device \\
CR & Cosmic Ray \\
CTI & Charge Transfer Inefficiency \\
CAM & PLATO camera \\
DPU & Data Processing Unit \\
DS & Dark Signal \\
DSNU & Dark Signal Non-Uniformity \\
EOL & End Of Life \\
FEE & Front-End Electronics \\
FP & Focal Plane \\
FPA & Focal Plane Array \\
FOV & Field Of View \\
IPRNU & Intra-Pixel Response Non-Uniformity \\
KDA & Kinematic Differential Aberration \\
LOP & Long-duration Observation Phase \\
 & (See Nascimbeni et al., 2022) \\
LOS & Line Of Sight \\
NSR & Noise-to-Signal Ratio \\
PSF & Point Spread Function \\
PRNU & Pixel Response Non-Uniformity \\
PLATO & PLAnetary Transits and Oscillations of stars \\
 & (See Rauer et al., 2014, Rauer et al. in prep.) \\
PLM & PayLoad Module \\
RS & Readout Smearing \\
RTS & Random Telegraph Signal \\
TED & Thermo-Elastic Drift \\
TOU & Telescope Optical Unit \\
\hline
\end{tabular}
\end{table}
Table 3: List of acronyms _heavily_ used throughout this paper.
and then in the serial direction towards the corner of the register (i.e. left for the F side and right for the E side, as seen in the CCD reference frame).
We elaborate on the details of the FPA in the following; however, before doing so, we place an overview of the PLATO data flow (from acquisition to downlink) in the context of PlatoSim. While images are collected by the CCDs, the FEEs are responsible for extracting windows around preselected targets, sky background regions, and CCD regions used for calibration. By analogy with the FEE windowing strategy, the schematic overview of Fig. 3 highlights an important feature of PlatoSim's image acquisition model, namely the concept of a _CCD subfield_. Being the CCD area under consideration, of size (\(n_{\rm row}\times n_{\rm col}\)), the subfield is introduced to make long-duration simulation studies of modest-sized subfields (such as Fig. 3b) feasible. It is, however, often more computationally efficient to only simulate an _imagette_, which is a \(6^{2}\) pixel subfield (or \(9^{2}\) pixel subfield for the F-CAMs) around a target star (illustrated in Fig. 3c). The imagette is a key data product of PLATO as all targets will have their photometry extracted from an imagette (see Sect. 9). Unlike the designated strategy to only extract imagettes in flight (except for saturated stars), the PlatoSim subfield can take any rectangular shape (smaller than the CCD dimensions), thus allowing more versatile simulations.
Following the data flow downstream of the FEE, all data are sent to the data processing unit (DPU), which extracts and prepares each product for compression, archiving, and lastly transmission to ground. The PLATO data products that will be downlinked to ground are shown in Table 2 and can be categorised into time series of imagettes, fluxes, flux standard deviations, flux centroids, and data for calibration. To observe as many stars as possible, only imagettes will be sampled at the rate of the nominal N-CAM cadence, while the remaining data products will be sampled every 50 s or 600 s (i.e. averages over two or 24 exposures, respectively). Accompanying these measurements, time series of the inverted aperture mask (called an _extended aperture_) will be computed as well. The calibration data consist of imagettes (or windows), for example to compute the local sky background flux, auxiliary data, and housekeeping data. Except for the flux centroids, all data products of Table 2 can be simulated by PlatoSim, as will be addressed in this paper.
A high-level instrument summary of this section is provided in Table 1. Moreover, since space mission terminology is notoriously acronym-heavy, a glossary of acronyms used throughout this paper is presented in Table 3.
## 3 Image acquisition model
Every PLATO measurement starts with reading out the CCD subfield after an exposure to obtain the pixel values \(S_{ij}\), expressed in analogue-to-digital units (ADU), where we use \(i\) and \(j\) as the row and column pixel coordinates. These pixel values can be modelled with
\[S_{ij}(t)=\left[\left(I_{ij}(t)\ g_{\rm{CCD},ij}(t)+B_{ij}(t)\right)\ g_{\rm{ FEE},ij}(t)+\epsilon_{\rm{RN},ij}(t)\right]_{n_{\rm{in}}}\,. \tag{1}\]
Here, \(I_{ij}(t)\) is the number of photo- and thermal electrons accumulated in pixel \((i,j)\) during the exposure, which is caused by the target star, the sky background, open shutter smearing, dark signal, among others, and which we describe in Sect. 4. The product of the CCD gain \(g_{\rm{CCD}}\) (expressed in \(\rm{\mu V/e^{-}}\)) and the FEE gain \(g_{\rm{FEE}}\) (expressed in \(\rm{ADU/\mu V}\)) make up the total gain \(g\) (expressed in \(\rm{ADU/e^{-}}\)),
\[g(t)=g_{\rm{CCD}}(t)\ g_{\rm{FEE}}(t)\,. \tag{2}\]
We highlight the pixel dependence of the gain: since each CCD is read out in two halves, the left-hand side and the right-hand side have in practice slightly different gains.
Moreover, these gains are not constant but depend on the number of electrons in the well, leading to the well-known effect of CCD non-linearity. The underlying reason is that the CCD output amplifier and the FEE re-amplifier no longer amplify linearly for a high number of electrons, leading to a sublinear
Figure 3: Schematic overview of how PlatoSim generates a CCD image in the FPA. **a)** Illustrative overview of the N-CAM FPA with the 4 CCDs. The blue axes indicate the focal plane, with the central blue dot (in the middle of the 4 CCDs) representing the optical axis \(Z_{\rm FP}\) pointing in the positive direction towards the reader. The green axes illustrate the origin for \(n_{\rm CCD}=4\) as a reference, and the readout register of each CCD is highlighted with a red bar. Each CCD is divided into an F and an E side (with independent CCD and FEE gains) with anti-parallel serial readout directions. For the F-CAM, the locations of the metallic shields of the frame-transfer CCDs are highlighted as purple shaded rectangles. **b)** Simulated \(400\times 400\) pixel subfield of the LOP south. The subfield location on \(n_{\rm CCD}=3\) is displayed in panel a) together with the corresponding parallel overscan and serial prescan regions, which are used by PlatoSim to reconstruct a proper smearing and bias map, respectively. **c)** A so-called imagette, showing a zoom-in on a target star from panel b). As indicated by the subpixel barycentres, the PIC target (green dot) has three significantly fainter stellar contaminants (yellow dots, scaled in size to the target star magnitude).
increase of the output voltage (and thus ADU) with an increasing number of electrons. Measurements using the PLATO flight model CCDs revealed that the actual (bias subtracted) pixel signal deviates no more than \(100\,\mathrm{ADU}\) (\(<1\%\)) from what a linear model would predict from the number of electrons in the well. This deviation can be reasonably well modelled using a simple polynomial, leading to
\[g_{\rm FEE,ij}(t)=g_{\rm FEE,0}\left(1+\sum_{k=1}^{n}a_{k}\,I_{ij}^{\,k}(t)\right)\,, \tag{3}\]

where \(g_{\rm FEE,0}\) is the nominal FEE gain and the coefficients \(a_{k}\) are fitted to the measured non-linearity. The bias \(B_{ij}(t)\) in Eq. (1) is the electronic offset injected before digitisation, which is monitored using the serial prescan region of each CCD half (cf. Fig. 3a), while the readout noise is modelled as a zero-mean Gaussian perturbation

\[\epsilon_{\rm RN,ij}(t)\sim\mathcal{N}\left(\mu=0,\ \sigma=\sigma_{\rm RN}\right)\,. \tag{4}\]

## 4 Image generation model

The number of electrons \(I_{ij}(t)\) entering Eq. (1) is built up both during the exposure itself and while the image is shifted towards the readout register. To keep track of the pointing jitter of the spacecraft, the exposure is discretised into time steps \(\delta t\) during which the attitude is assumed constant, and the contributions are accumulated as

\[I_{ij}^{\rm(exp)}(t)=\sum_{k=1}^{\Delta t_{\rm exp}/\delta t}I_{ij}^{\rm(exp)}(t_{k})\,, \tag{5}\]

so that the total number of accumulated electrons can be written as

\[I_{ij}(t)=I_{ij}^{\rm(exp)}(t)+I_{ij}^{\rm(ro)}(t)\,, \tag{6}\]

where the second term describes the electrons collected while the image is shifted through the focal plane during readout, causing the readout smearing discussed in Sect. 5.5. The electrons accumulated during the exposure follow from an integration over the surface of each pixel and over all wavelengths \(\lambda\) in the passband,

\[\begin{split} I_{ij}^{\rm(exp)}(t)=\iint\limits_{i\,j}\int\limits_{\lambda}&\left[F_{\star}(t,x,y,\lambda)+F_{\rm sky}(x,y,\lambda)+F_{\rm stray}(t,x,y,\lambda)\right]\\ &\qquad\cdot T(t,x,y,\lambda)\cdot E(t,x,y,\lambda)\cdot Q(t,x,y,\lambda)\;d\lambda\;dx\;dy\\ &+F_{\rm cosmics}(t,i,j)+F_{\rm dark}(t,i,j)\,,\end{split} \tag{7}\]

where \(F_{\star}(t,x,y,\lambda)\), \(F_{\rm sky}(x,y,\lambda)\), and \(F_{\rm stray}(t,x,y,\lambda)\) are the photon fluxes of the target stars, the sky background, and stray light, \(F_{\rm cosmics}(t,i,j)\) and \(F_{\rm dark}(t,i,j)\) are the electrons deposited by cosmic hits and dark current, \(T(t,x,y,\lambda)\) is the optical transmission, \(E(t,x,y,\lambda)\)
is the detector efficiency (including e.g. the pixel response non-uniformities and bad pixels), and \(Q(t,x,y,\lambda)\) is the quantum efficiency of the CCD. Each of the quantities mentioned above will be described in more detail in this or subsequent sections.
In addition to the _electron accumulation_ described in Eq. (7), there are also several _electron redistribution_ effects that are not as easily described by the expression in Eq. (7). The most relevant ones are charge diffusion in the silicon, charge-transfer inefficiency during readout, and full-well saturation causing blooming. These effects cause a redistribution of electrons over surrounding pixels, and are modelled in PlatoSim as an additional process after the CCD exposure. We refer to Sect. 8 for more details.
Equations (6) and (7) highlight the computational challenge that simulating space-based images poses. PLATO's primary science goal, exoplanets, requires knowledge about instrumental noise in the low-frequency regime (i.e. a time scale similar to both the transit duration and the orbital period of a planet) as well as the noise in the higher-frequency regime (i.e. a time scale similar to several phenomena of stellar variability). It is therefore sometimes necessary to simulate an observational run of 90 days (after which the observations are interrupted to turn the solar panels back towards the Sun). With a measurement cadence of 25 s for the N-CAMs this implies more than 311 000 measurements for a time series, while keeping track of the low-frequency drifts as well as the high-frequency jitter of the spacecraft using a sufficiently small time step \(\delta t\). The computational burden of the spatial integration over each pixel is driven by the resolution needed to simulate the intra-pixel sensitivity variations, which requires discretising each pixel into \(64^{2}\) subpixel elements (cf. Zima et al. 2010). The integration over the wavelength range is needed to take into account the variation of the point spread function (PSF) as well as the optical throughput with wavelength.
In practice some simplifications are needed to make the computations feasible. A first approximation is to eliminate the integration over the wavelengths by using a wavelength-averaged PSF weighted with a stellar spectrum of a Sun-like star, as well as using wavelength-averaged values of the throughput \(T\), the detector efficiency \(E\), and the quantum efficiency \(Q\). This reduces Eq. (7) to the simplified expression
\[\begin{split} I_{ij}^{\text{(exp)}}(t)=\iint\limits_{i\,j}& \left[\bar{F}_{\star}(t,x,y)+\bar{F}_{\text{sky}}(x,y)+\bar{F}_{\text{stray}}(t,x,y)\right]\\ &\qquad\cdot\bar{T}(t,x,y)\cdot\bar{E}(t,x,y)\cdot\bar{Q}(t,x,y)\;dx\;dy\\ &+F_{\text{cosmics}}(t,i,j)+F_{\text{dark}}(t,i,j)\,.\end{split} \tag{8}\]
A second approximation is to neglect the intra-pixel variations and to only take into account the pixel variations for those use cases where this has a limited effect.
In the following sections, we provide more details on how we take into account the different quantities included in Eq. (8). Section 5 deals with the incident radiation fluxes \(\bar{F}_{\star}(t,x,y)\), \(\bar{F}_{\text{sky}}(x,y)\), and \(\bar{F}_{\text{stray}}(t,x,y)\), as well as \(F_{\text{cosmics}}(t,i,j)\) and \(F_{\text{dark}}(t,i,j)\). This also involves computing the time-dependent focal plane coordinates of the stars, which is described in Sect. 6. Section 7 deals with the throughput and efficiency quantities \(\bar{T}(t,x,y)\), \(\bar{E}(t,x,y)\), and \(\bar{Q}(t,x,y)\).
## 5 Light and electron sources
This section focuses on part of the ingredients of the image generation described in Sect. 4, more particularly on the sources \(\bar{F}_{\star}(t,x,y)\), \(\bar{F}_{\text{sky}}(x,y)\), \(\bar{F}_{\text{stray}}(t,x,y)\), \(F_{\text{cosmics}}(t,i,j)\), and \(F_{\text{dark}}(t,i,j)\) in Eq. (8), representing respectively flux originating from incident stellar light, sky background light, stray light, and photoelectrons coming from cosmic hits and dark current.
### Incident stellar light
The number of monochromatic photons per second \(F_{\star}(t,x,y,\lambda)\) in Eq. (7) coming from incident light can be further broken down as
\[F_{\star}(t,x,y,\lambda)=A\cdot f_{\star}(\lambda,t)\cdot g(x,y,x_{0},y_{0},\lambda,t)\,, \tag{9}\]
where \(A\) is the light collecting area of one camera (113.1 cm\({}^{2}\) in the case of a PLATO camera), \(f_{\star}(\lambda,t)\) is the spectral photon distribution of the star (expressed in photons s\({}^{-1}\) m\({}^{-2}\) nm\({}^{-1}\)) and \(g(x,y,x_{0},y_{0},\lambda,t)\) is the monochromatic normalised PSF at focal plane coordinates \((x,y)\) of a star centred around the focal plane coordinates \((x_{0},y_{0})\).
The main time dependence of the PSF comes from a temperature dependence, which can lead to a slight change of the focus. For PLATO, focus changes are dominated by thermal variations of the TOU structure (changing the distance between the lenses), the optical lenses (changing the refractive index), and temperature differences between the bipods that interface the FPA to the optical bench (Borsa et al. 2022). PlatoSim uses a grid of monochromatic (Huygens) PSFs computed with Zemax OpticStudio6 with a spatial sampling of \(8^{2}\) pixels times \(64^{2}\) subpixels per pixel. In flight, the TOU will be temperature controlled around the pupil of the camera to the optimal focus temperature, and hence, from the above discussion, a fixed and homogeneous temperature of \(-70^{\circ}\)C throughout the camera is assumed in the Zemax model. We note that this model realistically reflects the expected optical performance as it includes image distortion together with typical manufacturing and integration tolerances (e.g. refractive index, irregularities, lens and lens surface tilt and/or decentre, and inter-lens distances).
Footnote 6: [https://www.zemax.com/pages/opticstudio](https://www.zemax.com/pages/opticstudio)
In practice the point spread function of a star is also affected by so-called _charge diffusion_ (see e.g. Rodney & Tonry 2006; Fairfield et al. 2007; Widenhorn et al. 2010; Lawrence et al. 2011). When a photon enters the CCD silicon it frees one or more electrons, which then wander away, including laterally, for a short distance before they are collected by a gate electrode. The net result is that electrons can also end up in neighbouring pixels, which slightly diffuses (blurs) the PSF. PlatoSim models this effect by convolving the PSF with an isotropic Gaussian with a half-width of 0.2 pixels. In the remainder of this section, when we refer to the PSF we always designate the PSF that has been convolved with a diffusion kernel. Figure 4 illustrates some PSF examples that are used by PlatoSim, both from the Zemax and from the analytical model, with and without charge diffusion taken into account.
As mentioned in the discussion leading to Eq. (8), the integration over the wavelength is computationally cumbersome and in practice PlatoSim therefore uses a polychromatic normalised PSF \(\bar{g}(x,y,x_{0},y_{0})\) derived as a weighted average of monochromatic PSFs, weighted with the spectral energy distribution (SED) of a G0 dwarf star in the wavelength range of the PLATO passband \(\mathcal{P}\)
\[\bar{g}(x,y,x_{0},y_{0})=\frac{\int_{\mathcal{P}}g(x,y,x_{0},y_{0},\lambda)\;f_{\rm G0V}(\lambda)\;d\lambda}{\int_{\mathcal{P}}f_{\rm G0V}(\lambda)\;d\lambda}\,, \tag{10}\]
where the SED \(f_{\rm G0V}(\lambda)\) was taken from Coelho et al. (2005). This allows us to approximate Eq. (9) with the polychromatic photon flux
\[\bar{F}_{\star}(t,x,y)=A\cdot F_{\star}(t)\cdot\bar{g}(x,y,x_{0},y_{0})\,, \tag{11}\]
used in Eq. (8). In the expression above \(F_{\star}(t)\) denotes the polychromatic stellar photon flux integrated over \(\mathcal{P}\)
\[F_{\star}(t)=\int_{\mathcal{P}}f_{\star}(\lambda,t)\,d\lambda\approx\Delta\lambda_{\mathcal{P}}\cdot F_{0}\cdot 100^{-V(t)/5}\,. \tag{12}\]
Here, \(\Delta\lambda_{\mathcal{P}}\) is the full width at half maximum of the PLATO passband (about \(532\,\rm nm\) for a normal camera), \(V\) is the Johnson-Cousins visual magnitude of the star, and \(F_{0}=1.00179\cdot 10^{8}\,\rm photons\,s^{-1}\,m^{-2}\,nm^{-1}\) is the zero-point reference flux corresponding to a \(V=0\) G0-dwarf star. Alternatively, it is possible to use PLATO magnitudes \(\mathcal{P}\), which can be derived from the Johnson-Cousins \(V\) magnitude using the transformation derived from synthetic stellar spectra by Marchiori et al. (2019)
\[V-\mathcal{P}=c_{0}+c_{1}\ T_{\rm eff}+c_{2}\ T_{\rm eff}^{2}+c_{3}\ T_{\rm eff }^{3}\,, \tag{13}\]
where \(T_{\rm eff}\) is the effective temperature of the star. The best fit coefficients \(\{c_{0},c_{1},c_{2},c_{3}\}\) tabulated in Marchiori et al. (2019) have recently been revisited by Fialho et al. (in prep.). The flux can then be derived (as was done in the same article) using
\[F_{\mathcal{P}}=100^{-(\mathcal{P}-\mathcal{P}_{\rm zp})/5}\,, \tag{14}\]
with a mission BOL zero point of \(\mathcal{P}_{\rm zp}=20.77\) for an A0 dwarf star of \(\mathcal{P}=0\) being the current best fit estimate for the N-CAM (and correspondingly F-CAM zero points of \(\mathcal{P}_{\rm zp,blue}=20.18\) and \(\mathcal{P}_{\rm zp,red}=19.81\) for the blue and red filter, respectively).
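To make the conversions of Eqs. (12)–(14) concrete, the following minimal Python sketch maps magnitudes to photon fluxes. It is illustrative only and not part of PlatoSim; in particular, the `coeffs` argument stands in for the best-fit coefficients of Eq. (13), which are not reproduced here.

```python
import numpy as np

F0 = 1.00179e8        # zero-point flux for V = 0 [photons s^-1 m^-2 nm^-1]
DELTA_LAMBDA = 532.0  # FWHM of the N-CAM passband [nm]

def flux_from_V(V):
    """Polychromatic photon flux of Eq. (12) for a visual magnitude V."""
    return DELTA_LAMBDA * F0 * 100.0 ** (-V / 5.0)

def plato_magnitude(V, t_eff, coeffs=(0.0, 0.0, 0.0, 0.0)):
    """PLATO magnitude P from Eq. (13); `coeffs` are placeholders for
    the (c0, c1, c2, c3) values of Marchiori et al. (2019)."""
    c0, c1, c2, c3 = coeffs
    return V - (c0 + c1 * t_eff + c2 * t_eff**2 + c3 * t_eff**3)

def flux_from_P(P, P_zp=20.77):
    """Flux from a PLATO magnitude and the N-CAM BOL zero point, Eq. (14)."""
    return 100.0 ** (-(P - P_zp) / 5.0)
```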
The practical implementation of Eqs. (8) and (11) requires the focal plane coordinates \((x_{0},y_{0})\) of the star around which the PSF is centred, as well as a numerical integration of the PSF over the relevant pixels. The former are derived from the equatorial sky coordinates \((\alpha,\delta)\) of the star using the transformations outlined in Appendix A. However, an accurate derivation also requires taking into account optical distortion, kinematic aberration, the (imperfect) attitude control of the spacecraft, and the thermal drift of the camera, all of which displace the PSF in the focal plane. More details of this description are given in Sect. 6. The integration of the PSF is computationally non-trivial to implement because the PSF varies over the focal plane. Fast convolution of the PSF using the fast Fourier transform (FFT) is therefore only justified when the simulated region of the CCD is sufficiently small. Alternatively, Appendix B shows how an analytical approximation can be constructed that allows efficient integration over the pixels, and which is used by PlatoSim in the case of larger CCD regions. Such an approximation is also beneficial to realistically characterise the PSF's change in shape and size over time, induced typically by a change in the thermal environment, also known as _PSF breathing_ (e.g. see Bely et al., 1993, for orbital focus variations of HST).
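As an aside, the wavelength averaging of Eq. (10) can be sketched in a few lines of Python; the array names are assumptions for this illustration and do not reflect PlatoSim's internal (C++) interfaces.

```python
import numpy as np

def polychromatic_psf(psf_cube, wavelengths, sed):
    """Wavelength-averaged PSF of Eq. (10). `psf_cube` holds monochromatic,
    normalised PSFs with shape (n_lambda, ny, nx); `sed` is the G0V spectral
    energy distribution sampled at `wavelengths` [nm]."""
    numerator = np.trapz(sed[:, None, None] * psf_cube, wavelengths, axis=0)
    denominator = np.trapz(sed, wavelengths)
    psf = numerator / denominator
    return psf / psf.sum()  # renormalise so the pixel sum equals 1
```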
On top of all of this, PlatoSim takes into account that the same star is projected multiple times on the focal plane at different locations. Apart from the nominal PSF that carries more than \(99.9\%\) of the light, there are two so-called _ghost_ images of the star. The latter are caused by the fact that the optical elements of each camera (as shown in Fig. 5) not only refract the light but also cause internal reflections, so that part of the light is also projected elsewhere in the focal plane.
The most important ghost is a _point-like_ image that carries no more than \(0.08\%\) of the light. Results from the test campaign of the engineering camera model revealed that the intensity of the point-like ghost decreases exponentially from the optical axis outwards. This ghost image will be extremely weak for stars whose nominal PSF is beyond \(8^{\circ}\) from the optical axis; however, in practice heavily saturated stars (\(V\sim 0\)) will show visible ghosts for nominal PSF positions \(<12^{\circ}\) from the optical axis (at the level of a few tens of ADU). The point-like ghost
Figure 4: Illustration of a synthetic PLATO PSF generated at different optical axis distances \(\vartheta\) with **a)** Zemax OpticStudio and **b)** an analytic model. The top panels show the high resolution PSF for \(\vartheta=3^{\circ}\) (left) and \(\vartheta=18^{\circ}\) (right). The lower panels show the corresponding PSF after a \(0.2\,\rm pixel\) Gaussian diffusion kernel has been applied. Each PSF is constructed at an azimuth angle of \(45^{\circ}\) and has a resolution of 64 subpixel elements, corresponding to a \(1^{\prime}\times 1^{\prime}\) field on the sky. The image is normalised such that the sum over all pixels is equal to 1.
is caused after reflection of the light on the CCD surface and on both surfaces of the entrance window (front and back). Its PSF is thus very similar to the nominal one, and is located diametrically opposite of the optical axis, that is
\[\begin{pmatrix}x_{\rm FP}\\ y_{\rm FP}\end{pmatrix}_{\rm ghost}=-\begin{pmatrix}x_{\rm FP}\\ y_{\rm FP}\end{pmatrix}\,, \tag{15}\]
where we used the optical axis as the origin of the focal plane reference frame. Point-like ghosts are therefore created by stars whose nominal PSF is on another CCD (e.g. see Fig. 3a).
The second ghost image is a so-called _extended_ ghost, and is caused by reflections off the CCD surface and the back of lens L6 (see again Fig. 5). It carries only \(0.003\%\) of the light (well below the mission requirements), and is located on the same CCD as the nominal PSF but radially shifted towards the edge of the FOV
\[\begin{pmatrix}x_{\rm FP}\\ y_{\rm FP}\end{pmatrix}_{\rm ghost}=1.0672\begin{pmatrix}x_{\rm FP}\\ y_{\rm FP}\end{pmatrix}\,. \tag{16}\]
An analysis with Zemax shows that its PSF can be well approximated with a homogeneous disk with a large diameter (hence the name extended) that depends on the distance from the optical axis, ranging from about 200 pixels close to the optical axis to more than 370 pixels at the edge of the FOV. The exact dependence was tabulated using Zemax, and then approximated with a second order polynomial interpolant that is used in PlatoSim simulations. The nominal source PSF will be inside the extended ghost for a star up to \(6^{\circ}\) away from the optical axis. As an example, a (saturated) star of \(V=0\) located \(7^{\circ}\) from the optical axis will create a ghost of \(\sim 220\) pixel diameter with \(\sim 800\) e\({}^{-}\) pixel\({}^{-1}\) (as shown later in Fig. 16). Clearly, with such a spatial dilution of a tiny fraction of the light, extended ghosts are only relevant for the brightest stars, such as Canopus in the LOP south and Vega in the LOP north (following the updated results of Nascimbeni et al. 2022), and will be well below the background noise for all other stars.
### Incident light from the sky background
The diffuse sky background \(F_{\mathrm{sky}}\) that affects every PLATO CCD measurement consists mainly of zodiacal light, and light from unresolved Milky Way stars. The former is caused by sunlight being scattered by inter-planetary dust particles agglomerated across the ecliptic plane. To model the zodiacal light for simulating space-based photometry, De Ridder et al. (2006) used the monochromatic values of the zodiacal light at \(\lambda=500\) nm in the vicinity of the Earth tabulated by Leinert et al. (1998), and assumed that the spectral distribution of the zodiacal light is the same as the one of the Sun (\(F_{\odot}(\lambda)\), tabulated in Wehrli 1985) to estimate the amount of zodiacal light flux that hits the detector.
Marchiori et al. (2019) adopted the same approach but improved upon it by including the reddening factor \(f_{\rm red}(\lambda)\) of the solar spectrum, the small correction factor \(f_{\rm L2}=0.975\) for a spacecraft at L2 rather than in the direct vicinity of the Earth, and by including the passband (i.e. the spectral response \(S(\lambda)\)) of a PLATO camera when integrating over the zodiacal spectrum. PlatoSim adopts the same approach, which leads to the following expression for the zodiacal flux
\[F_{\rm ZL}(\alpha,\delta)=\frac{F_{\rm ZL}(\alpha,\delta,500\,{\rm nm})\cdot f_{\rm L2}}{F_{\odot}(500\,{\rm nm})}\int F_{\odot}(\lambda)\,f_{\rm red}(\lambda)\,S(\lambda)\,{\rm d}\lambda\,, \tag{17}\]
where \(F_{\mathrm{ZL}}(\alpha,\delta,500\,\mathrm{nm})\) is the monochromatic zodiacal flux at \(500\) nm derived from Leinert et al. (1998). To model the Galactic sky background PlatoSim adopts the same approach as in De Ridder et al. (2006), using tabulated Pioneer 10 observations from beyond 2.8 AU (where the contribution of the zodiacal light is negligible).
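A minimal numerical sketch of Eq. (17) is given below, assuming the tabulated inputs (solar spectrum, reddening factor, and spectral response) are available as arrays on a common wavelength grid; the function name and signature are illustrative only.

```python
import numpy as np

def zodiacal_flux(FZL_500, F_sun, f_red, S, wavelengths, f_L2=0.975):
    """Zodiacal flux of Eq. (17). `FZL_500` is the tabulated monochromatic
    zodiacal flux at 500 nm for the line of sight (Leinert et al. 1998);
    `F_sun`, `f_red`, and `S` are sampled at `wavelengths` [nm]."""
    i500 = np.argmin(np.abs(wavelengths - 500.0))   # index of the 500 nm bin
    scale = FZL_500 * f_L2 / F_sun[i500]
    return scale * np.trapz(F_sun * f_red * S, wavelengths)
```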
Figure 6 shows the combined sky background model of PlatoSim in an all-sky Aitoff projection in Galactic coordinates together with the suggested LOPs of Nascimbeni et al. (2022). Since PlatoSim does not include the sky background of extragalactic sources, care must be taken for simulations that cover the FOV of the Magellanic Clouds (which partially overlap with the LOP south). The final selected LOP(s) of PLATO will be chosen such that the combined sky background flux is below the mission requirement of \(20\) e\({}^{-}\) pixel\({}^{-1}\) s\({}^{-1}\).
Although the PLATO camera design has a baffle and a stray light mask, these do not perfectly block reflected light of celestial bodies in our Solar System.
Figure 5: Layout of the Telescope Optical Unit (TOU) together with the detectors on the right forming the Focal Plane Array (FPA). Light passes the entrance window to the left (which for the F-CAMs is a dedicated optical filter) and propagates through the refractive optical lenses (L1–L6) onto the FPA on the right. _Credit: ESA_.
Figure 6: Aitoff projection in Galactic coordinates (\(l,\,b\)) of the all-sky background model used by PlatoSim. The model includes zodiacal light and diffuse galactic light in units of incoming photons per second per pixel. The blue dashed line shows the ecliptic plane (with the location of the Sun clearly visible as the highest intensity area) and with the crosses illustrating respectively the LOP north (orange) and LOP south (magenta) from Nascimbeni et al. (2022). We note that data gaps in the zodiacal map of Leinert et al. (1998) and in the galactic Pioneer 10 map have been interpolated using a cubic spline.
The main stray light contributors for PLATO are reflected light of the Earth and the Moon, for which the requirement on the combined flux is set to be less than \(40\,\mathrm{e}^{-}\,\mathrm{pixel}^{-1}\,\mathrm{s}^{-1}\). The stray light differs from one camera group to the other, as different camera groups point in different directions. Detailed modelling of the stray light requires the exact sky positions of the Earth and the Moon as well as an optical stray light model resulting from an in-depth analysis of the camera surfaces, coatings, and paintings. Due to the importance of stray light, the inclusion of such a model is currently under development for PlatoSim.
### Cosmic rays
Cosmic rays (CRs) are high-energy cosmic particles that leave a high intensity trail over multiple pixels when colliding with a detector. The exact morphology of the dissipated energy on the detector depends both on the properties of the detector (e.g. material and front/back illumination) and on the properties of the cosmic particle (e.g. particle type, energy, and incident angle). Furthermore, since the frequency of cosmic rays depends on the time-varying space weather (the solar cycle, coronal mass-ejection events, galactic processes, etc.), and their impact strongly depends on spacecraft properties (such as physical orientation and material shielding/penetration), a realistic model is non-trivial. CR simulators do exist, such as STARDUST (Rolland et al., 2008), Geant4 (Allison et al., 2016), GRAS (Santin et al., 2005), and CosmiX7 (Lucsanyi and Prod'homme, 2020). All of these codes model particle transport using the Monte Carlo technique; CosmiX is the fastest code due to its semi-analytical approach. CosmiX was initially developed for space-borne missions such as Gaia and PLATO, but is presently too time consuming to be efficiently integrated into PlatoSim.
Footnote 7: [https://gitlab.com/david.lucsanyi/cosmix](https://gitlab.com/david.lucsanyi/cosmix)
Instead a simplified model is implemented in PlatoSim: the number of CR hits during an exposure is drawn from a Poisson distribution around a mean value that scales with the exposure time \(\Delta t_{\rm exp}\) and the subfield size (\(n_{\rm row}\times n_{\rm col}\)) of the CCD area under consideration
\[N_{\mathrm{CR}}\sim\mathcal{P}(\mu=R_{\mathrm{CR}}\,\,\Delta t_{\mathrm{exp}} \,\,n_{\mathrm{row}}\,\,n_{\mathrm{col}}\,\,s_{\mathrm{pix}}^{2})\,, \tag{18}\]
where \(R_{\mathrm{CR}}\) is the cosmic hit rate (\(\mathrm{events}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-2}\)), \(\Delta t_{\mathrm{exp}}\) is the exposure time, and \(s_{\mathrm{pix}}^{2}\) is the surface area of one pixel. The impact locations on the CCD are randomly chosen over the entire subfield. The trail length on the CCD of each cosmic hit is drawn from a uniform distribution, and the intensity in electron counts is drawn from a skew-normal distribution (given a location \(\psi_{\mathrm{CR}}\), scale \(\omega_{\mathrm{CR}}\), and shape parameter \(\alpha_{\mathrm{CR}}\)).
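The sampling scheme above can be sketched as follows; this is a simplified illustration of Eq. (18) and the assumed trail and charge distributions, not PlatoSim's actual implementation.

```python
import numpy as np
from scipy.stats import skewnorm

rng = np.random.default_rng()

def draw_cosmic_hits(rate, t_exp, n_row, n_col, s_pix,
                     psi, omega, alpha, max_trail=300):
    """Draw one exposure worth of CR events: the number of hits follows
    Eq. (18), impact locations are uniform over the subfield, trail
    lengths are uniform, and charge deposits are skew-normal."""
    n_hits = rng.poisson(rate * t_exp * n_row * n_col * s_pix**2)
    rows = rng.integers(0, n_row, n_hits)               # impact rows
    cols = rng.integers(0, n_col, n_hits)               # impact columns
    trails = rng.integers(1, max_trail + 1, n_hits)     # trail length [pixel]
    charges = skewnorm.rvs(a=alpha, loc=psi, scale=omega, size=n_hits)
    return rows, cols, trails, charges
```

For an N-CAM one could use, for instance, `t_exp = 25` (the cycle time in s), `s_pix = 18e-4` (the 18 \(\mu\)m pixel pitch in cm), and the skew-normal parameters fitted in Fig. 7.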
We have validated PlatoSim's CR model against the CosmiX simulator, since this software is open source and in excellent agreement with more complex CR codes such as Geant4 and GRAS. Since CosmiX is a dedicated module in the open source detector framework Pyxel8 (Arko et al., 2022), we used it to generate representative PLATO CCD images by loading in a dark frame generated by PlatoSim and then applying CosmiX. A configuration file made it easy to select settings representative of the PLATO CCD needed for CosmiX.
Footnote 8: [https://esa.gitlab.io/pyxel/](https://esa.gitlab.io/pyxel/)
Figure 7 shows a visual model comparison between PlatoSim (top left) and CosmiX (top right). The bottom panel shows a proton irradiation test performed at cold temperature on a PLATO flight model CCD (black dots from Prod'homme et al., 2018) and a best model fit of the PlatoSim skew-normal distribution to the test data (blue solid line). The corresponding CosmiX model (see Lucsanyi and Prod'homme, 2020) shows a 93% model agreement with the test data, where the remaining 7% mainly corresponds to so-called secondary \(\delta\)-electrons emerging from the setup itself (i.e. when the proton beam outside the cryostat collides with the aluminium flange). These electrons leave their imprint as an increased fraction of events below \(4\,\mathrm{ke}^{-}\); hence, we chose to exclude the first three data points (pink squares) from the PlatoSim model fit.
It is clear that the simplified CR model of PlatoSim naturally shows discrepancies with CosmiX and the PLATO CCD test data (especially at higher deposits of charge). Most noticeable from the pixel data is the difference in CR morphology. In particular CosmiX shows a more discrete nature of the charge deposits along the tracks. The underlying reason is that CosmiX assumes that, while it tracks groups of electron clusters around vertices created by interactions with the incoming ionising particle, the loss of energy of the primary particle through the Si depletion zone of the detector is negligible. On the other hand,
Figure 7: A CR model comparison. **Top panels:** A visual comparison between PlatoSim (left) and CosmiX (right), shown for a small \(150^{2}\) pixel subfield. The images are generated using a cycle time of \(25\,\mathrm{s}\), a CR hit rate of \(100\,\mathrm{events}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-2}\) (corresponding to solar maximum), and a CR trail length for PlatoSim of up to \(300\,\mathrm{pixel}\). As input for CosmiX, we used a unidirectional \(55\,\mathrm{MeV}\) proton beam and a \(40\,\mu\mathrm{m}\) thick Si pixel volume. **Bottom panel:** Number distribution of total charge deposit per event. The black dots show the measurements from the proton irradiation test campaign on a PLATO flight model CCD at \(203\,\mathrm{K}\) (Prod'homme et al., 2018). The blue solid line is a best fit model of PlatoSim's skew-normal distribution to the test data and the blue shaded region is the \(2\sigma\) confidence interval. The first three data points (pink squares) were excluded from the fit as they originate from secondary \(\delta\)-electrons from the setup. The best fit parameters \((\psi_{\mathrm{CR}},\omega_{\mathrm{CR}},\alpha_{\mathrm{CR}})=(5232\pm 76,3842\pm 197,6\pm 1)\) were used to produce the PlatoSim simulation above.
PlatoSim models the total charge deposit more continuously, with a gradient that is effectively determined by the distributions from which the energy, incident angle/location, and trail length are drawn. Despite these discrepancies, overall the two models agree sufficiently well for PlatoSim to fulfill its original purpose, namely to train the on-board CR rejection algorithm of the PLATO reduction pipeline.
### The dark signal
As for all CCDs, the PLATO detectors show a _dark signal_, that is, thermal electrons that are generated even in the absence of incident light. This dark signal contributes to the total noise budget with a temporal and a spatial component, both of which we model. The dark signal accumulated during an exposure of duration \(\Delta t_{\rm exp}\) and a readout of duration \(\Delta t_{\rm ro}\), as occurring in Eqs. (7) and (8), is modelled in PlatoSim using a Poisson distribution
\[F_{\rm dark}(i,j)\sim\mathcal{P}\left(\mu=n_{{\rm DS},ij}\cdot(\Delta t_{\rm exp}+\Delta t_{\rm ro})\right)\,, \tag{19}\]
where \(n_{\rm DS}\) is the dark signal. In practice the latter is not fixed for a particular CCD, but shows a fixed-pattern spatial variation over the CCD which is usually characterised by the dark signal non-uniformity (DSNU; \(\sigma_{\rm DSNU}\)). PlatoSim models the DSNU by drawing \(n_{\rm DS}\) from a normal distribution
\[n_{\rm DS}\sim\mathcal{N}\left(\mu=\bar{n}_{\rm DS},\,\sigma=\sigma_{\rm DSNU }\right)\,. \tag{20}\]
The nominal values of \(\bar{n}_{\rm DS}\) and \(\sigma_{\rm DSNU}\) for a PLATO CCD, as tabulated by the manufacturer e2v, are respectively \(\bar{n}_{\rm DS}=1.2\,{\rm e^{-}\,s^{-1}}\) and \(\sigma_{\rm DSNU}/\bar{n}_{\rm DS}=15\%\) root-mean-square (rms) at mission BOL. These values are slightly conservative compared to on-ground calibration estimates (Verhoeve et al. 2016) of \(\bar{n}_{\rm DS}\approx 1.03\,{\rm e^{-}\,s^{-1}}\) and \(\sigma_{\rm DSNU}/\bar{n}_{\rm DS}\approx 12\%\) rms. Apart from the exposure time, the dark signal also depends on the detector temperature, which PlatoSim models as a linear function of the CCD temperature with a slope of \(5\,{\rm e^{-}\,s^{-1}\,K^{-1}}\), the mission requirement value.
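A compact sketch of Eqs. (19) and (20), with illustrative defaults taken from the BOL values quoted above, is shown below; in practice the DSNU map is drawn once per detector and kept fixed across exposures.

```python
import numpy as np

rng = np.random.default_rng()

def dsnu_map(n_row, n_col, mean_ds=1.2, dsnu_rms=0.15):
    """Fixed-pattern dark signal map of Eq. (20) [e-/s per pixel]."""
    n_ds = rng.normal(mean_ds, dsnu_rms * mean_ds, size=(n_row, n_col))
    return np.clip(n_ds, 0.0, None)   # dark rates cannot be negative

def dark_signal(n_ds, t_exp, t_ro):
    """Poisson dark signal of Eq. (19), accumulated over exposure and readout."""
    return rng.poisson(n_ds * (t_exp + t_ro))
```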
### Readout smearing
Similar to many space-borne instruments the PLATO cameras do not use a mechanical shutter to block light from reaching the CCD during readout. This implies that during the readout the CCD continues to gather photons. During the row transfers at readout, pixels will therefore be shifting 'under' the PSFs of stars in the same column and accumulate photons of these stars during the short time that it takes to shift one row of pixels to the next one. In practice this will lead to a uniform bright vertical trail that is imprinted in each column, but is most visible for those columns that contain bright stars. This effect, called _readout smearing_, is described with the second term of Eq. (6), and is illustrated in Fig. 3b for pixel column \(\sim 290\), mainly caused by the saturated star in the top of the column.
If, with a slight abuse of notation, we denote with \(I_{ij}^{\rm(exp)}\) the number of electrons that were accumulated in pixel \((i,j)\) during an exposure of duration \(\Delta t_{\rm exp}\), the total number of electrons per second collected in the entire column \(j\) is
\[\bar{I}_{j}=\frac{1}{\Delta t_{\rm exp}}\sum_{i=1}^{n_{\rm row}}I_{ij}^{\rm(exp)}\,. \tag{21}\]
Here \(n_{\rm row}\) is the number of rows that are illuminated. We note again that this is different for the F-CAMs than for the N-CAMs, as the former feature frame-transfer readout and the bottom half of their CCDs is covered to prevent illumination. During readout, each pixel \((i,j)\) in column \(j\) will collect \(\delta t_{\rm trans}\cdot\bar{I}_{j}\) electrons, where \(\delta t_{\rm trans}\) is the time it takes to transfer one row of photoelectrons to the next one. This happens partly during the readout phase of the previous exposure, when the row was transferred from the top of the CCD to location \(i\), and partly during the readout phase of the current exposure, when the row is transferred from location \(i\) to the readout register at the bottom of the CCD.
Since the photons collected during readout are also subject to photon noise, the final value for the flux accumulated during readout is drawn from a Poisson distribution
\[I_{ij}^{\rm(ro)}\sim\mathcal{P}\left(\mu=\delta t_{\rm trans}\,\bar{I}_{j}\right)\,. \tag{22}\]
The readout smearing is measured using a parallel overscan region whose size spans 30 virtual pixel rows for the N-CAMs (see pink box of Fig. 3a).
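A direct transcription of Eqs. (21) and (22) could look as follows; `image_exp` is an assumed input array of electrons accumulated during the exposure, and the sketch does not distinguish between transfers before and after the exposure.

```python
import numpy as np

rng = np.random.default_rng()

def readout_smearing(image_exp, t_exp, t_trans):
    """Readout smearing per pixel: column rates from Eq. (21) and
    Poisson-distributed smearing charge from Eq. (22)."""
    col_rate = image_exp.sum(axis=0) / t_exp   # e-/s per column, Eq. (21)
    mu = t_trans * col_rate                    # mean charge per row transfer
    return rng.poisson(np.broadcast_to(mu, image_exp.shape))
```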
## 6 Focal plane positions of the stars
This section describes how PlatoSim computes the positions \((x_{0},y_{0})\) of a star in the focal plane mentioned in Eqs. (9) and (11) and the corresponding pixel positions \((i_{0},j_{0})\) on the CCD. A realistic model of the time dependence of these positions is crucial to understand the non-white noise they induce.
### Differential kinematic aberration
Given the true equatorial sky coordinates \((\alpha,\delta)\) of a star, the _apparent_ sky coordinates are slightly different because of stellar kinematic aberration: the overall motion of the spacecraft relative to the star induces a shift of the apparent stellar position due to the aberration of the light rays as they enter the camera. The amplitude and direction of the shift of the apparent stellar position depend on the angle between the position vector \(\vec{s}\) of the star and the velocity vector \(\vec{v}\) of the spacecraft with respect to the star. More specifically, if \(\theta\) is the unaberrated angle between these two vectors (i.e. \(\cos\theta=\vec{v}\cdot\vec{s}\)), then the aberrated angle \(\theta_{\rm ab}\) between them is given by
\[\theta_{\rm ab}=\tan^{-1}\left(\frac{\sqrt{1-\beta^{2}}\sin\theta}{\beta+\cos \theta}\right)\,, \tag{23}\]
where \(\beta=v/c\) with \(v\) the velocity of the spacecraft with respect to the star and \(c\) the speed of light. The corresponding aberrated position vector of the star \(\vec{s}_{\rm ab}\) is given by
\[\vec{s}_{\rm ab}=\vec{v}\,\cos\theta_{\rm ab}+\frac{\vec{s}-\vec{v}\cos\theta}{|\vec{s}-\vec{v}\cos\theta|}\,\sin\theta_{\rm ab}\,. \tag{24}\]
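For illustration, Eqs. (23) and (24) translate directly into the following sketch, which assumes \(\vec{s}\) and \(\vec{v}\) are not (anti)parallel; it is not taken from the PlatoSim code base.

```python
import numpy as np

C_LIGHT = 299792.458  # speed of light [km/s]

def aberrated_direction(s, v):
    """Aberrated unit vector of Eqs. (23)-(24) for a star direction `s`
    and a spacecraft velocity `v` [km/s] (both 3-vectors)."""
    s = s / np.linalg.norm(s)
    v_hat = v / np.linalg.norm(v)
    beta = np.linalg.norm(v) / C_LIGHT
    cos_t = np.dot(v_hat, s)
    theta = np.arccos(cos_t)
    theta_ab = np.arctan2(np.sqrt(1.0 - beta**2) * np.sin(theta),
                          beta + cos_t)                        # Eq. (23)
    perp = s - v_hat * cos_t
    perp /= np.linalg.norm(perp)
    return v_hat * np.cos(theta_ab) + perp * np.sin(theta_ab)  # Eq. (24)
```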
The corresponding pixel displacement of a star therefore depends on its position in the focal plane and varies in time as PLATO's velocity vector changes over its orbit. However, the AOCS of PLATO uses bright fine guidance stars in the same FOV, observed by the F-CAMs, to continuously stabilise its pointing. These fine guidance stars experience roughly the same aberration, which is therefore largely and continuously corrected by the AOCS. The correction is only approximate, since a given pixel's LOS depends on its exact location in the focal plane, so that a small pixel-to-pixel variation of the aberration still occurs. This _differential aberration_ is not radially symmetric around the optical axis of a camera but is offset from
the pointing axis of the platform determined by the F-CAMs. Hence the maximum amplitude of the differential aberration of an N-CAM is at the FOV edge furthest from the platform pointing, resulting in a shift of up to \(\sim 0.8\) pixel in 3 months.
The differential aberration is taken into account using a realistic orbit read in by PlatoSim. Generally PLATO orbits around L2 following a libration point orbit (a so-called Lissajous orbit), but the dominant velocity component for the kinematic aberration is the one following the orbit of L2 around the Sun. In the remainder of the paper, when we use sky coordinates, we always refer to the apparent sky coordinates subjected to kinematic aberration.
### Projection of the star on the focal plane
The exact position of the star on a CCD depends on where in the sky the spacecraft platform is pointing, how the camera is mounted on the platform, how the focal plane reference frame is defined in the camera, and finally how the CCDs are oriented inside the focal plane. In practice, PlatoSim models the CCD coordinates (\(x_{\text{CCD}},y_{\text{CCD}}\)) through a set of reference frame transformations
\[\begin{pmatrix}x_{\rm CCD}\\ y_{\rm CCD}\end{pmatrix}=\mathbf{R}_{\rm FP}^{\rm CCD}\cdot\mathbf{R}_{\rm CAM}^{\rm FP}\cdot\mathbf{R}_{\rm PLM}^{\rm CAM}\cdot\mathbf{R}_{\rm EQ}^{\rm PLM}\begin{pmatrix}x_{\rm eq}\\ y_{\rm eq}\\ z_{\rm eq}\end{pmatrix}\,. \tag{25}\]
Here, \((x_{\text{eq}},y_{\text{eq}},z_{\text{eq}})=(\cos\delta\cos\alpha,\cos\delta\sin\alpha,\sin\delta)\) are the components of the unit vector pointing towards a star with apparent sky coordinates \((\alpha,\delta)\) in the equatorial reference frame. The rotation matrices in Eq. (25) are used to transform from the equatorial reference frame (EQ), to the payload module (PLM) reference frame, to the camera (CAM) boresight reference frame, to the (undistorted) focal plane (FP) reference frame, and finally to the CCD reference frame. The rotation matrix \(\mathbf{R}_{\rm EQ}^{\rm PLM}\) depends on three angles \((\alpha_{\rm PLM},\delta_{\rm PLM},\epsilon_{\rm PLM})\) defining the orientation of the spacecraft, and \(\mathbf{R}_{\rm PLM}^{\rm CAM}\) depends on two angles \((\eta_{\rm CAM},\rho_{\rm CAM})\) defining the orientation of a camera on the payload module. All five of these angles are time dependent because of spacecraft pointing jitter and slow thermo-elastic drifts, as explained in Sections 6.3 and 6.4. The matrix \(\mathbf{R}_{\rm CAM}^{\rm FP}\) involves a pinhole projection as explained in Appendix A. The resulting focal plane coordinates are moreover subjected to optical distortion, which is explained in Section 6.5.
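Schematically, Eq. (25) amounts to composing rotation matrices with the equatorial unit vector, as in the sketch below; the pinhole projection and CCD mapping of Appendix A are omitted, and the argument names are illustrative.

```python
import numpy as np

def equatorial_unit_vector(alpha, delta):
    """Unit vector towards apparent sky coordinates (alpha, delta) [rad]."""
    return np.array([np.cos(delta) * np.cos(alpha),
                     np.cos(delta) * np.sin(alpha),
                     np.sin(delta)])

def sky_to_camera(alpha, delta, R_eq_to_plm, R_plm_to_cam):
    """First rotations in the chain of Eq. (25): equatorial frame to
    payload module frame to camera boresight frame."""
    return R_plm_to_cam @ R_eq_to_plm @ equatorial_unit_vector(alpha, delta)
```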
### Payload module pointing jitter
The AOCS controls the stability of the spacecraft pointing and is affected mainly by the reaction wheels and the fine guidance star system. As the AOCS is not perfect, the payload module jitters around a mean pointing, causing the stars to slightly move over the CCD (with a typical distance smaller than a pixel). The high-frequency components of the jitter cause PSF blurring, while the low-frequency components can displace the barycentre of the PSF from one pixel to the next. Due to the non-uniformity of the pixel-to-pixel response, the pointing jitter leads to increased photometric noise, and is thus a key driver for the photometric performance. Several correction algorithms have therefore been published in the literature (e.g. Drummond et al. 2006; Fialho et al. 2006).
To model the pointing jitter we define the platform yaw \(\phi\), the pitch \(\theta\), and the roll \(\psi\) as rotation angles around respectively the \(X_{\rm PLM}\), \(Y_{\rm PLM}\), and \(Z_{\rm PLM}\) axes, such that the angles increase with a clockwise rotation when looking along the positive axes. At any given time a perturbation to the pointing direction and the roll angle of the spacecraft is calculated by first performing a roll rotation around the \(Z_{\rm PLM}\) axis, then a pitch rotation around the rotated \(Y_{\rm PLM}\) axis, and finally a yaw rotation around the twice-rotated \(X_{\rm PLM}\) axis. The combined rotation matrix (in the reference frame of the platform) is thus given by
\[\mathbf{R}(\phi,\theta,\psi)=\mathbf{R}(\phi)\,\mathbf{R}(\theta)\,\mathbf{R}(\psi)\,. \tag{26}\]
In PlatoSim the update of the platform pointing using the rotation matrix above is done for every time step \(\delta t\) (cf. Eq. (6)).
The yaw, pitch, and roll time series used in the simulations are either taken from a detailed perturbation dynamical model of the spacecraft (not included in PlatoSim), or are simulated using red noise. In the latter case the jitter angles are modelled as in De Ridder et al. (2006), using a first-order auto-regressive model
\[\theta_{n+1}=e^{-\delta t/\tau}\,\theta_{n}+\varepsilon_{n+1}\,. \tag{27}\]
Here \(\theta_{n}\) is the jitter angle at time \(t_{n}\), \(\tau\) is the jitter time scale, \(\delta t\ll\tau\) is the discretised time step, and \(\varepsilon\) is a Gaussian distributed noise fluctuation with zero mean and a variance equal to
\[\text{Var}[\varepsilon_{n}]=\sigma^{2}\frac{\delta t}{\tau}\,, \tag{28}\]
where \(\sigma\) is the amplitude scale of the jitter.
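A minimal implementation of the red noise model of Eqs. (27) and (28) for a single Euler angle could look as follows; the parameter names are illustrative.

```python
import numpy as np

rng = np.random.default_rng()

def red_noise_jitter(n_steps, dt, tau, sigma):
    """First-order auto-regressive jitter time series, Eqs. (27)-(28).
    E.g. dt = 0.125 s reproduces the 8 Hz sampling of Fig. 8."""
    phi = np.exp(-dt / tau)                              # AR(1) coefficient
    eps = rng.normal(0.0, sigma * np.sqrt(dt / tau), n_steps)
    theta = np.empty(n_steps)
    theta[0] = eps[0]
    for n in range(1, n_steps):
        theta[n] = phi * theta[n - 1] + eps[n]           # Eq. (27)
    return theta
```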
Figure 8 shows, for the yaw angle, the power spectral density (PSD) of the red noise model from PlatoSim (top panel, using an rms amplitude scale of 0.04 arcsec) and of the dynamical model (bottom) produced by PLATO's prime contractor Otto Hydraulic Bremen (OHB)/Thales Alenia Space (TAS). These simulations have a duration of 27 hours and are sampled at 8 Hz. Compared to the dynamical model description of OHB/TAS, which assumes instrument feedback from the F-CAMs to correct the pointing, the \(1/f^{2}\) PSD of the red noise model lacks the correlated systematics between yaw, pitch, and roll (as seen by the additional kink of the OHB/TAS
Figure 8: Power spectral density (PSD) for a high frequency jitter time series of the yaw angle produced with PlatoSim's red noise model (top) and a dynamical OHB/TAS model (bottom). The two simulated models are sampled at 8 Hz and have a time duration of 27 h. The solid black line corresponds to a 1 min moving median filter.
model). Nevertheless it has been shown that at a cadence of 25 s (i.e. \(4\times 10^{4}\)\(\mu\)Hz) the residual jitter noise of the dynamical model behaves as Gaussian noise which, like red noise, is a stochastic process.
### Thermo-elastic drift
To maintain the solar panels in the direction of the Sun, PLATO will perform a 90\({}^{\circ}\) rotation every three months, after completing a quarter of its orbit around the Sun. During a three-month run the spacecraft is not rotated, in order to maintain a fixed field of view, which implies that the part of the spacecraft that is directly pointing towards the Sun gradually changes. In turn this means that the thermal profile of the spacecraft also slowly changes in time. In particular the thermal flexure of the optical bench will introduce slight changes to the pointing direction of each camera. This effect, also known as thermo-elastic drift (TED), will lead to a slow drift of the stars over the focal plane, up to 80% of a pixel as a worst case estimate for PLATO. The camera drift in PlatoSim is modelled using the same formalism as for the AOCS jitter (see Sect. 6.3), taking the Euler angles (yaw, pitch, roll) as input either from a thermo-dynamical model or from a red noise model.
### Optical field distortion
A last physical effect that impacts the positions of the stars in the focal plane is the optical field distortion. In every real-life application a camera is subjected to image distortions due to slight manufacturing errors of the optical lens and relative optical alignment errors. Building from the heritage of the Brown-Conrady model (Brown, 1971), a unified distortion model was formulated by Wang et al. (2008) (referred to as the _Wang model_), which classifies the lens distortion into a radial, a tangential, and a thin prism distortion. Radial distortion is caused by an imperfect radial curvature of a lens, whereas the tangential (or decentring) distortion is caused by misalignments between different lens elements, and the thin prism distortion arises from a slight tilt of a lens with respect to the detector.
PlatoSim uses the self-contained distortion model in the case of the Zemax PSF, and implements the Wang distortion model for the analytic PSF. The Wang model is applied to all cameras and transforms the undistorted to the distorted (D) focal plane coordinates using
\[\begin{pmatrix}x_{\mathrm{irp}}\\ y_{\mathrm{irp}}\end{pmatrix}_{D}=\begin{pmatrix}x_{\mathrm{irp}}\\ y_{\mathrm{irp}}\end{pmatrix}+\begin{pmatrix}D_{x}\\ D_{y}\end{pmatrix}\,, \tag{29}\]
where
\[D_{x}=x_{\mathrm{irp}}(k_{1}r^{2}+k_{2}r^{4}+k_{3}r^{6})+x_{ \mathrm{irp}}(p_{1}x_{\mathrm{irp}}+p_{2}y_{\mathrm{irp}})+q_{1}r^{2}\,, \tag{30}\] \[D_{y}=y_{\mathrm{irp}}(k_{1}r^{2}+k_{2}r^{4}+k_{3}r^{6})+y_{ \mathrm{irp}}(p_{1}x_{\mathrm{irp}}+p_{2}y_{\mathrm{irp}})+q_{2}r^{2}\,.\]
Here \(r=\sqrt{x_{\mathrm{irp}}^{2}+y_{\mathrm{irp}}^{2}}/l\) is the undistorted radial distance from the optical axis normalised by the focal length \(l\). The set of coefficients \((k_{1},k_{2},k_{3},q_{1},q_{2},p_{1},p_{2})\) belongs to the model description of respectively the radial (\(k\)), the tangential (\(p\)), and thin prism (\(q\)) component. The coefficients used by PlatoSim to model the direct and inverse distortion have been derived using a Zemax model as part of the mission preparation. In addition, PlatoSim allows the coefficients to be provided as a time series, to allow modelling the effect of a changing thermal environment on the optical distortion.
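For concreteness, Eqs. (29) and (30) translate directly into a short routine. The sketch below is illustrative only; the coefficient values must be replaced by the Zemax-derived set mentioned above.

```python
import numpy as np

def wang_distort(x, y, l, k1, k2, k3, p1, p2, q1, q2):
    """Apply the Wang et al. (2008) model of Eqs. (29)-(30): radial (k),
    tangential (p), and thin prism (q) distortion of the undistorted
    focal plane coordinates (x, y); l is the focal length."""
    r2 = (x**2 + y**2) / l**2                 # normalised radius squared
    radial = k1 * r2 + k2 * r2**2 + k3 * r2**3
    tangential = p1 * x + p2 * y              # decentring term of Eq. (30)
    dx = x * radial + x * tangential + q1 * r2
    dy = y * radial + y * tangential + q2 * r2
    return x + dx, y + dy
```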
An illustration of the expected field distortion model representative for the PLATO camera is shown in the left-hand panel of Fig. 9 for one quadrant of the focal plane. Here the black dots represent the undistorted (i.e. unobservable distortion-free) chief ray positions and the red diamonds are the distorted chief ray positions from the Wang model. The right-hand panel of the same figure shows the residual plot between the Zemax and Wang model computed at the same FOV grid shown in the left-hand plot. Overall the residual plot shows a good agreement between the two models, with the Wang model generally resulting in a slightly stronger distortion compared to the Zemax prediction, and where the largest discrepancies are farthest from the optical axis (dashed orange line in the left-hand plot).
## 7 Optical throughput and detector efficiency
This section describes the throughput and efficiency quantities \(\bar{T}(t,x,y)\), \(\bar{E}(t,x,y)\), and \(\bar{Q}(t,x,y)\) that occur in Eq. (8).
### Optical throughput
The total _optical throughput_ (also known as the spectral response) of an optical instrument represents its efficiency in converting incident photons into counts of electrons at detector level. Hence it is a product of the dimensionless optical transmission \(T_{ij}(t,\lambda)\) and the quantum efficiency \(Q_{ij}(t,\lambda)\). The time dependence comes from a slow degradation between the beginning and the end of the mission (\(\mathrm{BOL}\to\mathrm{EOL}\)), which is modelled in PlatoSim using a linear relation; we motivate this model choice below. Figure 10 illustrates the total optical throughput integrated over the PLATO passband, at the beginning of life, over one full-frame CCD.
The monochromatic transmission \(T_{ij}(t,\lambda)\) combines the effects of the transmission efficiency of a photometric passband filter \(T_{\mathrm{fil}}\), the vignetting \(T_{\mathrm{vin}}\), the particulate and molecular contamination \(T_{\mathrm{con}}\), and the polarisation transmission efficiency \(T_{\mathrm{pol}}\),
\[T_{ij}(t,\lambda)=T_{\mathrm{fil}}(t,\lambda)\;T_{\mathrm{vin},ij}(\theta, \lambda)\;T_{\mathrm{con}}(t,\lambda)\;T_{\mathrm{pol},ij}(\theta,\lambda)\,. \tag{31}\]
The polychromatic version, as appearing in Eq. (8) and used by PlatoSim, is derived by taking the average over the PLATO passband
\[\tilde{T}_{ij}(t)=\tilde{T}_{\rm fil}(t)\ \tilde{T}_{\rm vin,ij}(\theta)\ \tilde{T}_{\rm con}(t)\ \tilde{T}_{\rm pol,ij}(\theta)\,. \tag{32}\]

Figure 9: Illustration of the field distortion models. **Left:** Distortion over one quadrant of the FPA with a grid step size of 2\({}^{\circ}\): Shown are the undistorted paraxial chief ray coordinates (black dots) and the real (distorted) chief ray coordinates calculated by the Wang et al. (2008) distortion model (red diamonds), together with the CCD area (dark grey area enclosed by dark blue lines) and the effective size of the camera FOV (dashed orange line). **Right:** Residuals between the Wang and the Zemax distortion model evaluated in the FPA grid points shown in the left panel. The colour bar serves as a reference of the radial distance to the optical axis, \(\theta\). We note that with a PLATO pixel size of 18 \(\mu\)m a residual of 0.1 mm corresponds to \(\sim\) 5.6 pixel.
The phenomenon of vignetting, that is the brightness attenuation towards the edge of the FOV, can be divided into three components: natural, optical, and mechanical vignetting. The attenuation by natural vignetting is caused by the fact that off-axis light rays not only have a longer travel distance but also see a projected (i.e. reduced) area of the entrance pupil, leading to a decreasing light intensity at angles far away from the optical axis. Optical vignetting is induced by the optical design of a camera that features multiple optical elements: a lens earlier in the light path causes a reduction of the effective opening of the next lens because the output angles of the former are limited. This also causes a decrease in intensity towards the edge of the FOV. Mechanical vignetting is due to the blocking of light rays by the stray light mask, and causes a semi-hard circular border of the FOV at a maximum angle \(\theta_{\rm max}\) from the optical axis. The vignetting for a PLATO camera was analysed using a Zemax optical model, as well as measured during the camera assembly, integration, and verification, which led to the following best fitting parametric model
\[\tilde{T}_{\rm vin}(\theta)=\begin{cases}1-k_{1}\theta^{2}-k_{2}\theta^{4}-k_{ 3}\theta^{6}&\text{for }\theta\leq\theta_{\rm max}=18.9^{\circ}\,,\\ c+e^{-(\theta-\theta_{\rm max})/\sigma}&\text{for }\theta>\theta_{\rm max}\,, \end{cases} \tag{33}\]
where \(\{k_{1},\ k_{2},\ k_{3}\}=\{4.18\cdot 10^{-2},\ -5.65\cdot 10^{-5},\ 2.37\cdot 10^{-7}\}\). Here \(\sigma=0.6^{\circ}\) and \(c\) is a constant so that the function is continuous in \(\theta=\theta_{\rm max}\).
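The continuity constant \(c\) follows from equating both branches of Eq. (33) at \(\theta=\theta_{\rm max}\), where the exponential term equals one. A minimal sketch (assuming the angle is supplied in the same units as the coefficients) reads:

```python
import numpy as np

def vignetting(theta, k1, k2, k3, theta_max=18.9, sigma=0.6):
    """Parametric vignetting model of Eq. (33). The constant c is fixed
    by requiring continuity at theta = theta_max, where exp(0) = 1."""
    poly_at_max = 1 - k1*theta_max**2 - k2*theta_max**4 - k3*theta_max**6
    c = poly_at_max - 1.0                # continuity at theta = theta_max
    theta = np.asarray(theta, dtype=float)
    inner = 1 - k1*theta**2 - k2*theta**4 - k3*theta**6
    outer = c + np.exp(-(theta - theta_max) / sigma)
    return np.where(theta <= theta_max, inner, outer)
```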
The particulate contamination is the (unintended) presence of particles on (mostly optical) surfaces, whereas the molecular contamination is the layer of molecules on top of a surface caused by out-gassing of materials in the first phase of the mission (for an in-depth discussion see e.g. Zhao et al., 2009). A major part of the particulate contamination takes place during the fairing ejection (i.e. well before PLATO will start its journey to L2), which by and large sets the level of contamination of the camera entrance window. The story of the molecular contamination is on the other hand more complex. The majority of the out-gassing typically takes place during the cooldown of the spacecraft (mostly during the first three days after launch), but will in reality never stop completely. Furthermore, the out-gassing can take place from various materials at different rates. However, we assume that out-gassing during launch is negligible for PLATO, due to the spacecraft's limited time within Earth's atmosphere, together with a slow rate of change of the out-gassing by the time PLATO starts operating at L2. Hence, as mentioned before, a linearly decreasing model of the transmission efficiency is a good approximation if radiation damage is the dominating factor for the degradation. PlatoSim uses a throughput value of 0.972 due to particulate contamination and a value of 0.9573 due to molecular contamination, which are the BOL requirement values.
The polarisation transmission efficiency is modelled using the following (fairly arbitrary monotonically decreasing) parametric model
\[\tilde{T}_{\rm pol,ij}(\theta)=\tilde{T}_{\rm pol,max}\cos\left(\frac{\theta}{\theta_{\rm ref}}\cos^{-1}\left[\tilde{T}_{\rm pol}(\theta_{\rm ref})\right]\right)\,, \tag{34}\]
where \(\tilde{T}_{\rm pol,max}\) is the maximal value of \(\tilde{T}_{\rm pol}\) and \(\tilde{T}_{\rm pol}(\theta_{\rm ref})\) is its value at a certain reference angular distance \(\theta_{\rm ref}\) away from the optical axis.
Lastly, the quantum efficiency (QE) first presented in Eq. (6) is generally defined as
\[Q_{ij}(t,\lambda)=\frac{Q_{\rm ext,ij}(t,\lambda)}{1-T_{\rm ref,ij}(t,\lambda )}\,, \tag{35}\]
where \(Q_{\rm ext}\) is the external quantum efficiency, that is the ratio of the number of electrons over the number of incident photons, and \(T_{\rm ref}\) is the reflectivity, that is the fraction of photons reflected at the surface despite the anti-reflection coating. This expression implies that the quantum efficiency also depends to second order on the angle of the incident light. Since the PLATO payload's optical design leads to CCD illumination over a wide range of incident angles (up to 40\({}^{\circ}\); Rauer et al., 2014, Rauer et al. in prep.), the variation of the QE with incidence angle is taken into account in PlatoSim. As before we avoid dealing with the wavelength dependence of the QE by averaging it over the passband, \(\tilde{Q}_{ij}\approx\langle Q_{ij}(\lambda)\rangle_{\lambda}\), and we use a similar parametric model as above
\[\tilde{Q}(\theta)=\tilde{Q}_{\rm max}\cos\left(\frac{\theta}{\theta_{\rm ref}}\cos^{-1}\left[\tilde{Q}(\theta_{\rm ref})\right]\right)\,. \tag{36}\]
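Equations (34) and (36) share the same cosine parametrisation and can be captured in a single helper. The sketch below follows the equations as written; all parameter values shown are placeholders:

```python
import numpy as np

def cosine_efficiency(theta, e_max, e_ref, theta_ref):
    """Angular efficiency model shared by Eqs. (34) and (36):
    E(theta) = e_max * cos( theta / theta_ref * arccos(e_ref) ),
    with e_ref the efficiency at the reference angle theta_ref."""
    return e_max * np.cos(theta / theta_ref * np.arccos(e_ref))

# Placeholder values, e.g. for the polarisation transmission of Eq. (34)
theta = np.linspace(0.0, 18.9, 100)
t_pol = cosine_efficiency(theta, e_max=0.99, e_ref=0.95, theta_ref=18.9)
```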
### Detector efficiency
The detector efficiency \(E_{ij}(t)\) in Eq. (8) encompasses both spatial pixel sensitivity variation as well as defective pixels. The former is caused by the fact that the electric field structure within a pixel is affected by small variations in pixel size, the structure of the gate electrodes, the thickness of the SiO\({}_{2}\) insulation layer, and the doping uniformity in the epitaxial (crystalline) Si layers (see e.g. Jorden et al., 1994). The combination of pixel sensitivity variations and spacecraft pointing jitter introduces tiny flux variations that increase the noise level. PlatoSim therefore includes the pixel-response non-uniformity (PRNU; often referred to as the _flat-field_ for space-borne instruments) in its simulations.

Figure 10: Illustration of the total throughput map for one full-frame CCD. The dotted diagonal line shows the distance from the optical axis in degrees (\(\theta\)) and the red dashed lines show the angular position of the stray light mask. We note that the FOV in the focal plane physically extends beyond \(\theta_{\rm max}\) (to \(\sim 19.6^{\circ}\), indicated by the red dotted line) due to the effect of optical distortion (cf. Sect. 6.5) and is followed by an exponential intensity decay of the vignetting (modelled out to 20\({}^{\circ}\), shown by the red solid line).
The most reliable flat-field is an empirical one, obtained by measuring the PRNU at different wavelengths using a flight model of the camera. These measurements are then combined in a weighted average over the PLATO passband. Early in the design of the space mission such measurements are not available, in which case PlatoSim resorts to a simulated flat-field. The important feature here is that the spatial variation in pixel sensitivities usually does not follow a white noise pattern, but shows a spatial correlation. To simulate this, PlatoSim first models the 2D Fourier transform \(\tilde{E}_{\mathrm{FT}}(m,n)\) of the PRNU map as
\[\tilde{E}_{\mathrm{FT}}(m,n)=\frac{\varepsilon_{m,n}}{1+m^{\beta}+n^{\beta}}\,, \tag{37}\]
where \((m,n)\) are the spatial frequencies, \(\varepsilon_{m,n}\sim\mathcal{N}(0,1)\) is a Gaussian distributed noise fluctuation, and the exponent \(\beta\) determines the strength of the correlation at larger spatial distances on the CCD. Subsequently, the inverse Fourier transform is taken and scaled to a given mean and rms, to obtain the actual PRNU map \(\tilde{E}(i,j)\). Figure 11 shows an example of a relative flat-field for \(\beta=2\), which shows a strong spatial correlation over the CCD.
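A compact recipe for Eq. (37), followed by the inverse transform and rescaling, might look as follows (grid size, \(\beta\), mean, and rms are illustrative):

```python
import numpy as np

def simulate_prnu(shape, beta=2.0, mean=1.0, rms=0.01, rng=None):
    """Spatially correlated PRNU map following Eq. (37): white noise is
    shaped in Fourier space by 1 / (1 + m**beta + n**beta), inverse
    transformed, and rescaled to the requested mean and rms."""
    rng = np.random.default_rng() if rng is None else rng
    ny, nx = shape
    m = np.abs(np.fft.fftfreq(ny)[:, None]) * ny     # spatial frequencies
    n = np.abs(np.fft.fftfreq(nx)[None, :]) * nx
    eps = rng.normal(size=shape) + 1j * rng.normal(size=shape)
    e_ft = eps / (1.0 + m**beta + n**beta)
    flat = np.fft.ifft2(e_ft).real
    flat = (flat - flat.mean()) / flat.std()         # zero mean, unit rms
    return mean + rms * flat

prnu = simulate_prnu((512, 512), beta=2.0)           # illustrative subfield
```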
Defective pixels also impact the detector efficiency and fall under one of three categories: _dead_, _hot_, or _telegraphic_. Compared to a normal pixel, a dead pixel has an anomalously low sensitivity, whereas a hot pixel has an anomalously high dark current. A telegraphic (or RTS; random telegraph signal) pixel, on the other hand, is a pixel which periodically switches between an active state with high dark current and an inactive state with normal dark current, and vice versa. While the identification of defective pixels has been performed as part of the on-ground calibration on a flight model CCD (Verhoeve et al. 2016), the relative fraction of defective pixels is less than 0.1% and is thus not expected to pose a problem for PLATO. Therefore, defective pixels are currently not implemented in PlatoSim; however, we do acknowledge the potential importance of implementing these (in particular RTS pixels) later in the mission.
## 8 Electron redistribution models
### Brighter-fatter effect
The brighter-fatter effect (BFE) is an electron redistribution effect caused by electrostatic interaction between neighbouring pixels. Pixels that have already collected a large number of electrons during an exposure shrink in effective collecting area, so that they collect and retain fewer electrons because some of them are now attracted to a neighbouring pixel. The BFE is different from the charge diffusion discussed in Sect. 5.1 in the sense that the former is caused by a changing electrostatic field configuration during an exposure, while the latter is caused by electrons drifting laterally during a random walk before ending up in a pixel. The BFE phenomenon is fairly well understood, and the model implemented in PlatoSim uses the approximative framework outlined by Antilogus et al. (2014), Guyonnet et al. (2015), and Astier et al. (2019).
The electrostatic field lines caused by the pixel gate electrodes define the path of the incoming electrons and therefore determine to which pixel an electron is attracted. In the case of a perfectly regular grid of electrodes, the pixel boundaries fall at the geometrical midpoint between two electrodes, as is nicely illustrated in Fig. 4 of Antilogus et al. (2014). When a pixel accumulates electrons the electrostatic field lines change, as the accumulated electrons have a repulsive effect on new incoming electrons. Electrons close to the geometrical midpoint are now attracted more by the neighbouring electrode, effectively shifting the pixel boundary towards the first electrode so that its corresponding collecting area becomes smaller. The boundary between a 'central' pixel (\(C\)) and its neighbouring pixel 'north' (\(N\)) can be affected by the number of electrons in a nearby (although not necessarily neighbouring) pixel '\(P\)'. In a first approximation the boundary shift \(\delta X^{C\leftrightarrow N}\) scales linearly with the charge \(Q_{P}\) already accumulated in pixel \(P\). Since the electric field is additive, the total effect can be computed by summing up the effects of all nearby pixels \(\{P\}\)
\[\delta X^{C\leftrightarrow N}=\frac{1}{2}\sum_{\{P\}}a_{P}^{C\leftrightarrow N}\,Q_{P}\,, \tag{38}\]
where the factor \(1/2\) is added to be compatible with the definition of the coefficients \(a_{P}^{C\leftrightarrow N}\) as defined in Guyonnet et al. (2015). The sum of the linear coefficients \(a_{P}^{C\leftrightarrow N}\) should be zero as we do not expect any net shift to happen when all charges \(Q_{P}\) are equal.
Antilogus et al. (2014) argue that the change in charge \(\delta Q^{C\leftrightarrow N}\) in pixel \(C\) due to the boundary shift \(C\leftrightarrow N\) scales in a first approximation linearly with the shift \(\delta X^{C\leftrightarrow N}\), as well as with the charge density at the boundary between the two pixels, which to first order scales with the total amount of charge \(Q_{C}+Q_{N}\)
\[\delta Q^{C\leftrightarrow N}=\frac{1}{4}\sum_{\{P\}}a_{P}^{C \leftrightarrow N}\,Q_{P}\,(Q_{C}+Q_{N})\,, \tag{39}\]
where we added an extra scale factor \(1/2\) instead of absorbing it in the coefficients \(a_{P}^{C\leftrightarrow N}\) to be compatible with Guyonnet et al. (2015). The coefficients \(a_{P}^{C\leftrightarrow N}\) in Eq. (39) still sum up to zero. The change in charge in pixel \(C\) is also affected by the change in boundary with the other neighbouring pixels'south' (\(S\)), 'west' (\(W\)), and 'east' (\(E\)), so that we can write
\[\begin{split}\delta Q_{C}&=\delta Q^{C\leftrightarrow N}+\delta Q^{C\leftrightarrow S}+\delta Q^{C\leftrightarrow E}+\delta Q^{C\leftrightarrow W}\\ &=\frac{1}{4}\sum_{\{X\}}\sum_{\{P\}}a_{P}^{C\leftrightarrow X}\,Q_{P}\,(Q_{C}+Q_{X})\,,\end{split} \tag{40}\]
where \(\{X\}\) stands for [north, south, west, east]. The coefficients \(a_{P}^{C\leftrightarrow X}\) were derived in two steps. First, the inter-pixel variance and covariance curves were computed using a set of flatfields with a large range of fluxes, using the prescription of Astier et al. (2019). Then, the electrostatic model of Astier & Regnault (2023) was fitted to obtain the coefficients \(a_{P}^{C\leftrightarrow X}\).

Figure 11: Illustration of the automatically generated flat-field (PRNU) for a full-frame CCD image. This image represents the flat-field used to construct the subfield in Fig. 12 and has a peak-to-peak pixel sensitivity variation of \(\sim 4\%\) and a local rms noise level of \(\sim 1\%\).
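As a rough illustration of Eq. (40), the charge redistribution can be evaluated with one convolution per boundary. The sketch below is a simplification: the coefficient kernels `a_kernels` are hypothetical inputs (in practice the fitted coefficients above), and the wrap-around of `np.roll` at the edges would need proper padding in a real implementation.

```python
import numpy as np
from scipy.ndimage import convolve

def bfe_charge_shift(Q, a_kernels):
    """Minimal sketch of the charge redistribution of Eq. (40).
    Q: accumulated charge image; a_kernels: maps each boundary X in
    {'N','S','E','W'} to a small 2D coefficient array a_P^{C<->X}
    centred on pixel C (each kernel sums to zero)."""
    shifts = {"N": (1, 0), "S": (-1, 0), "E": (0, 1), "W": (0, -1)}
    dQ = np.zeros_like(Q, dtype=float)
    for X, (di, dj) in shifts.items():
        # sum_P a_P Q_P around each pixel (kernel orientation simplified)
        aQ = convolve(Q.astype(float), a_kernels[X], mode="constant")
        # neighbour charge Q_X (np.roll wraps at the edges)
        QX = np.roll(Q, shift=(-di, -dj), axis=(0, 1))
        dQ += 0.25 * aQ * (Q + QX)
    return dQ
```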
### Charge transfer efficiency
The effect of charge transfer inefficiency (CTI) arises during the readout of the CCD because of imperfections in the CCD silicon substrate lattice, which create electron traps. Due to the stochastic capture and release of electrons into and out of these traps, part of the signal is left behind when some electrons are trapped while being transferred during readout. The delayed release of electrons subsequently causes a smearing effect leading to the well-known CTI tails. As radiation damage is the leading cause for the creation of these traps, the CTI grows rapidly with time for any space mission, due to the increased radiation dose received compared to that of ground-based instruments (e.g. Massey et al., 2014).
A first simple CTI model implemented in PlatoSim considers the fraction \(\theta_{\rm CTI}\) of the total charge in a pixel that is left behind at each transfer, hence \(\theta_{\rm CTI}=1-\theta_{\rm CTE}\). As charge transfer happens during both parallel and serial transfer, the charge redistributed over the trailing pixels after \(N\) transfers is (Janesick, 2001)
\[Q_{N+n}=\frac{Q_{0}\,N!}{(N-n)!\,n!}\,\theta_{\rm CTI}^{\,n}\,(1-\theta_{\rm CTI})^{N-n}\,, \tag{41}\]
where \(Q_{0}\) is the initial charge contained in the target pixel, \(N\) is the number of pixel transfers, \(n\) is the trailing pixel number following the target pixel (with \(n=0\) being the target pixel itself), and \(Q_{N+n}\) is the remaining charge in the \((N+n)\)-th pixel.
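With the binomial form of Eq. (41), the charge trail behind a single pixel is easy to evaluate; the values below (initial charge, number of transfers, and CTI) are purely illustrative.

```python
import numpy as np
from math import comb

def cti_trail(q0, n_transfers, cti, n_trail=5):
    """Charge left in the target pixel (n = 0) and its trailing pixels
    after N transfers, using the binomial CTI model of Eq. (41)."""
    return np.array([q0 * comb(n_transfers, n)
                     * cti**n * (1.0 - cti)**(n_transfers - n)
                     for n in range(n_trail + 1)])

# Example: 4510 parallel transfers with an illustrative CTI of 1e-5
print(cti_trail(q0=10_000.0, n_transfers=4510, cti=1e-5))
```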
As a more physical description (naturally introducing the time dependence), the analytic model proposed by Short et al. (2013) (abbreviated the _Short model_) for radiation-induced CTI in CCD detectors is also implemented in PlatoSim. This implementation includes four different trap species, each with its own trap density \(n_{t}\) (traps pixel\({}^{-1}\)), trap capture cross-section \(\sigma_{t}\) (m\({}^{2}\)), and release time scale \(t_{t}\) (s). In addition, the model requires a parameter \(\beta\) describing whether adding electrons to a charge package increases the volume of the package or its density. Prod'homme et al. (2016) derived BOL and EOL values for these quantities for the PLATO mission, assuming a nominal operating temperature of 203 K. Figure 12 illustrates a worst case example of the effect of CTI at EOL (6.5 yr after commissioning) using the Short model. As the CTI increases with time, the photometric quality decreases correspondingly; the PLATO data reduction pipeline therefore contains a CTI correction step (see e.g. Israel et al., 2015, for a good overview of such corrections).
### Blooming
For bright stars (\(V\leq 8.5\) for N-CAM observations) the full-well capacity of some of the pixels is reached, and the electrons start to overflow to neighbouring pixels in the same column, which is usually referred to as _blooming_. Although this saturation happens during the CCD exposure, PlatoSim models it pragmatically as a post-exposure effect that is independent of CCD non-linearity (which is applied later). The caveat of this implementation is a slightly enhanced blooming pattern, in return for a gain in computational speed compared to an iterative approach which models the interaction at every time step \(\delta t\). In a first model, the electron excess is simply distributed evenly between the pixels above and below the saturated pixel. If these pixels also get saturated, the overflow goes to the next pixel in the column, etc. The result is a blooming pattern that is symmetric with respect to the central pixel of the star, that is the upward blooming trail has the same length as the downward one.
Tests with the PLATO CCDs revealed, however, that in practice blooming can be highly asymmetric, with one trail being significantly longer than the other. Although the exact underlying cause is still being investigated, the working hypothesis is that the electrons experience barriers in one direction, so that the excess electrons follow the path of least resistance in the opposite direction. Rather than trying to mimic the still uncertain physical causes, PlatoSim simply allows the user to specify the fraction of excess electrons that goes downwards (i.e. towards the readout register), which is sufficient for the design and testing of the mask creation and photometry extraction of saturated stars.
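A schematic version of this overflow logic for a single CCD column could read as follows; it is a simplification and not the actual PlatoSim implementation (the `frac_down` parameter mimics the user-specified downward fraction).

```python
import numpy as np

def bloom_column(col, full_well, frac_down=0.5):
    """Post-exposure blooming along one CCD column: charge above the
    full-well capacity spills to neighbouring pixels, with a fraction
    frac_down flowing towards the readout register (index 0)."""
    col = col.astype(float)
    excess = np.clip(col - full_well, 0.0, None)
    col -= excess
    for i in np.nonzero(excess)[0]:
        for direction, share in ((-1, frac_down), (+1, 1.0 - frac_down)):
            charge, j = excess[i] * share, i + direction
            while charge > 0 and 0 <= j < col.size:
                dumped = min(full_well - col[j], charge)
                col[j] += dumped
                charge -= dumped
                j += direction
            # charge pushed past the column edge is lost
    return col
```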
## 9 Photometry
One of the main applications of PlatoSim is its ability to provide realistic light curves extracted at pixel level using its built-in photometric algorithm. Besides providing the ability to generate light curves on demand, this feature is especially important when running large batches of simulations where the raw pixel data and housekeeping data may be of the order of several gigabytes (or even terabytes). Due to the new software design of PlatoSim, generating light curves at run time significantly reduces the storage needed for output and adds minimal execution time per simulation.
To date a noteworthy list of literature exists on software for PSF photometry (Da Costa, 1992; Schechter et al., 1993; Anderson & King, 2006; Popowicz, 2018; Hedges et al., 2021), aperture photometry (Howell, 1989; Stetson, 1987; Naylor, 1998; Libralato et al., 2016; Bryson et al., 2010; Handberg & Lund, 2014; Lund et al., 2015; Aigrain et al., 2016; Smith et al., 2016; Marchiori et al., 2019; Hoyer et al., 2020), or pipelines that either combine both or provide both methodologies in a single software package (Kjeldsen & Frandsen, 1992; Still & Barclay, 2012; Bradley et al., 2016; Lightkurve Collaboration et al., 2018). Generally the PLATO light curve generation of non-saturated stars follows two separate data processing chains, namely PSF photometry designed for on-ground data products (i.e. imagettes of \(V<11\) stars belonging to the asPIC sample called P1) and aperture photometry designed for on-board data products (i.e. flux measurements of fainter targets (\(11<V<15\)) primarily consisting of stars from the asPIC sample called P5). For now only a subset of the on-board algorithms is implemented in PlatoSim, whereas a functional coupling to the full processing chains, on-board and on-ground, has been established, as will be explained in Sect. 11.2.

Figure 12: Illustration of the effect of CTI at post-mission EOL (i.e. 6.5 yr after commissioning) using the Short model. The plot shows a centrally placed \(200\times 200\) pixel CCD subfield of PIC stars from the LOP south including cosmic rays simulated using a hit rate of 10 events s\({}^{-1}\) cm\({}^{-2}\). The image has been clipped by a \(2\sigma\) cut and then normalised for illustrative purposes. The readout register is located towards the bottom of the image.
The performance reached by the photometry (both on-ground and on-board) directly depends on the knowledge of the PSF across the focal plane for each independent pointing. However, since the PSF morphology is expected to change notably after launch (due to slight changes of the optical mount during launch and to changes in the thermal environment throughout the mission), the 'true' PSF cannot be measured from the ground. Furthermore, acquiring accurate knowledge of the PLATO PSF in-flight is a major challenge due to the sparse pixel sampling of the PSF. Thus, we first elaborate on the procedure to overcome this challenge by reconstructing the true but unknown PSF across the CCD focal plane.
### PSF inversion using microscanning
PSF reconstruction builds from the idea of extracting a high resolution PSF, \(\mathbf{x}\), from a series of corresponding lower resolution PSFs, \(\mathbf{y}\), following (Park et al., 2003)
\[\mathbf{A}\ \mathbf{x}=\mathbf{y}\,, \tag{42}\]
where \(\mathbf{A}\) is a PSF projection matrix onto the low-resolution pixel grid. Mathematically the inversion is solved by discretising the PSF using a sum of basis functions \(\phi_{i}\)
\[f(x,y)=\sum_{i}a_{i}\ \phi_{i}(x,y)\,, \tag{43}\]
with \(a_{i}\) being the unknown inversion coefficients. Solving Eq. (42) can be tackled using a least-squares procedure. The preferred method for PLATO is discussed in Samadi et al. (2019).
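In its simplest form the inversion amounts to a single least-squares solve; the toy example below, with a random placeholder projection matrix, only illustrates the mechanics of Eq. (42).

```python
import numpy as np

# Sketch of the least-squares inversion of Eq. (42). The projection matrix A
# (one row per low resolution sample, one column per high resolution grid
# cell) encodes the sub-pixel offsets of the microscanning pattern; here it
# is a random placeholder purely to illustrate the solver call.
rng = np.random.default_rng(42)
n_scans, n_pix, n_hires = 430, 9, 32 * 32      # 430 scans of a 3x3 imagette
A = rng.random((n_scans * n_pix, n_hires))
x_true = rng.random(n_hires)                   # 'true' high resolution PSF
y = A @ x_true + rng.normal(0.0, 1e-3, A.shape[0])
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)  # recovered PSF coefficients
```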
Following from the PSF inversion technique first employed by CoRoT (Auvergne et al., 2009) and later matured with Kepler (Bryson et al., 2010), in practice the series of low resolution pixel frames, \(\mathbf{y}\), is acquired by intentionally commanding the AOCS to follow a certain pattern of small coordinate displacements. Such selective jittering is also known as a _microscanning_ session. Figure 13 displays the Archimedean spiral pattern chosen for PLATO, shown for a best case (left) and worst case (right) scenario in terms of pointing stability during the session.
Microscanning sessions will be performed during the quarterly interruptions necessary to realign the spacecraft's solar panels and will have a duration of around three hours each. The current in-flight strategy for microscanning will provide the inverted PSF for all P1 sample stars, whereas only a subset of carefully selected P5 (named R2) sample stars across the FOV will have their PSFs inverted for each pointing. Thus, the remaining P5 targets will have their high resolution PSFs determined from interpolation. The exact scheme of interpolation, and for which magnitude range inverted PSFs can reliably be determined, has been established by the PLATO team responsible for the data processing and algorithms. A transition to this interpolation strategy is currently being integrated into PlatoSim. As an approximate but realistic approach, PlatoSim allows the use of a precomputed grid of inverted PSFs generated from both the worst and best case microscanning examples displayed in Fig. 13.
Currently the grid of inverted PSFs is only fully representative for a single camera and quarter. In practice a grid of inverted PSFs is ideal for each camera, since the PSFs are strongly dependent on the alignment of the optical lenses in each TOU, and for each mission quarter, since changes in PSF morphology together with ageing effects (such as the CTI) increasingly impact the accuracy of the inversion over time. Thus, particularly important for EOL conditions, more microscans are needed in the future to bring the photometry closer to the current mission strategy. However, as the state-of-the-art PSF and aperture photometry algorithms require a high resolution PSF as input, this approach is already more realistic than the unphysical use of the 'true' PSF (be it Zemax or analytical).
### Pre-processing steps
Prior to the photometry extraction several pre-processing steps are applied to the simulated CCD subfield. First the bias offset is subtracted by computing the mean over a serial prescan region (orange box in Fig. 3a) and a virtual1 overscan region from either the F or E side of the detector, depending on the subfield location. Next, readout smearing is corrected for by subtracting the bias corrected smearing map obtained from the parallel overscan region (pink box in Fig. 3a). The gain is then used to convert the pixel values from ADU to counts of photoelectrons. Lastly, the bias subtracted sky background map is multiplied with the overall throughput to get counts of \(\mathrm{e^{-}\,pixel^{-1}\,exposure^{-1}}\) and then subtracted from the pixel map.
Footnote 1: Virtual as in extra readouts of the register.
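Collected in one place, the chain amounts to a handful of array operations; the sketch below is illustrative, and the function signature is ours rather than the PlatoSim API.

```python
import numpy as np

def calibrate_subfield(raw, prescan, smearing_map, gain, sky_map, throughput):
    """Pre-processing sketch (arrays in ADU unless noted): bias subtraction
    from the prescan mean, readout smearing correction, ADU-to-electron
    conversion, and sky background subtraction."""
    bias = prescan.mean()                  # bias offset from the prescan
    img = raw - bias
    img = img - (smearing_map - bias)      # bias corrected smearing map
    img = img * gain                       # ADU -> e-
    img = img - sky_map * throughput       # sky in e- / pixel / exposure
    return img
```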
Figure 13: Illustration of the Archimedean spiral jitter pattern including: **Left:** minor residual AOCS jitter as a best case; **Right:** major residual AOCS jitter as a worst case. For an ideal pattern the distance \(D\) between consecutive measurements is approximately constant and the distance between consecutive spiral arms is \(D\sqrt{3}/2\). These files contain 430 scans (shown as coloured circles marking the start of each exposure) and form a near-equilateral triangular grid, which in turn provides a dense and near complete sampling over PLATO's pixel grid (dotted lines), needed for a successful PSF inversion. _Data courtesy: OHB/TAS_.
### Optimal aperture algorithm
From the wealth of aperture photometry pipelines mentioned above, the on-board algorithm implemented in PlatoSim follows from a study by Marchiori et al. (2019). By design it optimises the photometric quality towards planet transit searches. This study found that a binary mask serves as the best compromise between the noise-to-signal ratio (NSR) and the ratio of stellar contamination.
The general idea is to build a mask starting with the pixel having the lowest NSR, then adding one pixel at a time under the condition that adding it should contribute more to the aggregated signal than to the aggregated noise. For an imagette, this procedure can be formulated mathematically by first arranging all \(n\) pixels in increasing order of NSR
\[\mathrm{NSR}_{n}=\frac{\sqrt{\sigma_{F_{T_{n}}}+\sum_{k=1}^{N_{C}}\,\sigma_{F_ {C_{\mathrm{adj}}}}+\sigma_{B_{n}}+\sigma_{D_{n}}+\sigma_{Q_{n}}}}{F_{T_{n}}}\,, \tag{44}\]
where \(F_{T_{n}}\) and \(\sigma_{F_{T_{n}}}\) are respectively the mean flux and photon noise of the target star, \(\sigma_{F_{C_{\mathrm{adj}}}}\) is the photon noise of each \(k\) stellar contaminant, \(\sigma_{B_{n}}\) is the sky background noise, \(\sigma_{D_{n}}\) is the combined detector noise, and \(\sigma_{Q_{n}}\) is the quantisation noise. The second step consists in determining the aggregated NSR over the imagette's \(m\) pixels conforming to the aforementioned pixel order of increasing NSR
\[\mathrm{NSR}_{\mathrm{agg}}(m)=\frac{\sqrt{\sum_{n=1}^{m}\left(\sigma_{F_{T_{n }}}+\sum_{k=1}^{N_{C}}\,\sigma_{F_{C_{\mathrm{adj}}}}+\sigma_{B_{n}}+\sigma_ {D_{n}}+\sigma_{Q_{n}}\right)}}{\sum_{n=1}^{m}F_{T_{n}}}\,. \tag{45}\]
The last step is simply to construct the aperture from the collection of pixels \(m\) that minimises Eq. (45).
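The three steps above reduce to a sort, a cumulative sum, and an argmin; a minimal sketch (with hypothetical per-pixel signal and variance maps as inputs) reads:

```python
import numpy as np

def optimal_aperture(signal, variance):
    """Binary mask minimising the aggregated NSR of Eq. (45): order the
    pixels by their individual NSR (Eq. 44), accumulate signal and noise
    in that order, and keep the prefix of pixels with minimal NSR.
    signal: mean target flux per pixel (> 0); variance: total noise
    variance per pixel (photon, contaminants, background, detector,
    and quantisation terms combined)."""
    nsr = np.sqrt(variance) / signal
    order = np.argsort(nsr, axis=None)               # increasing pixel NSR
    cum_sig = np.cumsum(signal.ravel()[order])
    cum_var = np.cumsum(variance.ravel()[order])
    nsr_agg = np.sqrt(cum_var) / cum_sig             # Eq. (45) for m pixels
    m_best = int(np.argmin(nsr_agg)) + 1
    mask = np.zeros(signal.size, dtype=bool)
    mask[order[:m_best]] = True
    return mask.reshape(signal.shape)
```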
Pointing performance degradations over long time scales (due to long-term drifts) and on short time scales (due to reaction wheel momentum dumps, attitude tweaks, loss of fine guidance, pre/post safe mode events, etc.) may introduce significant pixel displacements. Compared to PSF photometry, which by design is more robust against such instrumental perturbations, for aperture photometry this leads to systematic errors in the form of flux loss outside the pixel mask. To mitigate the loss of photometric precision over the course of a mission quarter, the strategy for PLATO is to update the aperture of each star periodically, under the condition that a lower NSR can be achieved by the updated mask (Marchiori et al., 2019).
Investigations led by the PLATO performance team have shown that the combined effect of TED and KDA can lead to a barycentric displacement of up to 1.3 pixel over three months. We illustrate this worst case scenario in Fig. 14 for a \(V=10\) star positioned \(5^{\circ}\) from the optical axis at an initial central intra-pixel position. As the star drifts over the pixel array, the mask update events (grey dotted lines) manifest themselves as flux jumps (emphasised by the 1 h running flux median shown in green in Fig. 14). This highlights a strategy trade-off between the mask update frequency and the potential error propagation from the pipeline corrections. To minimise the mask update frequency, developers of the PLATO pipeline are currently investigating whether suboptimal apertures can accommodate the fully predictable KDA contribution to the stellar barycentric displacement across the FPA. For further information we refer to Samadi et al. (2019) for a discussion of the impact of the mask updates on the final light curve and of how long term and short term pixel displacements can be corrected in the post-processing procedure.
It is noteworthy that systematic noise sources acting on time scales shorter than the cadence are not included in the mask creation of Eq. (45). This is the case for AOCS jitter and, to a lesser extent, CCD effects that degrade the photometric quality (such as CTI, cosmetic defects, etc.). Including the contribution of jitter (and larger attitude tweaks) is a challenge since it depends on the final shape of the mask. Nevertheless, instrumental perturbations such as jitter have been shown to have negligible impact on the photometry in nominal conditions (Marchiori et al., 2019), and can partially be corrected for during the PLATO pipeline chain (cf. Samadi et al., 2019). Moreover, as soon as the aperture mask of Eq. (45) has been defined, the stellar pollution ratio (SPR) from nearby contaminants can be derived, which can help reject false-positive planet detections from blended eclipsing binaries.
## 10 PlatoSim software architecture
The creation of a synthetic CCD image starts with a set of input parameters that defines the general properties of the spacecraft's hardware components, the stellar field, and the simulated observation itself. As depicted in Fig. 15 the PlatoSim software generates a simulation in two steps: **a)** it configures all input parameters and constructs an output file, and then **b)** it follows all (requested) algorithmic steps in a loop over the total number of exposures defined by the user.
The simulation construction relies on a YAML configuration file that initialises all the input parameters and optionally reads further supplementary files (purple boxes). As a minimum PlatoSim needs a YAML input file and a star catalogue to successfully run (green boxes). Identical to the physical hardware components that make up the PLATO payload, PlatoSim consists of a platform, telescope10, camera, and detector module, in combination with a sky module to include sky background and stellar variable signals, and two time series generators (jitter and drift) for the inclusion of pointing systematics in an automated way (blue boxes). Each module deals with a particular effect or subsystem. Combined, these are controlled by a global simulation object that is directly configurable in Python. As mentioned in Sect. 4, the smallest time-varying phenomenon, referred to as PlatoSim's 'heartbeat', is initialised in order to partition each exposure into smaller time steps.

Figure 14: On-board photometry performed with the optimal aperture algorithm of Marchiori et al. (2019) for a \(V=10\) star. With a central barycentric pixel position and a (worst case) systematic drift of 1.3 pixel over the course of one mission quarter, the figure shows the algorithm in action with the automatic pixel-mask updates triggered every 14 days (grey dotted lines) if a lower NSR can be achieved (which is not the case for the update at 42 days). Since the NSR does not scale linearly with flux, the mask-update strategy does not necessarily increase the flux level (as is the case at 28 days). The positive flux outliers are due to contamination from cosmic rays (with a hit rate of 10 events s\({}^{-1}\) cm\({}^{-2}\)).
The construction of a simulation with respect to input and output is completely standardised in modules. This is highly beneficial for understanding subsystems or individual effects. All supplementary input files should be provided in ASCII format and output files are in HDF5 format. As indicated by the purple boxes in Fig. 15a, the choice of whether an effect is included and/or which model is selected leaves enormous flexibility for the user to conduct highly diverse (and complex) simulations.
Upon execution, after the setup has finished in step a), PlatoSim generates a time series of synthetic CCD images in a loop over the total number of requested exposures, as illustrated in Fig. 15b. Over the course of a single exposure, the algorithmic steps are organised into two consecutive classes: first those that are computed per heartbeat, and next those that are computed only once per exposure. The flow of the algorithmic steps follows approximately the light path of the incident photons, placing each effect or subsystem in a logical order according to its physical occurrence. However, as mentioned earlier, this ordering is only approximate, as different effects do not physically occur at a single deterministic point that an algorithmic architecture necessarily has to impose; for example, various CCD and FEE effects (such as the BFE, blooming, and non-linearity) strongly depend on each other. As a final action of each acquisition, the photometry module is (optionally) applied, the output is written to disk, and the internal clock is updated.
Lastly, PlatoSim allows for the possibility of parallelisation. The execution of a simulation on multiple cores (or CPUs) is not only possible per subfield (as was done for the simulations in Sect. 11), but also in time. The former is a standard _sequential_ workflow (i.e. each simulation runs independently on a designated core), whereas the latter is a _partitioning_ (i.e. a single simulation is chopped into smaller time series, each delivered to a different core). The latter option is more complex as it requires the time series of any supplementary input file (e.g. the variable source file) to be computed ahead of the simulation; the random seeds, however, are handled automatically by PlatoSim during the partitioning.
## 11 Applications to the PLATO mission
Due to the end-to-end and modular design, PlatoSim is used for many different disciplines within the PLATO mission consortium as will be discussed in the following. We note that the data and results of Sects. 11.3 and 11.4 are made available to everyone through the PlatoSim repository.
Figure 15: Schematic of the PlatoSim software package. **a)** Overview of the initialisation and configuration of PlatoSim prior to simulation execution. **b)** Overview of each simulation step as a loop over the total number of exposures. The boxes represent input files (purple), the output file (orange), software modules (blue), and the general simulation steps (green). The two flowcharts combined illustrate PlatoSim's order of execution: a) first a simulation is constructed (with all input parameters needed), followed by b) the creation of synthetic pixel images for a given number of exposures.
### Mission preparation
The assembly, integration, verification and test (AIV/AIT) of a spacecraft such as PLATO involves a considerable amount of preparatory work. This is done first of all to design the details of operations for each of the tests, which impact the quality and the relevance of the results as well as the planning. Secondly, preparatory work also helps to define the necessary Ground Support Equipment (optical stimuli, telemetry acquisition systems, etc.) and to design the data processing algorithm for each test. PlatoSim played a key role for a series of tests in this respect.
The first delicate operation when assembling PLATO is to align the cameras. This involves assembling the FPA, which bears the detectors, with the TOU within very tight accuracy budgets (Pertenais et al., 2021), ensuring the optimal performance of the camera in operations. While the PLATO cameras will operate in vacuum and at around \(-70^{\circ}\)C, the alignment and assembly occur at room temperature, that is \(\sim 100^{\circ}\)C warmer, and under normal atmospheric pressure. Consequently, during the alignment, not only is the optical quality of the cameras considerably degraded, but the detectors also produce tens of thousands of e\({}^{-}\) s\({}^{-1}\) pixel\({}^{-1}\) of dark current. The optical verification necessary for a proper alignment hence requires dedicated operational modes. This includes short integration times and partial readout of the CCDs to limit the dark current, CCD clearout between every frame, continuous image-dump between observations, etc. PlatoSim was used to simulate data obtained at ambient temperature to make the trade-off between observations under full pupil illumination and observations with a Hartmann mask. Hence, PlatoSim was used to design the test approach and data-reduction process, and to estimate the resulting accuracy in each case. The details on the simulations and on the alignment of the engineering camera are presented in Royer et al. (2020) and Royer et al. (2022), respectively.
PlatoSim is also instrumental in the preparation of the environmental tests performed at thermal vacuum (TVAC). We here cite four examples where PlatoSim was used in the testing. First, the long term stability test, aimed at testing the measurement stability at camera level. Second, the characterisation of the image ghosts due to multiple reflections between various optical surfaces and on the detector (Pertenais et al., 2022). Third, the characterisation of the camera image geometry, that is of the optical distortions induced by the very wide field. Fourth and finally, the optimisation of the measurement strategy for the critical but time consuming determination of the best focus temperature, and the characterisation of the image quality at the optimal focus temperature (Borsa et al., 2022).
PlatoSim is also used to prepare the operations at spacecraft level, for instance in simulations to test the compression algorithms to run on the Instrument Control Unit on-board the spacecraft, or in simulations of real-time operations of the Fine Guiding System, that is in the feedback loop between the fast cameras and the AOCS (Griessbach et al., 2021). Figure 16 displays an example of a Hartmann pattern simulated for an ambient temperature test (left), as well as a simulation of the extended ghost image caused by a very bright star (right).
### Pipeline validation
According to PLATO's main objectives the pipeline chain needs to be able to preserve signals in the light curve belonging to transiting planets, stellar activity, and stellar pulsations. These are time varying phenomena ranging from seconds to months. PLATO must deal with a huge photometric dynamic range of more than ten magnitudes. Furthermore, a strategy for merging the light curves across multiple cameras and mission quarters, in combination with the limited telemetry capabilities, requires that most of the scientific analysis must be done autonomously on-board the spacecraft. Altogether this demands a superb pipeline performance. The construction of a versatile pipeline can only be done prior to launch, using a test harness of simulations for which PlatoSim plays a key role.
In comparison to the built-in pre-processing steps of PlatoSim's on-board photometry module (see Sect. 9.2), the removal of outliers and the flux corrections for short term and long term pixel displacements are algorithms yet to be implemented. We highlight that the full PLATO pipeline chain consists of three main branches for the light curve generation: i) on-ground; ii) on-board; and iii) for saturated stars (Rauer et al., 2014, Rauer et al. in prep.). To accommodate the need for generating fully calibrated data products of non-saturated stars, a computational bridge between PlatoSim and the (preliminary) PLATO reduction and extraction pipeline has been established.
For the validation of the photometric extraction of saturated stars PlatoSim is heavily used due to its ability to simulate CCD non-linearity and blooming. To validate the pipeline performance for non-saturated stars, _stitching_ and _detrending_ of light curves is required. Indeed, a wealth of events will leave large data gaps and introduce systematic errors that can be highly correlated and thus almost impossible to model and remove. Poor attempts can easily hinder the detection of the astrophysical signals. Hence light curve stitching and detrending are well studied topics in space photometry (Garcia et al., 2011; Vanderburg & Johnson, 2014; Handberg & Lund, 2014; Lund et al., 2015; Aigrain et al., 2017; Hippke et al., 2019; Lund et al., 2021). They are also extremely instrument dependent and science driven, and thus deserve special attention for PLATO. As such PlatoSim simulations are currently being used for this aspect of the pipeline, which is vital for PLATO's ability to characterise the exoplanet host stars, and thus ultimately the planets themselves.
### Performance studies
The capacity to investigate how the photometric precision depends on instrumental systematics on both short and long time scales is one of PlatoSim's advantages. As an example we show a performance study of PLATO's expected photometric precision at BOL.

Figure 16: Examples of simulated data generated in preparation of the AIT/AIV of the PLATO mission. **Left:** Simulation of a Hartmann pattern obtained at ambient temperature and far out of focus, in preparation of the camera alignment. **Right:** Simulation of a \(V=0\) star close to the optical axis of the camera (white dot) and the extended ghost it creates on the detector via parasitic reflections, run in preparation of the thermal-vacuum tests.
Under the premise that a 24 h dataset is representative for estimating the averaged NSR of each light curve, we simulated 10 000 F5-K7 dwarf and sub-dwarf stars from the LOP south covering a large photometric dynamic range. To populate the NSR-\(V\) diagram approximately evenly, a total number of 2 500 stars was drawn for each camera observability \(n_{\mathrm{CAM}}\in\{6,12,18,24\}\), meaning that in total 150 000 light curves were generated. To simulate the effect of stellar crowding, a choice was made to include photometric contaminant stars within \(\Delta V<5\) of their target and within a relative radial distance to their target of 45 arcsec, that is maximally three pixels away from the target barycentre within each simulated imagette.
All simulations are configured with a realistic AOCS jitter time series from OHB/TAS sampled at 8 Hz (being the model shown in the bottom panel of Fig. 8) to realistically include its impact on the photometric precision 'as expected'. All random and systematic noise sources are configured 'as required' by the instrument design. To alleviate the computational load, the analytic PSF model was used and charge diffusion was activated. Furthermore, it is assumed that the properties of each TOU, CCD, and FEE among all cameras are identical and that their noise sources are uncorrelated. However, we randomly displace each camera pointing to imitate camera misalignments with respect to the interface of the optical bench. For all simulations a constant intrinsic intensity was considered and the photometric extraction was performed by the built-in photometry module of PlatoSim (as described in Sect. 9) due to its superior computational speed compared to the preliminary on-board PLATO pipeline.
For each star we compute the NSR by first combining the normalised light curves from each camera, averaging measurements of the same camera group (as they have identical time stamps). Next we resample the data into 1 h bins, and compute the standard deviation. The resulting photometric noise is shown in Fig. 17a for each individual camera observation (i.e. at camera level) and Fig. 17b for the multi-camera observations (i.e. at instrument level).
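A compact sketch of this bookkeeping for a single, already merged light curve could read as follows (the function name and white-light assumption are ours):

```python
import numpy as np

def nsr_ppm_sqrt_hour(flux, cadence_s=25.0):
    """Sketch of the NSR metric: normalise the light curve, average it
    into 1 h bins, and express the bin-to-bin scatter in ppm (per sqrt
    hour, since the bins are 1 h wide)."""
    rel = flux / np.median(flux)
    n = int(3600.0 / cadence_s)                # samples per 1 h bin
    m = rel.size // n
    binned = rel[:m * n].reshape(m, n).mean(axis=1)
    return np.std(binned) * 1e6
```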
Taking a closer look at Fig. 17a, the colours represent the total number of stellar contaminants included within each simulation and the orange line is the NSR model prediction given the mission requirements at BOL. In detail this model contains a component of noise from AOCS jitter (pink dashed line), photon noise (pink dashed-dotted line), and sky plus read noise (pink dotted line). As expected for the dynamic range simulated, the photon noise dominates the noise budget; however, random noise from the sky background and readout noise will ultimately dominate beyond \(V>15\). The simulations show a slight discrepancy with the model prediction in the bright end where noise from AOCS jitter starts to dominate. A thorough investigation shows a clear indication that the NSR at the onset of pixel saturation (grey dotted line) only starts to depend on the CCD properties, such as the barycentric location of the star, for relatively high rms amplitudes of the jitter Euler angles. The photometric prediction for saturated stars may thus behave very differently to the in-flight measurements (especially considering asymmetric blooming, not included here); nevertheless, it is clear that the photometric precision extracted from imagettes depends on the combination of saturation and AOCS jitter, with an ultimate noise floor set for \(V<7.4\). This relates to the onset of moderate saturation (grey dotted line), as defined here, where blooming causes flux leakage out of an imagette and an extended mask is needed to conserve the flux measurement (for which an extended mask library is currently being designed using PlatoSim).
For the NSR at instrument level, Fig. 17b displays a clear division between stars observed with \(n_{\mathrm{CAM}}\in\{6,12,18,24\}\). The applied camera misalignment model introduces significant barycentric displacements up to a few tens of pixels from camera to camera, which in turn may render some stars unobservable with one or more cameras (as expected in-flight). The figure illustrates that the mean noise level for stars with \(V<11\) satisfies the requirements of \(\mathrm{NSR}\leq\{100,70,58,50\}\) ppm h\({}^{-1/2}\) for observations with \(n_{\mathrm{CAM}}\in\{6,12,18,24\}\), respectively. This agrees with a similar simulation study of Rauer et al. (in prep.) using the PINE simulator (Börner et al., 2022).

Figure 17: NSR(\(V\)) simulation study at BOL as required by the mission. **a)** Noise budget at the camera level with each data point coloured after the number of stellar contaminants contained within 3 pixels and \(\Delta V<5\) of the target star. The model prediction of the noise (orange solid curve) consists of three photometric noise components: jitter noise (pink dashed line, calculated using an averaged (yaw, pitch, and roll) jitter time series rms amplitude value of 0.037 arcsec) dominating in the bright regime, readout and sky background noise (pink dotted line, using a sky background value of around 60 e\({}^{-}\) pixel\({}^{-1}\) s\({}^{-1}\)) dominating in the faint regime, and photon noise (pink dashed-dotted line) dominating between the two aforementioned regimes. Also the onset of saturation is indicated (grey dotted line) together with the onset of _moderate saturation_ (grey dotted line), here defined as where an extended mask expanding beyond the dimensions of an imagette is needed to capture the total stellar flux due to blooming. **b)** Noise budget at the instrument level with each data point representing a multi-camera observation coloured by the number of N-CAM observations used in the NSR calculation. The mission requirement of the photometry for \(V<11\) is shown for an observability of \(n_{\mathrm{CAM}}\in\{6,12,18,24\}\) (horizontal dashed lines coloured after \(n_{\mathrm{CAM}}\)).
PINE is a theoretical noise estimator that uses the true (i.e. uncontaminated) flux of the target star to calculate the NSR, whereas such an approach is not readily possible at pixel level whilst extracting the photometry of stars from real stellar fields (such as that of the PIC). Since flux leakage from stellar contaminant(s) into the aperture mask is unavoidable (and can only partially be mitigated by the mask definition itself), the scattered data points below each of the corresponding NSR-\(n_{\mathrm{CAM}}\) curves of Fig. 17b are PIC targets with one or more contaminants (as shown in Fig. 17a). Naturally, accounting for stellar contamination is expected to place each of these measurements above the corresponding NSR curve due to the addition of photon noise from the stellar contaminant(s).
### Hare and Hound exercises
Hare and Hound exercises, also known as injection and retrieval exercises, have previously been performed with other PLATO simulators (such as PSLS; Samadi et al., 2019). Although a full suite of such exercises using PlatoSim is beyond the scope of this paper, we present a realistic showcase of PLATO's ability to detect an Earth-like planet orbiting a Sun-like host star. We simulate a \(V=10\) star (i.e. \(\mathcal{P}\simeq 10.4\), cf. Eq. (13)) observed with all 24 N-CAMs. We simulate 13 mission quarters (\(\sim 3.3\) yr) including four transits, to test the retrieval efficiency for 2, 3, and 4 transits.
We employ the same computational setup as in Sect. 11.3 with a change of the AOCS jitter to a red noise model sampled at \(0.1\) Hz, and an activation of the KDA model. The TED for each camera and quarter is included using a second order polynomial model whilst uniformly drawing the model coefficients under the restriction that the amplitude in yaw, pitch, and roll cannot exceed 10 arcsec. The latter is a conservative choice and was deliberately made to challenge the software that corrects for systematic long-term trends in the light curves. For a realistic representation of the noise budget for our P1 sample star, the preliminary PLATO pipeline (using PSF photometry) is employed.
As the majority of targets from the PIC are expected to have convection-driven variability we model and inject stellar granulation and oscillations into the simulated target. A full asteroseismic Hare and Hound exercise is out of the scope of this paper, but a full model description of the solar-like oscillator is presented in Appendix C. For the present showcase only the impact of the noise floor of stellar variability on the transit detection is of interest.
To generate the planet transits we use the open source software batman11 (Kreidberg, 2015), tuning the radius, mass, and orbital period (\(P=365.25\) d) to that of the Sun-Earth system (assuming no third bodies), and we choose the time of ephemeris to be \(t_{0}=10\) d. For simplicity we consider a circular orbit (\(e=0\)) and an edge-on transit (\(i=90^{\circ}\)). We use a quadratic limb darkening (LD) law and calculate the LD coefficients with the software PyLDTk12 (Parviainen & Aigrain, 2015), which computes the stellar LD profile in the PLATO passband. For our simulated Earth analogue orbiting a G2V host star this results in a transit depth of \(\delta\approx 103\) ppm and a total transit duration of \(T_{\mathrm{tot}}\approx 13\) h. The transit depth overshoot of \(\sim 22\)% compared to the depth expected from the planet-to-star radius ratio (84 ppm) has been explained by the effect of stellar limb darkening (Heller, 2019).
Footnote 11: [https://github.com/lkreidberg/batman](https://github.com/lkreidberg/batman)
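A minimal batman configuration reproducing this setup could look as follows; the quadratic LD coefficients are placeholders standing in for the PyLDTk-derived values.

```python
import numpy as np
import batman

params = batman.TransitParams()
params.t0 = 10.0                  # time of ephemeris [d]
params.per = 365.25               # orbital period [d]
params.rp = 0.00915               # Earth/Sun radius ratio (depth ~ 84 ppm)
params.a = 215.0                  # semi-major axis in stellar radii (1 au)
params.inc = 90.0                 # edge-on transit [deg]
params.ecc = 0.0                  # circular orbit
params.w = 90.0                   # argument of periastron [deg]
params.limb_dark = "quadratic"
params.u = [0.45, 0.20]           # placeholder quadratic LD coefficients

t = np.arange(0.0, 13 * 91.31, 25.0 / 86400.0)  # 13 quarters at 25 s cadence
flux = batman.TransitModel(params, t).light_curve(params)
```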
We follow the transit retrieval procedure of Heller et al. (2022) and use the open source software Wotan\({}^{13}\) (Hippke et al., 2019) for detrending. Wotan is optimised to preserve the transit signatures of exoplanets while effectively removing instrumental and stellar variability using a large library of available detrending filters. We here use Tukey's biweight method and a window size of \(3\times T_{\mathrm{tot}}\), shown by Hippke et al. (2019) to be an optimal choice for most transit searches. The detrending is performed on each individual camera and mission quarter segment. As a performance illustration of the detrending, the top panel of Fig. 18 shows the simulated time series for a single camera across 13 mission quarters (coloured segments), the corresponding model trend (white/black line), and the mid-transit times (dashed pink lines). Aside from the model trends caused primarily by the TED, Fig. 18 shows two additional dominating features: i) large jumps in the mean flux level from one mission quarter to the next, caused by a change in the optical throughput as the quarterly rotations relocate the stars to different distances from the optical axis in the FPA (whilst the corresponding change in PRNU is a second order effect here); and ii) an overall decrease in intensity due to the combined set of ageing effects.

Figure 18: Results of the Hare and Hound (injection and retrieval) exercise of an Earth-sized planet transiting a \(V=10\) Sun-like star. With an orbital period of \(365.25\) d the planet transits 4 times in the \(3.3\) yr simulated light curve. **Top:** Light curves observed with a single camera (coloured per quarter) and the corresponding Wotan trend (white/black lines). **Middle:** Detrended light curve of all 24 camera observations (black dots) with in-transit data highlighted. **Bottom:** Light curve phase-folded on the best fit period from TLS. For clarity a \(1\) h binned representation of the light curve is shown (pink circles), together with the injected variability (blue line) and the best fit transit model (orange line).
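Returning to the detrending step, below is a minimal sketch using Wotan's flatten function, applied here to a toy camera-quarter segment rather than actual PlatoSim output:

```python
import numpy as np
from wotan import flatten

rng = np.random.default_rng(1)
time = np.arange(0.0, 90.0, 600.0 / 86400.0)  # one mission quarter [d]
flux = 1.0 + 1e-4 * time + 5e-5 * rng.standard_normal(time.size)  # toy trend + noise

T_tot = 13.0 / 24.0  # total transit duration (~13 h) expressed in days
flat_flux, trend = flatten(
    time, flux,
    method="biweight",            # Tukey's biweight location estimator
    window_length=3.0 * T_tot,    # window of 3 x T_tot (Hippke et al. 2019)
    return_trend=True,            # also return the fitted trend
)
```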
The final detrended light curve, merged across all cameras and quarters, is shown in the middle panel of Fig. 18 and is used for the transit vetting. The actual transit search is performed using the transit least-squares (TLS) method implemented in the open source software TLS\({}^{14}\) (Hippke & Heller, 2019). TLS outperforms the traditional box least-squares (BLS) algorithm (Kovács et al., 2002), especially in the domain of small planets (e.g. Heller et al., 2019), as it includes a precise modelling of the transit signature (in particular the ingress, egress, and the LD profile). Upon execution, TLS performs a grid search in the parameter space \(\{P,\ t_{0},\ T_{\rm tot}\}\) to find the minimum \(\chi^{2}\) value. While doing so, it calculates the Signal Detection Efficiency (SDE), the key search metric quantifying how significant the \(\chi^{2}\) minimum is compared to its surrounding \(\chi^{2}\) landscape as a function of orbital period.
Footnote 14: [https://github.com/hippke/tls](https://github.com/hippke/tls)
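A minimal sketch of this search step using the transitleastsquares package follows; it assumes time and flat_flux hold the merged, detrended light curve (e.g. carried over from the detrending sketch above), and the optional period bounds merely narrow the grid search for speed.

```python
from transitleastsquares import transitleastsquares

model = transitleastsquares(time, flat_flux)
results = model.power(period_min=300.0, period_max=400.0)  # grid search in {P, t0, T_tot}

print(f"SDE    = {results.SDE:.1f}")       # significance of the chi^2 minimum
print(f"Period = {results.period:.2f} d")  # best fit orbital period
print(f"t0     = {results.T0:.2f} d")      # best fit time of ephemeris
```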
Before the actual vetting, a \(4\sigma\) clipping filter was used to remove large outliers and all measurements from the same camera group were averaged. Lastly, a 1 h binned representation of the retrieved light curve was created (which is justified for the planet detection, however not for characterisation). The TLS vetting of 2, 3, and 4 transits resulted in SDE values of around 24, 40, and 47, respectively. These are well beyond the previously reported detection threshold of SDE \(\geq 9\), which results in a false-alarm probability of FPR \(<10^{-4}\) for Earth-sized planets (Hippke & Heller, 2019). As an illustration, the bottom panel of Fig. 18 shows the phase-folded light curve of the best TLS fit to the full dataset (orange line), together with the injected variability model (blue line) and the 1 h binned light curve (pink dots). With a signal-to-noise ratio of S/N \(>7\) (cf. Pont et al., 2006) in all of the above cases, the presence of stellar granulation and pulsations does not avert the planet retrieval, in good agreement with the simulation study of Morris et al. (2020).
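A minimal sketch of this outlier removal and binning is given below, assuming time and flat_flux carry over from the previous sketches; the bin layout is our own illustrative choice.

```python
import numpy as np
from astropy.stats import sigma_clip

# Remove large outliers with a 4-sigma clipping filter
clipped = sigma_clip(flat_flux, sigma=4.0)
keep = ~clipped.mask
t_clean, f_clean = time[keep], flat_flux[keep]

# 1 h binned representation of the cleaned light curve
bin_width = 1.0 / 24.0  # 1 h in days
edges = np.arange(t_clean.min(), t_clean.max() + bin_width, bin_width)
idx = np.digitize(t_clean, edges)
f_binned = np.array([f_clean[idx == i].mean() if np.any(idx == i) else np.nan
                     for i in range(1, edges.size)])
t_binned = 0.5 * (edges[:-1] + edges[1:])
```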
## 12 Discussion and conclusion
PlatoSim is an advanced end-to-end CCD and light-curve simulator dedicated to simulating PLATO measurements. Its algorithms and their underlying methodology aim to model realistic space-based photometric imaging. PlatoSim is available on GitHub\({}^{15}\), together with detailed documentation and tutorials.
Footnote 15: [https://github.com/IVS-KULeuven/PlatoSim3/tree/master](https://github.com/IVS-KULeuven/PlatoSim3/tree/master)
The mission requirements of PLATO's innovative multi-camera configuration push the frontier of instrumental design. Reliable simulations are therefore essential - from the early design phases all the way to in-flight operations - to continuously assist with the design, assessment, and validation of the instrument. PlatoSim was developed to fill exactly this niche. We have provided several examples of applications, ranging from dedicated studies addressing the technical assessment and verification of the instrument (Sect. 11.1), through the development of data reduction and processing algorithms (Sect. 11.2) and the performance assessment of the payload (Sect. 11.3), to the verification of a core science case relevant to the mission (Sect. 11.4).
To our knowledge, PlatoSim is one of the most feature-rich simulators currently available to simulate space-based photometry, including a wide range of instrumental noise sources at platform level (e.g. AOCS jitter and TED), camera level (e.g. optical distortion and ghost images), and detector level (e.g. PRNU, CTI, and BFE), as well as astrophysical signals (granulation, stochastic oscillations, and exoplanet transits). A unique ability is the inclusion of a realistic PSF (as illustrated in Fig. 15b), either from Zemax simulations or from a parametric description allowing a PSF that varies over the focal plane, while keeping the computational burden feasible. PlatoSim stands out due to its configurability of the N-CAM and F-CAM to simulate: i) a full suite of PLATO data products (i.e. images, meta data, housekeeping data, and light curves); ii) the different modes of operation (e.g. nominal observations and microscanning sessions); and iii) the different payload configurations. This, together with its end-to-end functionality to generate light curves from 24 cameras, makes it a key simulator for the PLATO mission consortium.
PlatoSim chooses to perform its simulations in the time domain rather than in the Fourier domain. The advantage is the ability to easily include time-dependent instrumental variations that may interfere with the stellar signal. The corresponding challenge is the computational resources needed to generate long time series. PlatoSim largely manages to overcome the computational bottlenecks and drastically reduce the execution time by making extensive use of efficient parametric models, while still preserving a realistic simulation of each physical phenomenon. An additional mitigation effort is the coherent use of wavelength-averaged quantities, such as the PSF, the throughput, the detector efficiency, and the quantum efficiency, to avoid a time-consuming numerical integration over wavelength in the PLATO passband. The monochromatic light approximation, however, makes it difficult to assess how much the noise level would increase for the extreme spectral types of PLATO's targets (i.e. F5 and K7) if the wavelength dependence of the above-mentioned quantities were taken into account. Nevertheless, as PlatoSim is designed to simulate stars of spectral types similar to the Sun, the impact on the photometry is expected to be quantitatively small. Given PLATO's spectral response, the baseline at camera level is that the noise in the light curve of an F5 dwarf star will be slightly higher than for a K7 dwarf star.
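A minimal sketch of such a wavelength average is given below; the grids and functional shapes are toy assumptions, standing in for the actual PLATO passband and calibration data.

```python
import numpy as np

wl = np.linspace(500.0, 1000.0, 501)  # wavelength grid [nm], roughly the PLATO passband
throughput = np.exp(-0.5 * ((wl - 750.0) / 150.0) ** 2)  # toy optical throughput
qe = np.clip(1.0 - (wl - 500.0) / 800.0, 0.0, 1.0)       # toy quantum efficiency
spectrum = 1.0 / wl                                       # toy stellar photon spectrum

weights = spectrum * throughput * qe  # photon-detection weighting

def wavelength_average(q):
    """Photon-weighted mean <q> = int q(wl) w(wl) dwl / int w(wl) dwl."""
    return np.trapz(q * weights, wl) / np.trapz(weights, wl)

psf_fwhm = 1.0 + 0.3 * (wl - 500.0) / 500.0   # toy chromatic PSF width [pixels]
fwhm_mean = wavelength_average(psf_fwhm)      # single effective value used per star
```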
Despite already being an advanced simulator, PlatoSim remains a simulation tool that is continuously being improved and extended. At the time of writing, the AIV of the PLATO cameras is well under way, and laboratory measurements of instrument characteristics such as the PRNU and the morphology of ghost images are becoming available. These measured quantities will in the near future replace any simulated quantities in PlatoSim where possible. On the topic of the PRNU, if intra-pixel sensitivity measurements become available, they will allow PlatoSim to accurately account for a lower pixel sensitivity near the pixel borders as opposed to the pixel centre. With PlatoSim's ability to track time-dependent effects on time scales shorter than the exposure time, such an improved PRNU model will play a key role in future development concerning effects that are best described at the subpixel level.
Charge injection will be implemented in order to investigate its potential to mitigate the CTI in the final stage of the mission's lifespan, when the CTI could be significantly worse depending on the solar activity. Additionally, a relevant noise source for photometric measurements is scattered light from the Earth and the Moon. Although measures will be taken to avoid this as much as possible, some residual light scatter may still occur. Since this scatter is time dependent and could interfere with the detected stellar signal, it is worth implementing in PlatoSim. Defective pixels are also on the list of future implementations. In particular, RTS pixels are important as their variable dark current behaviour will introduce flux jumps (similar to mask updates) for slow state switching (hours to days), and can introduce significantly increased scatter in the extracted light curves for fast state switching (seconds to minutes). Finally, PlatoSim advantageously allows the user to simulate CCD subfields instead of full-frame images to reduce the computational load. As such, smearing and blooming trails from stars outside the subfield, as well as the light contribution from stars close to the pixel borders of the subfield, are not accounted for. Since the former is particularly important for designing the processing algorithms that correct for the impact of smearing, it belongs among the future enhancements of PlatoSim.
With the PLATO mission planned to see first light in less than three years, the scientific justifications for the core and complementary science programmes are constantly being verified with simulations. As demonstrated in this work, PlatoSim offers the opportunity to validate the performance of the PLATO payload and to undertake detailed simulation studies with a highly realistic approach that includes all foreseen random and systematic noise sources, an exact distribution of stars across the multi-camera arrangement in line with the planned pointing fields, and the inclusion of stellar variability for both target and contaminant stars. This makes PlatoSim a versatile software package under continuous development and a bedrock in the preparation for PLATO's future discoveries.
###### Acknowledgements.
This work presents results from the European Space Agency (ESA) space mission PLATO. The PLATO payload, the PLATO Ground Segment and PLATO data processing are joint developments of ESA and the PLATO mission consortium (PMC). Funding for the PMC is provided at national levels, in particular by countries participating in the PLATO Multilateral Agreement (Austria, Belgium, Czech Republic, Denmark, France, Germany, Italy, Netherlands, Portugal, Spain, Sweden, Switzerland, Norway, and United Kingdom) and institutions from Brazil. Members of the PLATO Consortium can be found at [https://platomission.com/](https://platomission.com/). The ESA PLATO mission website is [https://www.cosmos.esa.int/plato](https://www.cosmos.esa.int/plato). We thank the teams working for PLATO for all their work. The research behind these results has received funding from the Belgian Federal Science Policy Office (BELSPO) through PRODEX grant PLATO: ZKEC200-01-D01. RH acknowledges support from the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt) under PLATO Data Center grant 50001501. This project additionally made use of the following published Python packages: NumPy (Harris et al., 2020), Numba (Lam et al., 2015), Pandas (McKinney et al., 2011; Reback et al., 2022), SciPy (Virtanen et al., 2020), Matplotlib (Hunter, 2007), and Astropy (Astropy Collaboration et al., 2022, 2018, 2013).